Taishi-N324 committed
Commit 733b01c
Parent(s): 386eebb

Upload README.md

Files changed (1): README.md (+10 -8)
@@ -8,9 +8,9 @@ tag: moe
 license: apache-2.0
 ---
 
-# Swallow-MX
+# Swallow-MX-8x7b-NVE-v0.1
 
-Our Swallow-MX model has undergone continuous pre-training from the Mixtral-8x7B-Instruct-v0.1, primarily with the addition of Japanese language data.
+Our Swallow-MX-8x7b-NVE-v0.1 model has undergone continuous pre-training from the Mixtral-8x7B-Instruct-v0.1, primarily with the addition of Japanese language data.
 
 ![logo](./logo.png)
 
@@ -32,6 +32,8 @@ Our Swallow-MX model has undergone continuous pre-training from the Mixtral-8x7B
 | Swallow | 7B | 0.4808 | 0.5078 | 0.5968 | 0.8573 | 0.1830 | 0.1240 | 0.2510 | 0.1511 |
 | Swallow-Plus | 7B | 0.5478 | 0.5493 | 0.6030 | 0.8544 | 0.1806 | 0.1360 | 0.2568 | 0.1441 |
 | Swallow-NVE | 7B | 0.5433 | 0.5425 | 0.5729 | 0.8684 | 0.2117 | 0.1200 | 0.2405 | 0.1512 |
+| Mistral-7B-v0.1 | 7B | 0.7301 | 0.4245 | 0.2722 | 0.8563 | 0.2006 | 0.1760 | 0.1405 | 0.1733 |
+|Swallow-MS-7b-v0.1| 7B | 0.8570 | 0.4915 | 0.5519 | 0.8802 | 0.1988 | 0.2240 | 0.2494 | 0.1667 |
 | Llama 2 | 13B | 0.6997 | 0.4415 | 0.4170 | 0.8533 | 0.2139 | 0.1320 | 0.2146 | 0.1982 |
 | Swallow | 13B | 0.7837 | 0.5063 | 0.6398 | 0.9005 | 0.2168 | 0.2040 | 0.2720 | 0.1771 |
 | Swallow-NVE | 13B | 0.7712 | 0.5438 | 0.6351 | 0.9030 | 0.2294 | 0.2120 | 0.2735 | 0.1817 |
@@ -39,9 +41,7 @@ Our Swallow-MX model has undergone continuous pre-training from the Mixtral-8x7B
 | Swallow | 70B | 0.9348 | **0.6290** | 0.6960 | 0.9176 | 0.2266 | **0.4840** | **0.3043** | 0.2298 |
 | Swallow-NVE | 70B | **0.9410** | 0.5759 | **0.7024** | **0.9254** | **0.2758** | 0.4720 | 0.3042 | 0.2322 |
 |Mixtral-8x7B-v0.1|8x7B|0.8347|0.5335|0.3549|0.8847|0.2192|0.3120|0.1970|0.1987|
-|Swallow-MX-NVE|8x7B|0.9258|0.5843|0.5687|0.9148|0.2589|0.4360|0.2705|0.2074|
-
-Please note that Swallow-MX-NVE is not derived from Mixtral-8x7B-v0.1, but rather underwent continued pre-training from Mixtral-8x7B-Instruct-v0.1.
+|Swallow-MX-8x7b-NVE-v0.1|8x7B|0.9258|0.5843|0.5687|0.9148|0.2589|0.4360|0.2705|0.2074|
 
 ### English version
 
@@ -52,6 +52,8 @@ Please note that Swallow-MX-NVE is not derived from Mixtral-8x7B-v0.1, but rathe
 | Swallow | 7B | 0.3180 | 0.4836 | 0.5308 | 0.3125 | 0.8817 | 0.1130 |
 | Swallow-Plus | 7B | 0.3280 | 0.4558 | 0.5259 | 0.3134 | 0.8929 | 0.1061 |
 | Swallow-NVE | 7B | 0.3180 | 0.5079 | 0.5329 | 0.2919 | 0.8817 | 0.0986 |
+| Mistral-7B-v0.1 | 7B | 0.3660 | 0.7050 | 0.6264 | 0.3799 | 0.9157 | 0.3533 |
+|Swallow-MS-7b-v0.1| 7B | 0.3440 | 0.5976 | 0.5810 | 0.3364 | 0.9037 | 0.2623 |
 | Llama 2 | 13B | 0.3760 | 0.7255 | 0.6148 | 0.3681 | 0.9140 | 0.2403 |
 | Swallow | 13B | 0.3500 | 0.5852 | 0.5660 | 0.3406 | 0.9075 | 0.2039 |
 | Swallow-NVE | 13B | 0.3460 | 0.6025 | 0.5700 | 0.3478 | 0.9006 | 0.1751 |
@@ -59,9 +61,9 @@ Please note that Swallow-MX-NVE is not derived from Mixtral-8x7B-v0.1, but rathe
 | Swallow | 70B | 0.4220 | 0.7756 | 0.6458 | 0.3745 | 0.9204 | 0.4867 |
 | Swallow-NVE | 70B | 0.4240 | 0.7817 | 0.6439 | 0.3451 | 0.9256 | 0.4943 |
 |Mixtral-8x7B-v0.1|8x7B|0.3960|0.7989|0.6678|**0.3842**|0.9204|**0.5747**|
-|Swallow-MX-NVE|8x7B|0.3740|0.7847|0.6520|0.3801|0.9170|0.5694|
+|Swallow-MX-8x7b-NVE-v0.1|8x7B|0.3740|0.7847|0.6520|0.3801|0.9170|0.5694|
 
-Please note that Swallow-MX-NVE is not derived from Mixtral-8x7B-v0.1, but rather underwent continued pre-training from Mixtral-8x7B-Instruct-v0.1.
+Please note that Swallow-MX-8x7b-NVE-v0.1 is not derived from Mixtral-8x7B-v0.1, but rather underwent continued pre-training from Mixtral-8x7B-Instruct-v0.1.
 
 ## Usage
 
@@ -76,7 +78,7 @@ pip install -r requirements.txt
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_name = "tokyotech-llm/Swallow-MX-NVE-hf"
+model_name = "tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1"
 tokenizer = AutoTokenizer.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
 
 model = AutoModelForCausalLM.from_pretrained(model_name)
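A note on the usage snippet this diff touches: it passes `torch_dtype` and `device_map` to `AutoTokenizer.from_pretrained` (which ignores them) and never imports `torch`. A minimal corrected sketch, assuming the standard `transformers` API — the prompt and generation parameters here are illustrative, not taken from the diff:

```python
# Hypothetical corrected version of the README's usage snippet.
model_name = "tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1"

def load(name: str):
    """Load tokenizer and model for a causal LM checkpoint.

    Imports are kept local so the sketch can be read (and its module
    imported) without torch/transformers installed.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # The tokenizer takes no dtype/device arguments...
    tokenizer = AutoTokenizer.from_pretrained(name)
    # ...dtype and device placement belong on the model load instead.
    model = AutoModelForCausalLM.from_pretrained(
        name, torch_dtype=torch.bfloat16, device_map="auto"
    )
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load(model_name)
    prompt = "東京工業大学の主なキャンパスは、"  # illustrative Japanese prompt
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.95)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Moving the `torch_dtype`/`device_map` keywords onto `AutoModelForCausalLM.from_pretrained` is what makes the bf16 weights and automatic device placement actually take effect.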