lipcut committed
Commit a23d49b
1 Parent(s): 68b9a39

Upload folder using huggingface_hub
README.md CHANGED
@@ -1,31 +1,23 @@
  ---
  tags:
  - merge
  - mergekit
  - lazymergekit
  - argilla/CapybaraHermes-2.5-Mistral-7B
- - WizardLM/WizardMath-7B-V1.1
  base_model:
  - argilla/CapybaraHermes-2.5-Mistral-7B
- - WizardLM/WizardMath-7B-V1.1
  ---

- # 試製-暮光-4x7B
-
- 試製-暮光-7B was created by merging the following models with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
- * [argilla/CapybaraHermes-2.5-Mistral-7B](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B)
- * [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
-
- This is an experimental model, intended to test whether high-quality fine-tuning applied in one language can transfer to another (here, English to Chinese).
-
  # shizhi-twilight-7B

- shizhi-twilight-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
  * [argilla/CapybaraHermes-2.5-Mistral-7B](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B)
- * [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
-
- This is an experiment to check whether high-quality fine-tuning in one language (English) can be transferred to another language (Mandarin) by leveraging the SLERP merge method.

  ## 🧩 Configuration

@@ -36,39 +28,42 @@ models:
  - model: argilla/CapybaraHermes-2.5-Mistral-7B
    parameters:
      density: 0.53
-     weight: 0.65
- - model: WizardLM/WizardMath-7B-V1.1
-   parameters:
-     density: 0.53
-     weight: 0.35
  merge_method: dare_ties
  base_model: MediaTek-Research/Breeze-7B-Instruct-v0_1
  parameters:
    int8_mask: true
  dtype: bfloat16
  ```

  ## 💻 Usage

  ```python
- !pip install -qU transformers accelerate

  from transformers import AutoTokenizer
  import transformers
  import torch

  model = "lipcut/shizhi-twilight-7B"
- messages = [{"role": "user", "content": "什麼是大型語言模型?"}]

  tokenizer = AutoTokenizer.from_pretrained(model)
- prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
  pipeline = transformers.pipeline(
      "text-generation",
      model=model,
-     torch_dtype=torch.float16,
-     device_map="auto",
  )

  outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
  print(outputs[0]["generated_text"])
  ```
 
  ---
+ license: apache-2.0
  tags:
+ - moe
+ - frankenmoe
  - merge
  - mergekit
  - lazymergekit
  - argilla/CapybaraHermes-2.5-Mistral-7B
+ - MediaTek-Research/Breeze-7B-Instruct-v0_1
  base_model:
  - argilla/CapybaraHermes-2.5-Mistral-7B
+ - MediaTek-Research/Breeze-7B-Instruct-v0_1
  ---

  # shizhi-twilight-7B

+ shizhi-twilight-7B is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
  * [argilla/CapybaraHermes-2.5-Mistral-7B](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B)
+ * [MediaTek-Research/Breeze-7B-Instruct-v0_1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1)

  ## 🧩 Configuration

  - model: argilla/CapybaraHermes-2.5-Mistral-7B
    parameters:
      density: 0.53
+     weight: 0.95
  merge_method: dare_ties
  base_model: MediaTek-Research/Breeze-7B-Instruct-v0_1
  parameters:
    int8_mask: true
+   normalize: true
+ experts:
+   - source_model: argilla/CapybaraHermes-2.5-Mistral-7B
+     positive_prompts:
+       - "Perform the following tasks with your best ability"
+   - source_model: MediaTek-Research/Breeze-7B-Instruct-v0_1
+     positive_prompts:
+       - "You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan."
  dtype: bfloat16
  ```

  ## 💻 Usage

  ```python
+ !pip install -qU transformers bitsandbytes accelerate

  from transformers import AutoTokenizer
  import transformers
  import torch

  model = "lipcut/shizhi-twilight-7B"

  tokenizer = AutoTokenizer.from_pretrained(model)
  pipeline = transformers.pipeline(
      "text-generation",
      model=model,
+     model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
  )

+ messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
+ prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
  outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
  print(outputs[0]["generated_text"])
  ```
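As a small, hedged extension of the usage snippet above (it reuses that snippet's `pipeline` object, and the prompt is illustrative rather than part of this commit), the model can also be queried in its target language, Traditional Chinese:

```python
# Follow-on sketch to the README usage example: reuse the same pipeline with a
# Traditional Chinese prompt. With no system message, the tokenizer's chat
# template falls back to the Breeze system prompt.
messages = [{"role": "user", "content": "請用繁體中文簡單說明什麼是專家混合(Mixture of Experts)模型。"}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```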
added_tokens.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "<EOD>": 61873,
+   "<PAD>": 61874
+ }
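A quick way to check the two extra tokens declared in added_tokens.json is to ask the published tokenizer for their ids. This is a minimal sketch using the standard transformers API; it is not part of the commit itself.

```python
# Hedged sketch: confirm the added tokens resolve to the ids declared above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("lipcut/shizhi-twilight-7B")
print(tok.convert_tokens_to_ids("<EOD>"))  # expected 61873
print(tok.convert_tokens_to_ids("<PAD>"))  # expected 61874
```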
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "_name_or_path": "MediaTek-Research/Breeze-7B-Instruct-v0_1",
+   "architectures": [
+     "MixtralForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 14336,
+   "max_position_embeddings": 32768,
+   "model_type": "mixtral",
+   "num_attention_heads": 32,
+   "num_experts_per_tok": 2,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 8,
+   "num_local_experts": 2,
+   "output_router_logits": false,
+   "pretraining_tp": 1,
+   "rms_norm_eps": 1e-05,
+   "rope_theta": 10000.0,
+   "router_aux_loss_coef": 0.001,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.37.2",
+   "use_cache": false,
+   "vocab_size": 61952
+ }
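The config above is what makes this a two-expert, Mixtral-architecture model over Breeze's 61,952-token vocabulary. The following minimal sketch, assuming only the standard `AutoConfig` API, reads those fields back from the Hub without downloading any weights.

```python
# Hedged sketch: inspect the MoE-related fields of config.json via AutoConfig.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("lipcut/shizhi-twilight-7B")
print(cfg.model_type)           # mixtral
print(cfg.num_local_experts)    # 2 experts per MoE layer
print(cfg.num_experts_per_tok)  # 2 experts routed per token
print(cfg.vocab_size)           # 61952, Breeze's extended vocabulary
```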
mergekit_moe_config.yml ADDED
@@ -0,0 +1,21 @@
+
+ models:
+   - model: MediaTek-Research/Breeze-7B-Instruct-v0_1
+     # No parameters necessary for base model
+   - model: argilla/CapybaraHermes-2.5-Mistral-7B
+     parameters:
+       density: 0.53
+       weight: 0.95
+ merge_method: dare_ties
+ base_model: MediaTek-Research/Breeze-7B-Instruct-v0_1
+ parameters:
+   int8_mask: true
+   normalize: true
+ experts:
+   - source_model: argilla/CapybaraHermes-2.5-Mistral-7B
+     positive_prompts:
+       - "Perform the following tasks with your best ability"
+   - source_model: MediaTek-Research/Breeze-7B-Instruct-v0_1
+     positive_prompts:
+       - "You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan."
+ dtype: bfloat16
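For context, a config like the one above is normally passed to mergekit's MoE entry point to produce the merged weights. The sketch below drives that from Python; the `mergekit-moe` command name, its positional arguments, and the output path are assumptions about the mergekit package rather than anything recorded in this commit.

```python
# Hedged sketch (assumed mergekit CLI, not part of this repo): run the MoE merge
# described by mergekit_moe_config.yml and write the result to a local folder.
import subprocess

subprocess.run(
    ["mergekit-moe", "mergekit_moe_config.yml", "./shizhi-twilight-7B"],
    check=True,  # surface a failed merge as an exception
)
```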
model-00001-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a81c5fbd4284c6f8645b15834bb23aab9a2b6f90866d7a04f152b37ad3f28248
+ size 1954820944
model-00002-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:50bf864222cd65eabeb49da3def8f2b4bca3a5e2a15585525b48e504b2e63c8b
+ size 1996490952
model-00003-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:553b5008859133d1a7e726c80b6f7631132b95a8f2b1d71d12eea0f4366c621c
+ size 1996490968
model-00004-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a558a5cb83d94bc8da3920cd256ad1864c173c0e239a9235f8baf3190a123bf6
+ size 1996490968
model-00005-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:638c4af7b1e5b48becf6ecd187bbc3c8cba91bcf5eaa2f1dff185ec31e56c806
+ size 1996490952
model-00006-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9f6a3c5e68cd25bf3f486266552d382e037a9ce63d53177a9ef352ba0236dab1
+ size 1996490960
model-00007-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d0b8172a898ba167151d63805d3ab3450152b6ba4a81d39d8279469d5b0f6976
+ size 1996490968
model-00008-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d6869259ba22f6f8b502c928bb440f5de796477e0fae5b60729fd6710d32be9f
+ size 1996490968
model-00009-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c12ea8891fdfc93f85bcf95e73b6cd138b58f9ed8070acd7602389259991e551
+ size 1996490952
model-00010-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67ced8955a2c5f16900a8b1dd2d80b999fb03a6c8773ca4faffa629fa10c5af8
+ size 1996490960
model-00011-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5753f4ed61f8f0deb175949e95b9e03210f2334930adca85494fb7724c1dfcf6
+ size 1996490968
model-00012-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:61c7e69b7c637f8b66f56bf3a4cd71f01d3e1d1c001376e81575c27440962a82
+ size 1979980448
model-00013-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c90dd698aa268a71c8ac4c5f9413f00d9159d6716c2274df03d9a71b98363468
+ size 1979724008
model-00014-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:06cc4118dbb761e5d3d070aa139e6ed2f32fafd854f549b34707ffa10c54f1da
+ size 369628008
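The fourteen shards above are Git LFS pointers; the model.safetensors.index.json file added below maps every tensor name to one of them. A small hedged sketch, using only `huggingface_hub` and the standard library, downloads just that index and tallies how many tensors live in each shard, which is handy for verifying a partial download.

```python
# Hedged sketch: fetch only the shard index and count tensors per shard.
import json
from collections import Counter
from huggingface_hub import hf_hub_download

index_path = hf_hub_download("lipcut/shizhi-twilight-7B", "model.safetensors.index.json")
with open(index_path) as f:
    index = json.load(f)

for shard, n_tensors in sorted(Counter(index["weight_map"].values()).items()):
    print(shard, n_tensors)
```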
model.safetensors.index.json ADDED
@@ -0,0 +1 @@
+ {"metadata": {"mergekit_version": "0.0.4"}, "weight_map": {"model.embed_tokens.weight": "model-00001-of-00014.safetensors", "model.norm.weight": "model-00001-of-00014.safetensors", "lm_head.weight": "model-00001-of-00014.safetensors", "model.layers.0.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.1.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.2.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.3.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.4.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.5.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.6.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.7.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.8.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.9.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.10.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.11.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.12.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.13.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.14.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.15.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.16.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.17.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.18.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.19.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.20.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.21.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.22.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.23.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.24.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.25.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.26.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.27.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.28.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.29.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.30.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.31.input_layernorm.weight": "model-00001-of-00014.safetensors", "model.layers.0.block_sparse_moe.experts.0.w3.weight": "model-00001-of-00014.safetensors", "model.layers.0.block_sparse_moe.experts.1.w3.weight": "model-00001-of-00014.safetensors", "model.layers.1.block_sparse_moe.experts.0.w3.weight": "model-00001-of-00014.safetensors", "model.layers.1.block_sparse_moe.experts.1.w3.weight": "model-00001-of-00014.safetensors", "model.layers.2.block_sparse_moe.experts.0.w3.weight": "model-00001-of-00014.safetensors", "model.layers.2.block_sparse_moe.experts.1.w3.weight": "model-00001-of-00014.safetensors", "model.layers.3.block_sparse_moe.experts.0.w3.weight": "model-00001-of-00014.safetensors", "model.layers.3.block_sparse_moe.experts.1.w3.weight": "model-00001-of-00014.safetensors", "model.layers.4.block_sparse_moe.experts.0.w3.weight": "model-00002-of-00014.safetensors", 
"model.layers.4.block_sparse_moe.experts.1.w3.weight": "model-00002-of-00014.safetensors", "model.layers.5.block_sparse_moe.experts.0.w3.weight": "model-00002-of-00014.safetensors", "model.layers.5.block_sparse_moe.experts.1.w3.weight": "model-00002-of-00014.safetensors", "model.layers.6.block_sparse_moe.experts.0.w3.weight": "model-00002-of-00014.safetensors", "model.layers.6.block_sparse_moe.experts.1.w3.weight": "model-00002-of-00014.safetensors", "model.layers.7.block_sparse_moe.experts.0.w3.weight": "model-00002-of-00014.safetensors", "model.layers.7.block_sparse_moe.experts.1.w3.weight": "model-00002-of-00014.safetensors", "model.layers.8.block_sparse_moe.experts.0.w3.weight": "model-00002-of-00014.safetensors", "model.layers.8.block_sparse_moe.experts.1.w3.weight": "model-00002-of-00014.safetensors", "model.layers.9.block_sparse_moe.experts.0.w3.weight": "model-00002-of-00014.safetensors", "model.layers.9.block_sparse_moe.experts.1.w3.weight": "model-00002-of-00014.safetensors", "model.layers.10.block_sparse_moe.experts.0.w3.weight": "model-00002-of-00014.safetensors", "model.layers.10.block_sparse_moe.experts.1.w3.weight": "model-00002-of-00014.safetensors", "model.layers.11.block_sparse_moe.experts.0.w3.weight": "model-00002-of-00014.safetensors", "model.layers.11.block_sparse_moe.experts.1.w3.weight": "model-00002-of-00014.safetensors", "model.layers.12.block_sparse_moe.experts.0.w3.weight": "model-00002-of-00014.safetensors", "model.layers.12.block_sparse_moe.experts.1.w3.weight": "model-00003-of-00014.safetensors", "model.layers.13.block_sparse_moe.experts.0.w3.weight": "model-00003-of-00014.safetensors", "model.layers.13.block_sparse_moe.experts.1.w3.weight": "model-00003-of-00014.safetensors", "model.layers.14.block_sparse_moe.experts.0.w3.weight": "model-00003-of-00014.safetensors", "model.layers.14.block_sparse_moe.experts.1.w3.weight": "model-00003-of-00014.safetensors", "model.layers.15.block_sparse_moe.experts.0.w3.weight": "model-00003-of-00014.safetensors", "model.layers.15.block_sparse_moe.experts.1.w3.weight": "model-00003-of-00014.safetensors", "model.layers.16.block_sparse_moe.experts.0.w3.weight": "model-00003-of-00014.safetensors", "model.layers.16.block_sparse_moe.experts.1.w3.weight": "model-00003-of-00014.safetensors", "model.layers.17.block_sparse_moe.experts.0.w3.weight": "model-00003-of-00014.safetensors", "model.layers.17.block_sparse_moe.experts.1.w3.weight": "model-00003-of-00014.safetensors", "model.layers.18.block_sparse_moe.experts.0.w3.weight": "model-00003-of-00014.safetensors", "model.layers.18.block_sparse_moe.experts.1.w3.weight": "model-00003-of-00014.safetensors", "model.layers.19.block_sparse_moe.experts.0.w3.weight": "model-00003-of-00014.safetensors", "model.layers.19.block_sparse_moe.experts.1.w3.weight": "model-00003-of-00014.safetensors", "model.layers.20.block_sparse_moe.experts.0.w3.weight": "model-00003-of-00014.safetensors", "model.layers.20.block_sparse_moe.experts.1.w3.weight": "model-00003-of-00014.safetensors", "model.layers.21.block_sparse_moe.experts.0.w3.weight": "model-00004-of-00014.safetensors", "model.layers.21.block_sparse_moe.experts.1.w3.weight": "model-00004-of-00014.safetensors", "model.layers.22.block_sparse_moe.experts.0.w3.weight": "model-00004-of-00014.safetensors", "model.layers.22.block_sparse_moe.experts.1.w3.weight": "model-00004-of-00014.safetensors", "model.layers.23.block_sparse_moe.experts.0.w3.weight": "model-00004-of-00014.safetensors", "model.layers.23.block_sparse_moe.experts.1.w3.weight": 
"model-00004-of-00014.safetensors", "model.layers.24.block_sparse_moe.experts.0.w3.weight": "model-00004-of-00014.safetensors", "model.layers.24.block_sparse_moe.experts.1.w3.weight": "model-00004-of-00014.safetensors", "model.layers.25.block_sparse_moe.experts.0.w3.weight": "model-00004-of-00014.safetensors", "model.layers.25.block_sparse_moe.experts.1.w3.weight": "model-00004-of-00014.safetensors", "model.layers.26.block_sparse_moe.experts.0.w3.weight": "model-00004-of-00014.safetensors", "model.layers.26.block_sparse_moe.experts.1.w3.weight": "model-00004-of-00014.safetensors", "model.layers.27.block_sparse_moe.experts.0.w3.weight": "model-00004-of-00014.safetensors", "model.layers.27.block_sparse_moe.experts.1.w3.weight": "model-00004-of-00014.safetensors", "model.layers.28.block_sparse_moe.experts.0.w3.weight": "model-00004-of-00014.safetensors", "model.layers.28.block_sparse_moe.experts.1.w3.weight": "model-00004-of-00014.safetensors", "model.layers.29.block_sparse_moe.experts.0.w3.weight": "model-00004-of-00014.safetensors", "model.layers.29.block_sparse_moe.experts.1.w3.weight": "model-00005-of-00014.safetensors", "model.layers.30.block_sparse_moe.experts.0.w3.weight": "model-00005-of-00014.safetensors", "model.layers.30.block_sparse_moe.experts.1.w3.weight": "model-00005-of-00014.safetensors", "model.layers.31.block_sparse_moe.experts.0.w3.weight": "model-00005-of-00014.safetensors", "model.layers.31.block_sparse_moe.experts.1.w3.weight": "model-00005-of-00014.safetensors", "model.layers.0.block_sparse_moe.experts.0.w2.weight": "model-00005-of-00014.safetensors", "model.layers.0.block_sparse_moe.experts.1.w2.weight": "model-00005-of-00014.safetensors", "model.layers.1.block_sparse_moe.experts.0.w2.weight": "model-00005-of-00014.safetensors", "model.layers.1.block_sparse_moe.experts.1.w2.weight": "model-00005-of-00014.safetensors", "model.layers.2.block_sparse_moe.experts.0.w2.weight": "model-00005-of-00014.safetensors", "model.layers.2.block_sparse_moe.experts.1.w2.weight": "model-00005-of-00014.safetensors", "model.layers.3.block_sparse_moe.experts.0.w2.weight": "model-00005-of-00014.safetensors", "model.layers.3.block_sparse_moe.experts.1.w2.weight": "model-00005-of-00014.safetensors", "model.layers.4.block_sparse_moe.experts.0.w2.weight": "model-00005-of-00014.safetensors", "model.layers.4.block_sparse_moe.experts.1.w2.weight": "model-00005-of-00014.safetensors", "model.layers.5.block_sparse_moe.experts.0.w2.weight": "model-00005-of-00014.safetensors", "model.layers.5.block_sparse_moe.experts.1.w2.weight": "model-00005-of-00014.safetensors", "model.layers.6.block_sparse_moe.experts.0.w2.weight": "model-00006-of-00014.safetensors", "model.layers.6.block_sparse_moe.experts.1.w2.weight": "model-00006-of-00014.safetensors", "model.layers.7.block_sparse_moe.experts.0.w2.weight": "model-00006-of-00014.safetensors", "model.layers.7.block_sparse_moe.experts.1.w2.weight": "model-00006-of-00014.safetensors", "model.layers.8.block_sparse_moe.experts.0.w2.weight": "model-00006-of-00014.safetensors", "model.layers.8.block_sparse_moe.experts.1.w2.weight": "model-00006-of-00014.safetensors", "model.layers.9.block_sparse_moe.experts.0.w2.weight": "model-00006-of-00014.safetensors", "model.layers.9.block_sparse_moe.experts.1.w2.weight": "model-00006-of-00014.safetensors", "model.layers.10.block_sparse_moe.experts.0.w2.weight": "model-00006-of-00014.safetensors", "model.layers.10.block_sparse_moe.experts.1.w2.weight": "model-00006-of-00014.safetensors", 
"model.layers.11.block_sparse_moe.experts.0.w2.weight": "model-00006-of-00014.safetensors", "model.layers.11.block_sparse_moe.experts.1.w2.weight": "model-00006-of-00014.safetensors", "model.layers.12.block_sparse_moe.experts.0.w2.weight": "model-00006-of-00014.safetensors", "model.layers.12.block_sparse_moe.experts.1.w2.weight": "model-00006-of-00014.safetensors", "model.layers.13.block_sparse_moe.experts.0.w2.weight": "model-00006-of-00014.safetensors", "model.layers.13.block_sparse_moe.experts.1.w2.weight": "model-00006-of-00014.safetensors", "model.layers.14.block_sparse_moe.experts.0.w2.weight": "model-00006-of-00014.safetensors", "model.layers.14.block_sparse_moe.experts.1.w2.weight": "model-00007-of-00014.safetensors", "model.layers.15.block_sparse_moe.experts.0.w2.weight": "model-00007-of-00014.safetensors", "model.layers.15.block_sparse_moe.experts.1.w2.weight": "model-00007-of-00014.safetensors", "model.layers.16.block_sparse_moe.experts.0.w2.weight": "model-00007-of-00014.safetensors", "model.layers.16.block_sparse_moe.experts.1.w2.weight": "model-00007-of-00014.safetensors", "model.layers.17.block_sparse_moe.experts.0.w2.weight": "model-00007-of-00014.safetensors", "model.layers.17.block_sparse_moe.experts.1.w2.weight": "model-00007-of-00014.safetensors", "model.layers.18.block_sparse_moe.experts.0.w2.weight": "model-00007-of-00014.safetensors", "model.layers.18.block_sparse_moe.experts.1.w2.weight": "model-00007-of-00014.safetensors", "model.layers.19.block_sparse_moe.experts.0.w2.weight": "model-00007-of-00014.safetensors", "model.layers.19.block_sparse_moe.experts.1.w2.weight": "model-00007-of-00014.safetensors", "model.layers.20.block_sparse_moe.experts.0.w2.weight": "model-00007-of-00014.safetensors", "model.layers.20.block_sparse_moe.experts.1.w2.weight": "model-00007-of-00014.safetensors", "model.layers.21.block_sparse_moe.experts.0.w2.weight": "model-00007-of-00014.safetensors", "model.layers.21.block_sparse_moe.experts.1.w2.weight": "model-00007-of-00014.safetensors", "model.layers.22.block_sparse_moe.experts.0.w2.weight": "model-00007-of-00014.safetensors", "model.layers.22.block_sparse_moe.experts.1.w2.weight": "model-00007-of-00014.safetensors", "model.layers.23.block_sparse_moe.experts.0.w2.weight": "model-00008-of-00014.safetensors", "model.layers.23.block_sparse_moe.experts.1.w2.weight": "model-00008-of-00014.safetensors", "model.layers.24.block_sparse_moe.experts.0.w2.weight": "model-00008-of-00014.safetensors", "model.layers.24.block_sparse_moe.experts.1.w2.weight": "model-00008-of-00014.safetensors", "model.layers.25.block_sparse_moe.experts.0.w2.weight": "model-00008-of-00014.safetensors", "model.layers.25.block_sparse_moe.experts.1.w2.weight": "model-00008-of-00014.safetensors", "model.layers.26.block_sparse_moe.experts.0.w2.weight": "model-00008-of-00014.safetensors", "model.layers.26.block_sparse_moe.experts.1.w2.weight": "model-00008-of-00014.safetensors", "model.layers.27.block_sparse_moe.experts.0.w2.weight": "model-00008-of-00014.safetensors", "model.layers.27.block_sparse_moe.experts.1.w2.weight": "model-00008-of-00014.safetensors", "model.layers.28.block_sparse_moe.experts.0.w2.weight": "model-00008-of-00014.safetensors", "model.layers.28.block_sparse_moe.experts.1.w2.weight": "model-00008-of-00014.safetensors", "model.layers.29.block_sparse_moe.experts.0.w2.weight": "model-00008-of-00014.safetensors", "model.layers.29.block_sparse_moe.experts.1.w2.weight": "model-00008-of-00014.safetensors", "model.layers.30.block_sparse_moe.experts.0.w2.weight": 
"model-00008-of-00014.safetensors", "model.layers.30.block_sparse_moe.experts.1.w2.weight": "model-00008-of-00014.safetensors", "model.layers.31.block_sparse_moe.experts.0.w2.weight": "model-00008-of-00014.safetensors", "model.layers.31.block_sparse_moe.experts.1.w2.weight": "model-00009-of-00014.safetensors", "model.layers.0.block_sparse_moe.experts.0.w1.weight": "model-00009-of-00014.safetensors", "model.layers.0.block_sparse_moe.experts.1.w1.weight": "model-00009-of-00014.safetensors", "model.layers.1.block_sparse_moe.experts.0.w1.weight": "model-00009-of-00014.safetensors", "model.layers.1.block_sparse_moe.experts.1.w1.weight": "model-00009-of-00014.safetensors", "model.layers.2.block_sparse_moe.experts.0.w1.weight": "model-00009-of-00014.safetensors", "model.layers.2.block_sparse_moe.experts.1.w1.weight": "model-00009-of-00014.safetensors", "model.layers.3.block_sparse_moe.experts.0.w1.weight": "model-00009-of-00014.safetensors", "model.layers.3.block_sparse_moe.experts.1.w1.weight": "model-00009-of-00014.safetensors", "model.layers.4.block_sparse_moe.experts.0.w1.weight": "model-00009-of-00014.safetensors", "model.layers.4.block_sparse_moe.experts.1.w1.weight": "model-00009-of-00014.safetensors", "model.layers.5.block_sparse_moe.experts.0.w1.weight": "model-00009-of-00014.safetensors", "model.layers.5.block_sparse_moe.experts.1.w1.weight": "model-00009-of-00014.safetensors", "model.layers.6.block_sparse_moe.experts.0.w1.weight": "model-00009-of-00014.safetensors", "model.layers.6.block_sparse_moe.experts.1.w1.weight": "model-00009-of-00014.safetensors", "model.layers.7.block_sparse_moe.experts.0.w1.weight": "model-00009-of-00014.safetensors", "model.layers.7.block_sparse_moe.experts.1.w1.weight": "model-00009-of-00014.safetensors", "model.layers.8.block_sparse_moe.experts.0.w1.weight": "model-00010-of-00014.safetensors", "model.layers.8.block_sparse_moe.experts.1.w1.weight": "model-00010-of-00014.safetensors", "model.layers.9.block_sparse_moe.experts.0.w1.weight": "model-00010-of-00014.safetensors", "model.layers.9.block_sparse_moe.experts.1.w1.weight": "model-00010-of-00014.safetensors", "model.layers.10.block_sparse_moe.experts.0.w1.weight": "model-00010-of-00014.safetensors", "model.layers.10.block_sparse_moe.experts.1.w1.weight": "model-00010-of-00014.safetensors", "model.layers.11.block_sparse_moe.experts.0.w1.weight": "model-00010-of-00014.safetensors", "model.layers.11.block_sparse_moe.experts.1.w1.weight": "model-00010-of-00014.safetensors", "model.layers.12.block_sparse_moe.experts.0.w1.weight": "model-00010-of-00014.safetensors", "model.layers.12.block_sparse_moe.experts.1.w1.weight": "model-00010-of-00014.safetensors", "model.layers.13.block_sparse_moe.experts.0.w1.weight": "model-00010-of-00014.safetensors", "model.layers.13.block_sparse_moe.experts.1.w1.weight": "model-00010-of-00014.safetensors", "model.layers.14.block_sparse_moe.experts.0.w1.weight": "model-00010-of-00014.safetensors", "model.layers.14.block_sparse_moe.experts.1.w1.weight": "model-00010-of-00014.safetensors", "model.layers.15.block_sparse_moe.experts.0.w1.weight": "model-00010-of-00014.safetensors", "model.layers.15.block_sparse_moe.experts.1.w1.weight": "model-00010-of-00014.safetensors", "model.layers.16.block_sparse_moe.experts.0.w1.weight": "model-00010-of-00014.safetensors", "model.layers.16.block_sparse_moe.experts.1.w1.weight": "model-00011-of-00014.safetensors", "model.layers.17.block_sparse_moe.experts.0.w1.weight": "model-00011-of-00014.safetensors", 
"model.layers.17.block_sparse_moe.experts.1.w1.weight": "model-00011-of-00014.safetensors", "model.layers.18.block_sparse_moe.experts.0.w1.weight": "model-00011-of-00014.safetensors", "model.layers.18.block_sparse_moe.experts.1.w1.weight": "model-00011-of-00014.safetensors", "model.layers.19.block_sparse_moe.experts.0.w1.weight": "model-00011-of-00014.safetensors", "model.layers.19.block_sparse_moe.experts.1.w1.weight": "model-00011-of-00014.safetensors", "model.layers.20.block_sparse_moe.experts.0.w1.weight": "model-00011-of-00014.safetensors", "model.layers.20.block_sparse_moe.experts.1.w1.weight": "model-00011-of-00014.safetensors", "model.layers.21.block_sparse_moe.experts.0.w1.weight": "model-00011-of-00014.safetensors", "model.layers.21.block_sparse_moe.experts.1.w1.weight": "model-00011-of-00014.safetensors", "model.layers.22.block_sparse_moe.experts.0.w1.weight": "model-00011-of-00014.safetensors", "model.layers.22.block_sparse_moe.experts.1.w1.weight": "model-00011-of-00014.safetensors", "model.layers.23.block_sparse_moe.experts.0.w1.weight": "model-00011-of-00014.safetensors", "model.layers.23.block_sparse_moe.experts.1.w1.weight": "model-00011-of-00014.safetensors", "model.layers.24.block_sparse_moe.experts.0.w1.weight": "model-00011-of-00014.safetensors", "model.layers.24.block_sparse_moe.experts.1.w1.weight": "model-00011-of-00014.safetensors", "model.layers.25.block_sparse_moe.experts.0.w1.weight": "model-00012-of-00014.safetensors", "model.layers.25.block_sparse_moe.experts.1.w1.weight": "model-00012-of-00014.safetensors", "model.layers.26.block_sparse_moe.experts.0.w1.weight": "model-00012-of-00014.safetensors", "model.layers.26.block_sparse_moe.experts.1.w1.weight": "model-00012-of-00014.safetensors", "model.layers.27.block_sparse_moe.experts.0.w1.weight": "model-00012-of-00014.safetensors", "model.layers.27.block_sparse_moe.experts.1.w1.weight": "model-00012-of-00014.safetensors", "model.layers.28.block_sparse_moe.experts.0.w1.weight": "model-00012-of-00014.safetensors", "model.layers.28.block_sparse_moe.experts.1.w1.weight": "model-00012-of-00014.safetensors", "model.layers.29.block_sparse_moe.experts.0.w1.weight": "model-00012-of-00014.safetensors", "model.layers.29.block_sparse_moe.experts.1.w1.weight": "model-00012-of-00014.safetensors", "model.layers.30.block_sparse_moe.experts.0.w1.weight": "model-00012-of-00014.safetensors", "model.layers.30.block_sparse_moe.experts.1.w1.weight": "model-00012-of-00014.safetensors", "model.layers.31.block_sparse_moe.experts.0.w1.weight": "model-00012-of-00014.safetensors", "model.layers.31.block_sparse_moe.experts.1.w1.weight": "model-00012-of-00014.safetensors", "model.layers.0.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.1.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.2.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.3.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.4.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.5.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.6.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.7.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.8.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.9.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", 
"model.layers.10.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.11.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.12.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.13.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.14.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.15.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.16.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.17.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.18.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.19.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.20.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.21.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.22.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.23.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.24.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.25.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.26.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.27.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.28.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.29.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.30.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.31.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.0.self_attn.q_proj.weight": "model-00012-of-00014.safetensors", "model.layers.1.self_attn.q_proj.weight": "model-00012-of-00014.safetensors", "model.layers.2.self_attn.q_proj.weight": "model-00012-of-00014.safetensors", "model.layers.3.self_attn.q_proj.weight": "model-00012-of-00014.safetensors", "model.layers.4.self_attn.q_proj.weight": "model-00012-of-00014.safetensors", "model.layers.5.self_attn.q_proj.weight": "model-00012-of-00014.safetensors", "model.layers.6.self_attn.q_proj.weight": "model-00012-of-00014.safetensors", "model.layers.7.self_attn.q_proj.weight": "model-00012-of-00014.safetensors", "model.layers.8.self_attn.q_proj.weight": "model-00012-of-00014.safetensors", "model.layers.9.self_attn.q_proj.weight": "model-00012-of-00014.safetensors", "model.layers.10.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.11.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.12.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.13.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.14.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.15.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.16.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.17.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.18.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.19.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.20.self_attn.q_proj.weight": 
"model-00013-of-00014.safetensors", "model.layers.21.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.22.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.23.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.24.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.25.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.26.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.27.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.28.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.29.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.30.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.31.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.0.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.1.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.2.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.3.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.4.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.5.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.6.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.7.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.8.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.9.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.10.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.11.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.12.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.13.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.14.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.15.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.16.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.17.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.18.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.19.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.20.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.21.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.22.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.23.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.24.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.25.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.26.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.27.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.28.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.29.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.30.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.31.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.0.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.1.self_attn.v_proj.weight": 
"model-00013-of-00014.safetensors", "model.layers.2.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.3.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.4.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.5.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.6.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.7.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.8.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.9.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.10.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.11.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.12.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.13.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.14.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.15.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.16.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.17.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.18.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.19.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.20.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.21.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.22.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.23.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.24.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.25.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.26.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.27.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.28.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.29.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.30.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.31.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.0.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.1.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.2.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.3.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.4.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.5.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.6.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.7.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.8.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.9.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.10.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.11.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.12.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.13.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.14.self_attn.o_proj.weight": 
"model-00013-of-00014.safetensors", "model.layers.15.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.16.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.17.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.18.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.19.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.20.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.21.self_attn.o_proj.weight": "model-00014-of-00014.safetensors", "model.layers.22.self_attn.o_proj.weight": "model-00014-of-00014.safetensors", "model.layers.23.self_attn.o_proj.weight": "model-00014-of-00014.safetensors", "model.layers.24.self_attn.o_proj.weight": "model-00014-of-00014.safetensors", "model.layers.25.self_attn.o_proj.weight": "model-00014-of-00014.safetensors", "model.layers.26.self_attn.o_proj.weight": "model-00014-of-00014.safetensors", "model.layers.27.self_attn.o_proj.weight": "model-00014-of-00014.safetensors", "model.layers.28.self_attn.o_proj.weight": "model-00014-of-00014.safetensors", "model.layers.29.self_attn.o_proj.weight": "model-00014-of-00014.safetensors", "model.layers.30.self_attn.o_proj.weight": "model-00014-of-00014.safetensors", "model.layers.31.self_attn.o_proj.weight": "model-00014-of-00014.safetensors", "model.layers.0.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.1.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.2.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.3.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.4.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.5.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.6.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.7.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.8.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.9.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.10.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.11.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.12.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.13.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.14.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.15.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.16.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.17.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.18.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.19.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.20.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.21.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.22.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.23.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.24.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.25.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", 
"model.layers.26.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.27.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.28.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.29.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.30.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors", "model.layers.31.block_sparse_moe.gate.weight": "model-00014-of-00014.safetensors"}}
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<s>",
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9298e56c094f0d30431b0e52ad53287f0cadc99ac8ca17cc2144b0eb4753f130
+ size 911034
tokenizer_config.json ADDED
@@ -0,0 +1,60 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "61873": {
+       "content": "<EOD>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "61874": {
+       "content": "<PAD>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'].strip() %}{% else %}{% set loop_messages = messages %}{% set system_message = 'You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan.' %}{% endif %}{{ bos_token }}{{ system_message }} {% for message in loop_messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/... or system/user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST] ' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + ' ' }}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": true,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "<s>",
+   "padding_side": "left",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "split_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false
+ }
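The `chat_template` above is where the Breeze-style prompt format lives: when no system message is supplied it falls back to the MediaTek Research system prompt and wraps each user turn in `[INST] ... [/INST]`. The sketch below renders an illustrative conversation through the standard `apply_chat_template` API so the resulting prompt string can be inspected; the message content is an example, not part of this commit.

```python
# Hedged sketch: render the chat template from tokenizer_config.json on an
# example conversation to see the default system prompt and [INST] wrapping.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("lipcut/shizhi-twilight-7B")
messages = [{"role": "user", "content": "你好,請簡單自我介紹。"}]
print(tok.apply_chat_template(messages, tokenize=False))
```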