Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Taiwan-LLM-7B-v2.0-chat - GGUF
- Model creator: https://huggingface.co/yentinglin/
- Original model: https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.0-chat/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Taiwan-LLM-7B-v2.0-chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.Q2_K.gguf) | Q2_K | 2.36GB |
| [Taiwan-LLM-7B-v2.0-chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [Taiwan-LLM-7B-v2.0-chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [Taiwan-LLM-7B-v2.0-chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [Taiwan-LLM-7B-v2.0-chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [Taiwan-LLM-7B-v2.0-chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.Q3_K.gguf) | Q3_K | 3.07GB |
| [Taiwan-LLM-7B-v2.0-chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [Taiwan-LLM-7B-v2.0-chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [Taiwan-LLM-7B-v2.0-chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [Taiwan-LLM-7B-v2.0-chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.Q4_0.gguf) | Q4_0 | 3.56GB |
| [Taiwan-LLM-7B-v2.0-chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [Taiwan-LLM-7B-v2.0-chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [Taiwan-LLM-7B-v2.0-chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.Q4_K.gguf) | Q4_K | 3.8GB |
| [Taiwan-LLM-7B-v2.0-chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [Taiwan-LLM-7B-v2.0-chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.Q4_1.gguf) | Q4_1 | 3.95GB |
| [Taiwan-LLM-7B-v2.0-chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.Q5_0.gguf) | Q5_0 | 4.33GB |
| [Taiwan-LLM-7B-v2.0-chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [Taiwan-LLM-7B-v2.0-chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.Q5_K.gguf) | Q5_K | 4.45GB |
| [Taiwan-LLM-7B-v2.0-chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [Taiwan-LLM-7B-v2.0-chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.Q5_1.gguf) | Q5_1 | 4.72GB |
| [Taiwan-LLM-7B-v2.0-chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.Q6_K.gguf) | Q6_K | 5.15GB |
| [Taiwan-LLM-7B-v2.0-chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/yentinglin_-_Taiwan-LLM-7B-v2.0-chat-gguf/blob/main/Taiwan-LLM-7B-v2.0-chat.Q8_0.gguf) | Q8_0 | 6.67GB |


Original model description:

---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
license: apache-2.0
language:
- zh
widget:
  - text: >-
      A chat between a curious user and an artificial intelligence assistant.
      The assistant gives helpful, detailed, and polite answers to the user's
      questions. USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Acknowledge license to accept the repository.
extra_gated_prompt: Please contact the author for access.
extra_gated_button_content: Acknowledge license 同意以上內容
extra_gated_fields:
  Name: text
  Mail: text
  Organization: text
  Country: text
  Any utilization of the Taiwan LLM repository mandates the explicit acknowledgment and attribution to the original author: checkbox
  使用Taiwan LLM必須明確地承認和歸功於優必達株式會社 Ubitus 以及原始作者: checkbox
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/CmusIT5OlSXvFrbTJ7l-C.png" alt="Taiwan LLM Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

# 🌟 Check out the [Taiwan-LLM Demo Chat-UI](http://www.twllm.com) 🌟

# Model Card for Taiwan LLM 7B v2.0 chat

Taiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan.
Developed from a large base model, it's enriched with diverse Taiwanese textual sources and refined through Supervised Fine-Tuning.
This model excels in language understanding and generation, aligning closely with Taiwan's cultural nuances.
It demonstrates improved performance on various benchmarks like TC-Eval, showcasing its contextual comprehension and cultural relevance.
For detailed insights into Taiwan LLM's development and features, refer to our [technical report](https://github.com/MiuLab/Taiwan-LLaMa/blob/main/twllm_paper.pdf).


## Model description

- **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily Traditional Chinese (zh-tw)
- **Finetuned from model:** [yentinglin/Taiwan-LLM-7B-v2.0-base](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.0-base)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/MiuLab/Taiwan-LLaMa
- **Demo:** https://twllm.com/

## Performance

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/HTwIzw6RDha2-PhuWqSuI.png)

## Intended uses

Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:

```python
# pip install transformers>=4.34
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="yentinglin/Taiwan-LLM-7B-v2.0-chat", torch_dtype=torch.bfloat16, device_map="auto")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {
        "role": "system",
        "content": "你是一個人工智慧助理",  # "You are an AI assistant"
    },
    {"role": "user", "content": "東北季風如何影響台灣氣候?"},  # "How does the northeast monsoon affect Taiwan's climate?"
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
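The widget example in the metadata above suggests a Vicuna-style prompt layout (`...USER: ... ASSISTANT:`). As an illustrative sketch only, the following builds such a prompt by hand; the format is inferred from that widget, not confirmed against `tokenizer.chat_template`, which remains the authoritative source:

```python
# Default system prompt taken from the widget example in the metadata.
DEFAULT_SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions."
)

def build_prompt(messages):
    """Assemble a Vicuna-style prompt from chat messages.
    NOTE: assumed format; prefer tokenizer.apply_chat_template in practice."""
    system = DEFAULT_SYSTEM
    turns = []
    for m in messages:
        if m["role"] == "system":
            system = m["content"]
        elif m["role"] == "user":
            turns.append(f"USER: {m['content']}")
        elif m["role"] == "assistant":
            turns.append(f"ASSISTANT: {m['content']}")
    # Trailing "ASSISTANT:" cues the model to generate the reply.
    return system + " " + " ".join(turns) + " ASSISTANT:"
```

This can be handy when calling the model through a runtime that does not ship the tokenizer's chat template, but verify the output against `apply_chat_template` before relying on it.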

### Training hyperparameters

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/MdvHwdUvH-c926qyRAw7K.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/kKpkvxDzOEyiAoTqmzRYO.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/FsnlJ_fkRxf7fn5RKZnjE.png)

The following hyperparameters were used during training:
- learning_rate: 5e-05
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5.0
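The cosine schedule with warmup listed above can be sketched as a simple function of the step index. This is a minimal illustration of the shape (linear warmup over the first 3% of steps, then cosine decay to zero); the canonical implementation is `transformers.get_cosine_schedule_with_warmup`, and `total_steps` here is a placeholder:

```python
import math

def lr_at(step, total_steps, base_lr=5e-5, warmup_ratio=0.03):
    """Learning rate at a given step: linear warmup, then cosine decay.
    Illustrative sketch of the schedule named in the hyperparameters."""
    warmup = max(1, int(total_steps * warmup_ratio))
    if step < warmup:
        return base_lr * step / warmup          # linear ramp from 0 to base_lr
    progress = (step - warmup) / max(1, total_steps - warmup)
    return base_lr * 0.5 * (1 + math.cos(math.pi * progress))  # decay to 0
```

With `total_steps=1000`, the rate ramps to 5e-05 by step 30 and decays back to zero at the final step.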

## Citation

If you find Taiwan LLM useful in your work, please cite it with:

```
@misc{lin2023taiwan,
      title={Taiwan LLM: Bridging the Linguistic Divide with a Culturally Aligned Language Model},
      author={Yen-Ting Lin and Yun-Nung Chen},
      year={2023},
      eprint={2311.17487},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

# Acknowledgement

Taiwan LLM v2 was developed in collaboration with [Ubitus K.K.](http://ubitus.net), which provided valuable compute resources for the project.