---
license: apache-2.0
tags:
- generated_from_trainer
base_model: yanolja/EEVE-Korean-2.8B-v1.0
---
<p align="left">
<img src="https://huggingface.co/crimsonjoo/Neversleep-3B-Instruct-v0.1/resolve/main/neversleep_logo.webp" width="100%"/>
</p>
# "We must sleep, but AI Never Sleeps!"
&nbsp;
## Prompt Template
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: {prompt}
Assistant:
```
## Simple Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load this card's instruct model (trust_remote_code is required by the phi-2-based architecture).
model = AutoModelForCausalLM.from_pretrained("crimsonjoo/Neversleep-3B-Instruct-v0.1", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("crimsonjoo/Neversleep-3B-Instruct-v0.1", trust_remote_code=True)

prompt_template = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\nHuman: {prompt}\nAssistant:\n"
# "Please recommend a diet menu.\n\n(A) Salad\n(B) Chicken\n(C) Pizza\n(D) Pasta"
text = 'λ‹€μ΄μ–΄νŠΈμ‹ 메뉴λ₯Ό μΆ”μ²œν•΄μ£Όμ„Έμš”.\n\n(A) μƒλŸ¬λ“œ\n(B) μΉ˜ν‚¨\n(C) ν”Όμž\n(D) νŒŒμŠ€νƒ€'

model_inputs = tokenizer(prompt_template.format(prompt=text), return_tensors='pt')
outputs = model.generate(**model_inputs, max_new_tokens=256)
output_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(output_text)
```
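For interactive use, the same generation call can stream tokens as they are produced. A minimal sketch, assuming the `model`, `tokenizer`, `prompt_template`, and `text` from above are already defined (`TextStreamer` is part of `transformers`):
```python
from transformers import TextStreamer

# Print decoded tokens to stdout as they are generated,
# rather than waiting for the full completion.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model_inputs = tokenizer(prompt_template.format(prompt=text), return_tensors='pt')
_ = model.generate(**model_inputs, streamer=streamer, max_new_tokens=256)
```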
### Example Output
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: λ‹€μ΄μ–΄νŠΈμ‹ 메뉴λ₯Ό μΆ”μ²œν•΄μ£Όμ„Έμš”.
(A) μƒλŸ¬λ“œ
(B) μΉ˜ν‚¨
(C) ν”Όμž
(D) νŒŒμŠ€νƒ€
Assistant:
(A) μƒλŸ¬λ“œλ₯Ό μΆ”μ²œλ“œλ¦½λ‹ˆλ‹€. μƒλŸ¬λ“œλŠ” μ €μΉΌλ‘œλ¦¬μ΄λ©΄μ„œλ„ μ˜μ–‘μ†Œκ°€ 풍뢀해 λ‹€μ΄μ–΄νŠΈμ‹μœΌλ‘œ μ ν•©ν•©λ‹ˆλ‹€. λ‹€μ–‘ν•œ μ±„μ†Œμ™€ λ‹¨λ°±μ§ˆμ„ μΆ”κ°€ν•˜μ—¬ κ· ν˜• 작힌 식사λ₯Ό λ§Œλ“œμ‹€ 수 μžˆμŠ΅λ‹ˆλ‹€.
```
(English gist: the assistant recommends (A) salad, since it is low in calories yet nutritious and therefore well suited to a diet, and suggests adding vegetables and protein for a balanced meal.)
## About the Model
First of all, overwhelming gratitude to the yanolja/EEVE model and team!
This model is a fine-tuned version of [crimsonjoo/Neversleep-3B-v0.1](https://huggingface.co/crimsonjoo/Neversleep-3B-v0.1), which is a Korean vocabulary-extended version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2). Specifically, we applied Direct Preference Optimization (DPO) using [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
For more details, please refer to our technical report: [Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models](https://arxiv.org/abs/2402.14714).
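For reference, DPO fine-tunes the policy directly on preference pairs; the standard objective (from the original DPO paper, not a detail specific to this model) is:

$$
\mathcal{L}_{\text{DPO}}(\pi_\theta; \pi_{\text{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)} \right) \right]
$$

where \\(y_w\\) and \\(y_l\\) are the chosen and rejected responses for prompt \\(x\\), \\(\pi_{\text{ref}}\\) is the frozen reference model, and \\(\beta\\) controls how far the policy may drift from the reference.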
## Training Data
- Korean-translated version of [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
- Korean-translated version of [argilla/ultrafeedback-binarized-preferences-cleaned](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned)
- No other dataset was used
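The Korean-translated versions used for training are not published in this repository, so as an illustrative stand-in, here is a minimal sketch that loads the English source datasets with the `datasets` library:
```python
from datasets import load_dataset

# English originals of the two training sources; the Korean
# translations actually used for training are not released here.
sft_data = load_dataset("Open-Orca/SlimOrca-Dedup", split="train")
dpo_data = load_dataset("argilla/ultrafeedback-binarized-preferences-cleaned", split="train")

print(sft_data[0])  # one deduplicated SFT conversation
print(dpo_data[0])  # one cleaned (chosen, rejected) preference pair
```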