sam-ezai committed
Commit e50396a
1 parent: 2cce408

Upload folder using huggingface_hub

Files changed (1): README.md added (+55 lines)

---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- MediaTek-Research/Breeze-7B-Instruct-v0_1
- Azure99/blossom-v4-mistral-7b
base_model:
- MediaTek-Research/Breeze-7B-Instruct-v0_1
- Azure99/blossom-v4-mistral-7b
---

# Breezeblossom-v4-mistral-2x7B

Breezeblossom-v4-mistral-2x7B is a Mixture of Experts (MoE) built from the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [MediaTek-Research/Breeze-7B-Instruct-v0_1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1)
* [Azure99/blossom-v4-mistral-7b](https://huggingface.co/Azure99/blossom-v4-mistral-7b)

## 🧩 Configuration

```yaml
base_model: MediaTek-Research/Breeze-7B-Instruct-v0_1
gate_mode: hidden
dtype: float16
experts:
  - source_model: MediaTek-Research/Breeze-7B-Instruct-v0_1
    # Prompt translation: "Hello, what tasks can you complete?"
    positive_prompts: ["<s>You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan. [INST] 你好,請問你可以完成什麼任務? [/INST] "]
  - source_model: Azure99/blossom-v4-mistral-7b
    positive_prompts: ["A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions. \n|Human|: hello\n|Bot|: "]
```
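
With `gate_mode: hidden`, mergekit initializes each expert's router weights from hidden-state representations of its `positive_prompts`, so inputs resembling an expert's prompt are preferentially routed to that expert. To reproduce the merge locally, here is a minimal sketch, not the exact command this model was built with (assumptions: mergekit installed via `pip install mergekit`, the YAML above saved as `config.yaml`, and an arbitrary output directory name):

```python
# Minimal sketch: invoke mergekit's MoE CLI from Python to build the checkpoint.
# Assumes `pip install mergekit` and the configuration above saved as config.yaml.
import subprocess

subprocess.run(
    ["mergekit-moe", "config.yaml", "merge"],  # writes the merged model to ./merge
    check=True,
)
```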

## 💻 Usage

```python
# In a notebook: !pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch

model = "sam-ezai/Breezeblossom-v4-mistral-2x7B"

# Load the tokenizer and a text-generation pipeline, with the model in
# float16 and quantized to 4-bit via bitsandbytes to fit on smaller GPUs.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the request with the model's chat template, then sample a response.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
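
Note: on recent `transformers` releases, passing `load_in_4bit` directly through `model_kwargs` is deprecated in favor of an explicit `BitsAndBytesConfig`. An equivalent sketch under that assumption (still requires `bitsandbytes`):

```python
# Same 4-bit pipeline, expressed with an explicit quantization config
# (the newer transformers API) instead of the bare load_in_4bit kwarg.
import torch
import transformers
from transformers import BitsAndBytesConfig

pipeline = transformers.pipeline(
    "text-generation",
    model="sam-ezai/Breezeblossom-v4-mistral-2x7B",
    model_kwargs={
        "torch_dtype": torch.float16,
        "quantization_config": BitsAndBytesConfig(load_in_4bit=True),
    },
)
```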