BlackBeenie committed
Commit 84f6408
1 Parent(s): 1bc7469

Upload folder using huggingface_hub

Files changed (1):
  1. README.md +110 -0

README.md ADDED
@@ -0,0 +1,110 @@
---
base_model:
- nbeerbower/llama-3-stella-8B
- defog/llama-3-sqlcoder-8b
- nbeerbower/llama-3-gutenberg-8B
- openchat/openchat-3.6-8b-20240522
- Kukedlc/NeuralLLaMa-3-8b-DT-v0.1
- cstr/llama3-8b-spaetzle-v20
- mlabonne/ChimeraLlama-3-8B-v3
- flammenai/Mahou-1.1-llama3-8B
- KingNish/KingNish-Llama3-8b
tags:
- merge
- mergekit
- lazymergekit
- nbeerbower/llama-3-stella-8B
- defog/llama-3-sqlcoder-8b
- nbeerbower/llama-3-gutenberg-8B
- openchat/openchat-3.6-8b-20240522
- Kukedlc/NeuralLLaMa-3-8b-DT-v0.1
- cstr/llama3-8b-spaetzle-v20
- mlabonne/ChimeraLlama-3-8B-v3
- flammenai/Mahou-1.1-llama3-8B
- KingNish/KingNish-Llama3-8b
---

# llama-3-luminous-merged

llama-3-luminous-merged is a merge of the following models, built with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [nbeerbower/llama-3-stella-8B](https://huggingface.co/nbeerbower/llama-3-stella-8B)
* [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b)
* [nbeerbower/llama-3-gutenberg-8B](https://huggingface.co/nbeerbower/llama-3-gutenberg-8B)
* [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522)
* [Kukedlc/NeuralLLaMa-3-8b-DT-v0.1](https://huggingface.co/Kukedlc/NeuralLLaMa-3-8b-DT-v0.1)
* [cstr/llama3-8b-spaetzle-v20](https://huggingface.co/cstr/llama3-8b-spaetzle-v20)
* [mlabonne/ChimeraLlama-3-8B-v3](https://huggingface.co/mlabonne/ChimeraLlama-3-8B-v3)
* [flammenai/Mahou-1.1-llama3-8B](https://huggingface.co/flammenai/Mahou-1.1-llama3-8B)
* [KingNish/KingNish-Llama3-8b](https://huggingface.co/KingNish/KingNish-Llama3-8b)

## 🧩 Configuration

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # No parameters necessary for base model
  - model: nbeerbower/llama-3-stella-8B
    parameters:
      density: 0.6
      weight: 0.16
  - model: defog/llama-3-sqlcoder-8b
    parameters:
      density: 0.56
      weight: 0.1
  - model: nbeerbower/llama-3-gutenberg-8B
    parameters:
      density: 0.6
      weight: 0.18
  - model: openchat/openchat-3.6-8b-20240522
    parameters:
      density: 0.56
      weight: 0.13
  - model: Kukedlc/NeuralLLaMa-3-8b-DT-v0.1
    parameters:
      density: 0.58
      weight: 0.18
  - model: cstr/llama3-8b-spaetzle-v20
    parameters:
      density: 0.56
      weight: 0.08
  - model: mlabonne/ChimeraLlama-3-8B-v3
    parameters:
      density: 0.56
      weight: 0.07
  - model: flammenai/Mahou-1.1-llama3-8B
    parameters:
      density: 0.55
      weight: 0.05
  - model: KingNish/KingNish-Llama3-8b
    parameters:
      density: 0.55
      weight: 0.05
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
dtype: bfloat16
```
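
In a DARE-TIES merge, `density` sets the fraction of each donor's task vector that survives random pruning, and `weight` scales its contribution to the final mix. As a quick sanity check, the nine weights in the config above sum to exactly 1.0 (a common convention, though mergekit does not require it). A minimal sketch:

```python
import math

# Per-model merge weights copied from the YAML config above
weights = {
    "nbeerbower/llama-3-stella-8B": 0.16,
    "defog/llama-3-sqlcoder-8b": 0.10,
    "nbeerbower/llama-3-gutenberg-8B": 0.18,
    "openchat/openchat-3.6-8b-20240522": 0.13,
    "Kukedlc/NeuralLLaMa-3-8b-DT-v0.1": 0.18,
    "cstr/llama3-8b-spaetzle-v20": 0.08,
    "mlabonne/ChimeraLlama-3-8B-v3": 0.07,
    "flammenai/Mahou-1.1-llama3-8B": 0.05,
    "KingNish/KingNish-Llama3-8b": 0.05,
}

total = sum(weights.values())
assert math.isclose(total, 1.0), f"weights sum to {total}, not 1.0"
```

The heaviest contributors are llama-3-gutenberg-8B and NeuralLLaMa-3-8b-DT-v0.1 (0.18 each), so their behavior should dominate the merged model.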
## 💻 Usage

```python
# Install dependencies first (in a notebook: !pip install -qU transformers accelerate)
from transformers import AutoTokenizer
import transformers
import torch

model = "BlackBeenie/llama-3-luminous-merged"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
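
For reference, `apply_chat_template` above renders the messages into Llama 3's prompt format. Assuming this merge inherits Meta-Llama-3's chat template from its base model, the rendered prompt looks roughly like the sketch below (illustrative only; the tokenizer call is the authoritative path):

```python
def render_llama3_prompt(messages):
    """Sketch of Llama 3's chat prompt format for a list of
    {'role', 'content'} dicts, with an open assistant header at the
    end (what add_generation_prompt=True produces)."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Leave the assistant header open so the model continues from here
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = render_llama3_prompt(
    [{"role": "user", "content": "What is a large language model?"}]
)
```

The special tokens (`<|begin_of_text|>`, `<|start_header_id|>`, `<|eot_id|>`) are part of the Llama 3 tokenizer's vocabulary, which is why the template-rendered string round-trips cleanly through tokenization.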