RichardErkhov committed on
Commit
03a8ac9
1 Parent(s): 09b4b97

uploaded readme

Files changed (1): README.md (+90 -0)
README.md ADDED
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Mistral-10.7B-v0.2 - GGUF
- Model creator: https://huggingface.co/Joseph717171/
- Original model: https://huggingface.co/Joseph717171/Mistral-10.7B-v0.2/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mistral-10.7B-v0.2.Q2_K.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.Q2_K.gguf) | Q2_K | 3.73GB |
| [Mistral-10.7B-v0.2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.IQ3_XS.gguf) | IQ3_XS | 4.14GB |
| [Mistral-10.7B-v0.2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.IQ3_S.gguf) | IQ3_S | 4.37GB |
| [Mistral-10.7B-v0.2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [Mistral-10.7B-v0.2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.IQ3_M.gguf) | IQ3_M | 4.51GB |
| [Mistral-10.7B-v0.2.Q3_K.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.Q3_K.gguf) | Q3_K | 4.84GB |
| [Mistral-10.7B-v0.2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [Mistral-10.7B-v0.2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [Mistral-10.7B-v0.2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [Mistral-10.7B-v0.2.Q4_0.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.Q4_0.gguf) | Q4_0 | 5.66GB |
| [Mistral-10.7B-v0.2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [Mistral-10.7B-v0.2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [Mistral-10.7B-v0.2.Q4_K.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.Q4_K.gguf) | Q4_K | 6.02GB |
| [Mistral-10.7B-v0.2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [Mistral-10.7B-v0.2.Q4_1.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.Q4_1.gguf) | Q4_1 | 6.27GB |
| [Mistral-10.7B-v0.2.Q5_0.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.Q5_0.gguf) | Q5_0 | 6.89GB |
| [Mistral-10.7B-v0.2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [Mistral-10.7B-v0.2.Q5_K.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.Q5_K.gguf) | Q5_K | 7.08GB |
| [Mistral-10.7B-v0.2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [Mistral-10.7B-v0.2.Q5_1.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.Q5_1.gguf) | Q5_1 | 7.51GB |
| [Mistral-10.7B-v0.2.Q6_K.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.Q6_K.gguf) | Q6_K | 8.2GB |
| [Mistral-10.7B-v0.2.Q8_0.gguf](https://huggingface.co/RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf/blob/main/Mistral-10.7B-v0.2.Q8_0.gguf) | Q8_0 | 10.62GB |
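
As a minimal usage sketch (assuming `huggingface_hub` and `llama-cpp-python` are installed; the chosen file and parameter values are illustrative), the Q4_K_M quant from the table can be downloaded and run like this:

```python
# Minimal sketch: fetch one quant from this repo and run it locally.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M file listed in the table above.
model_path = hf_hub_download(
    repo_id="RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf",
    filename="Mistral-10.7B-v0.2.Q4_K_M.gguf",
)

# Context size and GPU offload are illustrative; tune for your hardware.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# This is a base (non-instruct) model, so use plain text completion.
out = llm("The SOLAR paper describes depth up-scaling as", max_tokens=128)
print(out["choices"][0]["text"])
```
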
Original model description:
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# Credit for the model card's description goes to ddh0 and mergekit
# Looking for [Mistral-10.7B-Instruct-v0.2](https://huggingface.co/ddh0/Mistral-10.7B-Instruct-v0.2)?
# Credit for access and conversion of Mistral-7B-v0.2 goes to alpindale (from MistralAI's weights to HF Transformers)
# Mistral-10.7B-v0.2
This is Mistral-10.7B-v0.2, a depth-upscaled version of [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf).

This model is intended to be used as a basis for further fine-tuning, or as a drop-in upgrade from the original 7-billion-parameter model.

Paper detailing how depth up-scaling works: [SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling](https://arxiv.org/abs/2312.15166)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit); a sketch of how the merge could be reproduced appears at the end of this card.

## Merge Details
### Merge Method

This model was merged using the passthrough merge method.

### Models Merged

The following models were included in the merge:
* /Users/jsarnecki/opt/Workspace/alpindale/Mistral-7B-v0.2-hf

### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 24]
    model: /Users/jsarnecki/opt/Workspace/alpindale/Mistral-7B-v0.2-hf
- sources:
  - layer_range: [8, 32]
    model: /Users/jsarnecki/opt/Workspace/alpindale/Mistral-7B-v0.2-hf
```
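
A minimal sketch of the layer arithmetic implied by this config (assuming mergekit's `layer_range` is half-open, so `[0, 24]` selects layers 0 through 23):

```python
# Depth-up-scaling arithmetic implied by the config above.
# Assumption: layer_range [a, b] is half-open, i.e. layers a..b-1.
slice_a = list(range(0, 24))  # first slice: layers 0..23 of the base model
slice_b = list(range(8, 32))  # second slice: layers 8..31 of the base model

merged = slice_a + slice_b    # passthrough concatenates slices verbatim
print(len(merged))            # 48 layers, up from the base model's 32

# Layers 8..23 appear twice; this duplication grows ~7B params to ~10.7B.
overlap = sorted(set(slice_a) & set(slice_b))
print(overlap)                # [8, 9, ..., 23]
```

The resulting 48-layer stack matches the SOLAR 10.7B recipe linked above.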
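
The merge itself can be re-run by feeding the same configuration to mergekit. A minimal sketch, assuming `mergekit` and `pyyaml` are installed and substituting the public Hub ID `alpindale/Mistral-7B-v0.2-hf` for the local paths; the API names follow mergekit's documented Python interface and may differ across versions:

```python
# Minimal sketch: reproduce the passthrough merge with mergekit.
# Assumes: pip install mergekit pyyaml  (API may vary by mergekit version)
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG = """
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 24]
    model: alpindale/Mistral-7B-v0.2-hf  # Hub ID in place of the local path
- sources:
  - layer_range: [8, 32]
    model: alpindale/Mistral-7B-v0.2-hf
"""

merge_config = MergeConfiguration.model_validate(yaml.safe_load(CONFIG))
run_merge(
    merge_config,
    "./Mistral-10.7B-v0.2",  # output directory (illustrative)
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```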