Undi95 committed on commit 6ce0b33 (parent: 2bfb3d2)

Create README.md

Files changed (1): README.md (+84, -0)
---
base_model:
- NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3
- rombodawg/Open_Gpt4_8x7B_v0.2
- mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- mergekit
- merge
- not-for-all-audiences
- nsfw
license: cc-by-nc-4.0
---

<!-- description start -->
## Description

This repo contains GGUF files of NoromaidxOpenGPT4-1.

The model was created by merging Noromaid-8x7b-Instruct with Open_Gpt4_8x7B_v0.2 in the exact same way [Rombodawg](https://huggingface.co/rombodawg) did his merge.

The only difference between [NoromaidxOpenGPT4-1](https://huggingface.co/NeverSleep/NoromaidxOpenGPT4-1-GGUF-iMatrix/) and [NoromaidxOpenGPT4-2](https://huggingface.co/NeverSleep/NoromaidxOpenGPT4-2-GGUF-iMatrix/) is that the first iteration uses Mixtral-8x7B as the base for the merge (f16), while the second uses Open_Gpt4_8x7B_v0.2 as the base (bf16).

After further testing and usage, both models were released, because each has its own qualities.

You can download the imatrix file [HERE](https://huggingface.co/NeverSleep/NoromaidxOpenGPT4-1/blob/main/imatrix-1.dat) to produce many other quants.
<!-- description end -->
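If you want to re-quantize the model yourself, the imatrix file linked above can be fetched programmatically. The snippet below is only a minimal sketch, not part of the original card: it assumes you have `huggingface_hub` installed and that the file stays at the linked location. The resulting local path can then be passed to llama.cpp's quantization tool (e.g. via its `--imatrix` option).

```python
# Minimal sketch (assumption, not from the original card): fetch the provided
# imatrix file so it can be fed to llama.cpp's quantization tool.
from huggingface_hub import hf_hub_download

# Repo and filename taken from the link above; adjust if the file is moved.
imatrix_path = hf_hub_download(
    repo_id="NeverSleep/NoromaidxOpenGPT4-1",
    filename="imatrix-1.dat",
)

# Local path you can pass to e.g. `llama-quantize --imatrix <path> ...`
print(imatrix_path)
```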
<!-- prompt-template start -->
### Prompt template:

## Alpaca

```
### Instruction:
{system prompt}

### Input:
{prompt}

### Response:
{output}
```

## Mistral

```
[INST] {prompt} [/INST]
```
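As a quick usage illustration (an addition to the original card, not the authors' own instructions), here is how one of the GGUF quants could be loaded with `llama-cpp-python` and prompted in the Alpaca format above. The quant filename, context size, and generation settings are assumptions; substitute whichever GGUF file you downloaded. The Mistral `[INST]` format can be used the same way by changing the prompt string.

```python
# Minimal sketch using llama-cpp-python (an assumption; any GGUF runtime works).
from llama_cpp import Llama

llm = Llama(
    model_path="NoromaidxOpenGPT4-1.Q4_K_M.gguf",  # hypothetical filename -- use your downloaded quant
    n_ctx=4096,        # context window; adjust to your hardware
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

# Alpaca-style prompt, matching the template documented above.
prompt = (
    "### Instruction:\n"
    "You are a helpful assistant.\n\n"
    "### Input:\n"
    "Summarize what a model merge is in one sentence.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=128, stop=["### Instruction:", "### Input:"])
print(out["choices"][0]["text"])
```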

## Merge Details
### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method with [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) as the base.

### Models Merged

The following models were included in the merge:
* [NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3](https://huggingface.co/NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3)
* [rombodawg/Open_Gpt4_8x7B_v0.2](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: rombodawg/Open_Gpt4_8x7B_v0.2
    parameters:
      density: .5
      weight: 1
  - model: NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3
    parameters:
      density: .5
      weight: .7
merge_method: ties
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
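If you want to reproduce a merge from this configuration, mergekit exposes both the `mergekit-yaml` CLI and a Python entry point. The sketch below (an addition to the original card) follows the Python usage documented in the mergekit repository and assumes the YAML above is saved as `config.yml`; the output directory name is hypothetical, and merging 8x7B models requires a large amount of RAM/VRAM.

```python
# Sketch of running the merge with mergekit's Python API (assumes the YAML
# above is saved as config.yml; equivalent to `mergekit-yaml config.yml <out>`).
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./NoromaidxOpenGPT4-merged",  # hypothetical output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU if one is present
        copy_tokenizer=True,             # copy the base model's tokenizer to the output
        lazy_unpickle=True,              # reduce peak memory while loading shards
    ),
)
```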

### Support

If you want to support us, you can do so [here](https://ko-fi.com/undiai).