SanjiWatsuki committed
Commit ec8d519
1 Parent(s): a254d95

Update README.md

Files changed (1)
  1. README.md +67 -1
README.md CHANGED
@@ -1,3 +1,69 @@
  ---
- license: cc-by-4.0
+ license: cc-by-nc-4.0
  ---
+
+ ![image/png](https://huggingface.co/SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE/resolve/main/toppy-bruins-maid.jpg)
+
+ <!-- description start -->
+ ## Description
+
+ This repository hosts FP16 files for Loyal-Toppy-Bruins-Maid-7B, a 7B model aimed at engaging role-play (RP) with strong adherence to complex character cards. Its foundation is [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha), notable for its performance in the LMSYS Chatbot Arena, where it surpasses even GPT-3.5-Turbo-1106. The model incorporates [rwitz/go-bruins-v2](https://huggingface.co/rwitz/go-bruins-v2), a [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) derivative further tuned on Alpaca-format RP data.
+
+ The other foundational model is [chargoddard/loyal-piano-m7](https://huggingface.co/chargoddard/loyal-piano-m7), chosen for its strong RP performance and Alpaca-format training on a diverse dataset that includes PIPPA, rpguild, and LimaRP.
+
+ [Undi95/Toppy-M-7B](https://huggingface.co/Undi95/Toppy-M-7B), known for its creativity, brings in useful RP data from a variety of sources. It ranks first among 7B models on [OpenRouter](https://openrouter.ai/rankings) for a good reason.
+
+ [NeverSleep/Noromaid-7b-v0.1.1](https://huggingface.co/NeverSleep/Noromaid-7b-v0.1.1), a well-regarded Mistral RP finetune, was also added because it contributes unique RP data that is not present in any of the other models.
+
+ The models were merged using the DARE TIES method, targeting a total absolute weight of 1.2 with high density (0.5-0.6), as discussed in the [MergeKit GitHub repo](https://github.com/cg123/mergekit/issues/26).
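+
+ For intuition: DARE keeps a random fraction (the density) of each finetune's parameter deltas, rescales the survivors by 1/density, and adds the weighted result onto the base model. A schematic sketch of that idea for a single weight tensor (illustrative only, not mergekit's actual implementation):
+
+ ```python
+ import torch
+
+ def dare_merge(base, finetunes, weights, densities):
+     """Schematic DARE merge of one weight tensor (illustrative only)."""
+     merged = base.clone()
+     for ft, w, d in zip(finetunes, weights, densities):
+         delta = ft - base                                  # task vector vs. the base model
+         mask = torch.bernoulli(torch.full_like(delta, d))  # keep ~d of the entries
+         merged += w * (mask * delta) / d                   # rescale survivors, apply weight
+     return merged
+ ```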
+
+ ### The sauce
+ ```
+ models: # Top-Loyal-Bruins-Maid-DARE-7B_v2
+   - model: mistralai/Mistral-7B-v0.1
+     # no parameters necessary for base model
+   - model: rwitz/go-bruins-v2 # MetamathCybertronStarling base
+     parameters:
+       weight: 0.5
+       density: 0.6
+   - model: chargoddard/loyal-piano-m7 # Pull in some PIPPA/LimaRP/Orca/rpguild
+     parameters:
+       weight: 0.5
+       density: 0.6
+   - model: Undi95/Toppy-M-7B
+     parameters:
+       weight: 0.1
+       density: 0.5
+   - model: NeverSleep/Noromaid-7b-v0.1.1
+     parameters:
+       weight: 0.1
+       density: 0.5
+ merge_method: dare_ties
+ base_model: mistralai/Mistral-7B-v0.1
+ parameters:
+   normalize: false
+   int8_mask: true
+ dtype: bfloat16
+ ```
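+
+ If you want to reproduce a merge like this one, the config above can be handed to [mergekit](https://github.com/cg123/mergekit). A minimal sketch of the Python route, assuming mergekit is installed and the YAML above is saved as `config.yml` (the output path and options are illustrative):
+
+ ```python
+ import torch
+ import yaml
+
+ from mergekit.config import MergeConfiguration
+ from mergekit.merge import MergeOptions, run_merge
+
+ # Parse the mergekit YAML config shown above.
+ with open("config.yml", "r", encoding="utf-8") as f:
+     merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))
+
+ # Run the merge and write the merged model to disk.
+ run_merge(
+     merge_config,
+     out_path="./Loyal-Toppy-Bruins-Maid-7B-DARE",  # illustrative output dir
+     options=MergeOptions(cuda=torch.cuda.is_available(), copy_tokenizer=True),
+ )
+ ```
+
+ The `mergekit-yaml config.yml ./output-dir` CLI does the same thing in one command.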
+
+ <!-- description end -->
+ <!-- prompt-template start -->
+ ## Prompt template: Custom format, or Alpaca
+
+ ### Custom format:
+ I found that the Noromaid template gives the best results in SillyTavern.
+
+ SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
+
+ Otherwise, I tried to ensure that all of the underlying merged models favored the Alpaca format.
+
+ ### Alpaca:
+ ```
+ Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+ ### Instruction:
+ {prompt}
+
+ ### Response:
+
+ ```
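+
+ For a quick smoke test outside SillyTavern, the Alpaca template above can be filled in by hand and run with Hugging Face Transformers. A minimal sketch, loading the weights from this repo (the instruction text and sampling settings are illustrative):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
+
+ # Fill {prompt} in the Alpaca template shown above.
+ prompt = (
+     "Below is an instruction that describes a task. "
+     "Write a response that appropriately completes the request.\n\n"
+     "### Instruction:\nWrite a short in-character greeting from a tavern keeper.\n\n"
+     "### Response:\n"
+ )
+
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
+ print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
+ ```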