---
license: cc-by-nc-4.0
---
![image/png](https://huggingface.co/SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE/resolve/main/toppy-bruins-maid.jpg)
<!-- description start -->
## Description
This repository hosts FP16 files for Loyal-Toppy-Bruins-Maid-7B, a 7B model aimed at engaging roleplay (RP) with faithful adherence to complex character cards. Its foundation is [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha), notable for its performance in the LMSYS Chatbot Arena, even surpassing GPT-3.5-Turbo-1106. The model incorporates [rwitz/go-bruins-v2](https://huggingface.co/rwitz/go-bruins-v2), a [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) derivative tuned on Alpaca-format RP data.
The other foundational model is [chargoddard/loyal-piano-m7](https://huggingface.co/chargoddard/loyal-piano-m7), chosen for its strong RP performance and Alpaca format training, with a diverse dataset including PIPPA, rpbuild, and LimaRP.
[Undi95/Toppy-M-7B](https://huggingface.co/Undi95/Toppy-M-7B), known for its creativity, brings in useful RP data from a variety of sources. It ranks first among 7B models on [OpenRouter](https://openrouter.ai/rankings) for good reason.
Noromaid-7B, a well-regarded Mistral RP finetune, was also added because it contributes unique RP data not present in any of the other models.
The models were merged using the DARE TIES method, targeting a total absolute weight of 1.2 with high densities (0.5-0.6), as discussed in the [MergeKit GitHub repo](https://github.com/cg123/mergekit/issues/26).
### The sauce
```
models: # Top-Loyal-Bruins-Maid-DARE-7B_v2
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: rwitz/go-bruins-v2 # MetamathCybertronStarling base
    parameters:
      weight: 0.5
      density: 0.6
  - model: chargoddard/loyal-piano-m7 # Pull in some PIPPA/LimaRP/Orca/rpguild
    parameters:
      weight: 0.5
      density: 0.6
  - model: Undi95/Toppy-M-7B
    parameters:
      weight: 0.1
      density: 0.5
  - model: NeverSleep/Noromaid-7b-v0.1.1
    parameters:
      weight: 0.1
      density: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
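If you want to reproduce the merge, here is a minimal sketch, assuming mergekit is installed (`pip install mergekit`) and the recipe above is saved as `config.yml`; the `mergekit-yaml` CLI and its `--cuda` flag come from the mergekit README, while the output directory name is arbitrary.

```python
# Minimal reproduction sketch: invoke mergekit's CLI from Python.
# Assumes `pip install mergekit` and the YAML recipe above saved as config.yml.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",
        "config.yml",                          # the DARE TIES recipe above
        "./Loyal-Toppy-Bruins-Maid-7B-DARE",   # arbitrary output directory
        "--cuda",                              # do the merge arithmetic on GPU
    ],
    check=True,
)
```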
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Custom format, or Alpaca
### Custom format:
I found the best SillyTavern results from using the Noromaid template.
SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
Otherwise, I tried to ensure that all of the underlying merged models favored the Alpaca format.
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
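As a usage sketch (not part of the original card), the snippet below loads the model with the `transformers` library and generates from an Alpaca-format prompt; the repo id matches this repository, and the example instruction is purely illustrative.

```python
# Minimal usage sketch. Assumes `pip install transformers torch`
# and enough VRAM for a 7B model in bfloat16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build the Alpaca-format prompt shown above (example instruction is hypothetical).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Introduce yourself in character as a cheerful tavern keeper.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=0.8
)
# Decode only the newly generated tokens, skipping the prompt.
print(
    tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
)
```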