---
base_model:
- KnutJaegersberg/Mistral-7B-EssayWriter
- luozhuanggary/GOAT-v0.2-Mistral-7B-Claude
- jdqwoi/TooManyMixRolePlay-7B-Story_V3.5
- Norquinal/Mistral-7B-storywriter
- ajibawa-2023/Young-Children-Storyteller-Mistral-7B
- scribis/Fantastica-7b-Instruct-0.2-Italian_merged
- MaziyarPanahi/Mistral-7B-claude-instruct-Mistral-7B-Instruct-v0.2-slerp
- kasper52786/StoryWeaver-7b-Instruct-v0.1
- ajibawa-2023/General-Stories-Mistral-7B
- tdh87/StoryTeller7b-meh
library_name: transformers
tags:
- mergekit
- merge

---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099)-[TIES](https://arxiv.org/abs/2306.01708) merge method, with [Norquinal/Mistral-7B-storywriter](https://huggingface.co/Norquinal/Mistral-7B-storywriter) as the base model. DARE randomly drops a fraction of each fine-tuned model's delta parameters and rescales the survivors, and TIES then resolves sign conflicts between the remaining deltas before they are added to the base.
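
In the configuration below, each of the ten source models contributes with weight 0.1 at a density of 0.9, i.e. roughly 90% of each model's delta parameters survive the random drop. As a rough illustration (a simplified sketch following the two papers above, not mergekit's actual implementation), one DARE-TIES step on a single tensor looks like this:

```python
# Simplified sketch of one DARE-TIES step on a single tensor, for intuition
# only; mergekit's real implementation differs (e.g. the `normalize`
# renormalization used in the config below is omitted here).
import torch

def dare_ties(base, finetuned, density=0.9, weights=None):
    """Merge fine-tuned tensors into `base` via DARE dropping + TIES sign election."""
    if weights is None:
        weights = [1.0 / len(finetuned)] * len(finetuned)

    deltas = []
    for ft, w in zip(finetuned, weights):
        delta = ft - base
        # DARE: randomly drop (1 - density) of the delta entries, then
        # rescale the survivors by 1/density to preserve the expected value.
        mask = torch.bernoulli(torch.full_like(delta, density))
        deltas.append(w * delta * mask / density)

    stacked = torch.stack(deltas)
    # TIES: elect a per-parameter sign from the summed deltas and discard
    # contributions whose sign disagrees before summing.
    elected_sign = torch.sign(stacked.sum(dim=0))
    agree = torch.sign(stacked) == elected_sign
    return base + (stacked * agree).sum(dim=0)
```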

### Models Merged

The following models were included in the merge:
* [KnutJaegersberg/Mistral-7B-EssayWriter](https://huggingface.co/KnutJaegersberg/Mistral-7B-EssayWriter)
* [luozhuanggary/GOAT-v0.2-Mistral-7B-Claude](https://huggingface.co/luozhuanggary/GOAT-v0.2-Mistral-7B-Claude)
* [jdqwoi/TooManyMixRolePlay-7B-Story_V3.5](https://huggingface.co/jdqwoi/TooManyMixRolePlay-7B-Story_V3.5)
* [ajibawa-2023/Young-Children-Storyteller-Mistral-7B](https://huggingface.co/ajibawa-2023/Young-Children-Storyteller-Mistral-7B)
* [scribis/Fantastica-7b-Instruct-0.2-Italian_merged](https://huggingface.co/scribis/Fantastica-7b-Instruct-0.2-Italian_merged)
* [MaziyarPanahi/Mistral-7B-claude-instruct-Mistral-7B-Instruct-v0.2-slerp](https://huggingface.co/MaziyarPanahi/Mistral-7B-claude-instruct-Mistral-7B-Instruct-v0.2-slerp)
* [kasper52786/StoryWeaver-7b-Instruct-v0.1](https://huggingface.co/kasper52786/StoryWeaver-7b-Instruct-v0.1)
* [ajibawa-2023/General-Stories-Mistral-7B](https://huggingface.co/ajibawa-2023/General-Stories-Mistral-7B)
* [tdh87/StoryTeller7b-meh](https://huggingface.co/tdh87/StoryTeller7b-meh)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ajibawa-2023/General-Stories-Mistral-7B
    parameters:
      weight: 0.1
      density: 0.9
  - model: ajibawa-2023/Young-Children-Storyteller-Mistral-7B
    parameters:
      weight: 0.1
      density: 0.9
  - model: scribis/Fantastica-7b-Instruct-0.2-Italian_merged
    parameters:
      weight: 0.1
      density: 0.9
  - model: KnutJaegersberg/Mistral-7B-EssayWriter
    parameters:
      weight: 0.1
      density: 0.9
  - model: luozhuanggary/GOAT-v0.2-Mistral-7B-Claude
    parameters:
      weight: 0.1
      density: 0.9
  - model: Norquinal/Mistral-7B-storywriter
    parameters:
      weight: 0.1
      density: 0.9
  - model: tdh87/StoryTeller7b-meh
    parameters:
      weight: 0.1
      density: 0.9
  - model: kasper52786/StoryWeaver-7b-Instruct-v0.1
    parameters:
      weight: 0.1
      density: 0.9
  - model: jdqwoi/TooManyMixRolePlay-7B-Story_V3.5
    parameters:
      weight: 0.1
      density: 0.9
  - model: MaziyarPanahi/Mistral-7B-claude-instruct-Mistral-7B-Instruct-v0.2-slerp
    parameters:
      weight: 0.1
      density: 0.9
      
merge_method: dare_ties
base_model: Norquinal/Mistral-7B-storywriter
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
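
### Usage

To reproduce the merge, the configuration above can be saved as `config.yaml` and passed to mergekit's `mergekit-yaml` entry point; the resulting directory then loads like any other Mistral-7B checkpoint with `transformers`. A minimal usage sketch, assuming the merge was written to a local `./merged-storyteller` directory (a placeholder path, not a published repo id):

```python
# Usage sketch. Assumes the merge was produced with:
#   mergekit-yaml config.yaml ./merged-storyteller
# "./merged-storyteller" is a placeholder output path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged-storyteller"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto"
)

prompt = "Write a short bedtime story about a lighthouse keeper."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```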