---
base_model: grimjim/lemonade-rebase-32k-7B
library_name: transformers
license: cc-by-4.0
quanted_by: grimjim
pipeline_tag: text-generation
---

# lemonade-rebase-32k-7B

This is a Q8_0 GGUF quant of a rebase merge that applies the merge formula from [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3) to the Mistral v0.2 7B base (instead of v0.1), yielding a 32K context length (eliminating the 4K sliding window) with rope theta (re)set to 40K. No other changes were made.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) as the base.

### Models Merged

The following models were included in the merge:

* [cgato/Thespis-7b-v0.5-SFTTest-2Epoch](https://huggingface.co/cgato/Thespis-7b-v0.5-SFTTest-2Epoch)
* [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO)
* [NurtureAI/neural-chat-7b-v3-1-16k](https://huggingface.co/NurtureAI/neural-chat-7b-v3-1-16k)
* [cgato/Thespis-CurtainCall-7b-v0.2.2](https://huggingface.co/cgato/Thespis-CurtainCall-7b-v0.2.2)
* [tavtav/eros-7b-test](https://huggingface.co/tavtav/eros-7b-test)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: alpindale/Mistral-7B-v0.2-hf
dtype: float16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 32]
    model: alpindale/Mistral-7B-v0.2-hf
  - layer_range: [0, 32]
    model: NeverSleep/Noromaid-7B-0.4-DPO
    parameters:
      weight: 0.37
  - layer_range: [0, 32]
    model: cgato/Thespis-CurtainCall-7b-v0.2.2
    parameters:
      weight: 0.32
  - layer_range: [0, 32]
    model: NurtureAI/neural-chat-7b-v3-1-16k
    parameters:
      weight: 0.15
  - layer_range: [0, 32]
    model: cgato/Thespis-7b-v0.5-SFTTest-2Epoch
    parameters:
      weight: 0.38
  - layer_range: [0, 32]
    model: tavtav/eros-7b-test
    parameters:
      weight: 0.18
```
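To reproduce the merge, the configuration above can be saved to a file and run through mergekit's `mergekit-yaml` entry point. A minimal sketch, assuming a current mergekit install; the output directory name is illustrative, and optional flags (e.g. `--cuda`) vary by version:

```sh
# Save the YAML above as config.yaml, then produce the merged model:
mergekit-yaml config.yaml ./lemonade-rebase-32k-7B-merged
```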
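For intuition, task arithmetic combines models by adding weighted task vectors, each fine-tune's element-wise difference from the base, onto the base weights. Below is a minimal sketch of that arithmetic under simplified assumptions (plain dicts of tensors stand in for model state dicts); the function name and toy data are illustrative, not mergekit's implementation:

```python
# Sketch of the task arithmetic merge method (Ilharco et al., 2022):
# merged = base + sum_i w_i * (finetune_i - base), applied per tensor.
import torch

def task_arithmetic_merge(base, finetunes, weights):
    """Merge state-dict-style tensor dicts via weighted task vectors."""
    merged = {}
    for name, base_t in base.items():
        delta = torch.zeros_like(base_t)
        for ft, w in zip(finetunes, weights):
            delta += w * (ft[name] - base_t)  # scaled task vector
        merged[name] = base_t + delta
    return merged

# Toy demo: random tensors stand in for the five fine-tunes merged above.
base = {"layer.weight": torch.randn(4, 4)}
finetunes = [{"layer.weight": torch.randn(4, 4)} for _ in range(5)]
weights = [0.37, 0.32, 0.15, 0.38, 0.18]  # per-model weights from the YAML
merged = task_arithmetic_merge(base, finetunes, weights)
print(merged["layer.weight"].shape)  # torch.Size([4, 4])
```

Note that the weights sum to 1.4 rather than 1.0; task arithmetic does not require normalized weights, since each task vector is a delta from the base rather than a full set of model weights.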