---
tags:
- finetuned
- quantized
- 4-bit
- AWQ
- transformers
- pytorch
- mistral
- instruct
- text-generation
- conversational
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- finetune
- chatml
model-index:
  - name: Noromaid-7B-0.4-DPO
    results: []
base_model: NeverSleep/Noromaid-7B-0.4-DPO
datasets:
  - Undi95/Llama2-13B-no_robots-alpaca-lora
  - NobodyExistsOnTheInternet/ToxicDPOqa
  - Undi95/toxic-dpo-v0.1-NoWarning
license: cc-by-nc-4.0
library_name: transformers
model_creator: IkariDev and Undi
model_name: Noromaid 7B v0.4 DPO
model_type: mistral
pipeline_tag: text-generation
inference: false
prompt_template: '<|im_start|>system

  {system_message}<|im_end|>

  <|im_start|>user

  {prompt}<|im_end|>

  <|im_start|>assistant

  '
quantized_by: Suparious
---
# Noromaid 7B v0.4 DPO - AWQ

- Model creator: [IkariDev and Undi](https://huggingface.co/NeverSleep)
- Original model: [Noromaid 7B v0.4 DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/VKX2Z2yjZX5J8kXzgeCYO.png)

## Model description

This repo contains AWQ model files for [IkariDev and Undi's Noromaid 7B v0.4 DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO).

These files were quantised using hardware kindly provided by [SolidRusT Networks](https://solidrust.net/).

### About AWQ

AWQ is an efficient, accurate, and fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
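
As a minimal sketch, loading these AWQ files from Python with Transformers (4.35.0 or later, with the `autoawq` package installed and an NVIDIA GPU available) might look like the following. The repo id below is an assumption for illustration; substitute the actual id of this repository.

```python
# Sketch only: loading an AWQ-quantized checkpoint with Transformers >= 4.35.0.
# Requires the autoawq package and an NVIDIA GPU. The repo id is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/Noromaid-7B-0.4-DPO-AWQ"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # place layers on the available GPU(s)
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the AWQ format is recognized directly by Transformers, no special loader class is needed; `from_pretrained` picks up the quantization config from the checkpoint.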

## Prompt template: ChatML

```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
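
As a small illustration, the template above can be filled in programmatically. The helper below is hypothetical (not part of any library); it simply substitutes `{system_message}` and `{prompt}` into the ChatML layout shown above.

```python
def chatml_prompt(system_message: str, prompt: str) -> str:
    """Build a ChatML-formatted prompt matching the template above."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "Hello!"))
```

The trailing `<|im_start|>assistant` line is left open on purpose: the model continues from there, and generation is typically stopped on the `<|im_end|>` token.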

## Training data

- [no_robots dataset](https://huggingface.co/Undi95/Llama2-13B-no_robots-alpaca-lora) - gives the model more human-like behavior and enhances its output.
- [Aesir Private RP dataset] - data from a new, never-before-used dataset: fresh content, no LimaRP spam, 100% new. Thanks to the [MinervaAI Team](https://huggingface.co/MinervaAI) and, in particular, [Gryphe](https://huggingface.co/Gryphe) for letting us use it!
- [Another private Aesir dataset]
- [Another private Aesir dataset]
- [limarp](https://huggingface.co/datasets/lemonilia/LimaRP)

## DPO training data

- [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- [NobodyExistsOnTheInternet/ToxicDPOqa](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicDPOqa)
- [Undi95/toxic-dpo-v0.1-NoWarning](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-NoWarning)