Update README.md
README.md CHANGED
@@ -1,7 +1,14 @@
 ---
-
 quantized_by: bartowski
 pipeline_tag: text-generation
+license: apache-2.0
+datasets:
+- Replete-AI/Everything_Instruct_8k_context_filtered
+tags:
+- unsloth
+language:
+- en
+base_model: Replete-AI/Replete-LLM-Qwen2-7b
 ---
 
 ## Exllama v2 Quantizations of Replete-LLM-Qwen2-7b
@@ -12,7 +19,7 @@ Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.1.8">turbo
 
 Each branch contains an individual bits per weight, with the main one containing only the measurement.json for further conversions.
 
-Original model: https://huggingface.co/Replete-AI/
+Original model: https://huggingface.co/Replete-AI/Replete-LLM-Qwen2-7b
 
 ## Prompt format
 
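The diff's context line notes that each branch of the quantization repo holds one bits-per-weight variant, with `main` carrying only `measurement.json`. A minimal sketch of fetching a single branch with `huggingface-cli` follows; the repo id (`bartowski/Replete-LLM-Qwen2-7b-exl2`) and branch name (`6_5`) are assumptions for illustration, not values taken from this commit:

```shell
# Assumed repo id and branch name -- check the model page for the real values.
REPO="bartowski/Replete-LLM-Qwen2-7b-exl2"   # hypothetical exl2 repo id
BRANCH="6_5"                                 # one bits-per-weight variant per branch

# Build the download command; `huggingface-cli download --revision` pulls a
# single branch (revision) of the repo into a local directory.
CMD="huggingface-cli download $REPO --revision $BRANCH --local-dir Replete-LLM-Qwen2-7b-exl2-$BRANCH"
echo "$CMD"
```

Running the echoed command downloads only that branch; since `main` holds just the measurement file, pick a numeric bits-per-weight branch to get usable model weights.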