---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/anthracite-org/magnum-v2-72b/blob/main/LICENSE
base_model: Qwen/Qwen2-72B-Instruct
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
pipeline_tag: text-generation
tags:
- chat
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6491e00e057b0928b3e07b75/u8B-5bEeroN549uxUIisV.png)

This quant was made for and by [infermatic.ai](https://infermatic.ai/).

This is a dynamic FP8 quant of [magnum-v2-72b](https://huggingface.co/anthracite-org/magnum-v2-72b), made with AutoFP8.
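
For reference, a quant like this can be produced with [AutoFP8](https://github.com/neuralmagic/AutoFP8). Below is a minimal sketch following AutoFP8's documented usage; the exact settings used for this quant are an assumption, and with the dynamic activation scheme the calibration examples are not actually used for activation scales:

```py
# Hypothetical reproduction sketch; not the exact command used for this quant.
from transformers import AutoTokenizer
from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig

pretrained_model_dir = "anthracite-org/magnum-v2-72b"
quantized_model_dir = "magnum-v2-72b-FP8-Dynamic"  # hypothetical output name

tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir)
examples = tokenizer(["Hello, world!"], return_tensors="pt")

# "dynamic" computes activation scales at runtime, so no calibration pass is needed.
quantize_config = BaseQuantizeConfig(quant_method="fp8", activation_scheme="dynamic")

model = AutoFP8ForCausalLM.from_pretrained(pretrained_model_dir, quantize_config)
model.quantize(examples)
model.save_quantized(quantized_model_dir)
```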

This is the seventh (Lucky!) model in a series designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. It is fine-tuned on top of [Qwen-2 72B Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct).

## Prompting
The model has been instruct-tuned with ChatML formatting. A typical input looks like this:

```py
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```
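
If you load the model with `transformers`, the tokenizer's chat template can build this string for you (assuming the repo ships a ChatML template in its tokenizer config, which is typical for these releases):

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("anthracite-org/magnum-v2-72b")

messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant" turn.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```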

## Credits
- [anthracite-org/Stheno-Data-Filtered](https://huggingface.co/datasets/anthracite-org/Stheno-Data-Filtered)
- [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
- [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)

This model has been a team effort, and credit goes to all members of Anthracite.

## Training
Training was done for 2 epochs. We used 8x [AMD Instinct™ MI300X Accelerators](https://www.amd.com/en/products/accelerators/instinct/mi300/mi300x.html) for the full-parameter fine-tuning of the model.

We trained with a weight decay of 0.01 to help stabilize the loss trajectory and mitigate catastrophic forgetting, and used a peak learning rate of 4e-6 to keep the second-epoch loss from dropping too sharply, since a steep drop is a strong indicator of overfitting.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6491e00e057b0928b3e07b75/hVd5gNqSLOlWTkUb0A7iE.png)

Sample packing was done at 16k tokens rather than the 8k tokens used in our previous runs.
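
As a rough illustration of those settings: the card states the peak learning rate, weight decay, epoch count, and 16k packing length, while the optimizer and schedule shape below are assumptions (cosine decay with a short warmup is a common default for runs like this):

```py
import torch
from transformers import get_cosine_schedule_with_warmup

# Stated in the card: peak LR 4e-6, weight decay 0.01, 2 epochs, 16k-token packing.
PEAK_LR = 4e-6
WEIGHT_DECAY = 0.01
NUM_EPOCHS = 2
SEQUENCE_LEN = 16384   # packing length, shown for completeness
STEPS_PER_EPOCH = 500  # illustrative; depends on dataset size and packing efficiency
total_steps = NUM_EPOCHS * STEPS_PER_EPOCH

model = torch.nn.Linear(16, 16)  # stand-in; the real run tuned all 72B parameters

optimizer = torch.optim.AdamW(
    model.parameters(), lr=PEAK_LR, weight_decay=WEIGHT_DECAY
)

# Schedule shape is an assumption; the card does not name it.
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.05 * total_steps),
    num_training_steps=total_steps,
)
```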

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Safety
...