hannahbillo committed
Commit 2cee8f0
1 Parent(s): 200f131

End of training

Files changed (1): README.md (+74 -0)
README.md ADDED
---
base_model: meta-llama/Meta-Llama-3.1-8B
library_name: peft
license: llama3.1
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: dpo-llama3-8b-sample-rules
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# dpo-llama3-8b-sample-rules

This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B), trained with DPO on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1841
- Rewards/chosen: 0.0866
- Rewards/rejected: -1.6352
- Rewards/accuracies: 1.0
- Rewards/margins: 1.7218
- Logps/rejected: -545.1958
- Logps/chosen: -202.5652
- Logits/rejected: -1.3753
- Logits/chosen: -1.2073

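For context, TRL's DPO reward metrics are the β-scaled log-probability ratios between the trained policy and the frozen reference model: Rewards/margins is simply Rewards/chosen minus Rewards/rejected (0.0866 - (-1.6352) = 1.7218 above), and a Rewards/accuracies of 1.0 means the chosen response out-scored the rejected one on every evaluation pair. A sketch of the underlying objective, assuming TRL's default sigmoid DPO loss (β itself is not recorded in this card; TRL's default is 0.1):

$$
r(y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\text{ref}}(y \mid x)},
\qquad
\mathcal{L}_{\text{DPO}} = -\log \sigma\big(r(y_{\text{chosen}}) - r(y_{\text{rejected}})\big)
$$
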
## Model description

More information needed

## Intended uses & limitations

More information needed

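Because `library_name` is `peft`, this repository contains an adapter rather than full model weights. Below is a minimal inference sketch, assuming the adapter is published under the card's name (the repo id is an assumption, not taken from this card) and that the gated base model's Llama 3.1 license has been accepted on the Hub:

```python
# Minimal inference sketch (untested). The repo id below is an assumption
# based on the card name; adjust it to wherever the adapter actually lives.
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "hannahbillo/dpo-llama3-8b-sample-rules"  # assumed repo id

# AutoPeftModelForCausalLM reads the adapter config, downloads the base model
# it names (meta-llama/Meta-Llama-3.1-8B), and applies the adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

inputs = tokenizer("The rules of the game are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
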
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a rough TRL equivalent is sketched after the list):
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP

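These settings map onto TRL's `DPOConfig`/`DPOTrainer` roughly as follows. This is a hedged reconstruction, not the original training script: the preference dataset, the LoRA configuration, and `beta` are not recorded in this card, so those parts are labeled as assumptions.

```python
# Rough reconstruction from the hyperparameters above (a sketch, not the
# original script). Dataset, LoRA settings, and beta are NOT recorded in
# this card; lines marked "assumed" are placeholders.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoTokenizer
from trl import DPOConfig, DPOTrainer

# DPOTrainer expects rows with "prompt", "chosen", and "rejected" fields.
dataset = load_dataset("json", data_files="preference_pairs.json")  # assumed

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")

training_args = DPOConfig(
    output_dir="dpo-llama3-8b-sample-rules",
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,  # effective train batch size: 1 * 8 = 8
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the defaults.
    # beta (the DPO temperature) is not recorded; TRL's default is 0.1.
)

trainer = DPOTrainer(
    model="meta-llama/Meta-Llama-3.1-8B",
    args=training_args,
    train_dataset=dataset["train"],
    tokenizer=tokenizer,
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # assumed; ranks unknown
)
trainer.train()
```

Passing a `peft_config` makes `DPOTrainer` wrap the base model in a LoRA adapter and use the frozen base (adapter disabled) as the implicit reference model, which matches this card's PEFT setup.
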
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.4281        | 0.4444 | 50   | 0.3902          | 0.1843         | -0.5815          | 1.0                | 0.7658          | -439.8294      | -192.8043    | -1.3883         | -1.2422       |
| 0.193         | 0.8889 | 100  | 0.1841          | 0.0866         | -1.6352          | 1.0                | 1.7218          | -545.1958      | -202.5652    | -1.3753         | -1.2073       |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1