---
license: cc-by-nc-4.0
tags:
- trl
- dpo
- generated_from_trainer
base_model: HuggingFaceTB/SmolLM-360M-Instruct
model-index:
- name: SmolLM-1.7B-Instruct-dpo-16k
  results: []
language:
- en
---

# SmolLM-1.7B-Instruct-dpo-16k

This model is a fine-tuned version of [HuggingFaceTB/SmolLM-360M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-360M-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8854
- Rewards/chosen: 0.0056
- Rewards/rejected: 0.3516
- Rewards/accuracies: 0.0326
- Rewards/margins: -0.3460
- Logps/rejected: -470.7809
- Logps/chosen: -546.0043
- Logits/rejected: 0.3165
- Logits/chosen: 0.6158

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- num_epochs: 6

An unofficial sketch of how these settings map onto TRL's `DPOTrainer` appears at the end of this card.

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:-----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5228        | 0.9999 | 3368  | 0.8697          | 0.0208         | 0.3405           | 0.0348             | -0.3197         | -470.8920      | -545.8519    | 0.3270          | 0.6295        |
| 0.4508        | 2.0    | 6737  | 0.8870          | 0.0130         | 0.3621           | 0.0228             | -0.3491         | -470.6755      | -545.9296    | 0.2662          | 0.5778        |
| 0.4451        | 2.9999 | 10105 | 0.8871          | 0.0057         | 0.3546           | 0.0337             | -0.3489         | -470.7502      | -546.0029    | 0.2855          | 0.5938        |
| 0.4447        | 4.0    | 13474 | 0.8869          | 0.0098         | 0.3588           | 0.0196             | -0.3490         | -470.7085      | -545.9620    | 0.3198          | 0.6222        |
| 0.4446        | 4.9999 | 16842 | 0.8870          | 0.0065         | 0.3551           | 0.0391             | -0.3486         | -470.7452      | -545.9945    | 0.3097          | 0.6124        |
| 0.4448        | 5.9991 | 20208 | 0.8854          | 0.0056         | 0.3516           | 0.0326             | -0.3460         | -470.7809      | -546.0043    | 0.3165          | 0.6158        |

### Framework versions

- Transformers 4.41.0
- Pytorch 2.2.0
- Datasets 2.19.1
- Tokenizers 0.19.1
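## How to use

A minimal inference sketch follows. The repo id below is a placeholder (only the model name is known from this card), the chat-template call assumes the tokenizer ships one as the SmolLM-Instruct base does, and the generation settings are illustrative rather than tuned.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with the full Hub repo id (namespace/SmolLM-1.7B-Instruct-dpo-16k)
# or a local checkpoint path.
checkpoint = "SmolLM-1.7B-Instruct-dpo-16k"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Format the prompt with the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain what DPO fine-tuning does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(
    input_ids,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,  # illustrative sampling settings
)
# Strip the prompt tokens before decoding the reply.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```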
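## Training setup sketch (unofficial)

The hyperparameters listed under "Training hyperparameters" map onto TRL's `DPOTrainer` in a fairly direct way. The sketch below shows one way such a run could be configured, assuming a TRL release that provides `DPOConfig`; the actual training script, the preference dataset, and the DPO `beta` used for this model are not documented on this card, so the dataset path is a placeholder.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "HuggingFaceTB/SmolLM-360M-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen reference policy for DPO
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder: the preference dataset used for this card is not documented.
dataset = load_dataset("path/to/preference-dataset")

args = DPOConfig(
    output_dir="SmolLM-1.7B-Instruct-dpo-16k",
    learning_rate=5e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,   # total train batch size 4
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=2,
    num_train_epochs=6,
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,  # newer TRL releases use `processing_class=tokenizer`
)
trainer.train()
```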