---
base_model: facebook/nllb-200-distilled-600M
library_name: peft
license: cc-by-nc-4.0
metrics:
  - bleu
  - rouge
tags:
  - generated_from_trainer
model-index:
  - name: NLLB_LoRA
    results: []
---

# NLLB_LoRA

This model is a LoRA fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the OPUS-100 Arabic-to-English dataset. It achieves the following results on the evaluation set (see the usage sketch after the list):

- Loss: 1.3291
- Bleu: 32.6379
- Rouge: 0.5923
- Gen Len: 17.375
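Because this repository stores a PEFT LoRA adapter rather than full model weights, inference requires loading the base checkpoint and attaching the adapter. Below is a minimal sketch; the adapter id `yasmineee/NLLB_LoRA` is an assumption based on this card's name, and the FLORES-200 language codes (`arb_Arab` for Arabic, `eng_Latn` for English) reflect the Arabic-to-English direction suggested by the fine-tune name.

```python
# Minimal loading sketch (assumed adapter id; adjust to your own path).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
model = PeftModel.from_pretrained(base, "yasmineee/NLLB_LoRA")  # assumed repo id

# NLLB expects FLORES-200 language codes: arb_Arab (Arabic) -> eng_Latn (English).
tokenizer = AutoTokenizer.from_pretrained(
    "facebook/nllb-200-distilled-600M", src_lang="arb_Arab"
)

inputs = tokenizer("مرحبا بالعالم", return_tensors="pt")  # "Hello, world"
outputs = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("eng_Latn"),
    max_length=64,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```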

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `Seq2SeqTrainingArguments` sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
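As a rough sketch, the list above maps onto `Seq2SeqTrainingArguments` as follows. Only the listed values come from this card; `output_dir`, `eval_strategy`, and `predict_with_generate` are assumptions.

```python
# Hedged sketch of the training configuration implied by the list above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="NLLB_LoRA",          # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,   # effective train batch size of 8
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    eval_strategy="epoch",           # assumed: results are reported per epoch
    predict_with_generate=True,      # assumed: BLEU/ROUGE require generation
)
```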

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu    | Rouge  | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 2.714         | 1.0   | 875  | 1.3916          | 31.9042 | 0.5851 | 17.4015 |
| 1.457         | 2.0   | 1750 | 1.3379          | 32.3993 | 0.5916 | 17.4175 |
| 1.4281        | 3.0   | 2625 | 1.3291          | 32.6379 | 0.5923 | 17.375  |
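For context, columns like Bleu, Rouge, and Gen Len are typically produced by a `compute_metrics` callback passed to `Seq2SeqTrainer`. The sketch below, using the `evaluate` library with sacrebleu and ROUGE-L, is an assumption about that setup, not the card author's confirmed code.

```python
# Hedged sketch: one plausible compute_metrics implementation for the
# BLEU / ROUGE / Gen Len columns above. The exact setup is not reported.
import numpy as np
import evaluate
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
bleu = evaluate.load("sacrebleu")   # Bleu column
rouge = evaluate.load("rouge")      # Rouge column (assumed: ROUGE-L)

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # Replace the -100 label padding used by the trainer before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    return {
        "bleu": bleu.compute(predictions=decoded_preds,
                             references=[[ref] for ref in decoded_labels])["score"],
        "rouge": rouge.compute(predictions=decoded_preds,
                               references=decoded_labels)["rougeL"],
        # Gen Len: mean non-padding length of the generated sequences.
        "gen_len": float(np.mean((preds != tokenizer.pad_token_id).sum(axis=1))),
    }
```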

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1