---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
datasets:
- nena_speech_1_0_test
metrics:
- wer
model-index:
- name: wav2vec2-large-mms-1b-urmi-christian
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: nena_speech_1_0_test
      type: nena_speech_1_0_test
      config: urmi (christian)
      split: test
      args: urmi (christian)
    metrics:
    - name: Wer
      type: wer
      value: 1.0
---
# wav2vec2-large-mms-1b-urmi-christian
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the nena_speech_1_0_test dataset. It achieves the following results on the evaluation set:
- Loss: 1.3440
- Wer: 1.0
- Cer: 0.3475
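
A minimal inference sketch is shown below. It assumes the fine-tuned checkpoint is available locally or on the Hub under the repo id used here (a placeholder) and that the input audio file exists; MMS/Wav2Vec2 models expect 16 kHz mono audio.

```python
import torch
import torchaudio
from transformers import AutoProcessor, Wav2Vec2ForCTC

# Placeholder repo id; adjust to where this fine-tuned checkpoint actually lives.
model_id = "wav2vec2-large-mms-1b-urmi-christian"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load the audio, downmix to mono, and resample to the 16 kHz rate the model expects.
waveform, sample_rate = torchaudio.load("sample.wav")  # placeholder audio path
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000).mean(dim=0)

inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: most likely token per frame, then collapse repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```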
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
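
The sketch below shows how these settings map onto `transformers.TrainingArguments`. The output directory and the evaluation/saving cadence are assumptions (the results table logs evaluation every 25 steps); everything not listed above is not taken from this card.

```python
from transformers import TrainingArguments

# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer's AdamW defaults,
# so they need no explicit arguments here.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-mms-1b-urmi-christian",  # assumed output directory
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=5,
    evaluation_strategy="steps",  # assumption, consistent with eval logged every 25 steps
    eval_steps=25,
    logging_steps=25,             # assumption
)
```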
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|
| 11.5102 | 0.14 | 25 | 8.4191 | 1.0 | 0.9718 |
| 4.591 | 0.29 | 50 | 3.1910 | 1.0 | 0.9481 |
| 2.4884 | 0.43 | 75 | 1.3519 | 1.0 | 0.3887 |
| 1.7122 | 0.57 | 100 | 1.1193 | 1.0 | 0.3462 |
| 1.5052 | 0.72 | 125 | 1.0087 | 0.9984 | 0.3109 |
| 1.1576 | 0.86 | 150 | 0.9225 | 1.0 | 0.2944 |
| 1.5144 | 1.01 | 175 | 0.9227 | 1.0 | 0.2867 |
| 1.18 | 1.15 | 200 | 0.8491 | 1.0 | 0.2681 |
| 1.3588 | 1.29 | 225 | 0.8503 | 1.0 | 0.2880 |
| 1.211 | 1.44 | 250 | 0.8462 | 1.0 | 0.2655 |
| 1.2648 | 1.58 | 275 | 0.8971 | 0.9984 | 0.2659 |
| 1.3868 | 1.72 | 300 | 0.9340 | 1.0 | 0.2760 |
| 1.4173 | 1.87 | 325 | 1.1986 | 1.0 | 0.3431 |
| 1.6718 | 2.01 | 350 | 1.3107 | 1.0 | 0.3551 |
| 1.5767 | 2.16 | 375 | 1.2448 | 1.0 | 0.3427 |
| 1.7307 | 2.3 | 400 | 1.2685 | 1.0 | 0.3468 |
| 1.5737 | 2.44 | 425 | 1.2404 | 1.0 | 0.3142 |
| 1.6168 | 2.59 | 450 | 1.2884 | 1.0 | 0.3524 |
| 1.719 | 2.73 | 475 | 1.2525 | 1.0 | 0.3368 |
| 1.6365 | 2.87 | 500 | 1.3324 | 1.0 | 0.3457 |
| 1.8868 | 3.02 | 525 | 1.3430 | 1.0 | 0.3475 |
| 1.6789 | 3.16 | 550 | 1.3439 | 1.0 | 0.3476 |
| 1.8258 | 3.3 | 575 | 1.3440 | 1.0 | 0.3475 |
| 1.7519 | 3.45 | 600 | 1.3440 | 1.0 | 0.3475 |
| 1.6889 | 3.59 | 625 | 1.3440 | 1.0 | 0.3475 |
| 1.8634 | 3.74 | 650 | 1.3440 | 1.0 | 0.3475 |
| 1.6143 | 3.88 | 675 | 1.3440 | 1.0 | 0.3475 |
| 1.8549 | 4.02 | 700 | 1.3440 | 1.0 | 0.3475 |
| 1.7192 | 4.17 | 725 | 1.3440 | 1.0 | 0.3475 |
| 1.7624 | 4.31 | 750 | 1.3440 | 1.0 | 0.3475 |
| 1.8096 | 4.45 | 775 | 1.3440 | 1.0 | 0.3475 |
| 1.6896 | 4.6 | 800 | 1.3440 | 1.0 | 0.3475 |
| 1.727 | 4.74 | 825 | 1.3440 | 1.0 | 0.3475 |
| 1.7308 | 4.89 | 850 | 1.3440 | 1.0 | 0.3475 |
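
The Wer and Cer columns above can be recomputed with the Hugging Face `evaluate` package. The sketch below is illustrative only; the prediction/reference strings are placeholders, not taken from the dataset.

```python
import evaluate  # both metrics use the `jiwer` backend (pip install evaluate jiwer)

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Placeholder strings for illustration only.
predictions = ["a sample hypothesis"]
references = ["a sample reference"]

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```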
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1