Masaki Eguchi committed
Commit
4ae4344
1 Parent(s): 835132c

update model card README.md

Files changed (1)
  1. README.md +29 -13
README.md CHANGED
@@ -14,12 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - eval_loss: 1.5130
- - eval_runtime: 654.089
- - eval_samples_per_second: 8.303
- - eval_steps_per_second: 0.26
- - epoch: 4.28
- - step: 1450
+ - Loss: 1.4621
 
  ## Model description
 
@@ -38,18 +33,39 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 1e-05
- - train_batch_size: 32
- - eval_batch_size: 32
+ - learning_rate: 5e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
  - seed: 42
- - gradient_accumulation_steps: 8
+ - gradient_accumulation_steps: 32
  - total_train_batch_size: 256
  - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 250
- - num_epochs: 10
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 15
  - mixed_precision_training: Native AMP
 
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 1.668         | 1.0   | 169  | 1.5412          |
+ | 1.6261        | 2.0   | 338  | 1.5138          |
+ | 1.5967        | 3.0   | 507  | 1.4943          |
+ | 1.5745        | 4.0   | 676  | 1.4913          |
+ | 1.5549        | 5.0   | 845  | 1.4738          |
+ | 1.5445        | 6.0   | 1014 | 1.4671          |
+ | 1.536         | 7.0   | 1183 | 1.4689          |
+ | 1.5254        | 8.0   | 1352 | 1.4612          |
+ | 1.5244        | 9.0   | 1521 | 1.4560          |
+ | 1.5263        | 10.0  | 1690 | 1.4580          |
+ | 1.5249        | 11.0  | 1859 | 1.4482          |
+ | 1.5238        | 12.0  | 2028 | 1.4565          |
+ | 1.526         | 13.0  | 2197 | 1.4568          |
+ | 1.5232        | 14.0  | 2366 | 1.4547          |
+ | 1.5232        | 15.0  | 2535 | 1.4558          |
+
+
  ### Framework versions
 
  - Transformers 4.25.1
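
The updated hyperparameters imply an effective batch size of 256 on a single device (train_batch_size 8 × gradient_accumulation_steps 32 = 256, the total_train_batch_size the card lists). Below is a minimal sketch of the equivalent `TrainingArguments`, assuming the card was produced by the Hugging Face `Trainer` (which the auto-generated wording suggests); the `output_dir` name, the `fp16` flag standing in for "Native AMP", and the per-epoch evaluation strategy are assumptions, since the commit does not include the training script.

```python
# Hedged sketch: TrainingArguments mirroring the updated model card.
# Values are copied from the "+" side of the diff; anything not on the
# card (output_dir, fp16, evaluation_strategy) is an assumption.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="roberta-base-finetuned",  # hypothetical name; not on the card
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=32,       # 8 x 32 = total_train_batch_size 256
    seed=42,
    adam_beta1=0.9,                       # optimizer: Adam with betas=(0.9, 0.99)
    adam_beta2=0.99,
    adam_epsilon=1e-8,                    # and epsilon=1e-08
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=15,
    fp16=True,                            # assumed: "Native AMP" mixed precision
    evaluation_strategy="epoch",          # assumed from the per-epoch results table
)
```

At this effective batch size, the 169 optimizer steps logged per epoch in the results table correspond to roughly 169 × 256 ≈ 43,000 training examples.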