Mansurbek committed
Commit 7d27e38
Parent: e51a4dc

Update README.md

Files changed (1)
  1. README.md +8 -59
README.md CHANGED
@@ -4,6 +4,12 @@ tags:
 model-index:
 - name: UzRoBERTa-v2
   results: []
+datasets:
+- sinonimayzer/mixed-data
+language:
+- uz
+library_name: transformers
+pipeline_tag: fill-mask
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -11,21 +17,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # UzRoBERTa-v2
 
-This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
-It achieves the following results on the evaluation set:
+This model achieves the following results on the evaluation set:
 - Loss: 1.9097
 
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
 
 ## Training procedure
 
@@ -44,55 +38,10 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step   | Validation Loss |
 |:-------------:|:-----:|:------:|:---------------:|
-| 6.3907        | 0.03  | 10000  | 6.3152          |
-| 5.9936        | 0.05  | 20000  | 5.7963          |
-| 3.6265        | 0.08  | 30000  | 3.5652          |
-| 3.0828        | 0.1   | 40000  | 3.1101          |
-| 2.8485        | 0.13  | 50000  | 2.8866          |
-| 2.6924        | 0.15  | 60000  | 2.7570          |
-| 2.5799        | 0.18  | 70000  | 2.6464          |
-| 2.4943        | 0.2   | 80000  | 2.5785          |
-| 2.4246        | 0.23  | 90000  | 2.5101          |
 | 2.3673        | 0.25  | 100000 | 2.4588          |
-| 2.3233        | 0.28  | 110000 | 2.4183          |
-| 2.28          | 0.3   | 120000 | 2.3632          |
-| 2.2451        | 0.33  | 130000 | 2.3335          |
-| 2.2113        | 0.36  | 140000 | 2.3124          |
-| 2.1853        | 0.38  | 150000 | 2.2717          |
-| 2.1566        | 0.41  | 160000 | 2.2435          |
-| 2.1344        | 0.43  | 170000 | 2.2302          |
-| 2.1157        | 0.46  | 180000 | 2.2068          |
-| 2.0926        | 0.48  | 190000 | 2.1794          |
 | 2.0797        | 0.51  | 200000 | 2.1653          |
-| 2.056         | 0.53  | 210000 | 2.1497          |
-| 2.043         | 0.56  | 220000 | 2.1302          |
-| 2.0217        | 0.58  | 230000 | 2.1162          |
-| 2.0112        | 0.61  | 240000 | 2.1003          |
-| 1.9934        | 0.64  | 250000 | 2.0877          |
-| 1.9855        | 0.66  | 260000 | 2.0697          |
-| 1.9756        | 0.69  | 270000 | 2.0601          |
-| 1.9596        | 0.71  | 280000 | 2.0457          |
-| 1.9477        | 0.74  | 290000 | 2.0407          |
 | 1.9369        | 0.76  | 300000 | 2.0265          |
-| 1.9342        | 0.79  | 310000 | 2.0106          |
-| 1.9183        | 0.81  | 320000 | 2.0076          |
-| 1.9076        | 0.84  | 330000 | 1.9999          |
-| 1.8994        | 0.86  | 340000 | 1.9924          |
-| 1.8968        | 0.89  | 350000 | 1.9871          |
-| 1.8897        | 0.91  | 360000 | 1.9787          |
-| 1.8769        | 0.94  | 370000 | 1.9678          |
-| 1.8727        | 0.97  | 380000 | 1.9659          |
-| 1.8675        | 0.99  | 390000 | 1.9546          |
 | 1.8545        | 1.02  | 400000 | 1.9456          |
-| 1.8515        | 1.04  | 410000 | 1.9425          |
-| 1.8397        | 1.07  | 420000 | 1.9416          |
-| 1.8406        | 1.09  | 430000 | 1.9343          |
-| 1.8332        | 1.12  | 440000 | 1.9273          |
-| 1.8325        | 1.14  | 450000 | 1.9257          |
-| 1.8258        | 1.17  | 460000 | 1.9219          |
-| 1.8239        | 1.19  | 470000 | 1.9168          |
-| 1.8173        | 1.22  | 480000 | 1.9163          |
-| 1.8155        | 1.25  | 490000 | 1.9113          |
 | 1.8133        | 1.27  | 500000 | 1.9101          |
 
 
@@ -101,4 +50,4 @@ The following hyperparameters were used during training:
 - Transformers 4.35.2
 - Pytorch 2.1.1+cu121
 - Datasets 2.15.0
-- Tokenizers 0.15.0
+- Tokenizers 0.15.0
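For orientation: assuming the reported evaluation loss of 1.9097 is the mean cross-entropy over masked tokens (the usual Trainer convention for a fill-mask model), it corresponds to a masked-token pseudo-perplexity of roughly exp(1.9097) ≈ 6.75.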
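The added metadata (`library_name: transformers`, `pipeline_tag: fill-mask`) tells the Hub to serve this checkpoint through the Transformers fill-mask pipeline. A minimal usage sketch follows; the repo id `sinonimayzer/UzRoBERTa-v2` and the Uzbek example sentence are illustrative assumptions, since the diff names the model but not its full Hub path.

```python
# Minimal fill-mask sketch based on the card's new metadata.
# Assumptions (not stated in the diff): the checkpoint lives at
# "sinonimayzer/UzRoBERTa-v2" and uses RoBERTa's "<mask>" token.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="sinonimayzer/UzRoBERTa-v2")

# Uzbek: "Tashkent is the <mask> of Uzbekistan."
for pred in fill_mask("Toshkent O'zbekistonning <mask> hisoblanadi."):
    print(pred["token_str"], round(pred["score"], 4))
```

Each returned dict also carries the completed `sequence` and the raw `token` id, which makes it easy to eyeball the top predictions.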