
best_model-yelp_polarity-16-13

This model is a fine-tuned version of albert-base-v2, apparently on the yelp_polarity dataset (inferred from the model name; the dataset is not recorded in the card). It achieves the following results on the evaluation set:

  • Loss: 0.3928
  • Accuracy: 0.875

Model description

Little is documented. The base model, albert-base-v2, is a compact, parameter-shared transformer encoder; this checkpoint adds a two-class sequence-classification head, presumably for positive/negative review sentiment (inferred from the yelp_polarity name).

Intended uses & limitations

More information needed
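
The author has not documented usage, but since this is a standard albert-base-v2 sequence-classification fine-tune, the generic Auto* classes should load it. A minimal inference sketch, assuming the repository id shown in the model tree below and the usual yelp_polarity label convention (0 = negative, 1 = positive), which the card does not confirm:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Repository id taken from the model tree below.
model_id = "simonycl/best_model-yelp_polarity-16-13"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # ALBERT tokenizers need the sentencepiece package
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The food was great and the staff were friendly.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
# Label names are not documented; yelp_polarity convention is 0 = negative, 1 = positive.
print(pred)
```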

Training and evaluation data

More information needed
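
The data are likewise undocumented, but the training log below supports an educated guess: one optimizer step per epoch at batch size 32 implies a 32-example training set (plausibly 16 per class, matching the "16" in the model name), and every accuracy value is a multiple of 1/32 (e.g. 0.5312 = 17/32, 0.875 = 28/32), implying a 32-example evaluation set. A hypothetical reconstruction with datasets follows; reading the trailing "13" as a sampling seed is pure assumption:

```python
from datasets import load_dataset, concatenate_datasets

# Hypothetical reconstruction, NOT confirmed by the card: 16 examples per
# class sampled from yelp_polarity, with the "13" in the name read as a seed.
seed = 13
train = load_dataset("yelp_polarity", split="train").shuffle(seed=seed)
per_class = [
    train.filter(lambda ex, lab=lab: ex["label"] == lab).select(range(16))
    for lab in (0, 1)  # 0 = negative, 1 = positive
]
few_shot = concatenate_datasets(per_class).shuffle(seed=seed)
print(few_shot)  # Dataset with 32 rows: 'text' and 'label'
```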

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 150
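
For reproducibility, the list above maps onto transformers TrainingArguments roughly as sketched below. The output_dir is a placeholder, evaluation_strategy="epoch" and logging_steps=10 are inferred from the results table (one eval per epoch; the training-loss column updates every 10 steps), and the stated Adam betas/epsilon match the Trainer defaults, so they need no explicit flags:

```python
from transformers import TrainingArguments

# A sketch of the configuration above; placeholders and inferences are marked.
args = TrainingArguments(
    output_dir="best_model-yelp_polarity-16-13",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,  # exceeds the 150 total steps taken, so the LR never left the warmup ramp
    num_train_epochs=150,
    evaluation_strategy="epoch",  # inferred: the table reports one eval per epoch
    logging_steps=10,             # inferred: the loss column updates every 10 steps
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
```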

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.7228 | 0.5 |
| No log | 2.0 | 2 | 0.7227 | 0.5 |
| No log | 3.0 | 3 | 0.7227 | 0.5 |
| No log | 4.0 | 4 | 0.7225 | 0.5 |
| No log | 5.0 | 5 | 0.7224 | 0.5 |
| No log | 6.0 | 6 | 0.7221 | 0.5 |
| No log | 7.0 | 7 | 0.7219 | 0.5 |
| No log | 8.0 | 8 | 0.7216 | 0.5 |
| No log | 9.0 | 9 | 0.7213 | 0.5 |
| 0.7034 | 10.0 | 10 | 0.7209 | 0.5 |
| 0.7034 | 11.0 | 11 | 0.7205 | 0.5 |
| 0.7034 | 12.0 | 12 | 0.7200 | 0.5 |
| 0.7034 | 13.0 | 13 | 0.7195 | 0.5 |
| 0.7034 | 14.0 | 14 | 0.7189 | 0.5 |
| 0.7034 | 15.0 | 15 | 0.7183 | 0.5 |
| 0.7034 | 16.0 | 16 | 0.7177 | 0.5 |
| 0.7034 | 17.0 | 17 | 0.7170 | 0.5 |
| 0.7034 | 18.0 | 18 | 0.7163 | 0.5 |
| 0.7034 | 19.0 | 19 | 0.7156 | 0.5 |
| 0.6925 | 20.0 | 20 | 0.7148 | 0.5 |
| 0.6925 | 21.0 | 21 | 0.7140 | 0.5 |
| 0.6925 | 22.0 | 22 | 0.7132 | 0.5 |
| 0.6925 | 23.0 | 23 | 0.7123 | 0.5 |
| 0.6925 | 24.0 | 24 | 0.7113 | 0.5 |
| 0.6925 | 25.0 | 25 | 0.7104 | 0.5 |
| 0.6925 | 26.0 | 26 | 0.7093 | 0.5 |
| 0.6925 | 27.0 | 27 | 0.7082 | 0.5 |
| 0.6925 | 28.0 | 28 | 0.7071 | 0.5 |
| 0.6925 | 29.0 | 29 | 0.7059 | 0.5 |
| 0.6581 | 30.0 | 30 | 0.7047 | 0.5 |
| 0.6581 | 31.0 | 31 | 0.7034 | 0.5 |
| 0.6581 | 32.0 | 32 | 0.7021 | 0.5 |
| 0.6581 | 33.0 | 33 | 0.7007 | 0.5 |
| 0.6581 | 34.0 | 34 | 0.6991 | 0.5 |
| 0.6581 | 35.0 | 35 | 0.6975 | 0.5 |
| 0.6581 | 36.0 | 36 | 0.6958 | 0.5 |
| 0.6581 | 37.0 | 37 | 0.6941 | 0.5 |
| 0.6581 | 38.0 | 38 | 0.6923 | 0.5 |
| 0.6581 | 39.0 | 39 | 0.6904 | 0.5 |
| 0.6325 | 40.0 | 40 | 0.6883 | 0.5 |
| 0.6325 | 41.0 | 41 | 0.6862 | 0.5 |
| 0.6325 | 42.0 | 42 | 0.6841 | 0.5 |
| 0.6325 | 43.0 | 43 | 0.6818 | 0.5 |
| 0.6325 | 44.0 | 44 | 0.6794 | 0.5 |
| 0.6325 | 45.0 | 45 | 0.6770 | 0.5 |
| 0.6325 | 46.0 | 46 | 0.6745 | 0.5312 |
| 0.6325 | 47.0 | 47 | 0.6718 | 0.5312 |
| 0.6325 | 48.0 | 48 | 0.6690 | 0.5312 |
| 0.6325 | 49.0 | 49 | 0.6662 | 0.5625 |
| 0.573 | 50.0 | 50 | 0.6633 | 0.5625 |
| 0.573 | 51.0 | 51 | 0.6602 | 0.5625 |
| 0.573 | 52.0 | 52 | 0.6571 | 0.5625 |
| 0.573 | 53.0 | 53 | 0.6538 | 0.5625 |
| 0.573 | 54.0 | 54 | 0.6504 | 0.5625 |
| 0.573 | 55.0 | 55 | 0.6469 | 0.5625 |
| 0.573 | 56.0 | 56 | 0.6435 | 0.5625 |
| 0.573 | 57.0 | 57 | 0.6401 | 0.625 |
| 0.573 | 58.0 | 58 | 0.6368 | 0.625 |
| 0.573 | 59.0 | 59 | 0.6336 | 0.6562 |
| 0.5136 | 60.0 | 60 | 0.6305 | 0.6875 |
| 0.5136 | 61.0 | 61 | 0.6273 | 0.6562 |
| 0.5136 | 62.0 | 62 | 0.6240 | 0.6562 |
| 0.5136 | 63.0 | 63 | 0.6206 | 0.6562 |
| 0.5136 | 64.0 | 64 | 0.6172 | 0.6875 |
| 0.5136 | 65.0 | 65 | 0.6138 | 0.6875 |
| 0.5136 | 66.0 | 66 | 0.6105 | 0.6875 |
| 0.5136 | 67.0 | 67 | 0.6072 | 0.6875 |
| 0.5136 | 68.0 | 68 | 0.6038 | 0.6875 |
| 0.5136 | 69.0 | 69 | 0.6004 | 0.6875 |
| 0.4388 | 70.0 | 70 | 0.5968 | 0.6875 |
| 0.4388 | 71.0 | 71 | 0.5931 | 0.7188 |
| 0.4388 | 72.0 | 72 | 0.5893 | 0.75 |
| 0.4388 | 73.0 | 73 | 0.5854 | 0.75 |
| 0.4388 | 74.0 | 74 | 0.5814 | 0.75 |
| 0.4388 | 75.0 | 75 | 0.5773 | 0.75 |
| 0.4388 | 76.0 | 76 | 0.5732 | 0.75 |
| 0.4388 | 77.0 | 77 | 0.5695 | 0.7812 |
| 0.4388 | 78.0 | 78 | 0.5660 | 0.7812 |
| 0.4388 | 79.0 | 79 | 0.5626 | 0.7812 |
| 0.3545 | 80.0 | 80 | 0.5590 | 0.7812 |
| 0.3545 | 81.0 | 81 | 0.5553 | 0.7812 |
| 0.3545 | 82.0 | 82 | 0.5514 | 0.8125 |
| 0.3545 | 83.0 | 83 | 0.5476 | 0.7812 |
| 0.3545 | 84.0 | 84 | 0.5437 | 0.7812 |
| 0.3545 | 85.0 | 85 | 0.5396 | 0.7812 |
| 0.3545 | 86.0 | 86 | 0.5358 | 0.7812 |
| 0.3545 | 87.0 | 87 | 0.5316 | 0.7812 |
| 0.3545 | 88.0 | 88 | 0.5277 | 0.7812 |
| 0.3545 | 89.0 | 89 | 0.5238 | 0.7812 |
| 0.2725 | 90.0 | 90 | 0.5197 | 0.7812 |
| 0.2725 | 91.0 | 91 | 0.5159 | 0.7812 |
| 0.2725 | 92.0 | 92 | 0.5120 | 0.7812 |
| 0.2725 | 93.0 | 93 | 0.5079 | 0.7812 |
| 0.2725 | 94.0 | 94 | 0.5034 | 0.7812 |
| 0.2725 | 95.0 | 95 | 0.4983 | 0.7812 |
| 0.2725 | 96.0 | 96 | 0.4934 | 0.7812 |
| 0.2725 | 97.0 | 97 | 0.4885 | 0.7812 |
| 0.2725 | 98.0 | 98 | 0.4835 | 0.7812 |
| 0.2725 | 99.0 | 99 | 0.4790 | 0.8125 |
| 0.199 | 100.0 | 100 | 0.4751 | 0.8125 |
| 0.199 | 101.0 | 101 | 0.4714 | 0.8125 |
| 0.199 | 102.0 | 102 | 0.4677 | 0.8125 |
| 0.199 | 103.0 | 103 | 0.4634 | 0.8438 |
| 0.199 | 104.0 | 104 | 0.4585 | 0.8438 |
| 0.199 | 105.0 | 105 | 0.4532 | 0.875 |
| 0.199 | 106.0 | 106 | 0.4484 | 0.875 |
| 0.199 | 107.0 | 107 | 0.4439 | 0.875 |
| 0.199 | 108.0 | 108 | 0.4400 | 0.875 |
| 0.199 | 109.0 | 109 | 0.4363 | 0.875 |
| 0.1406 | 110.0 | 110 | 0.4329 | 0.875 |
| 0.1406 | 111.0 | 111 | 0.4296 | 0.875 |
| 0.1406 | 112.0 | 112 | 0.4259 | 0.875 |
| 0.1406 | 113.0 | 113 | 0.4219 | 0.8438 |
| 0.1406 | 114.0 | 114 | 0.4176 | 0.8438 |
| 0.1406 | 115.0 | 115 | 0.4138 | 0.8438 |
| 0.1406 | 116.0 | 116 | 0.4108 | 0.8438 |
| 0.1406 | 117.0 | 117 | 0.4077 | 0.8438 |
| 0.1406 | 118.0 | 118 | 0.4042 | 0.8438 |
| 0.1406 | 119.0 | 119 | 0.4003 | 0.8438 |
| 0.0921 | 120.0 | 120 | 0.3968 | 0.8438 |
| 0.0921 | 121.0 | 121 | 0.3936 | 0.8438 |
| 0.0921 | 122.0 | 122 | 0.3905 | 0.8438 |
| 0.0921 | 123.0 | 123 | 0.3878 | 0.8438 |
| 0.0921 | 124.0 | 124 | 0.3851 | 0.8438 |
| 0.0921 | 125.0 | 125 | 0.3823 | 0.8438 |
| 0.0921 | 126.0 | 126 | 0.3802 | 0.8438 |
| 0.0921 | 127.0 | 127 | 0.3786 | 0.8438 |
| 0.0921 | 128.0 | 128 | 0.3769 | 0.8125 |
| 0.0921 | 129.0 | 129 | 0.3748 | 0.8125 |
| 0.0543 | 130.0 | 130 | 0.3721 | 0.8125 |
| 0.0543 | 131.0 | 131 | 0.3700 | 0.8125 |
| 0.0543 | 132.0 | 132 | 0.3685 | 0.8125 |
| 0.0543 | 133.0 | 133 | 0.3687 | 0.8125 |
| 0.0543 | 134.0 | 134 | 0.3699 | 0.8125 |
| 0.0543 | 135.0 | 135 | 0.3711 | 0.8125 |
| 0.0543 | 136.0 | 136 | 0.3719 | 0.8125 |
| 0.0543 | 137.0 | 137 | 0.3716 | 0.8125 |
| 0.0543 | 138.0 | 138 | 0.3706 | 0.8438 |
| 0.0543 | 139.0 | 139 | 0.3699 | 0.8438 |
| 0.0313 | 140.0 | 140 | 0.3692 | 0.875 |
| 0.0313 | 141.0 | 141 | 0.3690 | 0.875 |
| 0.0313 | 142.0 | 142 | 0.3690 | 0.875 |
| 0.0313 | 143.0 | 143 | 0.3698 | 0.875 |
| 0.0313 | 144.0 | 144 | 0.3715 | 0.875 |
| 0.0313 | 145.0 | 145 | 0.3737 | 0.875 |
| 0.0313 | 146.0 | 146 | 0.3766 | 0.875 |
| 0.0313 | 147.0 | 147 | 0.3798 | 0.875 |
| 0.0313 | 148.0 | 148 | 0.3838 | 0.875 |
| 0.0313 | 149.0 | 149 | 0.3884 | 0.875 |
| 0.0183 | 150.0 | 150 | 0.3928 | 0.875 |

Framework versions

  • Transformers 4.32.0.dev0
  • Pytorch 2.0.1+cu118
  • Datasets 2.4.0
  • Tokenizers 0.13.3
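
Note that 4.32.0.dev0 is a development build of transformers, installed from the git source rather than PyPI. A quick sketch for checking a local environment against these versions:

```python
# Compare installed versions against those listed in the card.
import datasets
import tokenizers
import torch
import transformers

for mod, want in [
    (transformers, "4.32.0.dev0"),
    (torch, "2.0.1+cu118"),
    (datasets, "2.4.0"),
    (tokenizers, "0.13.3"),
]:
    print(f"{mod.__name__:12s} installed={mod.__version__:14s} card={want}")
```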

Model tree for simonycl/best_model-yelp_polarity-16-13

  • Finetuned from: albert-base-v2