TomPei committed on
Commit 4a142b9
1 Parent(s): 32af9ba

Update README.md

Files changed (1)
  1. README.md +7 -2
README.md CHANGED
@@ -86,7 +86,7 @@ We recorded 100,000 data samples along with their scores, creating the dataset `
- # abaltion experiments
+ # Ablation experiments
  Through a set of carefully designed ablation studies, we compared the effects of the Chinese-fineweb-edu dataset with those of traditional Chinese pre-training corpora.
  For this purpose, we randomly sampled from five datasets—CCI2-Data, SkyPile-150B, TeleChat-PTD, IndustryCorpus, and MAP-CC—in proportions matching the Chinese-fineweb-edu dataset, constructing a comparison dataset named chinese-random-select.
  In our experiments, we used a model with 2.1 billion parameters and trained it for 65k steps on each dataset.
@@ -97,6 +97,10 @@ The results distinctly show that the dataset trained on Chinese-fineweb-edu sign
  <img width="900px" alt="experiment" src="./chinese-fineweb-benchmark.png">
  </p>
 
+ The experimental results show that in the later stage of training, once the run enters its second epoch and the learning rate begins to decay rapidly, the model trained on the chinese-fineweb-edu data shows a marked increase in accuracy,
+ whereas the model trained on the randomly selected data remains at a lower level. This demonstrates that the high-quality data of chinese-fineweb-edu significantly improves training effectiveness.
+ For the same training duration, it raises model capability faster and saves training resources.
+ This outcome closely mirrors the data ablation experiments that HuggingFace conducted for fineweb-edu.
 
  **We warmly invite developers and researchers interested in this field to follow and engage with the community, working together to advance the technology. Stay tuned for the open-source release of the dataset!**
 
@@ -191,7 +195,8 @@ The raw data of the Chinese Fineweb Edu dataset comes from a wide range of sources, covering multiple domestic
  </p>
 
 
- The experimental results show that in the later stage of training, once the run enters its second epoch and the learning rate enters its rapid-decay phase, the accuracy of the model trained on chinese-fineweb-edu rises noticeably, while the model trained on the randomly sampled data stays at a lower level. This demonstrates that the high-quality data of chinese-fineweb-edu significantly helps model training: for the same training time, it improves model capability faster and saves training resources. This result also mirrors the HuggingFace fineweb-edu data ablation experiments.
+ The experimental results show that in the later stage of training, once the run enters its second epoch and the learning rate enters its rapid-decay phase, the accuracy of the model trained on chinese-fineweb-edu rises noticeably, while the model trained on the randomly sampled data stays at a lower level.
+ This demonstrates that the high-quality data of chinese-fineweb-edu significantly helps model training: for the same training time, it improves model capability faster and saves training resources. This result also mirrors the HuggingFace fineweb-edu data ablation experiments.
 
 
 
 
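As a companion to the sampling procedure described in the diff above, the snippet below sketches how a proportional random-sampling corpus such as chinese-random-select could be assembled with the Hugging Face `datasets` library. The Hub repo IDs, the per-source proportions, and the total sample count are illustrative assumptions only; the actual mix was drawn in proportion to the composition of Chinese-fineweb-edu, which this commit does not specify.

```python
# Minimal sketch: build a "chinese-random-select"-style comparison corpus by
# drawing a fixed share of samples from several source datasets.
# NOTE: repo IDs, proportions, and TARGET_TOTAL are assumptions for illustration.
from datasets import load_dataset, concatenate_datasets

TARGET_TOTAL = 1_000_000  # assumed total number of comparison samples

# Assumed per-source shares (must sum to 1.0); the real shares would follow
# the composition of Chinese-fineweb-edu.
proportions = {
    "BAAI/CCI2-Data": 0.2,
    "Skywork/SkyPile-150B": 0.2,
    "Tele-AI/TeleChat-PTD": 0.2,
    "BAAI/IndustryCorpus": 0.2,
    "m-a-p/MAP-CC": 0.2,
}

parts = []
for repo_id, share in proportions.items():
    n = int(TARGET_TOTAL * share)
    ds = load_dataset(repo_id, split="train")  # assumes a "train" split exists
    # Shuffle with a fixed seed, then keep the first n rows as the random sample.
    parts.append(ds.shuffle(seed=42).select(range(n)))

chinese_random_select = concatenate_datasets(parts).shuffle(seed=42)
chinese_random_select.save_to_disk("chinese-random-select")
```

Shuffling with a fixed seed before `select` keeps the draw reproducible; for corpora of this scale, streaming or sharded sampling would be the more practical route.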