yxdyc committed
Commit: c3b68d6
Parent: d37df3a

Update README.md

Files changed (1)
README.md (+3 −1)
README.md CHANGED
@@ -27,4 +27,6 @@ The model architecture is LLaMA-1.3B and we adopt the [OpenLLaMA](https://github
 The model is pre-trained on 100B tokens of Data-Juicer's refined RedPajama and Pile.
 It achieves an average score of 33.07 over 16 HELM tasks, beating LLMs trained on original RedPajama and Pile datasets.
 
-For more details, please refer to our [paper](https://arxiv.org/abs/2309.02033).
+For more details, please refer to our [paper](https://arxiv.org/abs/2309.02033).
+
+![exp_llama](https://img.alicdn.com/imgextra/i2/O1CN019WtUPP1uhebnDlPR8_!!6000000006069-2-tps-2530-1005.png)