calvintwr committed on
Commit
58d274b
1 Parent(s): 39f866a

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+1.5-Pints-16K-v0.1-bf16.gguf filter=lfs diff=lfs merge=lfs -text
+1.5-Pints-16K-v0.1-fp16.gguf filter=lfs diff=lfs merge=lfs -text
+1.5-Pints-16K-v0.1-fp32.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1-bf16.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b2b6e705176d846629fdf7d9ee21b2b090917b41ff601da654435045f9976753
size 3132707424
1.5-Pints-16K-v0.1-fp16.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ab516262892d2db8bb8aebf4f198cddc1e1cea4dc9e2325de0ce0e20ca033dac
size 3132707424
1.5-Pints-16K-v0.1-fp32.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:228907b1c404e6ec2c71abca661006d2955f072430b85c308b13e6c6cc3df8d7
size 6264279648
LICENSE ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2024 Pints.ai

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
README.md ADDED
@@ -0,0 +1,189 @@
---
license: mit
datasets:
- pints-ai/Expository-Prose-V1
- HuggingFaceH4/ultrachat_200k
- Open-Orca/SlimOrca-Dedup
- meta-math/MetaMathQA
- HuggingFaceH4/deita-10k-v0-sft
- WizardLM/WizardLM_evol_instruct_V2_196k
- togethercomputer/llama-instruct
- LDJnr/Capybara
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
model-index:
- name: 1.5-Pints
  results:
  - task:
      type: text-generation
    dataset:
      name: MTBench
      type: ai2_arc
    metrics:
    - name: MTBench
      type: LLM-as-a-Judge
      value: 3.4
    source:
      name: MTBench
      url: https://huggingface.co/spaces/lmsys/mt-bench
pipeline_tag: text-generation
---

# 1.5-Pints -- A model pretrained in 9 days using high-quality data

## How to use

**Build llama.cpp**
Refer to https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md for build instructions.

**Download Model**
```bash
# Cloning GGUF repos requires git-lfs (https://git-lfs.com) to pull the model files.
git clone https://huggingface.co/pints-ai/1.5-Pints-16K-v0.1-GGUF PATH/TO/MODEL
```

**Usage**

```bash
# FP32
./llama-cli --model PATH/TO/MODEL/1.5-Pints-16K-v0.1-fp32.gguf --n-gpu-layers 999 --repeat-penalty 1.3 --prompt "Predict what life will be like 100 years from now."

# FP16
./llama-cli --model PATH/TO/MODEL/1.5-Pints-16K-v0.1-fp16.gguf --n-gpu-layers 999 --repeat-penalty 1.3 --prompt "Predict what life will be like 100 years from now."
```
**Note:** As at the time of publishing, `bf16` is slow on llama.cpp (CUDA) and is therefore not recommended.
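
The model can also be driven from Python through the `llama-cpp-python` bindings. A minimal sketch, assuming the GGUF ships with its ChatML chat template (see the Prompt template section below); otherwise format the prompt manually:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="PATH/TO/MODEL/1.5-Pints-16K-v0.1-fp16.gguf",
    n_ctx=16384,      # the model's full 16K context window
    n_gpu_layers=-1,  # offload all layers to the GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Predict what life will be like 100 years from now."}],
    repeat_penalty=1.3,  # recommended setting, per this model card
)
print(out["choices"][0]["message"]["content"])
```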

**Compute Infrastructure**<br>
This model can be served on a GPU with at least 8 GB of VRAM.
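
As a rough check on that figure: the fp16 weights (about 2.9 GiB, per the LFS file size in this repo) plus a full 16K-token fp16 KV cache leave ample headroom in 8 GB. A back-of-the-envelope estimate, using values derived from the architecture table below:

```python
# Back-of-the-envelope VRAM estimate, not a measured figure.
# Assumes an fp16 KV cache; layer/head values come from the architecture table.
layers, kv_heads, head_dim, ctx = 24, 4, 64, 16_384
weights = 3_132_707_424                                # fp16 GGUF size in this repo
kv_cache = 2 * layers * kv_heads * head_dim * ctx * 2  # K and V, 2 bytes per value
print(f"{(weights + kv_cache) / 2**30:.1f} GiB")       # ~3.3 GiB + runtime overhead
```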
<br><br>

## Description
1.5-Pints is a Large Language Model that significantly advances the efficiency of LLM training by emphasizing data quality over quantity. Our [pre-training corpus](https://huggingface.co/datasets/pints-ai/Expository-Prose-V1) is a meticulously curated dataset of 57 billion tokens, making pre-training more accessible and environmentally friendly.
<br><br>

## Results
**MTBench**<br>
[MTBench](https://huggingface.co/spaces/lmsys/mt-bench) is a popular evaluation harness that uses strong LLMs such as GPT-4 as judges to assess the quality of model responses.

| Model | Score | Parameter Size | Pretrain Tokens |
|:-:|:-:|:-:|:-:|
| meta-llama/Llama-2-7b-chat-hf | 6.27 | 7B | 2T |
| microsoft/phi-2 | 5.83 | 2.7B | 1.4T |
| google/gemma-2b-it | 5.44 | 2B | 3T |
| stabilityai/stablelm-2-1_6b-chat | 4.70 | 1.6B | 2T |
| **1.5-Pints-2K** | **3.73** | **1.57B** | **0.115T** |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | 3.72 | 1.1B | 3T |
| **1.5-Pints-16K** | **3.40** | **1.57B** | **0.115T** |
| apple/OpenELM-1_1B-Instruct | 3.34 | 1B | 1.8T |
| microsoft/phi-1_5 | 3.33 | 1.3B | 0.15T |
| databricks/dolly-v2-3b | 2.33 | 3B | 0.3T |
| EleutherAI/pythia-2.8b | 1.81 | 2.8B | 0.3T |
| tiiuae/falcon-rw-1b | 1.18 | 1B | 0.35T |
<br><br>

The 2K context window version of 1.5-Pints can be found [here](https://huggingface.co/pints-ai/1.5-Pints-2K-v0.1-GGUF).

## Technical Specifications
**Architecture**<br>
Llama 2 autoregressive model with a **16K context window** and the Mistral tokenizer. The model weights are stored in Float32 precision.

| Parameters | Vocab Size | Embedding Size | Context Length | Layers | Heads | Query Groups | Intermediate Hidden Size |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| 1,565,886,464 | 32,064 | 2,048 | 16,384 | 24 | 32 | 4 | 8,192 |
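
As a sanity check, the parameter count can be reproduced from the other columns, assuming the standard Llama 2 layout (grouped-query attention, SwiGLU MLP with gate/up/down projections, RMSNorm, untied input/output embeddings). A sketch under those assumptions, not the authors' code:

```python
# Reproduce the parameter count from the configuration above, assuming a
# standard Llama-2-style block (GQA attention, SwiGLU MLP, RMSNorm).
vocab, d, layers, heads, kv_groups, ffn = 32_064, 2_048, 24, 32, 4, 8_192

head_dim = d // heads            # 64
kv_dim = kv_groups * head_dim    # 256, thanks to grouped-query attention

attn = d * d + 2 * d * kv_dim + d * d  # Q, K, V and O projections
mlp = 3 * d * ffn                      # gate, up and down projections
block = attn + mlp + 2 * d             # plus two RMSNorms per layer

total = vocab * d + layers * block + d + d * vocab  # embeddings, blocks, final norm, LM head
print(total)  # 1565886464 -- matches the table
```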

**Context Lengths**<br>
1.5-Pints comes in two context lengths: 16K (16,384 tokens) and 2K (2,048 tokens).

**Prompt template**<br>
This model has been finetuned and preference-optimized using the ChatML template.
```
<|im_start|>system
{SYSTEM_PROMPT}<|im_end|>
<|im_start|>user
{PROMPT}<|im_end|>
<|im_start|>assistant
```
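
A minimal Python helper that renders this template for a single turn (the function name and default system prompt are illustrative, not part of the release):

```python
def render_chatml(prompt: str, system_prompt: str = "You are a helpful assistant.") -> str:
    """Render a single-turn ChatML prompt. The trailing '<|im_start|>assistant'
    line cues the model to generate; stop generation at '<|im_end|>'."""
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(render_chatml("Predict what life will be like 100 years from now."))
```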
<br><br>

## Uses
**Direct Use**<br>
This model is meant to be an efficient, fine-tunable, helpful assistant. It is designed to excel at user assistance and reasoning while relying less on internal knowledge and factual recall. For knowledge-retrieval purposes, it should therefore be used with Retrieval Augmented Generation (RAG), as sketched below.
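
A minimal sketch of that pattern, where `retrieve` is a hypothetical stand-in for any retriever (BM25, embedding search, etc.) and is not part of this release:

```python
def retrieve(query: str, k: int = 3) -> list[str]:
    """Hypothetical retriever: return the k passages most relevant to the query."""
    ...

def build_rag_prompt(question: str) -> str:
    # Ground the answer in retrieved passages instead of internal knowledge.
    context = "\n\n".join(retrieve(question))
    system = (
        "Answer using only the context below. If the answer is not in the "
        f"context, say so.\n\nContext:\n{context}"
    )
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{question}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
```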

**Downstream Use**<br>
Given the size of this model, multiple instances can be launched for agentic use cases without breaking the compute bank.

**Recommendations**<br>
- It is recommended to finetune this model for domain adaptation and to use it for specialized tasks.
- To get the best performance, use a repetition penalty of 1.3 rather than 1.
<br><br>

## Training Data
**Pre-Train Data**<br>
Dataset: [pints-ai/Expository-Prose-V1](https://huggingface.co/datasets/pints-ai/Expository-Prose-V1)

**Fine-Tune Data**<br>
Corpora:
- [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
- [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
- [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)
- [HuggingFaceH4/deita-10k-v0-sft](https://huggingface.co/datasets/HuggingFaceH4/deita-10k-v0-sft)
- [WizardLM/WizardLM_evol_instruct_V2_196k](https://huggingface.co/datasets/WizardLMTeam/WizardLM_evol_instruct_V2_196k)
- [togethercomputer/llama-instruct](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct)
- [LDJnr/Capybara](https://huggingface.co/datasets/LDJnr/Capybara)

**DPO Data**<br>
Dataset: [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)
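
All of the corpora above are hosted on the Hugging Face Hub and can be inspected with the `datasets` library (a generic sketch; the split name is an assumption):

```python
# pip install datasets
from datasets import load_dataset

# Stream the pre-training corpus rather than downloading all 57B tokens.
# The "train" split name is an assumption -- check the dataset card.
corpus = load_dataset("pints-ai/Expository-Prose-V1", split="train", streaming=True)
print(next(iter(corpus)))
```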
<br><br>

## Training Procedure
Both pre-training and finetuning used [our fork](https://github.com/Pints-AI/1.5-Pints) of the [LitGPT framework](https://github.com/Lightning-AI/litgpt). For DPO, we used the methods set out in [The Alignment Handbook](https://github.com/huggingface/alignment-handbook/blob/main/scripts/run_dpo.py). More details can be found in our [paper](TOBEADDED).

## Training Hyperparameters
**Pre-Train**<br>
| Hyperparameter | Value |
|:-:|:-:|
| Optimizer | AdamW (β1=0.9, β2=0.95) |
| Learning Rate Scheduler | Cosine |
| Max Learning Rate | 4.0 × 10⁻⁴ |
| Min Learning Rate | 4.0 × 10⁻⁵ |
| Warmup Steps | 2,000 |
| Batch Size | 2,097,152 |
| Weight Decay | 0.1 |
| Gradient Clipping Threshold | 1.0 |
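
For intuition, these settings imply roughly 55K optimizer steps over the 0.115T-token corpus, with the schedule sketched below (assuming the batch size is counted in tokens and a standard warmup-then-cosine decay; not the authors' exact code):

```python
import math

# ~54,836 steps, assuming the batch size above is tokens per step (2**21).
total_steps = int(0.115e12 / 2_097_152)
max_lr, min_lr, warmup = 4.0e-4, 4.0e-5, 2_000

def lr_at(step: int) -> float:
    """Linear warmup to max_lr, then cosine decay to min_lr."""
    if step < warmup:
        return max_lr * step / warmup
    progress = (step - warmup) / (total_steps - warmup)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))

print(total_steps, lr_at(warmup), lr_at(total_steps))  # peak at warmup end, min at the end
```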

**SFT**<br>
| Hyperparameter | Value |
|:-:|:-:|
| Optimizer | AdamW (β1=0.9, β2=0.95) |
| Warmup Steps | 1,126 (10%) |
| Peak Learning Rate | 2.0 × 10⁻⁵ |
| Learning Rate Scheduler | Cosine |
| Weight Decay | 0.1 |

**DPO**<br>
The DPO parameters used are exactly those specified in [The Alignment Handbook](https://github.com/huggingface/alignment-handbook).
<br><br>

## Citation
**Attribution**
- **Developed by:** [calvintwr](https://huggingface.co/calvintwr), [lemousehunter](https://huggingface.co/lemousehunter)
- **Funded by:** [PintsAI](https://pints.ai/)
- **Released by:** [PintsAI](https://pints.ai/)
- **Model type:** Large Language Model
- **Language(s) (NLP):** English
- **License:** [MIT License](https://opensource.org/license/mit)
<br><br>

**BibTeX:**
[More Information Needed]

**APA**
[More Information Needed]
<br><br>

## Legal Warning
Though best efforts have been made to ensure, as far as possible, that all texts in the training corpora are royalty-free, this does not constitute a legal guarantee that this is the case. **By using any of the models, corpora, or any part thereof, the user agrees to bear full responsibility for doing the necessary due diligence to ensure compliance with their local copyright laws.**

Additionally, the **user agrees to bear any damages** arising as a direct cause (or otherwise) of using any artifacts released by the Pints research team, as well as full responsibility for the consequences of their usage (or implementation) of any such released artifacts. The user also indemnifies the Pints Research Team (and any of its members or agents) against any damage, related or unrelated, arising from the release or subsequent usage of any findings, artifacts, or code by the team.

For the avoidance of doubt, **any artifacts released by the Pints Research Team are released in accordance with the "fair use"** clause of copyright law, in the hope that this will help the research community bring LLMs to the next frontier.
config.json ADDED
@@ -0,0 +1,3 @@
{
  "model_type": "llama"
}