Files changed (1)
  1. README.md +106 -0
README.md CHANGED
@@ -107,6 +107,98 @@ model-index:
     source:
       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AbacusResearch/Jallabi-34B
       name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 35.29
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=AbacusResearch/Jallabi-34B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 43.62
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=AbacusResearch/Jallabi-34B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 3.93
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=AbacusResearch/Jallabi-34B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 11.86
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=AbacusResearch/Jallabi-34B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 20.24
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=AbacusResearch/Jallabi-34B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 40.91
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=AbacusResearch/Jallabi-34B
+      name: Open LLM Leaderboard
 ---
 
 These are the llama-only weights of https://huggingface.co/liuhaotian/llava-v1.6-34b . The CLIP encoder part is removed and this model is llama weights only that can be loaded using
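
The context line above ends by noting that the llama-only weights can be loaded directly. A minimal sketch of what that might look like with the standard transformers causal-LM API is below; the `torch.float16` dtype and `device_map="auto"` choices (the latter requires `accelerate`) are assumptions, not settings taken from the model card.

```python
# Minimal sketch: load the llama-only weights as a plain causal LM with transformers.
# Dtype and device placement below are assumptions, not from the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AbacusResearch/Jallabi-34B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 34B parameters; half precision to reduce memory
    device_map="auto",          # spread across available GPUs (needs accelerate)
)

prompt = "Summarize what LLaVA-v1.6-34B is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```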
 
@@ -124,3 +216,17 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
 |Winogrande (5-shot) |81.45|
 |GSM8k (5-shot) |65.20|
 
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AbacusResearch__Jallabi-34B)
+
+|      Metric       |Value|
+|-------------------|----:|
+|Avg.               |25.97|
+|IFEval (0-Shot)    |35.29|
+|BBH (3-Shot)       |43.62|
+|MATH Lvl 5 (4-Shot)| 3.93|
+|GPQA (0-shot)      |11.86|
+|MuSR (0-shot)      |20.24|
+|MMLU-PRO (5-shot)  |40.91|
+
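
The scores added in this diff live both in the table above and in the `model-index` YAML added in the first hunk, so they can be read back programmatically. The sketch below is an assumption-laden example, not part of the PR: it presumes a recent `huggingface_hub` release where `ModelCard.load` exposes the metadata as `card.data.eval_results`, and it re-derives the "Avg." row as the plain mean of the six benchmarks, (35.29 + 43.62 + 3.93 + 11.86 + 20.24 + 40.91) / 6 ≈ 25.97.

```python
# Rough sketch (assuming a recent huggingface_hub): read the model-index metadata
# added in this PR and recompute the leaderboard average shown in the table.
from huggingface_hub import ModelCard

card = ModelCard.load("AbacusResearch/Jallabi-34B")
results = card.data.eval_results or []

# Keep only the six Open LLM Leaderboard v2 benchmarks listed in this diff.
wanted = {
    "IFEval (0-Shot)",
    "BBH (3-Shot)",
    "MATH Lvl 5 (4-Shot)",
    "GPQA (0-shot)",
    "MuSR (0-shot)",
    "MMLU-PRO (5-shot)",
}
scores = {r.dataset_name: r.metric_value for r in results if r.dataset_name in wanted}

for name, value in sorted(scores.items()):
    print(f"{name:<20} {value:>6.2f}")

if scores:
    # Expected to print roughly 25.97, matching the "Avg." row above.
    print(f"{'Avg.':<20} {sum(scores.values()) / len(scores):>6.2f}")
```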