Update README.md
base_model:
- vanillaOVO/supermario_v3
---
# Open-LLM Benchmark Results

MixtureofMerges-MoE-4x7b-v3 Open LLM Leaderboard 📑

Average: 75.31
ARC: 74.40
HellaSwag: 88.62
MMLU: 64.82
TruthfulQA: 70.78
Winogrande: 85
GSM8K: 68.23
# MixtureofMerges-MoE-4x7b-v3

MixtureofMerges-MoE-4x7b-v3 is a Mixture of Experts (MoE) model made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
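Merges of this kind are driven by a mergekit-moe YAML config, which LazyMergekit generates from the chosen models. A minimal sketch follows; the base model, the first expert name, and all `positive_prompts` strings below are hypothetical placeholders (only `vanillaOVO/supermario_v3` appears in this card's metadata), and the exact fields depend on the mergekit version:

```yaml
# Sketch of a mergekit-moe config; model names and prompts are placeholders,
# not this model's actual recipe.
base_model: mistralai/Mistral-7B-v0.1  # placeholder shared base architecture
gate_mode: hidden                      # route tokens by hidden-state similarity to the prompts
dtype: bfloat16
experts:
  - source_model: example-org/expert-a-7b   # hypothetical expert
    positive_prompts:
      - "solve this step by step"
  - source_model: vanillaOVO/supermario_v3  # expert listed in the card's base_model metadata
    positive_prompts:
      - "answer the following question"
```

Each expert contributes its MLP weights, while `positive_prompts` seed the router gates that decide which experts a token activates.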