
Llama-3-11.5B-Instruct-v2

Thank you to Meta for releasing the weights of Meta-Llama-3-8B-Instruct.


This is an upscaled version of Meta-Llama-3-8B-Instruct, created using the techniques developed for chargoddard/mistral-11b-slimorca. The model has been upscaled from 8B to 11.5B parameters without any continued pretraining or fine-tuning.
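For reference, this kind of depth upscaling is usually performed by stacking two overlapping slices of the base model's decoder layers (the passthrough approach behind chargoddard/mistral-11b-slimorca). The sketch below illustrates the idea with the transformers library; the specific layer ranges shown are an assumption for illustration, not the published recipe for this model, and merges like this are normally done with mergekit rather than by hand.

```python
# Minimal sketch of passthrough depth upscaling: stack two overlapping slices
# of Meta-Llama-3-8B-Instruct's 32 decoder layers to reach roughly 11.5B
# parameters. The layer ranges below are illustrative assumptions.
import copy
import torch
from transformers import AutoModelForCausalLM

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

layers = model.model.layers                                 # 32 decoder layers
first = [layers[i] for i in range(0, 24)]                   # layers 0-23
second = [copy.deepcopy(layers[i]) for i in range(8, 32)]   # independent copies of layers 8-31
model.model.layers = torch.nn.ModuleList(first + second)    # 48 layers total
model.config.num_hidden_layers = len(model.model.layers)

# Recent transformers versions index the KV cache by layer_idx,
# so renumber the duplicated attention blocks.
for idx, layer in enumerate(model.model.layers):
    layer.self_attn.layer_idx = idx

model.save_pretrained("Llama-3-11.5B-Instruct-upscaled", safe_serialization=True)
```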

Unlike version 1, this model has no issues at fp16 or at any quantization level.
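As a quick sanity check at half precision, the model can be loaded and queried with the standard transformers chat-template flow. This is a generic usage sketch, not an official example; it assumes the repository id Replete-AI/Llama-3-11.5B-Instruct-V2 and the stock Llama-3 chat template.

```python
# Load the upscaled model at fp16 and run a short chat-style generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Replete-AI/Llama-3-11.5B-Instruct-V2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain depth upscaling in one sentence."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```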

The model that was used to create this one is linked below:

https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct


• Llama-3-11.5B-Instruct-V2

| Metric | Value |
|--------|-------|
| Avg. | 63.91 |
| AI2 Reasoning Challenge (25-shot) | 57.68 |
| HellaSwag (10-shot) | 78.59 |
| MMLU (5-shot) | 67.35 |
| TruthfulQA (0-shot) | 35.86 |
| Winogrande (5-shot) | 74.74 |
| GSM8k (5-shot) | 69.37 |

• Original Meta-Llama-3-8B-Instruct

| Metric | Value |
|--------|-------|
| Avg. | 66.87 |
| AI2 Reasoning Challenge (25-shot) | 60.75 |
| HellaSwag (10-shot) | 78.55 |
| MMLU (5-shot) | 67.07 |
| TruthfulQA (0-shot) | 51.65 |
| Winogrande (5-shot) | 74.51 |
| GSM8k (5-shot) | 68.69 |
