
Official AQLM quantization of meta-llama/Meta-Llama-3.1-8B-Instruct, fine-tuned with PV-Tuning.

For this quantization, we used 2 codebooks of 8 bits each and a group size of 8, i.e. 2 bits per weight on average.
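As a minimal sketch of how the "2x8g8" configuration works out to 2 bits per weight (the helper name below is ours, not part of the AQLM API): each group of 8 weights stores one 8-bit code index per codebook, so two codebooks spend 16 bits on 8 weights.

```python
def aqlm_bits_per_weight(num_codebooks: int, codebook_bits: int, group_size: int) -> float:
    """Average bits per weight: each group of `group_size` weights stores
    one `codebook_bits`-bit code index per codebook."""
    return num_codebooks * codebook_bits / group_size

# 2 codebooks x 8 bits over groups of 8 weights -> 2.0 bits per weight
print(aqlm_bits_per_weight(2, 8, 8))
```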

Results:

| Model | Quantization | MMLU (5-shot) | ArcC | ArcE | HellaSwag | PiQA | Winogrande | Model size, GB |
|---|---|---|---|---|---|---|---|---|
| meta-llama/Meta-Llama-3.1-8B-Instruct | None | 0.6817 | 0.5162 | 0.8186 | 0.5909 | 0.8014 | 0.7364 | 16.1 |
| | 2x8g8 | 0.5533 | 0.4531 | 0.7757 | 0.5459 | 0.7835 | 0.7064 | 3.9 |
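The numbers above can be summarized as a size/accuracy trade-off; the snippet below just recomputes the compression ratio and per-benchmark accuracy drops from the table values (no new measurements).

```python
# Scores copied from the results table above (baseline vs. 2x8g8).
baseline = {"MMLU": 0.6817, "ArcC": 0.5162, "ArcE": 0.8186,
            "HellaSwag": 0.5909, "PiQA": 0.8014, "Winogrande": 0.7364}
quantized = {"MMLU": 0.5533, "ArcC": 0.4531, "ArcE": 0.7757,
             "HellaSwag": 0.5459, "PiQA": 0.7835, "Winogrande": 0.7064}

# ~4.1x smaller on disk (16.1 GB -> 3.9 GB) ...
compression = 16.1 / 3.9

# ... at the cost of a few points on each benchmark (largest drop: MMLU).
drops = {task: baseline[task] - quantized[task] for task in baseline}
```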

Note

We used `lm-eval==0.4.0` for evaluation.
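A minimal inference sketch, assuming the `aqlm` inference kernels and `transformers` are installed (e.g. `pip install aqlm[gpu] transformers`); the `generate` helper is ours, and the imports are deferred into it so the snippet can be loaded without the heavy dependencies.

```python
MODEL_ID = "ISTA-DASLab/Meta-Llama-3.1-8B-Instruct-AQLM-PV-2Bit-2x8-hf"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    # Imported lazily: loading the quantized checkpoint requires aqlm + a GPU.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```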

