FocusMix 7B GGUF

Static quantizations of the original model, produced with llama.cpp release b3557.

Original model: https://huggingface.co/Nelathan/Qwen2-7B-FocusMix
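
As a rough sketch of how quants like these can be reproduced locally, the snippet below drives llama.cpp's conversion and quantization tools from Python. The paths, the checkout location, and the Q4_K_M target type are assumptions for illustration, not a record of how this repository was actually built.

```python
# Hypothetical reproduction sketch: convert the original HF checkpoint to GGUF,
# then statically quantize it with llama.cpp. Paths and the quant type are assumptions.
import subprocess

LLAMA_CPP = "./llama.cpp"          # assumed local checkout at release b3557
MODEL_DIR = "./Qwen2-7B-FocusMix"  # assumed local clone of the original model

# Convert the Hugging Face checkpoint to a full-precision (f16) GGUF file.
subprocess.run(
    ["python", f"{LLAMA_CPP}/convert_hf_to_gguf.py", MODEL_DIR,
     "--outfile", "FocusMix-7B-f16.gguf", "--outtype", "f16"],
    check=True,
)

# Statically quantize to one of the provided bit widths (Q4_K_M shown as an example).
subprocess.run(
    [f"{LLAMA_CPP}/llama-quantize",
     "FocusMix-7B-f16.gguf", "FocusMix-7B-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```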

Prompt format: ChatML

<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
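
This template corresponds to the "chatml" chat format in the third-party llama-cpp-python bindings, as in the sketch below. The GGUF filename is a placeholder, not the name of a file in this repository.

```python
# Sketch only: llama-cpp-python applies the ChatML template shown above when
# chat_format="chatml" is set. The model_path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="FocusMix-7B-Q4_K_M.gguf",  # assumed local path to one of the quants
    chat_format="chatml",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what a GGUF file is in one sentence."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```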

Provided GGUF quantizations: 4-bit, 5-bit, 6-bit, and 8-bit.
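
A single quant can be fetched from the Hub with huggingface_hub; the filename below is hypothetical, so check the repository's file list for the real names.

```python
# Sketch for downloading one quantized file; the filename is an assumption.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Nelathan/Qwen2-7b-FocusMix-GGUF",
    filename="Qwen2-7b-FocusMix.Q4_K_M.gguf",  # hypothetical filename
)
print(path)
```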
