---
tags:
- experimental
- testing
- gguf
- roleplay
- quantized
- mistral
- text-generation-inference
---
**These are quants for an experimental model.**

Quants available: "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"

Original model weights:
https://huggingface.co/Nitral-AI/Eris_PrimeV4-Vision-7B

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/5_Pr7t9cD4MBZRkJ4hwpF.png)

# Vision/multimodal capabilities:
Here is how this would work in practice in a roleplay chat:

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/qGO0nIfZVcyuio5J07sU-.jpeg)

Here is what your SillyTavern Image Captions extension settings should look like:

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UpXOnVrzvsMRYeqMaSOaa.jpeg)

**If you want to use vision functionality:**

* Make sure you are using the latest version of [KoboldCpp](https://github.com/LostRuins/koboldcpp).
* To use the multimodal capabilities of this model, such as **vision**, you also need to load the specified **mmproj** file. You can get it [here](https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/blob/main/mmproj-model-f16.gguf); it is also hosted in this repository inside the **mmproj** folder.
* You can load the **mmproj** file by using the corresponding section in the interface:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)

* For CLI users, you can load the **mmproj** file by adding the respective flag to your usual command (a full launch command is sketched at the end of this card):

```
--mmproj your-mmproj-file.gguf
```

# Quantization information:

**Steps performed:**

```
Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)
```

*Using the latest llama.cpp at the time.*
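For reference, here is a minimal sketch of what those steps look like with llama.cpp's tooling. The tool names shown (`convert.py`, `imatrix`, `quantize`) match llama.cpp builds from around this model's release; newer builds rename the binaries (`llama-imatrix`, `llama-quantize`). All file names, including the calibration text, are hypothetical placeholders.

```
# 1. Base ⇢ GGUF(F16): convert the original HF weights to an F16 GGUF
python convert.py /path/to/Eris_PrimeV4-Vision-7B --outtype f16 --outfile eris-f16.gguf

# 2. GGUF(F16) ⇢ Imatrix-Data(F16): compute an importance matrix from calibration text
./imatrix -m eris-f16.gguf -f calibration.txt -o imatrix.dat

# 3. GGUF(Imatrix-Quants): produce one quantized file per quant type
./quantize --imatrix imatrix.dat eris-f16.gguf eris-Q4_K_M.gguf Q4_K_M
./quantize --imatrix imatrix.dat eris-f16.gguf eris-IQ3_M.gguf IQ3_M
```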
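And the full CLI launch command referenced in the vision section above might look like the following sketch. The GGUF file names are placeholders, and only the model and mmproj flags are shown; add your usual KoboldCpp options as needed.

```
# Launch KoboldCpp with the quantized model plus the mmproj file for vision
python koboldcpp.py \
  --model Eris_PrimeV4-Vision-7B-Q4_K_M.gguf \
  --mmproj mmproj-model-f16.gguf
```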