
Big Tiger Gemma 27B v1 - EXL2 5.6bpw

This is a 5.6bpw EXL2 quant of TheDrummer/Big-Tiger-Gemma-27B-v1.

This quant was made using exllamav2-0.1.8 with the default calibration dataset.
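For reference, a quant like this is typically produced with exllamav2's convert.py script. A hedged sketch follows; the input/output paths are placeholders, and omitting the `-c` flag is what selects the default calibration dataset mentioned above.

```shell
# Sketch: producing a 5.6bpw EXL2 quant with exllamav2's convert.py.
# Paths are placeholders; assumes the FP16 HF model is downloaded locally.
# -i  : input directory (FP16 HF model)
# -o  : scratch/working directory for measurement passes
# -cf : compiled output directory for the finished quant
# -b  : target bits per weight
python convert.py \
    -i /models/Big-Tiger-Gemma-27B-v1 \
    -o /tmp/exl2-work \
    -cf /models/Big-Tiger-Gemma-27B-v1-exl2-5.6bpw \
    -b 5.6
```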

I tested this quant briefly in some random RPs and a few assistant-type tasks, and it seems to work fine.

This quant fits comfortably in 24 GB of VRAM on Windows with no KV cache quantization (using the standard 8k context length).
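A back-of-envelope calculation shows why this fits: at 5.6 bits per weight, the 27B parameters alone take roughly 17.6 GiB, leaving headroom for the 8k KV cache and runtime overhead. This is a rough estimate, not an exact measurement.

```python
# Back-of-envelope VRAM estimate for the quantized weights only.
# Ignores KV cache, activations, and framework overhead, so treat it
# as a lower bound on total usage rather than an exact figure.

def weight_vram_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GiB."""
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / (1024 ** 3)

print(round(weight_vram_gib(27, 5.6), 1))  # ~17.6 GiB for the weights
```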

Prompt Templates

Seems to use Gemma 2 format.
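A minimal sketch of that format, assuming the standard Gemma 2 chat template with `<start_of_turn>`/`<end_of_turn>` markers and the "user"/"model" roles (Gemma 2 has no separate system role):

```python
# Formats a single-turn prompt in the Gemma 2 chat template.
# The model's reply is generated after the trailing "<start_of_turn>model".

def gemma2_prompt(user_message: str) -> str:
    return (
        f"<start_of_turn>user\n{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(gemma2_prompt("Hello!"))
```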

Original readme below


Big Tiger Gemma 27B v1

Decensored Gemma 27B. No refusals so far (other than some rare instances from the 9B version). No apparent brain damage.

In memory of Tiger (the happy street cat on the right)

