# Big Tiger Gemma 27B v1 - EXL2 5.6bpw
This is a 5.6bpw EXL2 quant of TheDrummer/Big-Tiger-Gemma-27B-v1.

The quant was made with exllamav2 0.1.8 using the default calibration dataset.

I tested it briefly in some random RPs and a few assistant-type tasks, and it seems to work fine.

The quant fits nicely in 24 GB of VRAM on Windows with no KV cache quantization (using the standard 8k context length).
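As a rough sanity check, 27B parameters at 5.6 bits per weight is about 19 GB of weights, which leaves headroom for the unquantized 8k KV cache within 24 GB. Below is a minimal loading sketch, assuming the exllamav2 0.1.x Python API; the local path is a placeholder and sampler settings are left at defaults:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Local directory holding this quant's files (placeholder path)
model_dir = "/path/to/Big-Tiger-Gemma-27B-v1-exl2-5.6bpw"

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, max_seq_len=8192, lazy=True)  # 8k context, unquantized KV cache
model.load_autosplit(cache)                                 # spread layers across available VRAM
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()                      # default sampling settings
print(generator.generate_simple("Hello", settings, num_tokens=64))
```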
## Prompt Templates
It seems to use the Gemma 2 prompt format.
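For reference, a sketch of that format built by hand (most frontends apply these tokens automatically via the chat template, and the tokenizer normally prepends `<bos>`):

```python
# Builds a single Gemma 2 chat turn by hand (sketch).
# <start_of_turn>/<end_of_turn> are Gemma 2 special tokens; <bos> is usually
# added by the tokenizer rather than written into the string here.
def format_gemma2_turn(user_message: str) -> str:
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(format_gemma2_turn("Write a haiku about tigers."))
```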
## Original readme below
# Big Tiger Gemma 27B v1
Decensored Gemma 27B. No refusals so far (other than some rare instances from 9B). No apparent brain damage.
- Original: https://huggingface.co/TheDrummer/Big-Tiger-Gemma-27B-v1
- GGUF: https://huggingface.co/TheDrummer/Big-Tiger-Gemma-27B-v1-GGUF
- iMatrix: https://huggingface.co/MarsupialAI/Big-Tiger-Gemma-27B-v1_iMatrix_GGUF (Better PPL)
- EXL2: https://huggingface.co/collections/bullerwins/big-tiger-gemma-27b-v1-exl2-669413088437b4d339c32fbe
In memory of Tiger (the happy street cat on the right)