Alternate quantizations

#1
by ZeroWw - opened

These are my own quantizations (updated almost daily).

The difference from normal quantizations is that I quantize the output and embedding tensors to f16, and the other tensors to q5_k, q6_k, or q8_0.
This produces models that are barely degraded, if at all, while being smaller in size. They run at about 3-6 tokens/sec on CPU only using llama.cpp, and obviously faster on machines with powerful GPUs.
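For anyone who wants to reproduce this kind of mixed-precision quant, here is a minimal sketch using llama.cpp's quantize tool. The `--output-tensor-type` and `--token-embedding-type` flags exist in recent llama.cpp builds (check `llama-quantize --help` on your version), and the file names are placeholders:

```sh
# Sketch of the recipe described above: quantize the bulk of the tensors
# to q6_k while keeping the output and token-embedding tensors at f16.
# File names are placeholders; flag names per recent llama.cpp builds.
./llama-quantize \
  --output-tensor-type f16 \
  --token-embedding-type f16 \
  Yi-1.5-9B-32K.f16.gguf Yi-1.5-9B-32K.q6_k.gguf Q6_K
```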

https://huggingface.co/ZeroWw/Yi-1.5-9B-32K-GGUF
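To sanity-check the CPU speed claim on your own hardware, a minimal CPU-only run might look like this (the CLI binary is `llama-cli` in recent llama.cpp builds, `main` in older ones; the file name is a placeholder, and a plain completion prompt is used since this is a base model):

```sh
# Hypothetical quick test of CPU-only generation speed; -t sets the
# thread count, -n the number of tokens to generate.
./llama-cli -m Yi-1.5-9B-32K.q6_k.gguf -p "Once upon a time" -n 128 -t 8
```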

01-ai org

Thanks again, ZeroWw! Just a reminder: Yi-1.5-9B-32K is a base model :-)

If you have better ones, just tell me.