70b - Reflection

#273
by SabinStargem - opened

This looks to be a Llama 3.1 finetune that is claimed to be able to self-correct errors while generating. While I am skeptical, it would be good to give it a spin and see whether that holds up in practice.

mattshumer/Reflection-70B

Can't be worse than the claims OpenAI always makes. Queued :)

mradermacher changed discussion status to closed

Unfortunately, it's missing the tokenizer.model file

The initial upload had a broken tokenizer and some bad hidden-state sizes. They re-uploaded everything ~4 hours ago; see #6.
Might want to give it another try.
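
For anyone who wants to confirm the fix before pulling the full weights again, here is a minimal sketch (assuming the `huggingface_hub` package is installed) that checks whether the repo now ships a `tokenizer.model` file:

```python
# Sketch: verify the re-uploaded repo contains tokenizer.model before redownloading.
from huggingface_hub import list_repo_files

repo_id = "mattshumer/Reflection-70B"  # assumed repo id from this thread
files = list_repo_files(repo_id)

if "tokenizer.model" in files:
    print(f"{repo_id} contains tokenizer.model")
else:
    print(f"{repo_id} is still missing tokenizer.model")
    print("Files present:", files)
```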

Redownloading...

It seems yet another corrected version was uploaded, this time in a separate repo: ref_70_e3.

Wow, and the drama is high. Anyway, it's queued, thanks for the notification!

There is a lot of controversy; I wouldn't bother with a re-quant. It's the same model as https://huggingface.co/mattshumer/Reflection-Llama-3.1-70B-ep2-working
See https://x.com/TheXeophon/status/1833158588618907997

Well, too late, and I hadn't quantized the second one anyway :)
