GGUF model not working in transformers

#6
by El-chapoo - opened

I have tried several methods to run Qwen GGUF models, but there are a lot of issues. First, if we follow the instructions in the repo, the error is that config.json is not found or is not a valid file. I also tried downloading from other repos, but then it says the qwen2 GGUF architecture is not recognized, even after updating transformers and gguf.

Qwen org

GGUF files need to be run with llama.cpp; transformers does not understand this format.
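For reference, a minimal sketch of running a GGUF file with llama.cpp's CLI. The repo ID and GGUF filename below are illustrative examples, not taken from this thread; check the actual model repo for the exact filenames it ships.

```shell
# Build llama.cpp from source (prebuilt releases also exist)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# Download one GGUF file from the Hub
# (repo and filename are examples -- adjust to the repo you are using)
huggingface-cli download Qwen/Qwen2-7B-Instruct-GGUF \
    qwen2-7b-instruct-q4_k_m.gguf --local-dir .

# Run inference with the llama.cpp CLI
./build/bin/llama-cli -m qwen2-7b-instruct-q4_k_m.gguf \
    -p "Hello, who are you?" -n 128
```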

jklj077 changed discussion status to closed
