Help: KeyError: 'llama' when loading a local offline model

#3
by Helienda - opened

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained(local_model_dir)
model = AutoModelForCausalLM.from_pretrained(local_model_dir)

The model was downloaded ahead of time to a local directory, local_model_dir. With the code above, the tokenizer loads without issue, but loading the model fails with the following traceback:
File "D:\Codes\LLM\Utility\load_model.py", line 108, in func_load_llm
base_model = AutoModelForCausalLM.from_pretrained(
File "D:\Software\Anaconda\install\envs\env_llm\lib\site-packages\transformers\models\auto\auto_factory.py", line 441, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "D:\Software\Anaconda\install\envs\env_llm\lib\site-packages\transformers\models\auto\configuration_auto.py", line 917, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "D:\Software\Anaconda\install\envs\env_llm\lib\site-packages\transformers\models\auto\configuration_auto.py", line 623, in getitem
raise KeyError(key)
KeyError: 'llama'
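
For reference, here is a minimal environment check (a sketch that assumes the same local_model_dir placeholder as in the snippet above). The failing lookup is CONFIG_MAPPING[config_dict["model_type"]], so this prints the installed transformers version and the model_type the local checkpoint declares:

import json

import transformers

# Which transformers version is installed in this environment?
print(transformers.__version__)

# The key that CONFIG_MAPPING fails to find comes from the
# checkpoint's config.json, so print it directly.
local_model_dir = "path/to/local/model"  # placeholder, same directory as above
with open(f"{local_model_dir}/config.json", encoding="utf-8") as f:
    print(json.load(f)["model_type"])  # prints: llama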


Don't bother with it; the Unicom release's Chinese ability is far too weak.

You're better off with this individually released build: wangshenzhi/llama3-8b-chinese-chat-ollama-q8:latest
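
That tag is an Ollama model, so (assuming Ollama is installed) it can be pulled and run directly:

ollama run wangshenzhi/llama3-8b-chinese-chat-ollama-q8:latest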

OK. I had already fixed the error by upgrading transformers, but thanks for the heads-up anyway; I'll take the advice and switch models.
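
For anyone who lands here later: KeyError: 'llama' means the installed transformers predates the registration of the "llama" model type (support was added in transformers v4.28.0), so the CONFIG_MAPPING lookup shown in the traceback above has no such key. Upgrading the package in the environment is the fix:

pip install --upgrade transformers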

Helienda changed discussion status to closed
