
Add chat template to tokenizer config

#2 opened by lewtun (HF staff)

Great work with Tulu 2 folks!

FYI it would be great to include the chat template you used to train the model directly in the tokenizer_config.json so that it's easier to run inference. See the docs here: https://huggingface.co/docs/transformers/v4.35.2/en/chat_templating#advanced-how-do-chat-templates-work
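Concretely, the suggestion is to add a `chat_template` field to `tokenizer_config.json`. A simplified, illustrative Jinja template is sketched below (not the exact template used to train Tulu 2 — that should match the training format):

```json
{
  "chat_template": "{% for message in messages %}<|{{ message['role'] }}|>\n{{ message['content'] }}</s>\n{% endfor %}{% if add_generation_prompt %}<|assistant|>\n{% endif %}"
}
```

With this field in place, `tokenizer.apply_chat_template(messages, tokenize=False)` renders a list of `{role, content}` messages into a prompt string without users having to guess the format.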

You can also see how we added the template for Zephyr 7B here: https://huggingface.co/HuggingFaceH4/zephyr-7b-beta/blob/c2339b671588a96001b9766328dd2ef4b82ef5c4/tokenizer_config.json#L34
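For anyone unfamiliar with chat templates: conceptually, the template stored in `tokenizer_config.json` is a Jinja string that turns a list of `{role, content}` messages into one prompt string. A minimal pure-Python sketch of a Zephyr-style rendering (illustrative only — the real template is the Jinja string in the linked file, and `render_chat` is a hypothetical helper, not a `transformers` API):

```python
def render_chat(messages, eos="</s>", add_generation_prompt=True):
    """Format a list of {role, content} dicts into a single prompt string,
    mimicking the Zephyr-style <|role|> layout (illustrative sketch; not
    the exact template used for Tulu 2 or Zephyr)."""
    prompt = ""
    for m in messages:
        # Each turn is tagged with its role and terminated with the EOS token.
        prompt += f"<|{m['role']}|>\n{m['content']}{eos}\n"
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        prompt += "<|assistant|>\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
print(render_chat(messages))
```

This is exactly the transformation `tokenizer.apply_chat_template` performs when the template is present, which is why shipping it in the config makes inference much less error-prone.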

Thanks for the tip! Added :)

hamishivi changed discussion status to closed
