prompt template is wrong

#2
by mirek190 - opened

Looking here:
https://huggingface.co/microsoft/Phi-3.5-mini-instruct
it should be:

<|system|>
You are a helpful assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>

Yes, but it can't actually end up that way.

In tokenizer_config.json they explicitly set rstrip on tokens like <|system|> and <|user|>, meaning the newlines after them disappear once the prompt is tokenized.

I've raised it to Microsoft multiple times with no response, but I have to imagine it's intentional. Not sure why they post it with newlines or even include them in the chat template to begin with.
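
If anyone wants to see the effect themselves, here is a quick sketch (assuming the transformers library and the official microsoft/Phi-3.5-mini-instruct tokenizer): render the chat template to a string, then tokenize it and decode it back, and the newlines after the role tokens are gone.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]

# The chat template itself emits newlines after <|system|>, <|user|>, <|assistant|>
text = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(repr(text))

# But because those tokens are declared with rstrip in tokenizer_config.json,
# the newlines are swallowed when the string is actually tokenized
ids = tok(text, add_special_tokens=False).input_ids
print(repr(tok.decode(ids)))
```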

Interesting... I'll have to test it more :)

Thanks

I do wonder if not rstrip-ing would result in better output; depends on how they trained it, I suppose.

After testing with some math problems, your prompt is giving more accurate responses.

Prompt:

If my BMI is 20.5 and my height is 172cm, how much would I weigh if I gained 5% of my current weight?   

Microsoft prompt answer: 63 kg
Your prompt answer: 63.12 kg

The most accurate answer is 63.68 kg.

Quite a good answer for a tiny model.
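
For reference, 63.68 kg is just the BMI formula worked through: weight = BMI × height², then add the 5% gain.

```python
# weight = BMI * height^2, then add the 5% gain
height_m = 1.72
bmi = 20.5

current_weight = bmi * height_m ** 2      # ~60.65 kg
new_weight = current_weight * 1.05        # ~63.68 kg

print(round(current_weight, 2), round(new_weight, 2))  # 60.65 63.68
```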

Prompt:

In what percentage is water compressed at the bottom of the ocean in the Mariana Trench?

Answer: 4.99% - a very good answer; around 5% would be perfect.
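
For anyone curious where the ~5% figure comes from, here's a rough back-of-the-envelope check (assuming roughly 1.1×10⁸ Pa of pressure at the trench bottom and an isothermal compressibility of about 4.6×10⁻¹⁰ Pa⁻¹ for water; in reality compressibility drops somewhat at depth):

```python
# Fractional volume change ~= compressibility * pressure (linear approximation)
pressure_pa = 1.1e8          # ~1100 bar at the bottom of the Mariana Trench (approx.)
compressibility = 4.6e-10    # isothermal compressibility of water, 1/Pa (approx.)

compression = compressibility * pressure_pa
print(f"{compression:.1%}")  # ~5.1%
```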

I may try to make another quant without the rstrip, as a test to see what happens.

You can try... I will test it :)

The correct prompt template should be:

{{ if .System }}<|system|>
{{ .System }}<|end|>
{{ end }}{{ if .Prompt }}<|user|>
{{ .Prompt }}<|end|>
{{ end }}<|assistant|>
{{ .Response }}<|end|>

from https://ollama.com/library/phi3.5/blobs/c608dc615584
