It stops generating in the middle of sentences

#5
by Culturedniichan - opened

Using the Q6 one. I have no idea what I'm doing wrong, or whether it's because I literally copied the config file for the GGUF from the marsupial exl2 version of Rocinante. The quality is great, but it suddenly stops (using the ChatML preset).

It happens in any kind of chat, with any preset settings. I'm not running out of context either. It just stops at a word, or even mid-token, and won't generate anything from there. Even if I click to continue the generation, nothing happens. It's as if it were emitting an EOS token or similar. Other than that, the model feels VERY good.
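To illustrate the suspected symptom (the sampler picking EOS too early), here is a rough sketch of EOS banning in a greedy sampling step. This is not KoboldCpp's or llama.cpp's actual code; the token ids and logit values are made up for illustration:

```python
import math

def sample_greedy(logits, ban_eos=False, eos_token_id=2):
    """Pick the highest-logit token; optionally mask out EOS so
    generation can never stop on it."""
    if ban_eos:
        logits = dict(logits)           # don't mutate the caller's dict
        logits[eos_token_id] = -math.inf  # EOS can never win
    return max(logits, key=logits.get)

# Hypothetical logits where EOS narrowly wins, mimicking the
# premature-stop symptom described above (token id -> logit).
logits = {2: 1.2, 1501: 1.1, 733: 0.4}

print(sample_greedy(logits))                # EOS wins: generation stops
print(sample_greedy(logits, ban_eos=True))  # next-best token: generation continues
```

Front ends usually expose this as a "Ban EOS token" toggle, which is one way to confirm the diagnosis without touching the model files.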

Update: it looks like it was either the config files I provided or the HF loader. Using llama.cpp directly seems to solve it, although I lose the ability to alter sampler settings (which I used to do with other models to bypass censorship, e.g. via negative CFG).

It also seems to follow instructions better in SillyTavern, which is strange unless it's because of the samplers I was using, since it should use the ChatML or Mistral format without taking it from the GGUF (?).

I just ran into the same problem using KoboldCpp 1.74 and the Q6 version (Q5 as well). It wouldn't let me get past a certain (very common) start of a sentence, no matter what I tried. For me it's the German "Ich", which equals the English "I". Whatever token should come next apparently gets interpreted as EOS. I switched back to Q8 and the problem was gone. I never experienced this with the Q8 version, but it happened frequently once I switched to Q6.
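One way to check whether a stall like this really is an early EOS is llama.cpp's `--ignore-eos` flag, which keeps sampling past the end-of-stream token. The model filename below is a placeholder, not the actual file from this repo:

```shell
# If generation continues past "Ich" with this flag, the stall was an
# early EOS token rather than a loader or template problem.
./llama-cli -m Rocinante-Q6_K.gguf -p "Ich" -n 64 --ignore-eos
```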
