How to do batch inference?

#1
by ppsking - opened

I'm wondering how to use this fantastic model for batch inference. When I use the tokenizer to pad the batch inputs, it fails with "Asking to pad but the tokenizer does not have a padding token."
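A minimal sketch of the usual workaround: many causal LMs ship without a pad token, so you can reuse the EOS token for padding before batching. The model name below (`gpt2`) is a placeholder for illustration, not the model from this thread; substitute the actual checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; substitute the model from this thread
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The tokenizer has no pad token, which triggers the error above.
# Reusing the EOS token as the pad token is the common fix.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Left-padding keeps each prompt's last token adjacent to the generated
# tokens, which matters for decoder-only models.
tokenizer.padding_side = "left"

prompts = ["Hello, my name is", "The weather today is"]
inputs = tokenizer(prompts, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=20,
        pad_token_id=tokenizer.pad_token_id,
    )
texts = tokenizer.batch_decode(outputs, skip_special_tokens=True)
```

Passing `pad_token_id` to `generate` also silences the "Setting `pad_token_id` to `eos_token_id`" warning during batched generation.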
