What is the precision of the model? Why are the weights not FP16 when I set mixed precision to FP16?
#157 opened by chengligen
When fine-tuning whisper-large-v3, my 4090 keeps running out of memory even with batch_size set to 1. I then checked the model's precision and it was float32, even though I had already used accelerator = Accelerator(mixed_precision="fp16").
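For what it's worth, seeing float32 weights is expected here: with mixed precision, the master copy of the weights stays in FP32 and only the forward/backward computation is autocast to the lower precision, which is roughly what `Accelerator(mixed_precision="fp16")` sets up under the hood. A minimal sketch of that behavior with plain PyTorch (using a tiny `nn.Linear` as a stand-in for the Whisper model, and CPU bfloat16 autocast for illustration since FP16 autocast needs CUDA):

```python
import torch
import torch.nn as nn

# Tiny stand-in for the real model (hypothetical, for illustration only).
model = nn.Linear(8, 8)

# Master weights are stored in full precision.
print(model.weight.dtype)  # torch.float32

# Autocast lowers the precision of the *computation*, not the stored weights.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(torch.randn(2, 8))

print(out.dtype)           # low-precision activations (torch.bfloat16 here)
print(model.weight.dtype)  # still torch.float32
```

So inspecting the parameter dtype will always report float32 under this scheme; the OOM is more likely driven by the FP32 master weights plus optimizer states, which mixed precision does not shrink.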