The notebooks and scripts in these examples show how to fine-tune a model with a sentiment classifier (such as `lvwerra/distilbert-imdb`).

Here's an overview of the notebooks and scripts in the `trl` repository:
| File | Description | Colab link |
|---|---|---|
| `gpt2-sentiment_peft.py` | Same as the sentiment analysis example, but learning a low-rank adapter on an 8-bit base model (see the loading sketch below) | |
| `cm_finetune_peft_imdb.py` | Fine-tuning a low-rank adapter on a frozen 8-bit model for text generation on the IMDB dataset | |
| `merge_peft_adapter.py` | Merging the adapter layers into the base model's weights and storing these on the Hub (see the merge sketch below) | |
| `gpt-neo-20b_sentiment_peft.py` | Sentiment fine-tuning of a low-rank adapter to create positive reviews | |
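The 8-bit scripts share one core pattern: load the base model quantized to 8-bit and attach a trainable low-rank adapter on top of the frozen base weights. The sketch below illustrates this pattern, assuming `peft`'s `LoraConfig` and `trl`'s `AutoModelForCausalLMWithValueHead`; the model name and LoRA hyperparameters are illustrative, not necessarily the exact values used in the scripts.

```python
from peft import LoraConfig
from trl import AutoModelForCausalLMWithValueHead

# Illustrative LoRA hyperparameters; the scripts may use different values.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Load the base model in 8-bit and attach the low-rank adapter.
# Only the adapter weights are trained; the 8-bit base stays frozen.
model = AutoModelForCausalLMWithValueHead.from_pretrained(
    "lvwerra/gpt2-imdb",   # example base model, as in the sentiment scripts
    load_in_8bit=True,
    device_map="auto",     # required by transformers for 8-bit loading
    peft_config=lora_config,
)
```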
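The merging step performed by `merge_peft_adapter.py` can be sketched as follows, using `peft`'s `merge_and_unload()` helper to fold the adapter weights back into the base model; the model names here are placeholders:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the (non-quantized) base model and the trained adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained("base-model-name")       # placeholder
model = PeftModel.from_pretrained(base_model, "adapter-model-name")        # placeholder

# Fold the low-rank adapter weights into the base weights and drop the adapter.
model = model.merge_and_unload()

# Save locally, or push to the Hub to store the merged weights there.
model.save_pretrained("merged-model")
# model.push_to_hub("your-username/merged-model")
```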
To run these examples, first install the dependencies:

```bash
pip install trl[peft]
pip install bitsandbytes loralib
pip install git+https://github.com/huggingface/transformers.git@main

# optional: wandb
pip install wandb
```
Note: if you don't want to log with `wandb`, remove `log_with="wandb"` in the scripts/notebooks. You can also replace it with your favourite experiment tracker that's supported by `accelerate`.
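For example, in the PPO-based scripts the tracker is selected through `PPOConfig`. A minimal sketch, assuming the scripts configure PPO as in the main sentiment example (the model name is illustrative):

```python
from trl import PPOConfig

# Log to wandb, as the scripts do by default:
config = PPOConfig(model_name="lvwerra/gpt2-imdb", log_with="wandb")

# No logging: omit log_with entirely. To use another tracker supported
# by accelerate, pass its name instead, e.g. log_with="tensorboard".
config = PPOConfig(model_name="lvwerra/gpt2-imdb")
```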
The `trl` library is powered by `accelerate`. As such, it is best to configure and launch trainings with the following commands:
```bash
accelerate config # will prompt you to define the training configuration
accelerate launch scripts/gpt2-sentiment_peft.py # launches training
```