Tuners

Each tuner (or PEFT method) has a configuration class and a model class.

LoRA

For finetuning a model with LoRA.

class peft.LoraConfig

( peft_type: typing.Union[str, peft.utils.config.PeftType] = None base_model_name_or_path: str = None task_type: typing.Union[str, peft.utils.config.TaskType] = None inference_mode: bool = False r: int = 8 target_modules: typing.Union[typing.List[str], str, NoneType] = None lora_alpha: int = None lora_dropout: float = None fan_in_fan_out: bool = False bias: str = 'none' modules_to_save: typing.Optional[typing.List[str]] = None init_lora_weights: bool = True )

Parameters

  • r (int) — LoRA attention dimension (the rank of the update matrices).
  • target_modules (Union[List[str], str]) — The names of the modules to apply LoRA to.
  • lora_alpha (int) — The alpha parameter for LoRA scaling.
  • lora_dropout (float) — The dropout probability for the LoRA layers.
  • fan_in_fan_out (bool) — Set this to True if the layer to replace stores weight like (fan_in, fan_out). For example, gpt-2 uses Conv1D, which stores weights like (fan_in, fan_out), and hence this should be set to True.
  • bias (str) — Bias type for LoRA. Can be 'none', 'all' or 'lora_only'.
  • modules_to_save (List[str]) — List of modules apart from the LoRA layers to be set as trainable and saved in the final checkpoint.

This is the configuration class to store the configuration of a LoraModel.
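
In typical use the config is passed to get_peft_model() together with a base model rather than instantiating LoraModel directly. A minimal sketch (assuming the t5-base checkpoint is available):

>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import LoraConfig, get_peft_model

>>> config = LoraConfig(
...     task_type="SEQ_2_SEQ_LM",
...     r=8,
...     lora_alpha=32,
...     target_modules=["q", "v"],
...     lora_dropout=0.01,
... )
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> lora_model = get_peft_model(model, config)
>>> lora_model.print_trainable_parameters()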

class peft.LoraModel

( model config adapter_name ) torch.nn.Module

Parameters

  • model (PreTrainedModel) — The model to be adapted.
  • config (LoraConfig) — The configuration of the LoRA model.
  • adapter_name (str) — The name of the adapter to be injected.

Returns

torch.nn.Module

The Lora model.

Creates a Low Rank Adapter (LoRA) model from a pretrained transformers model.

Example:

>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import LoraModel, LoraConfig

>>> config = LoraConfig(
...     peft_type="LORA",
...     task_type="SEQ_2_SEQ_LM",
...     r=8,
...     lora_alpha=32,
...     target_modules=["q", "v"],
...     lora_dropout=0.01,
... )

>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> lora_model = LoraModel(model, config, "default")

Attributes:

  • model (PreTrainedModel) — The model to be adapted.
  • peft_config (LoraConfig) — The configuration of the LoRA model.

merge_and_unload

( )

This method merges the LoRA layers into the base model. This is needed if you want to use the base model as a standalone model.
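
For example, after training you can merge the adapter weights into the frozen base weights and then save or use the result like any regular transformers model. A short sketch, continuing from the LoraModel example above (the output directory name is hypothetical):

>>> merged_model = lora_model.merge_and_unload()
>>> merged_model.save_pretrained("t5-base-lora-merged")  # hypothetical output path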

class peft.tuners.lora.LoraLayer

( in_features: int out_features: int )

class peft.tuners.lora.Linear

( adapter_name: str in_features: int out_features: int r: int = 0 lora_alpha: int = 1 lora_dropout: float = 0.0 fan_in_fan_out: bool = False **kwargs )
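
This is the low-level layer that LoraModel injects in place of matching torch.nn.Linear modules: the original dense weight stays frozen while the per-adapter low-rank matrices are trainable. A minimal sketch based on the signature above (internal behavior may differ between peft versions):

>>> import torch
>>> from peft.tuners.lora import Linear

>>> lora_linear = Linear("default", in_features=768, out_features=768, r=8, lora_alpha=16, lora_dropout=0.05)
>>> out = lora_linear(torch.randn(2, 768))  # shape (2, 768): frozen dense output plus the scaled low-rank update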

P-tuning

class peft.PromptEncoderConfig

( peft_type: typing.Union[str, peft.utils.config.PeftType] = None base_model_name_or_path: str = None task_type: typing.Union[str, peft.utils.config.TaskType] = None inference_mode: bool = False num_virtual_tokens: int = None token_dim: int = None num_transformer_submodules: typing.Optional[int] = None num_attention_heads: typing.Optional[int] = None num_layers: typing.Optional[int] = None encoder_reparameterization_type: typing.Union[str, peft.tuners.p_tuning.PromptEncoderReparameterizationType] = <PromptEncoderReparameterizationType.MLP: 'MLP'> encoder_hidden_size: int = None encoder_num_layers: int = 2 encoder_dropout: float = 0.0 )

Parameters

  • encoder_reparameterization_type (Union[PromptEncoderReparameterizationType, str]) — The type of reparameterization to use.
  • encoder_hidden_size (int) — The hidden size of the prompt encoder.
  • encoder_num_layers (int) — The number of layers of the prompt encoder.
  • encoder_dropout (float) — The dropout probability of the prompt encoder.

This is the configuration class to store the configuration of a PromptEncoder.
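
The PromptEncoder example below uses the default MLP reparameterization; the prompt can instead be reparameterized with an LSTM encoder, in which case encoder_num_layers and encoder_dropout apply. A sketch of such a config (values are illustrative):

>>> from peft import PromptEncoderConfig

>>> config = PromptEncoderConfig(
...     task_type="SEQ_2_SEQ_LM",
...     num_virtual_tokens=20,
...     token_dim=768,
...     num_transformer_submodules=1,
...     num_attention_heads=12,
...     num_layers=12,
...     encoder_reparameterization_type="LSTM",
...     encoder_hidden_size=768,
...     encoder_num_layers=2,
...     encoder_dropout=0.1,
... )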

class peft.PromptEncoder

( config )

Parameters

  • config (PromptEncoderConfig) — The configuration of the prompt encoder.

The prompt encoder network that is used to generate the virtual token embeddings for p-tuning.

Example:

>>> from peft import PromptEncoder, PromptEncoderConfig

>>> config = PromptEncoderConfig(
...     peft_type="P_TUNING",
...     task_type="SEQ_2_SEQ_LM",
...     num_virtual_tokens=20,
...     token_dim=768,
...     num_transformer_submodules=1,
...     num_attention_heads=12,
...     num_layers=12,
...     encoder_reparameterization_type="MLP",
...     encoder_hidden_size=768,
... )

>>> prompt_encoder = PromptEncoder(config)

Attributes:

Input shape: (batch_size, total_virtual_tokens)

Output shape: (batch_size, total_virtual_tokens, token_dim)
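
Continuing the example above, the encoder maps a batch of virtual-token indices to reparameterized prompt embeddings; with the config values used there, total_virtual_tokens is 20 * 1 = 20 (a sketch):

>>> import torch

>>> indices = torch.arange(20).unsqueeze(0)  # (batch_size=1, total_virtual_tokens=20)
>>> prompt_embeddings = prompt_encoder(indices)  # shape (1, 20, 768) == (batch_size, total_virtual_tokens, token_dim)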

Prefix tuning

class peft.PrefixTuningConfig

( peft_type: typing.Union[str, peft.utils.config.PeftType] = None base_model_name_or_path: str = None task_type: typing.Union[str, peft.utils.config.TaskType] = None inference_mode: bool = False num_virtual_tokens: int = None token_dim: int = None num_transformer_submodules: typing.Optional[int] = None num_attention_heads: typing.Optional[int] = None num_layers: typing.Optional[int] = None encoder_hidden_size: int = None prefix_projection: bool = False )

Parameters

  • encoder_hidden_size (int) — The hidden size of the prompt encoder.
  • prefix_projection (bool) — Whether to project the prefix embeddings.

This is the configuration class to store the configuration of a PrefixEncoder.
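
When prefix_projection is True, the prefix embeddings are first projected through an MLP of hidden size encoder_hidden_size before being reshaped into per-layer key/value states. A sketch of such a config (values are illustrative):

>>> from peft import PrefixTuningConfig

>>> config = PrefixTuningConfig(
...     task_type="SEQ_2_SEQ_LM",
...     num_virtual_tokens=20,
...     token_dim=768,
...     num_transformer_submodules=1,
...     num_attention_heads=12,
...     num_layers=12,
...     encoder_hidden_size=768,
...     prefix_projection=True,
... )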

class peft.PrefixEncoder

( config )

Parameters

  • config (PrefixTuningConfig) — The configuration of the prefix encoder.

The torch.nn.Module used to encode the prefix.

Example:

>>> from peft import PrefixEncoder, PrefixTuningConfig

>>> config = PrefixTuningConfig(
...     peft_type="PREFIX_TUNING",
...     task_type="SEQ_2_SEQ_LM",
...     num_virtual_tokens=20,
...     token_dim=768,
...     num_transformer_submodules=1,
...     num_attention_heads=12,
...     num_layers=12,
...     encoder_hidden_size=768,
... )
>>> prefix_encoder = PrefixEncoder(config)

Attributes:

Input shape: (batch_size, num_virtual_tokens)

Output shape: (batch_size, num_virtual_tokens, 2 * num_layers * token_dim)
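
Continuing the example above (with prefix_projection left at its default of False), the encoder maps virtual-token indices directly to flattened past key/value states; the last dimension is 2 * num_layers * token_dim = 2 * 12 * 768 = 18432 (a sketch):

>>> import torch

>>> prefix_indices = torch.arange(20).unsqueeze(0)  # (batch_size=1, num_virtual_tokens=20)
>>> past_key_values = prefix_encoder(prefix_indices)  # shape (1, 20, 18432) == (batch_size, num_virtual_tokens, 2 * num_layers * token_dim)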

Prompt tuning

class peft.PromptTuningConfig

( peft_type: typing.Union[str, peft.utils.config.PeftType] = None base_model_name_or_path: str = None task_type: typing.Union[str, peft.utils.config.TaskType] = None inference_mode: bool = False num_virtual_tokens: int = None token_dim: int = None num_transformer_submodules: typing.Optional[int] = None num_attention_heads: typing.Optional[int] = None num_layers: typing.Optional[int] = None prompt_tuning_init: typing.Union[peft.tuners.prompt_tuning.PromptTuningInit, str] = <PromptTuningInit.RANDOM: 'RANDOM'> prompt_tuning_init_text: typing.Optional[str] = None tokenizer_name_or_path: typing.Optional[str] = None )

Parameters

  • prompt_tuning_init (Union[PromptTuningInit, str]) — The initialization of the prompt embedding.
  • prompt_tuning_init_text (str, optional) — The text to initialize the prompt embedding. Only used if prompt_tuning_init is TEXT.
  • tokenizer_name_or_path (str, optional) — The name or path of the tokenizer. Only used if prompt_tuning_init is TEXT.

This is the configuration class to store the configuration of a PromptEmbedding.
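
With the default RANDOM initialization, neither prompt_tuning_init_text nor tokenizer_name_or_path is required (contrast this with the TEXT-initialized example below); a minimal sketch with illustrative values:

>>> from peft import PromptTuningConfig

>>> config = PromptTuningConfig(
...     task_type="SEQ_2_SEQ_LM",
...     num_virtual_tokens=20,
...     token_dim=768,
...     num_transformer_submodules=1,
...     num_attention_heads=12,
...     num_layers=12,
...     prompt_tuning_init="RANDOM",
... )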

class peft.PromptEmbedding

( config word_embeddings )

Parameters

  • config (PromptTuningConfig) — The configuration of the prompt embedding.
  • word_embeddings (torch.nn.Module) — The word embeddings of the base transformer model.

The model to encode virtual tokens into prompt embeddings.

Attributes:

  • embedding (torch.nn.Embedding) — The embedding layer of the prompt embedding.

Example:

>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import PromptEmbedding, PromptTuningConfig

>>> config = PromptTuningConfig(
...     peft_type="PROMPT_TUNING",
...     task_type="SEQ_2_SEQ_LM",
...     num_virtual_tokens=20,
...     token_dim=768,
...     num_transformer_submodules=1,
...     num_attention_heads=12,
...     num_layers=12,
...     prompt_tuning_init="TEXT",
...     prompt_tuning_init_text="Predict if sentiment of this review is positive, negative or neutral",
...     tokenizer_name_or_path="t5-base",
... )

>>> # t5_model.shared is the word embeddings of the base model
>>> t5_model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> prompt_embedding = PromptEmbedding(config, t5_model.shared)

Input Shape: (batch_size, total_virtual_tokens)

Output Shape: (batch_size, total_virtual_tokens, token_dim)