Each tuner (or PEFT method) has a configuration class and a model class.

For fine-tuning a model with LoRA.

class peft.LoraConfig
( peft_type: typing.Union[str, peft.utils.config.PeftType] = None base_model_name_or_path: str = None task_type: typing.Union[str, peft.utils.config.TaskType] = None inference_mode: bool = False r: int = 8 target_modules: typing.Union[typing.List[str], str, NoneType] = None lora_alpha: int = None lora_dropout: float = None fan_in_fan_out: bool = False bias: str = 'none' modules_to_save: typing.Optional[typing.List[str]] = None init_lora_weights: bool = True )
Parameters

r (int) — Lora attention dimension.
target_modules (Union[List[str], str]) — The names of the modules to apply Lora to.
lora_alpha (float) — The alpha parameter for Lora scaling.
lora_dropout (float) — The dropout probability for Lora layers.
fan_in_fan_out (bool) — Set this to True if the layer to replace stores weight like (fan_in, fan_out). For example, gpt-2 uses Conv1D, which stores weights like (fan_in, fan_out), and hence this should be set to True.
bias (str) — Bias type for Lora. Can be 'none', 'all' or 'lora_only'.
modules_to_save (List[str]) — List of modules apart from LoRA layers to be set as trainable and saved in the final checkpoint.
This is the configuration class to store the configuration of a LoraModel.
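As a minimal usage sketch, this configuration is typically passed to the high-level get_peft_model entry point rather than used to build a LoraModel by hand (the hyperparameter values below are illustrative only):

>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import LoraConfig, TaskType, get_peft_model

>>> config = LoraConfig(
...     task_type=TaskType.SEQ_2_SEQ_LM,
...     r=8,
...     lora_alpha=32,
...     target_modules=["q", "v"],
...     lora_dropout=0.01,
... )
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> model = get_peft_model(model, config)  # wraps the base model with LoRA adapters
>>> model.print_trainable_parameters()     # only the LoRA parameters are trainable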
class peft.LoraModel

( model config adapter_name ) → torch.nn.Module

Parameters

model (torch.nn.Module) — The model to be adapted.
config (LoraConfig) — The configuration of the Lora model.
adapter_name (str) — The name of the adapter to inject.

Returns

torch.nn.Module

The Lora model.
Creates Low Rank Adapter (Lora) model from a pretrained transformers model.
Example:
>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import LoraModel, LoraConfig
>>> config = LoraConfig(
... peft_type="LORA",
... task_type="SEQ_2_SEQ_LM",
... r=8,
... lora_alpha=32,
... target_modules=["q", "v"],
... lora_dropout=0.01,
... )
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> lora_model = LoraModel(model, config, "default")
Attributes:

model (PreTrainedModel) — The model to be adapted.
peft_config (LoraConfig) — The configuration of the Lora model.

merge_and_unload

( )

This method merges the LoRA layers into the base model. This is needed if someone wants to use the base model as a standalone model.
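A minimal usage sketch, assuming the lora_model built in the example above; the returned base model has the LoRA updates folded into its original weights and no longer depends on peft (the output directory name is hypothetical):

>>> merged_model = lora_model.merge_and_unload()
>>> merged_model.save_pretrained("t5-base-lora-merged")  # hypothetical output directory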
( adapter_name: str in_features: int out_features: int r: int = 0 lora_alpha: int = 1 lora_dropout: float = 0.0 fan_in_fan_out: bool = False **kwargs )
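The signature above describes the LoRA-adapted linear layer: it keeps the frozen base weight and adds a trainable low-rank update scaled by lora_alpha / r. Below is a conceptual sketch of that computation in plain PyTorch, not the library class itself; all tensor names are illustrative:

>>> import torch

>>> in_features, out_features, r, lora_alpha = 768, 768, 8, 32
>>> W = torch.randn(out_features, in_features)        # frozen pretrained weight
>>> lora_A = torch.randn(r, in_features) * 0.01       # trainable low-rank factor A
>>> lora_B = torch.zeros(out_features, r)             # trainable low-rank factor B, starts at zero
>>> scaling = lora_alpha / r
>>> x = torch.randn(2, in_features)
>>> y = x @ W.T + (x @ lora_A.T @ lora_B.T) * scaling  # base output plus scaled low-rank update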
class peft.PromptEncoderConfig

( peft_type: typing.Union[str, peft.utils.config.PeftType] = None base_model_name_or_path: str = None task_type: typing.Union[str, peft.utils.config.TaskType] = None inference_mode: bool = False num_virtual_tokens: int = None token_dim: int = None num_transformer_submodules: typing.Optional[int] = None num_attention_heads: typing.Optional[int] = None num_layers: typing.Optional[int] = None encoder_reparameterization_type: typing.Union[str, peft.tuners.p_tuning.PromptEncoderReparameterizationType] = <PromptEncoderReparameterizationType.MLP: 'MLP'> encoder_hidden_size: int = None encoder_num_layers: int = 2 encoder_dropout: float = 0.0 )
Parameters
encoder_reparameterization_type (Union[PromptEncoderReparameterizationType, str]) — The type of reparameterization to use.
encoder_hidden_size (int) — The hidden size of the prompt encoder.
encoder_num_layers (int) — The number of layers of the prompt encoder.
encoder_dropout (float) — The dropout probability of the prompt encoder.
This is the configuration class to store the configuration of a PromptEncoder.
class peft.PromptEncoder

( config )

Parameters

config (PromptEncoderConfig) — The configuration of the prompt encoder.
The prompt encoder network that is used to generate the virtual token embeddings for p-tuning.
Example:
>>> from peft import PromptEncoder, PromptEncoderConfig
>>> config = PromptEncoderConfig(
... peft_type="P_TUNING",
... task_type="SEQ_2_SEQ_LM",
... num_virtual_tokens=20,
... token_dim=768,
... num_transformer_submodules=1,
... num_attention_heads=12,
... num_layers=12,
... encoder_reparameterization_type="MLP",
... encoder_hidden_size=768,
... )
>>> prompt_encoder = PromptEncoder(config)
Attributes:
embedding (torch.nn.Embedding) — The embedding layer of the prompt encoder.
mlp_head (torch.nn.Sequential) — The MLP head of the prompt encoder if inference_mode=False.
lstm_head (torch.nn.LSTM) — The LSTM head of the prompt encoder if inference_mode=False and encoder_reparameterization_type="LSTM".
token_dim (int) — The hidden embedding dimension of the base transformer model.
input_size (int) — The input size of the prompt encoder.
output_size (int) — The output size of the prompt encoder.
hidden_size (int) — The hidden size of the prompt encoder.
total_virtual_tokens (int) — The total number of virtual tokens of the prompt encoder.
encoder_type (Union[PromptEncoderReparameterizationType, str]) — The encoder type of the prompt encoder.

Input shape: (batch_size, total_virtual_tokens)

Output shape: (batch_size, total_virtual_tokens, token_dim)
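A short sketch of these shapes, assuming the config and prompt_encoder from the example above (token_dim=768, num_virtual_tokens=20, num_transformer_submodules=1):

>>> import torch

>>> total_virtual_tokens = config.num_virtual_tokens * config.num_transformer_submodules
>>> indices = torch.arange(total_virtual_tokens).unsqueeze(0)  # (batch_size=1, total_virtual_tokens)
>>> prompt_encoder(indices).shape                              # (batch_size, total_virtual_tokens, token_dim)
torch.Size([1, 20, 768])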
class peft.PrefixTuningConfig

( peft_type: typing.Union[str, peft.utils.config.PeftType] = None base_model_name_or_path: str = None task_type: typing.Union[str, peft.utils.config.TaskType] = None inference_mode: bool = False num_virtual_tokens: int = None token_dim: int = None num_transformer_submodules: typing.Optional[int] = None num_attention_heads: typing.Optional[int] = None num_layers: typing.Optional[int] = None encoder_hidden_size: int = None prefix_projection: bool = False )

Parameters

encoder_hidden_size (int) — The hidden size of the prompt encoder.
prefix_projection (bool) — Whether to project the prefix embeddings.

This is the configuration class to store the configuration of a PrefixEncoder.
class peft.PrefixEncoder

( config )

Parameters

config (PrefixTuningConfig) — The configuration of the prefix encoder.

The torch.nn model to encode the prefix.
Example:
>>> from peft import PrefixEncoder, PrefixTuningConfig
>>> config = PrefixTuningConfig(
... peft_type="PREFIX_TUNING",
... task_type="SEQ_2_SEQ_LM",
... num_virtual_tokens=20,
... token_dim=768,
... num_transformer_submodules=1,
... num_attention_heads=12,
... num_layers=12,
... encoder_hidden_size=768,
... )
>>> prefix_encoder = PrefixEncoder(config)
Attributes:
embedding (torch.nn.Embedding) — The embedding layer of the prefix encoder.
transform (torch.nn.Sequential) — The two-layer MLP to transform the prefix embeddings if prefix_projection is True.
prefix_projection (bool) — Whether to project the prefix embeddings.

Input shape: (batch_size, num_virtual_tokens)

Output shape: (batch_size, num_virtual_tokens, 2*layers*hidden)
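A short sketch of these shapes, assuming the config and prefix_encoder from the example above with prefix_projection left at its default of False, so the last dimension is 2 * num_layers * token_dim = 2 * 12 * 768 = 18432:

>>> import torch

>>> prefix = torch.arange(config.num_virtual_tokens).unsqueeze(0)  # (batch_size=1, num_virtual_tokens)
>>> past_key_values = prefix_encoder(prefix)
>>> past_key_values.shape                                          # (batch_size, num_virtual_tokens, 2*layers*hidden)
torch.Size([1, 20, 18432])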
class peft.PromptTuningConfig

( peft_type: typing.Union[str, peft.utils.config.PeftType] = None base_model_name_or_path: str = None task_type: typing.Union[str, peft.utils.config.TaskType] = None inference_mode: bool = False num_virtual_tokens: int = None token_dim: int = None num_transformer_submodules: typing.Optional[int] = None num_attention_heads: typing.Optional[int] = None num_layers: typing.Optional[int] = None prompt_tuning_init: typing.Union[peft.tuners.prompt_tuning.PromptTuningInit, str] = <PromptTuningInit.RANDOM: 'RANDOM'> prompt_tuning_init_text: typing.Optional[str] = None tokenizer_name_or_path: typing.Optional[str] = None )
Parameters
prompt_tuning_init (Union[PromptTuningInit, str]) — The initialization of the prompt embedding.
prompt_tuning_init_text (str, optional) — The text to initialize the prompt embedding. Only used if prompt_tuning_init is TEXT.
tokenizer_name_or_path (str, optional) — The name or path of the tokenizer. Only used if prompt_tuning_init is TEXT.
This is the configuration class to store the configuration of a PromptEmbedding.
class peft.PromptEmbedding

( config word_embeddings )

Parameters

config (PromptTuningConfig) — The configuration of the prompt embedding.
word_embeddings (torch.nn.Module) — The word embeddings of the base transformer model.

The model to encode virtual tokens into prompt embeddings.
Attributes:

embedding (torch.nn.Embedding) — The embedding layer of the prompt embedding.

Example:
>>> from peft import PromptEmbedding, PromptTuningConfig
>>> config = PromptTuningConfig(
... peft_type="PROMPT_TUNING",
... task_type="SEQ_2_SEQ_LM",
... num_virtual_tokens=20,
... token_dim=768,
... num_transformer_submodules=1,
... num_attention_heads=12,
... num_layers=12,
... prompt_tuning_init="TEXT",
... prompt_tuning_init_text="Predict if sentiment of this review is positive, negative or neutral",
... tokenizer_name_or_path="t5-base",
... )
>>> # t5_model.shared is the word embeddings of the base model
>>> prompt_embedding = PromptEmbedding(config, t5_model.shared)
Input Shape: (batch_size, total_virtual_tokens)

Output Shape: (batch_size, total_virtual_tokens, token_dim)
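A short sketch of these shapes, assuming the config and prompt_embedding from the example above (token_dim=768, 20 virtual tokens, one transformer submodule):

>>> import torch

>>> total_virtual_tokens = config.num_virtual_tokens * config.num_transformer_submodules
>>> indices = torch.arange(total_virtual_tokens).unsqueeze(0)  # (batch_size=1, total_virtual_tokens)
>>> prompt_embedding(indices).shape                            # (batch_size, total_virtual_tokens, token_dim)
torch.Size([1, 20, 768])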