class setfit.SetFitTrainer

( model: Optional[SetFitModel] = None, train_dataset: Optional[Dataset] = None, eval_dataset: Optional[Dataset] = None, model_init: Optional[Callable[[], SetFitModel]] = None, metric: Union[str, Callable[[Dataset, Dataset], Dict[str, float]]] = 'accuracy', loss_class = CosineSimilarityLoss, num_iterations: int = 20, num_epochs: int = 1, learning_rate: float = 2e-05, batch_size: int = 16, seed: int = 42, column_mapping: Optional[Dict[str, str]] = None, use_amp: bool = False, warmup_proportion: float = 0.1, distance_metric: Callable = BatchHardTripletLossDistanceFunction.cosine_distance, margin: float = 0.25, samples_per_label: int = 2 )
Parameters

- **model** (`SetFitModel`, *optional*) — The model to train. If not provided, a `model_init` must be passed.
- **train_dataset** (`Dataset`) — The training dataset.
- **eval_dataset** (`Dataset`, *optional*) — The evaluation dataset.
- **model_init** (`Callable[[], SetFitModel]`, *optional*) — A function that instantiates the model to be used. If provided, each call to `train()` will start from a new instance of the model as given by this function when a `trial` is passed.
- **metric** (`str` or `Callable`, *optional*, defaults to `"accuracy"`) — The metric to use for evaluation. If a string is provided, we treat it as the metric name and load it with default settings. If a callable is provided, it must take two arguments (`y_pred`, `y_test`).
- **loss_class** (`nn.Module`, *optional*, defaults to `CosineSimilarityLoss`) — The loss function to use for contrastive training.
- **num_iterations** (`int`, *optional*, defaults to `20`) — The number of iterations to generate sentence pairs for. This argument is ignored if triplet loss is used; it is only used in conjunction with `CosineSimilarityLoss`.
- **num_epochs** (`int`, *optional*, defaults to `1`) — The number of epochs to train the Sentence Transformer body for.
- **learning_rate** (`float`, *optional*, defaults to `2e-5`) — The learning rate to use for contrastive training.
- **batch_size** (`int`, *optional*, defaults to `16`) — The batch size to use for contrastive training.
- **seed** (`int`, *optional*, defaults to `42`) — Random seed that will be set at the beginning of training. To ensure reproducibility across runs, use the `~SetFitTrainer.model_init` function to instantiate the model if it has some randomly initialized parameters.
- **column_mapping** (`Dict[str, str]`, *optional*) — A mapping from the column names in the dataset to the column names expected by the model. The expected format is a dictionary of the form `{"text_column_name": "text", "label_column_name": "label"}`.
- **use_amp** (`bool`, *optional*, defaults to `False`) — Use Automatic Mixed Precision (AMP). Only for PyTorch >= 1.6.0.
- **warmup_proportion** (`float`, *optional*, defaults to `0.1`) — Proportion of the warmup in the total training steps. Must be greater than or equal to 0.0 and less than or equal to 1.0.
- **distance_metric** (`Callable`, defaults to `BatchHardTripletLossDistanceFunction.cosine_distance`) — Function that returns a distance between two embeddings. It is set for the triplet loss and is ignored for `CosineSimilarityLoss` and `SupConLoss`.
- **margin** (`float`, defaults to `0.25`) — Margin for the triplet loss. Negative samples should be at least `margin` further apart from the anchor than the positive. This is ignored for `CosineSimilarityLoss`, `BatchHardSoftMarginTripletLoss` and `SupConLoss`.
- **samples_per_label** (`int`, defaults to `2`) — Number of consecutive, random and unique samples drawn per label. This is only relevant for triplet loss and is ignored for `CosineSimilarityLoss`. The batch size should be a multiple of `samples_per_label`.
Trainer to train a SetFit model.
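A minimal usage sketch (the checkpoint and dataset below are illustrative choices, not requirements; any Sentence Transformer checkpoint and any 🤗 Dataset with text and label columns work the same way):

```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Load a dataset with "sentence" and "label" columns (here: SST-2).
dataset = load_dataset("sst2")
train_dataset = dataset["train"].shuffle(seed=42).select(range(64))  # few-shot subset
eval_dataset = dataset["validation"]

# Any Sentence Transformer checkpoint can serve as the model body.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss_class=CosineSimilarityLoss,
    metric="accuracy",
    batch_size=16,
    num_iterations=20,  # number of sentence pairs generated per sample
    num_epochs=1,
    column_mapping={"sentence": "text", "label": "label"},  # map dataset columns to "text"/"label"
)
trainer.train()
metrics = trainer.evaluate()
```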
apply_hyperparameters( params: Dict[str, Any], final_model: bool = False )

Applies a dictionary of hyperparameters to both the trainer and the model.
evaluate

Computes the metrics for a given classifier.
freeze

Freeze SetFitModel’s differentiable head. Note: call this function only when using the differentiable head.
hyperparameter_search( hp_space: Optional[Callable[['optuna.Trial'], Dict[str, float]]] = None, compute_objective: Optional[Callable[[Dict[str, float]], float]] = None, n_trials: int = 10, direction: str = 'maximize', backend: Optional[Union[str, HPSearchBackend]] = None, hp_name: Optional[Callable[['optuna.Trial'], str]] = None, **kwargs ) → trainer_utils.BestRun
Parameters

- **hp_space** (`Callable[["optuna.Trial"], Dict[str, float]]`, *optional*) — A function that defines the hyperparameter search space. Will default to `~trainer_utils.default_hp_space_optuna`.
- **compute_objective** (`Callable[[Dict[str, float]], float]`, *optional*) — A function computing the objective to minimize or maximize from the metrics returned by the `evaluate` method. Will default to `~trainer_utils.default_compute_objective`, which uses the sum of metrics.
- **n_trials** (`int`, *optional*, defaults to `10`) — The number of trial runs to test.
- **direction** (`str`, *optional*, defaults to `"maximize"`) — Whether to optimize a greater or lower objective. Can be `"minimize"` or `"maximize"`; you should pick `"minimize"` when optimizing the validation loss and `"maximize"` when optimizing one or several metrics.
- **backend** (`str` or `~training_utils.HPSearchBackend`, *optional*) — The backend to use for hyperparameter search. Only optuna is supported for now. TODO: add support for ray and sigopt.
- **hp_name** (`Callable[["optuna.Trial"], str]`, *optional*) — A function that defines the trial/run name. Will default to `None`.
- **kwargs** (`Dict[str, Any]`, *optional*) — Additional keyword arguments passed along to `optuna.create_study`. For more information, see the documentation of `optuna.create_study`.
Returns

`trainer_utils.BestRun` — All the information about the best run.
Launch a hyperparameter search using `optuna`. The optimized quantity is determined by `compute_objective`, which defaults to a function returning the evaluation loss when no metric is provided, and the sum of all metrics otherwise.

To use this method, you need to have provided a `model_init` when initializing your SetFitTrainer: we need to reinitialize the model at each new run.
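A sketch of a typical search, assuming `optuna` is installed and `train_dataset`/`eval_dataset` are defined as in the example above; `model_init` and `hp_space` here are user-supplied functions, not library names:

```python
from setfit import SetFitModel, SetFitTrainer

def model_init(params=None):
    # Re-instantiate the model for every trial so each run starts fresh.
    return SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

def hp_space(trial):
    # Defines the search space over the trainer's hyperparameters.
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True),
        "num_epochs": trial.suggest_int("num_epochs", 1, 5),
        "batch_size": trial.suggest_categorical("batch_size", [4, 8, 16, 32]),
        "num_iterations": trial.suggest_categorical("num_iterations", [5, 10, 20]),
    }

trainer = SetFitTrainer(
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    model_init=model_init,  # required for hyperparameter search
)
best_run = trainer.hyperparameter_search(direction="maximize", hp_space=hp_space, n_trials=10)

# Re-train a final model with the best hyperparameters found.
trainer.apply_hyperparameters(best_run.hyperparameters, final_model=True)
trainer.train()
```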
train( num_epochs: Optional[int] = None, batch_size: Optional[int] = None, learning_rate: Optional[float] = None, body_learning_rate: Optional[float] = None, l2_weight: Optional[float] = None, max_length: Optional[int] = None, trial: Optional[Union['optuna.Trial', Dict[str, Any]]] = None, show_progress_bar: bool = True )
Parameters

- **num_epochs** (`int`, *optional*) — Temporarily changes the number of epochs to train the Sentence Transformer body/head for. If not set, the value given at initialization is used.
- **batch_size** (`int`, *optional*) — Temporarily changes the batch size to use for contrastive training or logistic regression. If not set, the value given at initialization is used.
- **learning_rate** (`float`, *optional*) — Temporarily changes the learning rate to use for contrastive training or for SetFitModel’s head in logistic regression. If not set, the value given at initialization is used.
- **body_learning_rate** (`float`, *optional*) — Temporarily changes the learning rate to use for SetFitModel’s body in logistic regression only. If not set, it is the same as `learning_rate`.
- **l2_weight** (`float`, *optional*) — Temporarily changes the weight of L2 regularization for SetFitModel’s differentiable head in logistic regression.
- **max_length** (`int`, *optional*, defaults to `None`) — The maximum number of tokens for one data sample. Currently only used when training the differentiable head. If `None`, the maximum number of tokens the model body can accept is used. If `max_length` is greater than that maximum, it is set to the maximum number of acceptable tokens.
- **trial** (`optuna.Trial` or `Dict[str, Any]`, *optional*) — The trial run or the hyperparameter dictionary for hyperparameter search.
- **show_progress_bar** (`bool`, *optional*, defaults to `True`) — Whether to show a bar that indicates training progress.
Main training entry point.
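For example, to override the initialization values for a single run (sketch, reusing the `trainer` constructed above):

```python
# Override the trainer's hyperparameters for this call only;
# subsequent calls fall back to the values given at initialization.
trainer.train(num_epochs=2, batch_size=32, learning_rate=1e-5, show_progress_bar=False)
```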
unfreeze( keep_body_frozen: bool = False )

Unfreeze SetFitModel’s differentiable head. Note: call this function only when using the differentiable head.
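Together, `freeze()` and `unfreeze()` enable a two-phase recipe for the differentiable head: first fine-tune the body contrastively with the head frozen, then train the head with the body frozen. A sketch, assuming a model loaded with `use_differentiable_head=True` (the `head_params` values are illustrative):

```python
from setfit import SetFitModel, SetFitTrainer

model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    use_differentiable_head=True,
    head_params={"out_features": 2},  # number of classes
)
trainer = SetFitTrainer(model=model, train_dataset=train_dataset, eval_dataset=eval_dataset)

# Phase 1: freeze the head and fine-tune only the body with contrastive learning.
trainer.freeze()
trainer.train()

# Phase 2: unfreeze the head (keeping the body frozen) and train it.
trainer.unfreeze(keep_body_frozen=True)
trainer.train(
    num_epochs=25,       # the head usually needs more epochs than the body
    batch_size=16,
    learning_rate=1e-2,  # learning rate for the head
    l2_weight=0.0,
)
```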
class setfit.DistillationSetFitTrainer

( teacher_model: SetFitModel, student_model: Optional[SetFitModel] = None, train_dataset: Optional[Dataset] = None, eval_dataset: Optional[Dataset] = None, model_init: Optional[Callable[[], SetFitModel]] = None, metric: Union[str, Callable[[Dataset, Dataset], Dict[str, float]]] = 'accuracy', loss_class: nn.Module = CosineSimilarityLoss, num_iterations: int = 20, num_epochs: int = 1, learning_rate: float = 2e-05, batch_size: int = 16, seed: int = 42, column_mapping: Optional[Dict[str, str]] = None, use_amp: bool = False, warmup_proportion: float = 0.1 )
Parameters

- **teacher_model** (`SetFitModel`) — The teacher model to mimic.
- **train_dataset** (`Dataset`) — The training dataset.
- **student_model** (`SetFitModel`) — The student model to train. If not provided, a `model_init` must be passed.
- **eval_dataset** (`Dataset`, *optional*) — The evaluation dataset.
- **model_init** (`Callable[[], SetFitModel]`, *optional*) — A function that instantiates the model to be used. If provided, each call to `train()` will start from a new instance of the model as given by this function when a `trial` is passed.
- **metric** (`str` or `Callable`, *optional*, defaults to `"accuracy"`) — The metric to use for evaluation. If a string is provided, we treat it as the metric name and load it with default settings. If a callable is provided, it must take two arguments (`y_pred`, `y_test`).
- **loss_class** (`nn.Module`, *optional*, defaults to `CosineSimilarityLoss`) — The loss function to use for contrastive training.
- **num_iterations** (`int`, *optional*, defaults to `20`) — The number of iterations to generate sentence pairs for.
- **num_epochs** (`int`, *optional*, defaults to `1`) — The number of epochs to train the Sentence Transformer body for.
- **learning_rate** (`float`, *optional*, defaults to `2e-5`) — The learning rate to use for contrastive training.
- **batch_size** (`int`, *optional*, defaults to `16`) — The batch size to use for contrastive training.
- **seed** (`int`, *optional*, defaults to `42`) — Random seed that will be set at the beginning of training. To ensure reproducibility across runs, use the `~DistillationSetFitTrainer.model_init` function to instantiate the model if it has some randomly initialized parameters.
- **column_mapping** (`Dict[str, str]`, *optional*) — A mapping from the column names in the dataset to the column names expected by the model. The expected format is a dictionary of the form `{"text_column_name": "text", "label_column_name": "label"}`.
- **use_amp** (`bool`, *optional*, defaults to `False`) — Use Automatic Mixed Precision (AMP). Only for PyTorch >= 1.6.0.
- **warmup_proportion** (`float`, *optional*, defaults to `0.1`) — Proportion of the warmup in the total training steps. Must be greater than or equal to 0.0 and less than or equal to 1.0.
Trainer to compress a SetFit model with knowledge distillation.
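A sketch of a typical distillation setup; the checkpoints are illustrative, and `unlabeled_train_dataset` is a placeholder for your own unlabeled data:

```python
from setfit import SetFitModel, DistillationSetFitTrainer

# The teacher: a SetFit model that has already been trained (e.g. with SetFitTrainer).
teacher_model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

# The student: typically a smaller, faster Sentence Transformer body.
student_model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-MiniLM-L3-v2")

distillation_trainer = DistillationSetFitTrainer(
    teacher_model=teacher_model,
    student_model=student_model,
    train_dataset=unlabeled_train_dataset,  # unlabeled; the teacher provides the training signal
    eval_dataset=eval_dataset,
    num_iterations=20,
)
distillation_trainer.train()
metrics = distillation_trainer.evaluate()
```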
train( num_epochs: Optional[int] = None, batch_size: Optional[int] = None, learning_rate: Optional[float] = None, body_learning_rate: Optional[float] = None, l2_weight: Optional[float] = None, trial: Optional[Union['optuna.Trial', Dict[str, Any]]] = None, show_progress_bar: bool = True )
Parameters

- **num_epochs** (`int`, *optional*) — Temporarily changes the number of epochs to train the Sentence Transformer body/head for. If not set, the value given at initialization is used.
- **batch_size** (`int`, *optional*) — Temporarily changes the batch size to use for contrastive training or logistic regression. If not set, the value given at initialization is used.
- **learning_rate** (`float`, *optional*) — Temporarily changes the learning rate to use for contrastive training or for SetFitModel’s head in logistic regression. If not set, the value given at initialization is used.
- **body_learning_rate** (`float`, *optional*) — Temporarily changes the learning rate to use for SetFitModel’s body in logistic regression only. If not set, it is the same as `learning_rate`.
- **l2_weight** (`float`, *optional*) — Temporarily changes the weight of L2 regularization for SetFitModel’s differentiable head in logistic regression.
- **trial** (`optuna.Trial` or `Dict[str, Any]`, *optional*) — The trial run or the hyperparameter dictionary for hyperparameter search.
- **show_progress_bar** (`bool`, *optional*, defaults to `True`) — Whether to show a bar that indicates training progress.
Main training entry point.