
SentenceTransformer based on BAAI/bge-small-en-v1.5

This is a sentence-transformers model finetuned from BAAI/bge-small-en-v1.5. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-small-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
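
Because pooling uses the CLS token and the final Normalize() module scales each embedding to unit length, cosine similarity and dot-product similarity produce essentially the same rankings (visible in the near-identical cosine_* and dot_* metrics in the Evaluation section). A minimal sketch to verify this locally, assuming the model loads as shown in the Usage section below:

from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("SamagraDataGov/embedding_finetuned")
print(model)  # shows the Transformer -> Pooling (CLS) -> Normalize stack listed above

embeddings = model.encode(["a quick sanity check", "another short sentence"])
print(embeddings.shape)                    # (2, 384)
print(np.linalg.norm(embeddings, axis=1))  # ~[1. 1.] because of the Normalize() module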

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("SamagraDataGov/embedding_finetuned")
# Run inference
sentences = [
    'What is the credit guarantee cover for a project loan up to Rs. 1 crore?',
    "'i. The credit guarantee cover per FPO will be limited to the project loan of Rs. 2  crore. In case of project loan up to Rs. 1 crore, credit guarantee cover will be 85% of bankable project loan with ceiling of Rs. 85 lakh; while in case of project  loan above Rs.1 crore and up to Rs. 2 crore, credit guarantee cover will be 75% of bankable project loan with a maximum ceiling of Rs. 150 lakh. However, for project loan over Rs. 2 crore of bankable projet loan, credit guarantee cover will be limited maximum upto Rs.2.0 crore only.  ii. ELI shall be eligible to seek Credit Guarantee Cover for a credit facility  sanctioned in respect of a single FPO borrower for a maximum of 2 times over a period of 5 years.   iii. In case of default, claims shall be settled up to 85% or 75 % of the amount in  default subject to maximum cover as specified above.   iv. Other charges such as penal interest, commitment charge, service charge, or  any other levies/ expenses, or any  costs whatsoever debited to the account of FPO by the ELI other than the contracted interest shall not qualify for Credit Guarantee Cover.  v. The Cover shall only be granted after the ELI enters into an agreement with  NABARD or NCDC, as the case may be, and shall be granted or delivered in accordance with the Terms and Conditions decided upon by NABARD or NCDC, as the case may be, from time to time.'",
    "'7.2.1  The Scheme shall operate on the principle of \\'Area Approach\\' in the selected defined areas called Insurance Unit (IU). State Govt. /UT will notify crops and defined areas covered during the season in  accordance with decision taken in the meeting of SLCCCI. State/UT Govt. should notify Village/Village Panchayat or any other equivalent unit as an insurance unit for major crops defined at District /  Taluka or equivalent level. For **other crops** it may be a unit of size above the level of Village/village  Panchayat. For defining a crop as a major crop for deciding the Insurance Unit level, the sown area of'",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
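
For retrieval-style use (a query ranked against a passage corpus), the library's util.semantic_search helper scores passages by cosine similarity. A minimal sketch with a hypothetical two-passage corpus (the passages below are paraphrased placeholders, not the training data):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("SamagraDataGov/embedding_finetuned")

query = "What is the credit guarantee cover for a project loan up to Rs. 1 crore?"
corpus = [
    "For project loans up to Rs. 1 crore, credit guarantee cover is 85% of the bankable project loan.",
    "The Scheme operates on the principle of 'Area Approach' in defined areas called Insurance Units.",
]

query_embedding = model.encode(query, convert_to_tensor=True)
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Rank the corpus passages for the query (embeddings are already unit-normalized)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.4f}  {corpus[hit['corpus_id']]}")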

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.54
cosine_accuracy@5 0.89
cosine_accuracy@10 0.92
cosine_precision@1 0.54
cosine_precision@5 0.178
cosine_precision@10 0.092
cosine_recall@1 0.54
cosine_recall@5 0.89
cosine_recall@10 0.92
cosine_ndcg@5 0.7328
cosine_ndcg@10 0.742
cosine_ndcg@100 0.7589
cosine_mrr@5 0.68
cosine_mrr@10 0.6835
cosine_mrr@100 0.6868
cosine_map@100 0.6868
dot_accuracy@1 0.55
dot_accuracy@5 0.89
dot_accuracy@10 0.92
dot_precision@1 0.55
dot_precision@5 0.178
dot_precision@10 0.092
dot_recall@1 0.55
dot_recall@5 0.89
dot_recall@10 0.92
dot_ndcg@5 0.7365
dot_ndcg@10 0.7457
dot_ndcg@100 0.7626
dot_mrr@5 0.685
dot_mrr@10 0.6885
dot_mrr@100 0.6918
dot_map@100 0.6918
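
The table above matches the output format of the library's InformationRetrievalEvaluator. A hedged sketch of how comparable numbers can be computed, with placeholder queries, corpus, and relevance judgments (the actual evaluation split is not included in this card):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("SamagraDataGov/embedding_finetuned")

# Placeholder data: ids mapped to texts, and each query mapped to its relevant doc ids
queries = {"q1": "What is the credit guarantee cover for a project loan up to Rs. 1 crore?"}
corpus = {
    "d1": "For project loans up to Rs. 1 crore, credit guarantee cover is 85% of the bankable project loan.",
    "d2": "The Scheme operates on the principle of 'Area Approach' in defined areas called Insurance Units.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="val_evaluator",
    accuracy_at_k=[1, 5, 10],
    precision_recall_at_k=[1, 5, 10],
    ndcg_at_k=[5, 10, 100],
    mrr_at_k=[5, 10, 100],
    map_at_k=[100],
)
print(evaluator(model))  # dict of accuracy@k, precision@k, recall@k, ndcg@k, mrr@k, map@k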

Training Details

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • learning_rate: 1e-05
  • weight_decay: 0.01
  • num_train_epochs: 1.0
  • warmup_ratio: 0.1
  • load_best_model_at_end: True

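Given these hyperparameters and the GISTEmbedLoss citation below, a fine-tuning run of this kind can be sketched with SentenceTransformerTrainer. The training data and the guide model are placeholders here; the actual dataset and guide used for this checkpoint are not documented in this card.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import GISTEmbedLoss

model = SentenceTransformer("BAAI/bge-small-en-v1.5")
guide = SentenceTransformer("BAAI/bge-small-en-v1.5")  # placeholder guide model for GISTEmbedLoss

# Placeholder (anchor, positive) pairs; the real question/passage pairs are not shown in this card
train_dataset = Dataset.from_dict({
    "anchor": ["What is the credit guarantee cover for a project loan up to Rs. 1 crore?"],
    "positive": ["For project loans up to Rs. 1 crore, credit guarantee cover is 85% of the bankable project loan."],
})
eval_dataset = train_dataset  # placeholder; use a held-out split in practice

args = SentenceTransformerTrainingArguments(
    output_dir="embedding_finetuned",
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=1e-5,
    weight_decay=0.01,
    num_train_epochs=1.0,
    warmup_ratio=0.1,
    load_best_model_at_end=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=GISTEmbedLoss(model, guide),
)
trainer.train()
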
All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 1e-05
  • weight_decay: 0.01
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1.0
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch    Step   Training Loss   Validation Loss   val_evaluator_dot_map@100
0.5172   15     1.8109          1.2075            0.6918
1.0      29     -               1.2075            0.6918
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.0.1
  • Transformers: 4.43.4
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.33.0
  • Datasets: 2.21.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

GISTEmbedLoss

@misc{solatorio2024gistembed,
    title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning}, 
    author={Aivin V. Solatorio},
    year={2024},
    eprint={2402.16829},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}