
SentenceTransformer based on intfloat/multilingual-e5-large

This is a sentence-transformers model finetuned from intfloat/multilingual-e5-large on the mcqa_rag dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: intfloat/multilingual-e5-large
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: mcqa_rag

Model Sources

  • Documentation: https://www.sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
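
The final Normalize() module gives every embedding unit length, so cosine similarity reduces to a plain dot product. For illustration, here is a minimal sketch, assuming the checkpoint loads as a standard transformers model (Sentence Transformers checkpoints include the underlying Hugging Face weights), that reproduces the three-module pipeline with plain transformers: encode with XLM-R, mean-pool over non-padding tokens, then L2-normalize.

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("DoDucAnh/multilingual-e5-large-tuned")
encoder = AutoModel.from_pretrained("DoDucAnh/multilingual-e5-large-tuned")

def encode(texts):
    # (0) Transformer: tokenize with max_seq_length=512 and run XLM-R
    batch = tokenizer(texts, max_length=512, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 1024)
    # (1) Pooling: mean over non-padding tokens (pooling_mode_mean_tokens=True)
    mask = batch["attention_mask"].unsqueeze(-1).float()
    pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
    # (2) Normalize: unit-length vectors, so dot product equals cosine similarity
    return F.normalize(pooled, p=2, dim=1)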

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("DoDucAnh/multilingual-e5-large-tuned")
# Run inference
sentences = [
    'A student is mixing a cup of hot chocolate with a spoon. How is the heat transferred between the hot chocolate and the part of the spoon that is in the hot chocolate?\nA. Conduction transfers energy from the spoon to the hot chocolate.\nB. Conduction transfers energy from the hot chocolate to the spoon.\nC. Convection transfers energy from the spoon to the hot chocolate.\nD. Convection transfers energy from the hot chocolate to the spoon.',
    '**Applications of Convection in Everyday Life**: In mixtures like hot chocolate, convection plays a role when the liquid is stirred, causing hot liquid to rise and cooler liquid to sink, helping to mix and distribute heat throughout the drink. However, convection primarily pertains to the movement of the liquid rather than direct heat transfer from one solid to another.',
    'Additionally, moral philosophy often emphasizes the concept of empathy and the impact of one’s actions on others. Dwellings on a partner’s mistakes, while it may cause strain in a relationship, is often viewed as part of the natural dynamics of personal relationships, unless it leads to emotional abuse or significant harm. This indicates that emotional responses to insignificant errors may not always be deemed morally wrong, depending on context and impact.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
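
Because the configured similarity function is cosine and the embeddings are normalized, ranking candidate passages for a question is a single matrix product. Continuing from the snippet above, a small illustrative sketch (the question and passages below are made up for the example, not taken from mcqa_rag):

query = "How is heat transferred between hot chocolate and a spoon?"
passages = [
    "Conduction transfers heat through direct contact between materials.",
    "Convection moves heat through the bulk motion of a fluid.",
    "Radiation transfers heat via electromagnetic waves.",
]
# Encode the query and candidates, then rank by cosine similarity
query_embedding = model.encode([query])
passage_embeddings = model.encode(passages)
scores = model.similarity(query_embedding, passage_embeddings)  # shape (1, 3)
best = int(scores.argmax())
print(passages[best], float(scores[0, best]))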

Training Details

Training Dataset

mcqa_rag

  • Dataset: mcqa_rag at 025e93d
  • Size: 607,908 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    |         | anchor | positive |
    |---------|--------|----------|
    | type    | string | string |
    | details | min: 22 tokens, mean: 105.96 tokens, max: 512 tokens | min: 12 tokens, mean: 70.95 tokens, max: 478 tokens |
  • Samples:
    | anchor | positive |
    |--------|----------|
    | Find all c in Z_3 such that Z_3[x]/(x^2 + c) is a field.<br>A. 0<br>B. 1<br>C. 2<br>D. 3 | The notation Z_3 refers to the finite field with three elements, often denoted as {0, 1, 2}. This field operates under modular arithmetic, specifically modulo 3. Elements in Z_3 can be added and multiplied according to the rules of modulo 3, where any number can wrap around upon reaching 3. |
    | Find all c in Z_3 such that Z_3[x]/(x^2 + c) is a field.<br>A. 0<br>B. 1<br>C. 2<br>D. 3 | A field is a set equipped with two operations, addition and multiplication, satisfying certain properties: associativity, commutativity, distributivity, the existence of additive and multiplicative identities, and the existence of additive inverses and multiplicative inverses (for all elements except the zero element). In order for Z_3[x]/(f(x)) to be a field, the polynomial f(x) must be irreducible over Z_3. |
    | Find all c in Z_3 such that Z_3[x]/(x^2 + c) is a field.<br>A. 0<br>B. 1<br>C. 2<br>D. 3 | The expression Z_3[x] indicates the set of all polynomials with coefficients in Z_3. A polynomial is said to be irreducible over Z_3 if it cannot be factored into the product of two non-constant polynomials with coefficients in Z_3. In the case of quadratic polynomials like x^2 + c, irreducibility depends on whether it has any roots in the field Z_3. |
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
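
With MultipleNegativesRankingLoss, each (anchor, positive) pair in a batch treats every other positive in the same batch as a negative, and the scale of 20.0 multiplies the cosine similarities before the cross-entropy over candidates. A minimal sketch, assuming the standard Sentence Transformers API, of instantiating this loss with the parameters listed above:

from sentence_transformers import SentenceTransformer, losses, util

# Start from the base checkpoint and attach the loss used for fine-tuning
base_model = SentenceTransformer("intfloat/multilingual-e5-large")
loss = losses.MultipleNegativesRankingLoss(
    base_model,
    scale=20.0,                   # multiplier on similarities ("scale": 20.0)
    similarity_fct=util.cos_sim,  # cosine similarity ("similarity_fct": "cos_sim")
)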
    

Evaluation Dataset

mcqa_rag

  • Dataset: mcqa_rag at 025e93d
  • Size: 14,380 evaluation samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    |         | anchor | positive |
    |---------|--------|----------|
    | type    | string | string |
    | details | min: 22 tokens, mean: 99.4 tokens, max: 512 tokens | min: 9 tokens, mean: 59.56 tokens, max: 501 tokens |
  • Samples:
    | anchor | positive |
    |--------|----------|
    | ക്രൂരകോഷ്ഠം ഉള്ള ഒരാളിൽ കോപിച്ചിരിക്കുന്ന ദോഷം താഴെപ്പറയുന്നവയിൽ ഏതാണ്?<br>A. കഫം<br>B. പിത്തം<br>C. വാതം<br>D. രക്തം<br>(English: In a person with krura koshtha (hard bowels), which of the following doshas is aggravated? A. Kapha B. Pitta C. Vata D. Rakta) | ഓരോ ദോഷത്തിനും അതിന്റേതായ സ്വഭാവങ്ങളും ശരീരത്തിൽ അത് ഉണ്ടാക്കുന്ന ഫലങ്ങളും ഉണ്ട്.<br>(English: Each dosha has its own characteristics and its own effects on the body.) |
    | Melyik tényező nem befolyásolja a fagylalt keresleti függvényét?<br>A. A fagylalt árának változása.<br>B. Mindegyik tényező befolyásolja.<br>C. A jégkrém árának változása.<br>D. A fagylalttölcsér árának változása.<br>(English: Which factor does not affect the demand function for ice cream? A. A change in the price of ice cream. B. All of these factors affect it. C. A change in the price of packaged ice cream. D. A change in the price of ice cream cones.) | A keresleti függvény negatív meredekségű, ami azt jelenti, hogy az ár növekedésével a keresett mennyiség csökken (csökkenő kereslet törvénye).<br>(English: The demand function has a negative slope, meaning that as the price rises the quantity demanded falls, the law of demand.) |
    | In contrast to _______, _______ aim to reward favourable behaviour by companies. The success of such campaigns have been heightened through the use of ___________, which allow campaigns to facilitate the company in achieving _________ .<br>A. Boycotts, Buyalls, Blockchain technology, Increased Sales<br>B. Buycotts, Boycotts, Digital technology, Decreased Sales<br>C. Boycotts, Buycotts, Digital technology, Decreased Sales<br>D. Buycotts, Boycotts, Blockchain technology, Charitable donations<br>E. Boycotts, Buyalls, Blockchain technology, Charitable donations<br>F. Boycotts, Buycotts, Digital technology, Increased Sales<br>G. Buycotts, Boycotts, Digital technology, Increased Sales<br>H. Boycotts, Buycotts, Physical technology, Increased Sales<br>I. Buycotts, Buyalls, Blockchain technology, Charitable donations<br>J. Boycotts, Buycotts, Blockchain technology, Decreased Sales | Consumer Activism: This term refers to the actions taken by consumers to promote social, political, or environmental causes. These actions can include boycotting certain companies or buycotting others, influencing market dynamics based on ethical considerations. The effectiveness of consumer activism can vary but has gained prominence in recent years with increased visibility through social media. |
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • learning_rate: 3e-05
  • num_train_epochs: 2
  • warmup_steps: 1000
  • fp16: True
  • load_best_model_at_end: True

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 3e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 1000
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch  | Step   | Training Loss | Validation Loss |
|--------|--------|---------------|-----------------|
| 0.1000 | 6080   | 0.099         | 0.0513          |
| 0.2000 | 12160  | 0.0811        | 0.0616          |
| 0.3000 | 18240  | 0.0765        | 0.0683          |
| 0.4001 | 24320  | 0.0722        | 0.0775          |
| 0.5001 | 30400  | 0.0676        | 0.0759          |
| 0.6001 | 36480  | 0.0641        | 0.0413          |
| 0.7001 | 42560  | 0.0559        | 0.0437          |
| 0.8001 | 48640  | 0.052         | 0.0581          |
| 0.9001 | 54720  | 0.0508        | 0.0540          |
| 1.0001 | 60800  | 0.0433        | 0.0402          |
| 1.1002 | 66880  | 0.0325        | 0.0350          |
| 1.2002 | 72960  | 0.0301        | 0.0379          |
| 1.3002 | 79040  | 0.0301        | 0.0355          |
| 1.4002 | 85120  | 0.027         | 0.0424          |
| 1.5002 | 91200  | 0.0251        | 0.0385          |
| 1.6002 | 97280  | 0.0218        | 0.0313          |
| 1.7003 | 103360 | 0.0205        | 0.0258          |
| 1.8003 | 109440 | 0.0201        | 0.0260          |
| 1.9003 | 115520 | 0.0181        | 0.0227          |
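
Since load_best_model_at_end is enabled, the final model should correspond to the checkpoint with the lowest validation loss, 0.0227 at step 115520.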

Framework Versions

  • Python: 3.12.8
  • Sentence Transformers: 3.4.1
  • Transformers: 4.48.2
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}