SentenceTransformer based on intfloat/multilingual-e5-large

This is a sentence-transformers model finetuned from intfloat/multilingual-e5-large. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: intfloat/multilingual-e5-large
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 560M parameters (F32 safetensors)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
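
Because the Pooling module uses mean pooling and the final Normalize module L2-normalizes the output, the same embeddings can be reproduced with plain transformers. The following is a minimal sketch, assuming only the standard transformers and torch APIs; the encode helper is illustrative, not part of the model:

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("meandyou200175/e5_large_finetune_word")
model = AutoModel.from_pretrained("meandyou200175/e5_large_finetune_word")

def encode(texts):
    batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        token_embeddings = model(**batch).last_hidden_state  # (batch, seq_len, 1024)
    # Mean pooling over non-padding tokens, mirroring the Pooling module above
    mask = batch["attention_mask"].unsqueeze(-1).float()
    pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
    # L2 normalization, mirroring the Normalize module; cosine similarity then reduces to a dot product
    return F.normalize(pooled, p=2, dim=1)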

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("meandyou200175/e5_large_finetune_word")
# Run inference
sentences = [
    'A long appendage protruding from the lower back. Often covered in fur or scales. A common feature of animal girls.',
    'tail',
    'stomach day',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
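
Note that the base model intfloat/multilingual-e5-large was trained with "query: " and "passage: " input prefixes. The training samples shown below appear to be unprefixed, so plain strings should match how this finetune was trained; if retrieval quality looks off, the base model's prefix convention is worth trying. A hedged sketch, continuing from the example above:

# Assumption: prefixes follow the intfloat/multilingual-e5-large convention;
# this finetune's own training data appears to be unprefixed.
queries = ["query: A long appendage protruding from the lower back."]
passages = ["passage: tail", "passage: stomach day"]
query_embeddings = model.encode(queries)
passage_embeddings = model.encode(passages)
print(model.similarity(query_embeddings, passage_embeddings))  # shape (1, 2)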

Evaluation

Metrics

Information Retrieval

Metric                 Value
cosine_accuracy@1      0.9073
cosine_accuracy@2      0.9739
cosine_accuracy@5      0.9942
cosine_accuracy@10     0.999
cosine_accuracy@100    1.0
cosine_precision@1     0.9073
cosine_precision@2     0.487
cosine_precision@5     0.1988
cosine_precision@10    0.0999
cosine_precision@100   0.01
cosine_recall@1        0.9073
cosine_recall@2        0.9739
cosine_recall@5        0.9942
cosine_recall@10       0.999
cosine_recall@100      1.0
cosine_ndcg@10         0.9602
cosine_mrr@1           0.9073
cosine_mrr@2           0.9406
cosine_mrr@5           0.9463
cosine_mrr@10          0.947
cosine_mrr@100         0.9471
cosine_map@100         0.9471
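
Metrics like these can be computed with the library's InformationRetrievalEvaluator. The sketch below uses a hypothetical two-document corpus just to show the wiring; the actual evaluation ran over the 1,036-sample word_embedding split described under Training Details:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("meandyou200175/e5_large_finetune_word")

# Hypothetical toy data: query id -> query, corpus id -> passage, query id -> relevant corpus ids
queries = {"q1": "A long appendage protruding from the lower back."}
corpus = {"c1": "tail", "c2": "stomach day"}
relevant_docs = {"q1": {"c1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    # The table above reports k in {1, 2, 5, 10, 100}; the toy corpus here only supports small k
    accuracy_at_k=[1, 2],
    precision_recall_at_k=[1, 2],
    mrr_at_k=[1, 2],
    ndcg_at_k=[2],
    map_at_k=[2],
)
results = evaluator(model)  # dict mapping metric names to values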

Training Details

Training Dataset

Unnamed Dataset

  • Size: 10,356 training samples
  • Columns: query and positive
  • Approximate statistics based on the first 1000 samples:
    • query: string; min: 3 tokens, mean: 36.54 tokens, max: 177 tokens
    • positive: string; min: 3 tokens, mean: 5.3 tokens, max: 13 tokens
  • Samples:
    • query: "Eyewear shaped like a semicircle." → positive: "semi-circular eyewear"
    • query: "A handheld electric appliance used for drying and styling hair." → positive: "hair dryer"
    • query: "When one breast is exposed while the other remains covered or confined by clothing. See breasts out for when both breasts are exposed." → positive: "one breast out"
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
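
MultipleNegativesRankingLoss uses in-batch negatives: for each (query, positive) pair, the scaled cosine similarities between the query and every positive in the batch feed a cross-entropy objective whose target is the query's own positive. This is also why the no_duplicates batch sampler listed under Training Hyperparameters matters, since a duplicate positive elsewhere in the batch would act as a false negative. A minimal sketch of the core computation in plain PyTorch (names are illustrative):

import torch
import torch.nn.functional as F

def mnr_loss(query_emb, positive_emb, scale=20.0):
    # (batch, batch) matrix of cosine similarities between all queries and all positives
    scores = F.cosine_similarity(query_emb.unsqueeze(1), positive_emb.unsqueeze(0), dim=-1) * scale
    # The i-th query's true positive sits on the diagonal; every other column is an in-batch negative
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)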
    

Evaluation Dataset

word_embedding

  • Dataset: word_embedding at af76b11
  • Size: 1,036 evaluation samples
  • Columns: query and positive
  • Approximate statistics based on the first 1000 samples:
    • query: string; min: 4 tokens, mean: 35.89 tokens, max: 164 tokens
    • positive: string; min: 3 tokens, mean: 5.38 tokens, max: 14 tokens
  • Samples:
    • query: "A machine that manipulates data according to a list of instructions. The ability to store and execute lists of instructions called programs makes computers extremely versatile. On Danbooru's images they are most often used for drawing, playing games, and accessing the internet." → positive: "computer"
    • query: "A playing card with two clubs." → positive: "two of clubs"
    • query: "Yebisu (ヱビス, Ebisu) is a beer produced by Sapporo Breweries. It is one of Japan's oldest brands, first brewed in Tokyo in 1890 by the Japan Beer Brewery Company." → positive: "yebisu"
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • num_train_epochs: 5
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
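
Put together, a training run with these settings would look roughly like the sketch below, using the sentence-transformers v3 trainer API. The tiny inline datasets stand in for the real 10,356 / 1,036-sample splits, and the output_dir is illustrative:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("intfloat/multilingual-e5-large")

# Stand-ins for the real (query, positive) splits described above
train_dataset = Dataset.from_dict({
    "query": ["Eyewear shaped like a semicircle."],
    "positive": ["semi-circular eyewear"],
})
eval_dataset = Dataset.from_dict({
    "query": ["A playing card with two clubs."],
    "positive": ["two of clubs"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="e5_large_finetune_word",  # illustrative
    num_train_epochs=5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate positives acting as false negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=MultipleNegativesRankingLoss(model, scale=20.0),
)
trainer.train()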

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch   Step  Training Loss  Validation Loss  cosine_ndcg@10
-1      -1    -              -                0.7166
0.1543  100   0.9191         -                -
0.3086  200   0.1876         -                -
0.4630  300   0.1547         -                -
0.6173  400   0.1556         -                -
0.7716  500   0.179          -                -
0.9259  600   0.1234         -                -
1.0802  700   0.087          -                -
1.2346  800   0.0576         -                -
1.3889  900   0.0564         -                -
1.5432  1000  0.0583         0.0271           0.9198
1.6975  1100  0.0764         -                -
1.8519  1200  0.0493         -                -
2.0062  1300  0.0481         -                -
2.1605  1400  0.0222         -                -
2.3148  1500  0.0234         -                -
2.4691  1600  0.0283         -                -
2.6235  1700  0.0236         -                -
2.7778  1800  0.026          -                -
2.9321  1900  0.0217         -                -
3.0864  2000  0.0193         0.0061           0.9534
3.2407  2100  0.0135         -                -
3.3951  2200  0.0162         -                -
3.5494  2300  0.0109         -                -
3.7037  2400  0.0107         -                -
3.8580  2500  0.0105         -                -
4.0123  2600  0.0095         -                -
4.1667  2700  0.0146         -                -
4.3210  2800  0.0102         -                -
4.4753  2900  0.0108         -                -
4.6296  3000  0.01           0.0061           0.9602
4.7840  3100  0.008          -                -
4.9383  3200  0.0117         -                -

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.4.1
  • Transformers: 4.51.1
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.5.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}