SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
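For reference, here is a minimal sketch of what these three modules do at inference time, written against the plain Hugging Face transformers API (sentence-transformers checkpoints are typically loadable this way; treat this as an illustration, not the library's internal code):

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "AryehRotberg/ToS-Sentence-Transformers"
tokenizer = AutoTokenizer.from_pretrained(model_id)
bert = AutoModel.from_pretrained(model_id)

# (0) Transformer: tokenize, truncate to max_seq_length=256, run BertModel
batch = tokenizer(["Example clause."], padding=True, truncation=True,
                  max_length=256, return_tensors="pt")
token_embeddings = bert(**batch).last_hidden_state        # [1, seq_len, 384]

# (1) Pooling: mean over non-padding tokens (pooling_mode_mean_tokens=True)
mask = batch["attention_mask"].unsqueeze(-1).float()      # [1, seq_len, 1]
sentence_embedding = (token_embeddings * mask).sum(1) / mask.sum(1)

# (2) Normalize: L2-normalize so the dot product equals cosine similarity
sentence_embedding = F.normalize(sentence_embedding, p=2, dim=1)
print(sentence_embedding.shape)                           # torch.Size([1, 384])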

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("AryehRotberg/ToS-Sentence-Transformers")
# Run inference
sentences = [
    'Pexgle will need to share your information, including personal information, in order to ensure the adequate performance of our contract with you.',
    'This service gives your personal data to third parties involved in its operation',
    'Extra data may be collected about you through promotions',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
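
The same two calls cover the semantic-search use case: embed a raw clause once, embed candidate annotations, and rank by cosine score. A minimal sketch (the clause and candidate strings below are made-up examples; model is the object loaded above):

# Rank hypothetical candidate annotations against a raw ToS clause
clause = "We may share your personal information with our advertising partners."
candidates = [
    "This service gives your personal data to third parties involved in its operation",
    "You are not being tracked",
    "Your personal data is used for advertising",
]
scores = model.similarity(model.encode([clause]), model.encode(candidates))[0]
best = int(scores.argmax())
print(candidates[best], float(scores[best]))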

Evaluation

Metrics

Triplet

Metric           Value
cosine_accuracy  0.9993
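
Cosine accuracy here is the fraction of evaluation triplets for which the anchor embedding is closer (by cosine similarity) to its positive than to its negative. A minimal sketch using the library's TripletEvaluator with hypothetical triplets:

from sentence_transformers.evaluation import TripletEvaluator

# Hypothetical triplets in the same anchor/positive/negative format as the dataset
anchors   = ["We may sell your data to advertisers."]
positives = ["Your personal data is used for advertising"]
negatives = ["You are not being tracked"]

evaluator = TripletEvaluator(anchors, positives, negatives, name="tos-dev")
print(evaluator(model))  # e.g. {'tos-dev_cosine_accuracy': 1.0}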

Training Details

Training Dataset

Unnamed Dataset

  • Size: 150,468 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:

               anchor              positive            negative
      type     string              string              string
      details  min: 4 tokens       min: 6 tokens       min: 4 tokens
               mean: 48.6 tokens   mean: 14.72 tokens  mean: 14.26 tokens
               max: 256 tokens     max: 29 tokens      max: 29 tokens
  • Samples:
    • anchor: For all User Submissions, you hereby grant Guilded a license to translate, modify (for technical purposes, for example, making sure your content is viewable on a mobile device as well as a computer) and reproduce and otherwise act with respect to such User Submissions, in each case to enable us to operate the Services, as described in more detail below.
      positive: Copyright license limited for the purposes of that same service but transferable and sublicenseable
      negative: You are prohibited from sending chain letters, junk mail, spam or any unsolicited messages
    • anchor: Our data is stored in the EU or USA with robust physical, digital, and procedural safeguards in place to protect your personal data, including the use of SSL encryption, redundant servers and data centers, and sophisticated perimeter security. We continuously audit for security vulnerabilities and make software patching a priority.
      positive: Information is provided about security practices
      negative: The service disables software that you are not licensed to use.
    • anchor: No part of our Platform may be reproduced in any form or incorporated into any information retrieval system, electronic or mechanical, other than for your personal use. While using our Platform, you cannot redistribute your license (“Premium”, “Pro”, “Lite”) to anyone in any way that can make them use the features bound to your account. Unless otherwise specified, the developer tools and components, download areas, communication forums, and product information are for your personal and non-commercial use.
      positive: This service is only available for use individually and non-commercially.
      negative: Accessibility to this service is guaranteed at 99% or more
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
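
For reference, a sketch of what this loss computes under those parameters: each anchor is scored against every positive and every hard negative in the batch, similarities are scaled by 20.0, and cross-entropy pushes the anchor's own positive to the top. This is plain PyTorch written for illustration, not the library's exact implementation:

import torch
import torch.nn.functional as F

def mnr_loss(anchor_emb, positive_emb, negative_emb, scale=20.0):
    # Candidates: all positives plus all hard negatives in the batch  [2B, d]
    candidates = torch.cat([positive_emb, negative_emb], dim=0)
    # Scaled cosine similarity of every anchor to every candidate     [B, 2B]
    scores = scale * F.cosine_similarity(
        anchor_emb.unsqueeze(1), candidates.unsqueeze(0), dim=-1)
    # The correct candidate for anchor i is positive i (in-batch negatives)
    labels = torch.arange(anchor_emb.size(0))
    return F.cross_entropy(scores, labels)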
    

Evaluation Dataset

Unnamed Dataset

  • Size: 37,617 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:

               anchor              positive            negative
      type     string              string              string
      details  min: 3 tokens       min: 6 tokens       min: 6 tokens
               mean: 45.61 tokens  mean: 14.64 tokens  mean: 14.26 tokens
               max: 256 tokens     max: 29 tokens      max: 29 tokens
  • Samples:
    • anchor: non-exclusive, worldwide right and license to use,
      positive: The service has non-exclusive use of your content
      negative: You are not being tracked
    • anchor: We also reserve the right to suspend or end the Service at any time at our discretion and without notice. For example, we may suspend or terminate your use of the Service and remove Your Content if you’re not complying with these AUP Guidelines, or using the Service in a manner that may cause us legal liability, disrupt the Service, disrupt others’ use of the Service or, in our sole opinion, reason, cause harm.
      positive: Your account can be deleted or permanently suspended without prior notice and without a reason
      negative: The service claims to be CCPA compliant for California users
    • anchor: ExpressVPN uses mobile identifiers to generate statistics related to the marketing channels and advertising partners through which users learned about and signed up for ExpressVPN mobile apps.
      positive: You are tracked via web beacons, tracking pixels, browser fingerprinting, and/or device fingerprinting
      negative: Your personal data is used for advertising
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
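
A sketch of a training script that reproduces these non-default settings with the SentenceTransformerTrainer API; the dataset file and its loading are hypothetical stand-ins (the card does not name the dataset), but the hyperparameters match the list above:

from datasets import load_dataset
from sentence_transformers import (SentenceTransformer,
                                   SentenceTransformerTrainer,
                                   SentenceTransformerTrainingArguments)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
# Hypothetical triplet dataset with anchor/positive/negative columns
dataset = load_dataset("csv", data_files="tos_triplets.csv")["train"]
splits = dataset.train_test_split(test_size=0.2, seed=42)

args = SentenceTransformerTrainingArguments(
    output_dir="tos-minilm",
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # matches no_duplicates above
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    loss=MultipleNegativesRankingLoss(model),  # scale=20.0, cos_sim by default
)
trainer.train()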

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss all-nli-dev_cosine_accuracy
-1 -1 - - 0.9527
0.0106 100 1.3092 1.1396 0.9620
0.0213 200 1.0389 0.8936 0.9742
0.0319 300 0.8838 0.7500 0.9793
0.0425 400 0.7582 0.6477 0.9843
0.0532 500 0.6358 0.5727 0.9871
0.0638 600 0.6451 0.5158 0.9889
0.0744 700 0.4932 0.4715 0.9903
0.0851 800 0.4865 0.4355 0.9913
0.0957 900 0.4636 0.4035 0.9927
0.1063 1000 0.4406 0.3846 0.9930
0.1170 1100 0.3824 0.3691 0.9934
0.1276 1200 0.3967 0.3411 0.9944
0.1382 1300 0.3448 0.3264 0.9945
0.1489 1400 0.3372 0.3018 0.9955
0.1595 1500 0.3035 0.2941 0.9959
0.1701 1600 0.319 0.2864 0.9956
0.1808 1700 0.292 0.2743 0.9964
0.1914 1800 0.2647 0.2727 0.9965
0.2020 1900 0.2948 0.2517 0.9968
0.2127 2000 0.2583 0.2456 0.9971
0.2233 2100 0.2685 0.2352 0.9970
0.2339 2200 0.2879 0.2327 0.9969
0.2446 2300 0.2366 0.2271 0.9972
0.2552 2400 0.231 0.2164 0.9972
0.2658 2500 0.2639 0.2124 0.9973
0.2764 2600 0.2543 0.2078 0.9976
0.2871 2700 0.2261 0.2043 0.9972
0.2977 2800 0.2239 0.1976 0.9978
0.3083 2900 0.2271 0.1932 0.9977
0.3190 3000 0.2334 0.1845 0.9979
0.3296 3100 0.2021 0.1867 0.9981
0.3402 3200 0.2237 0.1762 0.9984
0.3509 3300 0.2109 0.1730 0.9983
0.3615 3400 0.2047 0.1663 0.9985
0.3721 3500 0.1904 0.1629 0.9984
0.3828 3600 0.1687 0.1643 0.9984
0.3934 3700 0.2071 0.1584 0.9984
0.4040 3800 0.1609 0.1543 0.9983
0.4147 3900 0.1862 0.1525 0.9984
0.4253 4000 0.1925 0.1504 0.9984
0.4359 4100 0.1714 0.1484 0.9985
0.4466 4200 0.2025 0.1472 0.9985
0.4572 4300 0.1427 0.1422 0.9986
0.4678 4400 0.1458 0.1401 0.9986
0.4785 4500 0.1796 0.1371 0.9985
0.4891 4600 0.1289 0.1317 0.9987
0.4997 4700 0.1427 0.1298 0.9988
0.5104 4800 0.1349 0.1313 0.9988
0.5210 4900 0.149 0.1293 0.9987
0.5316 5000 0.1633 0.1230 0.9988
0.5423 5100 0.1241 0.1240 0.9988
0.5529 5200 0.1532 0.1196 0.9988
0.5635 5300 0.1547 0.1173 0.9988
0.5742 5400 0.1652 0.1167 0.9990
0.5848 5500 0.1505 0.1120 0.9989
0.5954 5600 0.1309 0.1106 0.9990
0.6061 5700 0.1648 0.1089 0.9988
0.6167 5800 0.118 0.1070 0.9988
0.6273 5900 0.1207 0.1062 0.9988
0.6380 6000 0.1104 0.1046 0.9989
0.6486 6100 0.1262 0.1040 0.9989
0.6592 6200 0.1236 0.1008 0.9990
0.6699 6300 0.122 0.1005 0.9990
0.6805 6400 0.1244 0.1005 0.9991
0.6911 6500 0.1176 0.0998 0.9991
0.7018 6600 0.1215 0.0994 0.9991
0.7124 6700 0.1079 0.0983 0.9991
0.7230 6800 0.1099 0.0957 0.9991
0.7337 6900 0.1121 0.0950 0.9992
0.7443 7000 0.1137 0.0942 0.9992
0.7549 7100 0.1082 0.0929 0.9991
0.7656 7200 0.1047 0.0923 0.9991
0.7762 7300 0.1147 0.0904 0.9992
0.7868 7400 0.1336 0.0895 0.9991
0.7974 7500 0.1122 0.0889 0.9992
0.8081 7600 0.1126 0.0884 0.9993
0.8187 7700 0.116 0.0864 0.9992
0.8293 7800 0.0991 0.0857 0.9992
0.8400 7900 0.1091 0.0851 0.9992
0.8506 8000 0.1052 0.0846 0.9993
0.8612 8100 0.1105 0.0839 0.9992
0.8719 8200 0.1101 0.0836 0.9992
0.8825 8300 0.107 0.0832 0.9993
0.8931 8400 0.0867 0.0827 0.9993
0.9038 8500 0.0965 0.0823 0.9992
0.9144 8600 0.1108 0.0817 0.9993
0.9250 8700 0.1219 0.0814 0.9992
0.9357 8800 0.1169 0.0809 0.9992
0.9463 8900 0.0964 0.0805 0.9992
0.9569 9000 0.0939 0.0804 0.9992
0.9676 9100 0.0955 0.0803 0.9993
0.9782 9200 0.1076 0.0800 0.9993
0.9888 9300 0.1049 0.0798 0.9992
0.9995 9400 0.0826 0.0798 0.9993

Framework Versions

  • Python: 3.9.19
  • Sentence Transformers: 4.0.2
  • Transformers: 4.48.1
  • PyTorch: 2.4.1+cu124
  • Accelerate: 1.6.0
  • Datasets: 2.21.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}