SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
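
The module stack above can also be assembled by hand from Sentence Transformers building blocks. The sketch below is purely illustrative; the released checkpoint already bundles this configuration, so in practice you simply load it by name as shown under Usage.

from sentence_transformers import SentenceTransformer, models

# Illustrative reconstruction of the architecture described above
transformer = models.Transformer(
    "sentence-transformers/all-MiniLM-L6-v2",
    max_seq_length=256,  # matches the 256-token maximum sequence length
)
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),  # 384
    pooling_mode="mean",  # mean pooling over token embeddings
)
normalize = models.Normalize()  # unit-length vectors, so dot product equals cosine similarity

model = SentenceTransformer(modules=[transformer, pooling, normalize])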

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Devy1/MiniLM-cosqa-256")
# Run inference
sentences = [
    'bottom 5 rows in python',
    'def table_top_abs(self):\n        """Returns the absolute position of table top"""\n        table_height = np.array([0, 0, self.table_full_size[2]])\n        return string_to_array(self.floor.get("pos")) + table_height',
    'def refresh(self, document):\n\t\t""" Load a new copy of a document from the database.  does not\n\t\t\treplace the old one """\n\t\ttry:\n\t\t\told_cache_size = self.cache_size\n\t\t\tself.cache_size = 0\n\t\t\tobj = self.query(type(document)).filter_by(mongo_id=document.mongo_id).one()\n\t\tfinally:\n\t\t\tself.cache_size = old_cache_size\n\t\tself.cache_write(obj)\n\t\treturn obj',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000,  0.4934, -0.0548],
#         [ 0.4934,  1.0000, -0.0408],
#         [-0.0548, -0.0408,  1.0000]])
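
Because the training pairs couple short natural-language queries with Python functions, a natural follow-up is code search: embed a query and a corpus of snippets, then rank the corpus by cosine similarity. The corpus below consists of hypothetical placeholder snippets.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Devy1/MiniLM-cosqa-256")

query = "bottom 5 rows in python"
corpus = [  # hypothetical snippets to search over
    "def tail(df, n=5):\n    return df.iloc[-n:]",
    "def head(df, n=5):\n    return df.iloc[:n]",
]

query_embedding = model.encode([query])   # shape [1, 384]
corpus_embeddings = model.encode(corpus)  # shape [2, 384]

# Embeddings are L2-normalized, so cosine similarity is a natural ranking score
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, 2]
best = scores.argmax().item()
print(corpus[best])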

Training Details

Training Dataset

Unnamed Dataset

  • Size: 9,020 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
      anchor (string): min 6 tokens, mean 9.67 tokens, max 21 tokens
      positive (string): min 40 tokens, mean 86.17 tokens, max 256 tokens
  • Samples:
      anchor: 1d array in char datatype in python
      positive:
        def _convert_to_array(array_like, dtype):
            """
            Convert Matrix attributes which are array-like or buffer to array.
            """
            if isinstance(array_like, bytes):
                return np.frombuffer(array_like, dtype=dtype)
            return np.asarray(array_like, dtype=dtype)

      anchor: python condition non none
      positive:
        def _not(condition=None, **kwargs):
            """
            Return the opposite of input condition.

            :param condition: condition to process.

            :result: not condition.
            :rtype: bool
            """

            result = True

            if condition is not None:
                result = not run(condition, **kwargs)

            return result

      anchor: accessing a column from a matrix in python
      positive:
        def get_column(self, X, column):
            """Return a column of the given matrix.

            Args:
                X: numpy.ndarray or pandas.DataFrame.
                column: int or str.

            Returns:
                np.ndarray: Selected column.
            """
            if isinstance(X, pd.DataFrame):
                return X[column].values

            return X[:, column]
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
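
The anchor/positive layout above corresponds to a Hugging Face Dataset with two string columns. As a rough illustration (the pairs below are placeholders, not the actual training data):

from datasets import Dataset

# Placeholder (query, code) pairs in the same anchor/positive layout
pairs = [
    {"anchor": "bottom 5 rows in python",
     "positive": "def tail(df, n=5):\n    return df.iloc[-n:]"},
    {"anchor": "accessing a column from a matrix in python",
     "positive": "def get_column(X, column):\n    return X[:, column]"},
]
train_dataset = Dataset.from_list(pairs)

MultipleNegativesRankingLoss treats the other positives in a batch as negatives for each anchor, so a comparatively large batch size (256 here) strengthens the contrastive signal.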
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 256
  • fp16: True
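
With these settings, a minimal training run could look like the sketch below. The output directory is an assumption, train_dataset refers to an anchor/positive dataset like the one sketched under Training Dataset, and the remaining arguments keep their defaults.

from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
loss = MultipleNegativesRankingLoss(model)  # scale=20.0 and cos_sim are the defaults

args = SentenceTransformerTrainingArguments(
    output_dir="MiniLM-cosqa-256",    # assumed output directory
    per_device_train_batch_size=256,  # non-default batch size from above
    num_train_epochs=3,
    fp16=True,                        # mixed-precision training
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # anchor/positive pairs
    loss=loss,
)
trainer.train()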

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss
0.0278 1 0.8774
0.0556 2 0.6553
0.0833 3 0.7565
0.1111 4 0.7703
0.1389 5 0.5969
0.1667 6 0.5905
0.1944 7 0.76
0.2222 8 0.6663
0.25 9 0.625
0.2778 10 0.5882
0.3056 11 0.623
0.3333 12 0.5631
0.3611 13 0.524
0.3889 14 0.7467
0.4167 15 0.6272
0.4444 16 0.5395
0.4722 17 0.6429
0.5 18 0.6462
0.5278 19 0.6576
0.5556 20 0.6333
0.5833 21 0.6013
0.6111 22 0.5671
0.6389 23 0.6835
0.6667 24 0.5734
0.6944 25 0.5969
0.7222 26 0.5446
0.75 27 0.6675
0.7778 28 0.5319
0.8056 29 0.5374
0.8333 30 0.5085
0.8611 31 0.6267
0.8889 32 0.4322
0.9167 33 0.5383
0.9444 34 0.5712
0.9722 35 0.5485
1.0 36 0.214
1.0278 37 0.515
1.0556 38 0.4593
1.0833 39 0.4891
1.1111 40 0.3927
1.1389 41 0.4909
1.1667 42 0.4875
1.1944 43 0.4611
1.2222 44 0.409
1.25 45 0.4307
1.2778 46 0.4946
1.3056 47 0.5795
1.3333 48 0.4643
1.3611 49 0.4998
1.3889 50 0.4235
1.4167 51 0.5118
1.4444 52 0.4707
1.4722 53 0.4705
1.5 54 0.4539
1.5278 55 0.5652
1.5556 56 0.404
1.5833 57 0.5273
1.6111 58 0.5888
1.6389 59 0.4139
1.6667 60 0.4815
1.6944 61 0.4656
1.7222 62 0.3471
1.75 63 0.4345
1.7778 64 0.4375
1.8056 65 0.3994
1.8333 66 0.4184
1.8611 67 0.4474
1.8889 68 0.3888
1.9167 69 0.3873
1.9444 70 0.5267
1.9722 71 0.3954
2.0 72 0.0789
2.0278 73 0.429
2.0556 74 0.4103
2.0833 75 0.3696
2.1111 76 0.426
2.1389 77 0.3726
2.1667 78 0.4097
2.1944 79 0.4385
2.2222 80 0.3634
2.25 81 0.346
2.2778 82 0.3483
2.3056 83 0.4737
2.3333 84 0.4918
2.3611 85 0.3644
2.3889 86 0.4132
2.4167 87 0.422
2.4444 88 0.5443
2.4722 89 0.4509
2.5 90 0.3926
2.5278 91 0.3734
2.5556 92 0.3753
2.5833 93 0.3722
2.6111 94 0.4094
2.6389 95 0.4425
2.6667 96 0.374
2.6944 97 0.4313
2.7222 98 0.3245
2.75 99 0.3582
2.7778 100 0.3581
2.8056 101 0.3798
2.8333 102 0.3791
2.8611 103 0.3892
2.8889 104 0.3989
2.9167 105 0.3393
2.9444 106 0.457
2.9722 107 0.3486
3.0 108 0.1888

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 5.1.1
  • Transformers: 4.56.2
  • PyTorch: 2.8.0+cu128
  • Accelerate: 1.10.1
  • Datasets: 4.1.1
  • Tokenizers: 0.22.1
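
To approximate this environment, the listed versions can be pinned directly (a sketch, not a verified lockfile):

pip install torch==2.8.0 sentence-transformers==5.1.1 transformers==4.56.2 accelerate==1.10.1 datasets==4.1.1 tokenizers==0.22.1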

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}