Fine-tune-all-mpnet-base-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-mpnet-base-v2 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-mpnet-base-v2
  • Maximum Sequence Length: 384 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: en
  • License: apache-2.0

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
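Because the final Normalize() module L2-normalizes every embedding, the dot product of two embeddings equals their cosine similarity. The following minimal sketch (not part of the original card) checks this property:

from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("thanhpham1/Fine-tune-all-mpnet-base-v2")
emb = model.encode(["Ray Train schedules your training workers."])
# The Normalize() module makes each embedding unit-length, so its norm is ~1.0
# and dot products between embeddings equal their cosine similarities.
print(np.linalg.norm(emb[0]))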

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("thanhpham1/Fine-tune-all-mpnet-base-v2")
# Run inference
sentences = [
    'Option 2: Manually Create URL (slower to implement, but recommended for production environments)#\nThe second option is to manually create this URL by pattern-matching your specific use case with one of the following examples.\nThis is recommended because it provides finer-grained control over which repository branch and commit to use when generating your dependency zip file.\nThese options prevent consistency issues on Ray Clusters (see the warning above for more info).\nTo create the URL, pick a URL template below that fits your use case, and fill in all parameters in brackets (e.g. [username], [repository], etc.) with the specific values from your repository.\nFor instance, suppose your GitHub username is example_user, the repository’s name is example_repository, and the desired commit hash is abcdefg.\nIf example_repository is public and you want to retrieve the abcdefg commit (which matches the first example use case), the URL would be:',
    'How do you create the URL for Option 2?',
    'What can Ray Train and Ray Tune be used together for?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
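Because this model was trained with MatryoshkaLoss (see Training Details), its embeddings can also be truncated to a smaller dimension at load time. A minimal sketch, assuming the same model ID and a 256-dimensional truncation:

from sentence_transformers import SentenceTransformer

# Minimal sketch: truncate_dim keeps only the first 256 embedding dimensions,
# which Matryoshka-style training is designed to support with limited quality loss
# (see the Evaluation tables below for the measured drop at each dimension).
model = SentenceTransformer("thanhpham1/Fine-tune-all-mpnet-base-v2", truncate_dim=256)
embeddings = model.encode([
    "How do you create the URL for Option 2?",
    "What can Ray Train and Ray Tune be used together for?",
])
print(embeddings.shape)
# (2, 256)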

Evaluation

Metrics

The five tables below report the same information-retrieval metrics at each Matryoshka embedding dimension (768, 512, 256, 128, and 64); they correspond to the dim_768 through dim_64 columns in the Training Logs.

Information Retrieval (dim_768)

| Metric | Value |
|:---|---:|
| cosine_accuracy@1 | 0.5874 |
| cosine_accuracy@3 | 0.6818 |
| cosine_accuracy@5 | 0.7955 |
| cosine_accuracy@10 | 0.8864 |
| cosine_precision@1 | 0.5874 |
| cosine_precision@3 | 0.5181 |
| cosine_precision@5 | 0.3944 |
| cosine_precision@10 | 0.232 |
| cosine_recall@1 | 0.264 |
| cosine_recall@3 | 0.6074 |
| cosine_recall@5 | 0.7522 |
| cosine_recall@10 | 0.8781 |
| cosine_ndcg@10 | 0.7387 |
| cosine_mrr@10 | 0.6636 |
| cosine_map@100 | 0.6989 |

Information Retrieval (dim_512)

| Metric | Value |
|:---|---:|
| cosine_accuracy@1 | 0.5734 |
| cosine_accuracy@3 | 0.6661 |
| cosine_accuracy@5 | 0.8007 |
| cosine_accuracy@10 | 0.8811 |
| cosine_precision@1 | 0.5734 |
| cosine_precision@3 | 0.5052 |
| cosine_precision@5 | 0.3937 |
| cosine_precision@10 | 0.2309 |
| cosine_recall@1 | 0.2601 |
| cosine_recall@3 | 0.5915 |
| cosine_recall@5 | 0.7544 |
| cosine_recall@10 | 0.8727 |
| cosine_ndcg@10 | 0.7303 |
| cosine_mrr@10 | 0.6522 |
| cosine_map@100 | 0.6894 |

Information Retrieval (dim_256)

| Metric | Value |
|:---|---:|
| cosine_accuracy@1 | 0.5664 |
| cosine_accuracy@3 | 0.6661 |
| cosine_accuracy@5 | 0.7797 |
| cosine_accuracy@10 | 0.8584 |
| cosine_precision@1 | 0.5664 |
| cosine_precision@3 | 0.5012 |
| cosine_precision@5 | 0.3864 |
| cosine_precision@10 | 0.2253 |
| cosine_recall@1 | 0.2577 |
| cosine_recall@3 | 0.5893 |
| cosine_recall@5 | 0.7354 |
| cosine_recall@10 | 0.8488 |
| cosine_ndcg@10 | 0.7168 |
| cosine_mrr@10 | 0.6433 |
| cosine_map@100 | 0.6824 |

Information Retrieval (dim_128)

| Metric | Value |
|:---|---:|
| cosine_accuracy@1 | 0.5402 |
| cosine_accuracy@3 | 0.6399 |
| cosine_accuracy@5 | 0.743 |
| cosine_accuracy@10 | 0.8304 |
| cosine_precision@1 | 0.5402 |
| cosine_precision@3 | 0.4796 |
| cosine_precision@5 | 0.3678 |
| cosine_precision@10 | 0.2182 |
| cosine_recall@1 | 0.2452 |
| cosine_recall@3 | 0.5624 |
| cosine_recall@5 | 0.701 |
| cosine_recall@10 | 0.8228 |
| cosine_ndcg@10 | 0.6886 |
| cosine_mrr@10 | 0.6147 |
| cosine_map@100 | 0.6544 |

Information Retrieval (dim_64)

| Metric | Value |
|:---|---:|
| cosine_accuracy@1 | 0.4353 |
| cosine_accuracy@3 | 0.5332 |
| cosine_accuracy@5 | 0.6311 |
| cosine_accuracy@10 | 0.7622 |
| cosine_precision@1 | 0.4353 |
| cosine_precision@3 | 0.3945 |
| cosine_precision@5 | 0.3094 |
| cosine_precision@10 | 0.1983 |
| cosine_recall@1 | 0.1984 |
| cosine_recall@3 | 0.4655 |
| cosine_recall@5 | 0.5911 |
| cosine_recall@10 | 0.7468 |
| cosine_ndcg@10 | 0.5953 |
| cosine_mrr@10 | 0.5139 |
| cosine_map@100 | 0.5592 |
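Metrics of this kind are typically produced with the sentence-transformers InformationRetrievalEvaluator. Below is a minimal, hypothetical sketch; the queries, corpus, and relevance judgments are placeholders, not the held-out split used for the tables above.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("thanhpham1/Fine-tune-all-mpnet-base-v2")

# Placeholder data: map query IDs and document IDs to text, and record which
# documents are relevant for each query.
queries = {"q1": "From which directory should you run the zip command?"}
corpus = {"d1": "Note that this command must be run from the parent directory of the desired working_dir."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_768",
)
results = evaluator(model)  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100
print(results)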

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 5,146 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:

    |         | anchor | positive |
    |:--------|:-------|:---------|
    | type    | string | string |
    | details | min: 8 tokens, mean: 17.8 tokens, max: 41 tokens | min: 66 tokens, mean: 225.02 tokens, max: 384 tokens |
  • Samples:

    Anchor: Does Ray Train work with vanilla TensorFlow in addition to TensorFlow with Keras?
    Positive: Get Started with Distributed Training using TensorFlow/Keras#
      Ray Train’s TensorFlow integration enables you to scale your TensorFlow and Keras training functions to many machines and GPUs.
      On a technical level, Ray Train schedules your training workers and configures TF_CONFIG for you, allowing you to run your MultiWorkerMirroredStrategy training script. See Distributed training with TensorFlow for more information.
      Most of the examples in this guide use TensorFlow with Keras, but Ray Train also works with vanilla TensorFlow.

      Quickstart#
      import ray
      import tensorflow as tf

      from ray import train
      from ray.train import ScalingConfig
      from ray.train.tensorflow import TensorflowTrainer
      from ray.train.tensorflow.keras import ReportCheckpointCallback

      # If using GPUs, set this to True.
      use_gpu = False

      a = 5
      b = 10
      size = 100

    Anchor: What type of failure can Ray automatically recover from?
    Positive: Ray can automatically recover from data loss but not owner failure.

      Recovering from data loss#
      When an object value is lost from the object store, such as during node failures, Ray will use lineage reconstruction to recover the object. Ray will first automatically attempt to recover the value by looking for copies of the same object on other nodes. If none are found, then Ray will automatically recover the value by re-executing the task that previously created the value. Arguments to the task are recursively reconstructed through the same mechanism.
      Lineage reconstruction currently has the following limitations:

    Anchor: From which directory should you run the zip command to ensure the proper zip file structure?
    Positive: Suppose instead you want to host your files in your /some_path/example_dir directory remotely and provide a remote URI.
      You would need to first compress the example_dir directory into a zip file. There should be no other files or directories at the top level of the zip file, other than example_dir. You can use the following command in the Terminal to do this:
      cd /some_path
      zip -r zip_file_name.zip example_dir

      Note that this command must be run from the parent directory of the desired working_dir to ensure that the resulting zip file contains a single top-level directory. In general, the zip file’s name and the top-level directory’s name can be anything. The top-level directory’s contents will be used as the working_dir (or py_module). You can check that the zip file contains a single top-level directory by running the following command in the Terminal:
      zipinfo -1 zip_file_name.zip
      # example_dir/
      # example_dir/my_file_1.txt
      # example_dir/subdir/my_file_2.txt
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
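
    As a rough sketch (assumed, not the author's exact training script), this configuration corresponds to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss:

    from sentence_transformers import SentenceTransformer
    from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

    # Sketch only: MatryoshkaLoss applies the inner loss at each listed dimension,
    # here with equal weights, matching the parameters shown above.
    model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
    inner_loss = MultipleNegativesRankingLoss(model)
    loss = MatryoshkaLoss(
        model,
        inner_loss,
        matryoshka_dims=[768, 512, 256, 128, 64],
        matryoshka_weights=[1, 1, 1, 1, 1],
    )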
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: False
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
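
For reference, a minimal sketch (assumed, not the author's exact command) of how these non-default values map onto SentenceTransformerTrainingArguments:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Sketch only: the output_dir and save_strategy below are assumptions,
# the remaining values mirror the non-default hyperparameters listed above.
args = SentenceTransformerTrainingArguments(
    output_dir="Fine-tune-all-mpnet-base-v2",  # hypothetical output path
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=False,
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: required for load_best_model_at_end
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)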

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: False
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:------|:-----|:--------------|:-----------------------|:-----------------------|:-----------------------|:-----------------------|:----------------------|
| 0.9938 | 10 | 44.0311 | - | - | - | - | - |
| 1.0 | 11 | - | 0.6797 | 0.6651 | 0.6439 | 0.6180 | 0.4996 |
| 0.9938 | 10 | 14.5908 | - | - | - | - | - |
| 1.0 | 11 | - | 0.7179 | 0.7034 | 0.6927 | 0.6658 | 0.5720 |
| 1.8944 | 20 | 8.5538 | - | - | - | - | - |
| 2.0 | 22 | - | 0.7295 | 0.7209 | 0.7109 | 0.6793 | 0.5942 |
| 2.7950 | 30 | 6.916 | - | - | - | - | - |
| 3.0 | 33 | - | 0.7382 | 0.7293 | 0.7149 | 0.6916 | 0.5939 |
| 3.6957 | 40 | 6.5704 | - | - | - | - | - |
| **4.0** | **44** | **-** | **0.7387** | **0.7303** | **0.7168** | **0.6886** | **0.5953** |

  • The bold row denotes the saved checkpoint (its values match the Evaluation tables above).

Framework Versions

  • Python: 3.11.12
  • Sentence Transformers: 4.1.0
  • Transformers: 4.52.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.7.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}