Fine-tune-all-mpnet-base-v2
This is a sentence-transformers model fine-tuned from sentence-transformers/all-mpnet-base-v2 on the json dataset. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/all-mpnet-base-v2
- Maximum Sequence Length: 384 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: json
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
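For reference, an equivalent module stack can be assembled by hand with the models API. This is a minimal sketch; loading the published checkpoint directly (see Usage below) is the normal path:
from sentence_transformers import SentenceTransformer, models
# Rebuild the three-module stack shown above.
word_embedding = models.Transformer(
    "sentence-transformers/all-mpnet-base-v2",
    max_seq_length=384,  # matches Maximum Sequence Length above
)
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(),  # 768
    pooling_mode="mean",  # pooling_mode_mean_tokens=True above
)
normalize = models.Normalize()  # unit-length outputs, so dot product equals cosine
model = SentenceTransformer(modules=[word_embedding, pooling, normalize])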
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("thanhpham1/Fine-tune-all-mpnet-base-v2")
# Run inference
sentences = [
'Option 2: Manually Create URL (slower to implement, but recommended for production environments)#\nThe second option is to manually create this URL by pattern-matching your specific use case with one of the following examples.\nThis is recommended because it provides finer-grained control over which repository branch and commit to use when generating your dependency zip file.\nThese options prevent consistency issues on Ray Clusters (see the warning above for more info).\nTo create the URL, pick a URL template below that fits your use case, and fill in all parameters in brackets (e.g. [username], [repository], etc.) with the specific values from your repository.\nFor instance, suppose your GitHub username is example_user, the repository’s name is example_repository, and the desired commit hash is abcdefg.\nIf example_repository is public and you want to retrieve the abcdefg commit (which matches the first example use case), the URL would be:',
'How do you create the URL for Option 2?',
'What can Ray Train and Ray Tune be used together for?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
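The embeddings also work for retrieval directly. A minimal sketch using util.semantic_search, reusing model and sentences from the snippet above, to rank the passage against each question:
from sentence_transformers import util
# Use the passage (first sentence) as the corpus and the questions as queries.
corpus_embeddings = model.encode(sentences[:1])
query_embeddings = model.encode(sentences[1:])
# Returns, for each query, a ranked list of {"corpus_id", "score"} hits.
hits = util.semantic_search(query_embeddings, corpus_embeddings, top_k=1)
for query, query_hits in zip(sentences[1:], hits):
    print(f"{query!r} -> score {query_hits[0]['score']:.4f}")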
Evaluation
Metrics
Information Retrieval
- Dataset: dim_768
- Evaluated with InformationRetrievalEvaluator with these parameters: { "truncate_dim": 768 }
Metric | Value |
---|---|
cosine_accuracy@1 | 0.5874 |
cosine_accuracy@3 | 0.6818 |
cosine_accuracy@5 | 0.7955 |
cosine_accuracy@10 | 0.8864 |
cosine_precision@1 | 0.5874 |
cosine_precision@3 | 0.5181 |
cosine_precision@5 | 0.3944 |
cosine_precision@10 | 0.232 |
cosine_recall@1 | 0.264 |
cosine_recall@3 | 0.6074 |
cosine_recall@5 | 0.7522 |
cosine_recall@10 | 0.8781 |
cosine_ndcg@10 | 0.7387 |
cosine_mrr@10 | 0.6636 |
cosine_map@100 | 0.6989 |
Information Retrieval
- Dataset: dim_512
- Evaluated with InformationRetrievalEvaluator with these parameters: { "truncate_dim": 512 }
Metric | Value |
---|---|
cosine_accuracy@1 | 0.5734 |
cosine_accuracy@3 | 0.6661 |
cosine_accuracy@5 | 0.8007 |
cosine_accuracy@10 | 0.8811 |
cosine_precision@1 | 0.5734 |
cosine_precision@3 | 0.5052 |
cosine_precision@5 | 0.3937 |
cosine_precision@10 | 0.2309 |
cosine_recall@1 | 0.2601 |
cosine_recall@3 | 0.5915 |
cosine_recall@5 | 0.7544 |
cosine_recall@10 | 0.8727 |
cosine_ndcg@10 | 0.7303 |
cosine_mrr@10 | 0.6522 |
cosine_map@100 | 0.6894 |
Information Retrieval
- Dataset: dim_256
- Evaluated with InformationRetrievalEvaluator with these parameters: { "truncate_dim": 256 }
Metric | Value |
---|---|
cosine_accuracy@1 | 0.5664 |
cosine_accuracy@3 | 0.6661 |
cosine_accuracy@5 | 0.7797 |
cosine_accuracy@10 | 0.8584 |
cosine_precision@1 | 0.5664 |
cosine_precision@3 | 0.5012 |
cosine_precision@5 | 0.3864 |
cosine_precision@10 | 0.2253 |
cosine_recall@1 | 0.2577 |
cosine_recall@3 | 0.5893 |
cosine_recall@5 | 0.7354 |
cosine_recall@10 | 0.8488 |
cosine_ndcg@10 | 0.7168 |
cosine_mrr@10 | 0.6433 |
cosine_map@100 | 0.6824 |
Information Retrieval
- Dataset: dim_128
- Evaluated with InformationRetrievalEvaluator with these parameters: { "truncate_dim": 128 }
Metric | Value |
---|---|
cosine_accuracy@1 | 0.5402 |
cosine_accuracy@3 | 0.6399 |
cosine_accuracy@5 | 0.743 |
cosine_accuracy@10 | 0.8304 |
cosine_precision@1 | 0.5402 |
cosine_precision@3 | 0.4796 |
cosine_precision@5 | 0.3678 |
cosine_precision@10 | 0.2182 |
cosine_recall@1 | 0.2452 |
cosine_recall@3 | 0.5624 |
cosine_recall@5 | 0.701 |
cosine_recall@10 | 0.8228 |
cosine_ndcg@10 | 0.6886 |
cosine_mrr@10 | 0.6147 |
cosine_map@100 | 0.6544 |
Information Retrieval
- Dataset: dim_64
- Evaluated with InformationRetrievalEvaluator with these parameters: { "truncate_dim": 64 }
Metric | Value |
---|---|
cosine_accuracy@1 | 0.4353 |
cosine_accuracy@3 | 0.5332 |
cosine_accuracy@5 | 0.6311 |
cosine_accuracy@10 | 0.7622 |
cosine_precision@1 | 0.4353 |
cosine_precision@3 | 0.3945 |
cosine_precision@5 | 0.3094 |
cosine_precision@10 | 0.1983 |
cosine_recall@1 | 0.1984 |
cosine_recall@3 | 0.4655 |
cosine_recall@5 | 0.5911 |
cosine_recall@10 | 0.7468 |
cosine_ndcg@10 | 0.5953 |
cosine_mrr@10 | 0.5139 |
cosine_map@100 | 0.5592 |
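The tables above can in principle be reproduced with InformationRetrievalEvaluator. The evaluation split itself is not published with this card, so the queries, corpus, and relevance judgments below are hypothetical placeholders illustrating the expected input format:
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator
model = SentenceTransformer("thanhpham1/Fine-tune-all-mpnet-base-v2")
# Hypothetical stand-ins for the held-out split: id -> text mappings,
# plus a mapping from each query id to the set of relevant document ids.
queries = {"q1": "How do you create the URL for Option 2?"}
corpus = {"d1": "Option 2: Manually Create URL ...", "d2": "Unrelated passage."}
relevant_docs = {"q1": {"d1"}}
evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    truncate_dim=768,  # one evaluator per Matryoshka dimension
    name="dim_768",
)
print(evaluator(model))  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100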
Training Details
Training Dataset
json
- Dataset: json
- Size: 5,146 training samples
- Columns: anchor and positive
- Approximate statistics based on the first 1000 samples:

| | anchor | positive |
|---|---|---|
| type | string | string |
| details | min: 8 tokens, mean: 17.8 tokens, max: 41 tokens | min: 66 tokens, mean: 225.02 tokens, max: 384 tokens |
- Samples:
anchor positive Does Ray Train work with vanilla TensorFlow in addition to TensorFlow with Keras?
Get Started with Distributed Training using TensorFlow/Keras#
Ray Train’s TensorFlow integration enables you
to scale your TensorFlow and Keras training functions to many machines and GPUs.
On a technical level, Ray Train schedules your training workers
and configures TF_CONFIG for you, allowing you to run
your MultiWorkerMirroredStrategy training script. See Distributed
training with TensorFlow
for more information.
Most of the examples in this guide use TensorFlow with Keras, but
Ray Train also works with vanilla TensorFlow.
Quickstart#
import ray
import tensorflow as tf
from ray import train
from ray.train import ScalingConfig
from ray.train.tensorflow import TensorflowTrainer
from ray.train.tensorflow.keras import ReportCheckpointCallback
# If using GPUs, set this to True.
use_gpu = False
a = 5
b = 10
size = 100What type of failure can Ray automatically recover from?
Ray can automatically recover from data loss but not owner failure.
Recovering from data loss#
When an object value is lost from the object store, such as during node
failures, Ray will use lineage reconstruction to recover the object.
Ray will first automatically attempt to recover the value by looking
for copies of the same object on other nodes. If none are found, then Ray will
automatically recover the value by re-executing
the task that previously created the value. Arguments to the task are
recursively reconstructed through the same mechanism.
Lineage reconstruction currently has the following limitations:From which directory should you run the zip command to ensure the proper zip file structure?
Suppose instead you want to host your files in your /some_path/example_dir directory remotely and provide a remote URI.
You would need to first compress the example_dir directory into a zip file.
There should be no other files or directories at the top level of the zip file, other than example_dir.
You can use the following command in the Terminal to do this:
cd /some_path
zip -r zip_file_name.zip example_dir
Note that this command must be run from the parent directory of the desired working_dir to ensure that the resulting zip file contains a single top-level directory.
In general, the zip file’s name and the top-level directory’s name can be anything.
The top-level directory’s contents will be used as the working_dir (or py_module).
You can check that the zip file contains a single top-level directory by running the following command in the Terminal:
zipinfo -1 zip_file_name.zip
# example_dir/
# example_dir/my_file_1.txt
# example_dir/subdir/my_file_2.txt - Loss:
MatryoshkaLoss
with these parameters:{ "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- bf16: True
- tf32: False
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: False
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
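A run with these hyperparameters could be reproduced roughly as follows. This is a sketch under assumptions: the json training file is not published with this card, so data_files="train.json" and output_dir="outputs" are placeholders.
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Placeholder path: a JSON file with "anchor" and "positive" columns.
train_dataset = load_dataset("json", data_files="train.json", split="train")

# MultipleNegativesRankingLoss wrapped in MatryoshkaLoss, as listed above.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",  # placeholder
    num_train_epochs=4,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
).train()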
Training Logs
Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
---|---|---|---|---|---|---|---|
0.9938 | 10 | 44.0311 | - | - | - | - | - |
1.0 | 11 | - | 0.6797 | 0.6651 | 0.6439 | 0.6180 | 0.4996 |
0.9938 | 10 | 14.5908 | - | - | - | - | - |
1.0 | 11 | - | 0.7179 | 0.7034 | 0.6927 | 0.6658 | 0.5720 |
1.8944 | 20 | 8.5538 | - | - | - | - | - |
2.0 | 22 | - | 0.7295 | 0.7209 | 0.7109 | 0.6793 | 0.5942 |
2.7950 | 30 | 6.916 | - | - | - | - | - |
3.0 | 33 | - | 0.7382 | 0.7293 | 0.7149 | 0.6916 | 0.5939 |
3.6957 | 40 | 6.5704 | - | - | - | - | - |
**4.0** | **44** | **-** | **0.7387** | **0.7303** | **0.7168** | **0.6886** | **0.5953** |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.52.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}