SentenceTransformer based on microsoft/mpnet-base
This is a sentence-transformers model finetuned from microsoft/mpnet-base. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: microsoft/mpnet-base
- Maximum Sequence Length: 256 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
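These settings can be verified directly on the loaded model; a minimal sketch (using the repository name from the Usage section below):
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("greatakela/gennlp_hw1_encoder2025")
print(model.max_seq_length)                      # 256
print(model.get_sentence_embedding_dimension())  # 768
print(model.similarity_fn_name)                  # cosine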
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
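For reference, an equivalent architecture can be assembled by hand from the base checkpoint; this is an illustrative sketch only and does not reproduce the finetuned weights:
from sentence_transformers import SentenceTransformer, models

# MPNet token encoder with a 256-token window and no lowercasing
word_embedding = models.Transformer("microsoft/mpnet-base", max_seq_length=256)
# Mean pooling over token embeddings -> one 768-dimensional sentence vector
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode="mean")
model = SentenceTransformer(modules=[word_embedding, pooling])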
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("greatakela/gennlp_hw1_encoder2025")
# Run inference
sentences = [
"What happened? Where have I been? Right here, it seems. But that girl. She was so beautiful. So real. Do you remember anything else? No.[SEP]Good. Perhaps that explains why he's here. Nothing was real to him except the girl.",
'Captain, the Melkotian object.',
" It's killing you.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
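The same embeddings also support the semantic-search use case mentioned above; a short sketch using util.semantic_search, where the query string is a made-up example:
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("greatakela/gennlp_hw1_encoder2025")
corpus = ["Captain, the Melkotian object.", "It's killing you."]
query = "What did the Melkotians send?"  # hypothetical query for illustration
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)
# Returns the top_k corpus entries ranked by cosine similarity for each query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
print(hits[0])  # [{'corpus_id': ..., 'score': ...}, ...]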
Evaluation
Metrics
Triplet
- Datasets: evaluator_enc and evaluator_val
- Evaluated with TripletEvaluator
Metric | evaluator_enc | evaluator_val |
---|---|---|
cosine_accuracy | 0.999 | 0.9931 |
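Both accuracies are reported by TripletEvaluator, which measures how often the anchor embedding lies closer to the positive than to the negative. A minimal sketch of running it; the triplets below are placeholders, since the actual evaluator_enc / evaluator_val splits are not included in this card:
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("greatakela/gennlp_hw1_encoder2025")
evaluator = TripletEvaluator(
    anchors=["Captain, the Melkotian object."],      # placeholder anchor
    positives=["It concerns the Melkotian object."],  # placeholder positive
    negatives=["It's killing you."],                  # placeholder negative
    name="evaluator_enc",
)
results = evaluator(model)
print(results)  # {'evaluator_enc_cosine_accuracy': ...}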
Training Details
Training Dataset
Unnamed Dataset
- Size: 4,893 training samples
- Columns: sentence_0, sentence_1, and sentence_2
- Approximate statistics based on the first 1000 samples:
 | sentence_0 | sentence_1 | sentence_2 |
---|---|---|---|
type | string | string | string |
details | min: 2 tokens, mean: 90.47 tokens, max: 256 tokens | min: 3 tokens, mean: 18.64 tokens, max: 98 tokens | min: 4 tokens, mean: 20.14 tokens, max: 199 tokens |
- Samples:
sentence_0 | sentence_1 | sentence_2 |
---|---|---|
Oh, well, if that's all. Mister Scott, transport the glommer over to the Klingon ship. Aye, sir. You can't do this to me. Under space salvage laws, he's mine. A planetary surface is not covered by space salvage laws. But if you want the little beastie that bad, Mister Jones, we'll transport you over with it. I withdraw my claim.[SEP]Well, at least we can report the stasis field is not as effective a weapon as we thought. The power drain is too high and takes too long for the Klingon ship to recover to make it practical. | Agreed, Captain. Tribbles appear to be a much more effective weapon. | [protesting] I give him... |
Do you mean that's what the Kelvans really are? Undoubtedly. Well, if they look that way normally, why did they adapt themselves to our bodies? Perhaps practicality. They chose the Enterprise as the best vessel for the trip. Immense beings with a hundred tentacles would have difficulty with the turbolift. We've got to stop them. We outnumber them. Their only hold on us is the paralysis field. Well, that's enough. One wrong move, and they jam all our neural circuits.[SEP]Jam. Spock, if you reverse the circuits on McCoy's neuro-analyser, can you set up a counter field to jam the paralysis projector? | I'm dubious of the possibilities of success, Captain. The medical equipment is not designed to put out a great deal of power. The polarized elements would burn out quickly. | The next step would be a type of brain surgery. |
Well, speculation isn't much help. We have to get in there. Perhaps there is a way open on the far side. There is much less activity there. That building in the centre. It seems to be important. You stand before the Ruling Tribunal of the Aquans. I am Domar, the High Tribune. I'm Captain Kirk of the starship Enterprise. This is my first officer, Mister Spock.[SEP]You are air-breather enemies from the surface. We have expected spies for a long time. | We came here in peace, Tribune. | Which is why we need to look at the nerve that you didn't biopsy. |
- Loss: TripletLoss with these parameters: {"distance_metric": "TripletDistanceMetric.EUCLIDEAN", "triplet_margin": 5}
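A rough sketch of how such a triplet dataset and loss could be set up; the rows shown are invented placeholders, not samples from the actual 4,893-example training set:
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

model = SentenceTransformer("microsoft/mpnet-base")  # base checkpoint before finetuning
# Placeholder triplet rows with the same column names as the training data
train_dataset = Dataset.from_dict({
    "sentence_0": ["dialogue context ...[SEP]... follow-up context"],
    "sentence_1": ["the line that actually follows"],
    "sentence_2": ["an unrelated line"],
})
# Euclidean triplet loss with margin 5, matching the parameters listed above
loss = TripletLoss(model, distance_metric=TripletDistanceMetric.EUCLIDEAN, triplet_margin=5)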
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- multi_dataset_batch_sampler: round_robin
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 8
- per_device_eval_batch_size: 8
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 3
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
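Taken together, a training run with these settings would look roughly like the sketch below, reusing the model, train_dataset, loss, and evaluator objects from the earlier sketches; only the non-default values are spelled out, and the output directory name is arbitrary:
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="gennlp_hw1_encoder2025",  # arbitrary local path
    num_train_epochs=3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    learning_rate=5e-5,
    eval_strategy="steps",
    multi_dataset_batch_sampler="round_robin",
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
    evaluator=evaluator,
)
trainer.train()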
Training Logs
Epoch | Step | Training Loss | evaluator_enc_cosine_accuracy | evaluator_val_cosine_accuracy |
---|---|---|---|---|
-1 | -1 | - | 0.5494 | - |
0.4902 | 300 | - | 0.9808 | - |
0.8170 | 500 | 1.4249 | - | - |
0.9804 | 600 | - | 0.9912 | - |
1.0 | 612 | - | 0.9931 | - |
1.4706 | 900 | - | 0.9963 | - |
1.6340 | 1000 | 0.2269 | - | - |
1.9608 | 1200 | - | 0.9990 | - |
2.0 | 1224 | - | 0.9990 | - |
2.4510 | 1500 | 0.1054 | 0.9990 | - |
2.9412 | 1800 | - | 0.9990 | - |
3.0 | 1836 | - | 0.9990 | - |
-1 | -1 | - | - | 0.9931 |
Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
TripletLoss
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}