CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
This is a Cross Encoder model fine-tuned from microsoft/MiniLM-L12-H384-uncased on the ms_marco dataset using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
Model Details
Model Description
- Model Type: Cross Encoder
- Base model: microsoft/MiniLM-L12-H384-uncased
- Maximum Sequence Length: 512 tokens
- Number of Output Labels: 1 label
- Training Dataset: ms_marco
- Language: en
Model Sources
- Documentation: Sentence Transformers Documentation
- Documentation: Cross Encoder Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Cross Encoders on Hugging Face
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("yjoonjang/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-normalize-softmax")
# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
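The CrossEncoder wrapper above is the supported route, but the checkpoint can also be scored with plain transformers. Below is a minimal sketch, assuming the checkpoint is a standard single-logit sequence-classification model (consistent with the "1 label" noted in Model Details):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "yjoonjang/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-normalize-softmax"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

query = "How many calories in an egg"
docs = [
    "There are on average between 55 and 80 calories in an egg depending on its size.",
    "Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.",
]
# A cross encoder scores each (query, document) pair jointly in one forward pass.
inputs = tokenizer([query] * len(docs), docs, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)  # one raw relevance logit per pair
print(scores)
```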
Evaluation
Metrics
Cross Encoder Reranking
- Datasets: NanoMSMARCO_R100, NanoNFCorpus_R100 and NanoNQ_R100
- Evaluated with CrossEncoderRerankingEvaluator with these parameters:
  { "at_k": 10, "always_rerank_positives": true }
Metric | NanoMSMARCO_R100 | NanoNFCorpus_R100 | NanoNQ_R100 |
---|---|---|---|
map | 0.5221 (+0.0325) | 0.3426 (+0.0816) | 0.6035 (+0.1839) |
mrr@10 | 0.5129 (+0.0354) | 0.6204 (+0.1206) | 0.6111 (+0.1844) |
ndcg@10 | 0.5813 (+0.0409) | 0.3887 (+0.0637) | 0.6558 (+0.1552) |
Cross Encoder Nano BEIR
- Dataset: NanoBEIR_R100_mean
- Evaluated with CrossEncoderNanoBEIREvaluator with these parameters:
  { "dataset_names": ["msmarco", "nfcorpus", "nq"], "rerank_k": 100, "at_k": 10, "always_rerank_positives": true }
Metric | Value |
---|---|
map | 0.4894 (+0.0994) |
mrr@10 | 0.5815 (+0.1135) |
ndcg@10 | 0.5420 (+0.0866) |
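The numbers above should be reproducible with the evaluator configuration listed in this card. A minimal sketch, assuming a sentence-transformers version that ships CrossEncoderNanoBEIREvaluator and access to the Nano BEIR datasets on the Hub:

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

model = CrossEncoder("yjoonjang/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-normalize-softmax")

# Parameters taken verbatim from the evaluator configuration listed above.
evaluator = CrossEncoderNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    rerank_k=100,
    at_k=10,
    always_rerank_positives=True,
)
results = evaluator(model)  # dict mapping metric names to values
print(results)
```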
Training Details
Training Dataset
ms_marco
- Dataset: ms_marco at a47ee7a
- Size: 78,704 training samples
- Columns: query, docs, and labels
- Approximate statistics based on the first 1000 samples:
Statistic | query | docs | labels |
---|---|---|---|
type | string | list | list |
details | min: 10 characters, mean: 33.44 characters, max: 114 characters | min: 2 elements, mean: 6.00 elements, max: 10 elements | min: 2 elements, mean: 6.00 elements, max: 10 elements |
- Samples:
- query: what do the intercostal nerves do
  docs: ['The first two nerves supply fibers to the upper limb in addition to their thoracic branches; the next four are limited in their distribution to the walls of the thorax; the lower five supply the walls of the thorax and abdomen. The 7th intercostal nerve terminates at the xyphoid process, at the lower end of the sternum. The 10th intercostal nerve terminates at the navel. The twelfth (subcostal) thoracic is distributed to the abdominal wall and groin. Unlike the nerves from the autonomic nervous system that innervate the visceral pleura of the thoracic cavity, the intercostal nerves arise from the somatic nervous system', 'The intercostal nerves are distributed chiefly to the thoracic pleura and abdominal peritoneum and differ from the anterior divisions of the other spinal nerves in that each pursues an independent course without plexus formation. The 10th intercostal nerve terminates at the umbilicus. The twelfth thoracic is distributed to the abdominal wall and groin. Unlike the ne...
  labels: [1, 0, 0, 0, 0, ...]
- query: what causes swollen tongue
  docs: ['There are many possible causes of a swollen tongue. A swollen tongue can result from such abnormal processes as infection, inflammation, allergy, genetic disorders, trauma, malignancy and metabolic diseases. Some causes of a swollen tongue are serious, even life-threatening, such as a anaphylactic reaction.', 'An enlarged or swollen tongue can also occur as an allergic reaction to medications or other substances. In this case, the swelling is due to fluid accumulation in the tissues of the tongue, medically known as angioedema.', 'Swelling of the tongue can occur due to inflammation of the tongue (known as glossitis), the presence of abnormal substances (such as amyloid protein) in the tongue, the collection of fluid in the tongue as a result of different disease processes, or tumors that infiltrate the tissues of the tongue.', '1 Allergens. 2 In addition to allergic reactions to medications, allergic reactions to other substances — such as foods or bee stings — can cause swelling. ...
  labels: [1, 0, 0, 0, 0, ...]
- query: tax on salary in india
  docs: ['If their annual income was anywhere between INR 240,001 and INR 500,000 a tax rate of 10% was levied. For an annual income between 500,001 and 800,000 rupees the tax rate was 20% and for an amount more INR 800,000 a year a rate of 30% was applied. Tax slabs for women: The tax exemption limit for women in 2011-12 was INR 190,000 a year. In case their annual income was anywhere between INR 190,000 and 500,000 a tax rate of 10 percent was applied. A tax rate of 20% was applied in case of yearly earnings between 500,001 and 800,000 rupees.', 'Salary Tax Money which you earn from different sources is taxed differently. So if you are a salary earner, your salary income to be taxed will be calculated in a different way from gains. The term Salaries includes remuneration in any form for personal service, under an expressed or implied contract of employment or service. Section 17 of Income Tax Act defines salary to include:- # Wages # Pensions or Annuities # Gratuities # Advance of Salary # A...
  labels: [1, 0, 0, 0, 0, ...]
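The query/docs/labels columns are a listwise view rather than the raw ms_marco fields, and the exact preprocessing is not documented in this card. The sketch below shows one plausible way to derive such columns from the raw microsoft/ms_marco v1.1 passages field; treat it as an assumption, not the author's pipeline:

```python
from datasets import load_dataset

# The card pins ms_marco at revision a47ee7a; v1.1 matches the model name.
raw = load_dataset("microsoft/ms_marco", "v1.1", split="train")

def to_listwise(example):
    # 'passages' holds parallel lists: passage texts and 0/1 relevance flags.
    passages = example["passages"]
    return {
        "query": example["query"],
        "docs": passages["passage_text"],
        "labels": passages["is_selected"],
    }

listwise = raw.map(to_listwise, remove_columns=raw.column_names)
print(listwise[0]["query"], listwise[0]["labels"][:5])
```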
- Loss: PListMLELoss with these parameters (see the sketch below):
  { "lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight", "activation_fct": "torch.nn.modules.linear.Identity", "mini_batch_size": null, "respect_input_order": true }
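For reference, the loss can be instantiated with exactly these parameters. A minimal sketch, following the import path given in the parameter listing; the constructor signature may differ across sentence-transformers versions:

```python
import torch
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses.PListMLELoss import (
    PListMLELambdaWeight,
    PListMLELoss,
)

model = CrossEncoder("microsoft/MiniLM-L12-H384-uncased", num_labels=1)
loss = PListMLELoss(
    model,
    lambda_weight=PListMLELambdaWeight(),  # position-aware weighting of list positions
    activation_fct=torch.nn.Identity(),    # keep raw logits, no squashing
    mini_batch_size=None,
    respect_input_order=True,              # treat the given document order as the target ranking
)
```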
Evaluation Dataset
ms_marco
- Dataset: ms_marco at a47ee7a
- Size: 1,000 evaluation samples
- Columns: query, docs, and labels
- Approximate statistics based on the first 1000 samples:
Statistic | query | docs | labels |
---|---|---|---|
type | string | list | list |
details | min: 12 characters, mean: 34.27 characters, max: 144 characters | min: 2 elements, mean: 6.00 elements, max: 10 elements | min: 2 elements, mean: 6.00 elements, max: 10 elements |
- Samples:
- query: what are the different vitamins
  docs: ['In humans there are 13 vitamins: 4 fat-soluble (A, D, E, and K) and 9 water-soluble (8 B vitamins and vitamin C). Water-soluble vitamins dissolve easily in water and, in general, are readily excreted from the body, to the degree that urinary output is a strong predictor of vitamin consumption. Anti-vitamins are chemical compounds that inhibit the absorption or actions of vitamins. For example, avidin is a protein in egg whites that inhibits the absorption of biotin. Pyrithiamine is similar to thiamine, vitamin B1, and inhibits the enzymes that use thiamine.', "Different Types of Vitamins. Vitamins, one of the most essential nutrients required by the body, can be broadly classified into two broad categories namely, water-soluble vitamins and fat-soluble vitamins. On the contrary, fat-soluble vitamins (Vitamins A, D, E, and K) get stored in the body's fatty tissues. There are distinctive kinds of vitamins and each vitamin play a unique role in promoting health fitness.", 'Water-Soluble...
  labels: [1, 0, 0, 0, 0, ...]
- query: does teen mom drop jennells charges
  docs: ["View. comments. Domestic violence charges against Jenelle Evans have been dropped, according to a new report. The Teen Mom 2 star's former fiance Nathan Griffith told law enforcement he was no longer interested in pursuing charges against the mother of his child, TMZ reports.", 'During an interview with Radar Online on April 22, prior to his appearance in court, Griffith claimed the Teen Mom could drop the charges currently pending against him, in regard to their March 4 fight in South Carolina.', 'Jenelle Evans’ fiance Fiancé Nathan griffith is scheduled to appear in A South carolina court yet again On wednesday for a hearing about his domestic violence arrest This. March but before facing the, Judge griffith RadarOnline.radaronline com exclusively That evans wants the charges against him — dropped and that he still loves. her', "Credit: Todd DC/Splash News. Jenelle Evans can relax -- for now. The Teen Mom star's lawyer, Dustin Sullivan, confirmed to Us Weekly on Friday, July 26, th...
  labels: [1, 0, 0, 0, 0, ...]
- query: what is dism host servicing process
  docs: ['Deployment Image Servicing and Management (DISM.exe) is a command-line tool that can be used to service a Windows® image or to prepare a Windows Preinstallation Environment (Windows PE) image. In addition to the command-line tool, DISM is available by using Windows PowerShell. For more information, see Deployment Imaging Servicing Management (DISM) Cmdlets in Windows PowerShell. This topic includes: 1 Image Requirements. 2 Benefits. 3 Common Servicing and Management Scenarios. 4 Limitations.', 'Introduction to DismHost.exe. The DismHost.exe is called a Dism Host Servicing Process. This file is created by Microsoft Corporation and is an integral part of Windows® Operating System. It is an invisible system file and is typically located in the %SYSTEM% sub-folder. Its normal size is 81,920 bytes. ', "DismHost.exe is part of Microsoft® Windows® Operating System and developed by Microsoft Corporation according to the DismHost.exe version information. DismHost.exe's description is Dis...
  labels: [1, 0, 0, 0, 0, ...]
- Loss: PListMLELoss with these parameters:
  { "lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight", "activation_fct": "torch.nn.modules.linear.Identity", "mini_batch_size": null, "respect_input_order": true }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- learning_rate: 2e-05
- num_train_epochs: 1
- warmup_ratio: 0.1
- seed: 12
- bf16: True
- load_best_model_at_end: True
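Put together, a training run mirroring these non-default hyperparameters might look like the following sketch. It assumes sentence-transformers' CrossEncoderTrainer API; the toy in-memory dataset and the output directory name are hypothetical stand-ins for the listwise ms_marco splits described above:

```python
from datasets import Dataset
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder import (
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses.PListMLELoss import PListMLELoss

model = CrossEncoder("microsoft/MiniLM-L12-H384-uncased", num_labels=1)
loss = PListMLELoss(model)

# Toy stand-in for the listwise (query, docs, labels) ms_marco splits.
toy = Dataset.from_dict({
    "query": ["How many calories in an egg"],
    "docs": [[
        "There are on average between 55 and 80 calories in an egg depending on its size.",
        "Most of the calories in an egg come from the yellow yolk in the center.",
    ]],
    "labels": [[1, 0]],
})

args = CrossEncoderTrainingArguments(
    output_dir="reranker-msmarco-plistmle",  # hypothetical output directory
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,  # requires bf16-capable hardware
    load_best_model_at_end=True,
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=toy,
    eval_dataset=toy,  # stand-in; the card holds out 1,000 ms_marco samples
    loss=loss,
)
trainer.train()
```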
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 12
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10 | NanoBEIR_R100_mean_ndcg@10 |
---|---|---|---|---|---|---|---|
-1 | -1 | - | - | 0.0181 (-0.5223) | 0.2281 (-0.0970) | 0.0469 (-0.4538) | 0.0977 (-0.3577) |
0.0002 | 1 | 2.1806 | - | - | - | - | - |
0.0508 | 250 | 2.0713 | - | - | - | - | - |
0.1016 | 500 | 1.9405 | 1.9069 | 0.0590 (-0.4814) | 0.2494 (-0.0756) | 0.1433 (-0.3573) | 0.1506 (-0.3048) |
0.1525 | 750 | 1.89 | - | - | - | - | - |
0.2033 | 1000 | 1.8543 | 1.8324 | 0.4639 (-0.0765) | 0.3532 (+0.0281) | 0.5671 (+0.0665) | 0.4614 (+0.0060) |
0.2541 | 1250 | 1.831 | - | - | - | - | - |
0.3049 | 1500 | 1.8277 | 1.8171 | 0.5395 (-0.0010) | 0.3700 (+0.0450) | 0.6294 (+0.1288) | 0.5130 (+0.0576) |
0.3558 | 1750 | 1.8204 | - | - | - | - | - |
0.4066 | 2000 | 1.8212 | 1.8073 | 0.5409 (+0.0005) | 0.3903 (+0.0652) | 0.6702 (+0.1696) | 0.5338 (+0.0784) |
0.4574 | 2250 | 1.8219 | - | - | - | - | - |
0.5082 | 2500 | 1.8002 | 1.7947 | 0.5798 (+0.0393) | 0.3782 (+0.0532) | 0.6520 (+0.1513) | 0.5367 (+0.0813) |
0.5591 | 2750 | 1.8005 | - | - | - | - | - |
0.6099 | 3000 | 1.8028 | 1.7943 | 0.5771 (+0.0367) | 0.3790 (+0.0540) | 0.6455 (+0.1448) | 0.5339 (+0.0785) |
0.6607 | 3250 | 1.8074 | - | - | - | - | - |
0.7115 | 3500 | 1.8008 | 1.7790 | 0.5627 (+0.0223) | 0.3748 (+0.0498) | 0.6278 (+0.1272) | 0.5218 (+0.0664) |
0.7624 | 3750 | 1.7991 | - | - | - | - | - |
0.8132 | 4000 | 1.802 | 1.7812 | 0.5689 (+0.0284) | 0.3733 (+0.0483) | 0.6521 (+0.1514) | 0.5314 (+0.0760) |
0.8640 | 4250 | 1.7936 | - | - | - | - | - |
**0.9148** | **4500** | **1.8029** | **1.7810** | **0.5813 (+0.0409)** | **0.3887 (+0.0637)** | **0.6558 (+0.1552)** | **0.5420 (+0.0866)** |
0.9656 | 4750 | 1.7964 | - | - | - | - | - |
-1 | -1 | - | - | 0.5813 (+0.0409) | 0.3887 (+0.0637) | 0.6558 (+0.1552) | 0.5420 (+0.0866) |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.4.0
- Tokenizers: 0.21.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
PListMLELoss
@inproceedings{lan2014position,
title={Position-Aware ListMLE: A Sequential Learning Process for Ranking.},
author={Lan, Yanyan and Zhu, Yadong and Guo, Jiafeng and Niu, Shuzi and Cheng, Xueqi},
booktitle={UAI},
volume={14},
pages={449--458},
year={2014}
}