CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
This is a Cross Encoder model fine-tuned from microsoft/MiniLM-L12-H384-uncased on the ms_marco dataset using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
Model Details
Model Description
- Model Type: Cross Encoder
- Base model: microsoft/MiniLM-L12-H384-uncased
- Maximum Sequence Length: 512 tokens
- Number of Output Labels: 1 label
- Training Dataset: ms_marco
- Language: en
Model Sources
- Documentation: Sentence Transformers Documentation
- Documentation: Cross Encoder Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Cross Encoders on Hugging Face
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("yjoonjang/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-normalize-temperature-2")
# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ],
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
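Under the hood, rank() simply applies predict() to each (query, passage) pair and sorts the passages by descending score. A minimal sketch of that behavior, using hypothetical scores in place of real model outputs:

```python
# Minimal sketch of what model.rank() does: score each (query, passage)
# pair, then sort passages by descending score. The scores below are
# hypothetical stand-ins for model.predict() outputs.
corpus = [
    "There are on average between 55 and 80 calories in an egg depending on its size.",
    "Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.",
    "Most of the calories in an egg come from the yellow yolk in the center.",
]
scores = [2.1, -0.4, 0.7]  # hypothetical cross-encoder scores

# Build rank entries in descending score order, mirroring rank()'s output shape
ranks = [
    {"corpus_id": i, "score": scores[i]}
    for i in sorted(range(len(corpus)), key=lambda i: -scores[i])
]
print([r["corpus_id"] for r in ranks])  # [0, 2, 1]
```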
Evaluation
Metrics
Cross Encoder Reranking
- Datasets: NanoMSMARCO_R100, NanoNFCorpus_R100 and NanoNQ_R100
- Evaluated with CrossEncoderRerankingEvaluator with these parameters:
  { "at_k": 10, "always_rerank_positives": true }
Metric | NanoMSMARCO_R100 | NanoNFCorpus_R100 | NanoNQ_R100 |
---|---|---|---|
map | 0.5257 (+0.0362) | 0.3387 (+0.0777) | 0.5581 (+0.1385) |
mrr@10 | 0.5139 (+0.0364) | 0.5921 (+0.0923) | 0.5648 (+0.1381) |
ndcg@10 | 0.5778 (+0.0374) | 0.3660 (+0.0410) | 0.6325 (+0.1319) |
Cross Encoder Nano BEIR
- Dataset: NanoBEIR_R100_mean
- Evaluated with CrossEncoderNanoBEIREvaluator with these parameters:
  { "dataset_names": [ "msmarco", "nfcorpus", "nq" ], "rerank_k": 100, "at_k": 10, "always_rerank_positives": true }
Metric | Value |
---|---|
map | 0.4742 (+0.0841) |
mrr@10 | 0.5569 (+0.0889) |
ndcg@10 | 0.5254 (+0.0701) |
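For reference, the headline ndcg@10 metric can be computed from a ranked list of graded relevances with the standard DCG formula. A small, self-contained illustration (not the evaluator's own code):

```python
import math

def dcg_at_k(relevances, k=10):
    # Discounted cumulative gain: each gain is discounted by log2(rank + 1)
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    # Normalize by the DCG of the ideal (descending-relevance) ordering
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Placing the single relevant document at rank 2 instead of rank 1:
print(round(ndcg_at_k([0, 1, 0, 0]), 4))  # 0.6309
```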
Training Details
Training Dataset
ms_marco
- Dataset: ms_marco at a47ee7a
- Size: 78,704 training samples
- Columns: query, docs, and labels
- Approximate statistics based on the first 1000 samples:
Statistic | query | docs | labels |
---|---|---|---|
type | string | list | list |
min | 12 characters | 3 elements | 3 elements |
mean | 33.99 characters | 6.50 elements | 6.50 elements |
max | 98 characters | 10 elements | 10 elements |
- Samples:
query: what does a business development consultant do
['Duties and Responsibilities. An organizational development consultant is a person called in to a company, be it a large corporation or a small business, to evaluate how it operates and make recommendations for improvement.', 'Many sales businesses use business development consultants to help generate leads and show them how to do so. In a business such as sales, lead generation can make or break a company. Having someone show a business owner how to successfully acquire this key piece of information is very important.', 'Development of a marketing strategy is another area covered by a business development consultant. Many businesses struggle with devising ways to effectively market their business to prospective clients.', 'A Good Business Consultant Has Extensive Experience. A good Business Consultant has experience working in and working with a broad range of businesses. It is the accumulated business history of a Business Consultant which makes the consultant valuable.', "A busines...
[1, 0, 0, 0, 0, ...]
query: did soren kjeldsen ever play in the masters
["Recent News. Soeren Søren kjeldsen stalled on the back nine and finished joint-runner up in The British masters At woburn with a-2-under par-32=37. 69 The, dane who Captured'may S Irish, open looked good for his second win of the season when shooting four birdies against a single bogey on The' marquess'course s front. nine", "Latest News. Soeren Søren kjeldsen stalled on the back nine and finished joint-runner up in The British masters At woburn with a-2-under par-32=37. 69 The, dane who Captured'may S Irish, open looked good for his second win of the season when shooting four birdies against a single bogey on The' marquess'course s front. nine", "Soren Kjeldsen of Denmark (L) celebrates winning the Irish Open with Austria's Bernd Wiesberger …. It is amazing to be holding the trophy but then I felt good coming into the tournament, he said. I played well in my last two tournaments and while I was not in contention I had the chance today to change all that.", "Denmark. Soeren Søren kje...
[1, 0, 0, 0, 0, ...]
query: what is a sell limit order
["Sell Limit Order. A Sell Limit Order is an order to sell a specified number of shares of a stock that you own at a designated price or higher, at a price that is above the current market price. This is your limit price, in other words, the minimum price you are willing to accept to sell your shares. The main benefit of a Sell Limit Order is that you may be able to sell the shares that you own at a minimum price that you specify IF the stock's price raises to that price. Sell Limit Orders are great for maximizing profit-taking.", 'Stop-Limit Order. A stop-limit order is an order to buy or sell a stock that combines the features of a stop order and a limit order. Once the stop price is reached, a stop-limit order becomes a limit order that will be executed at a specified price (or better)', "You place a Sell Limit Order @ $50 on 100 shares of TGT. Now suppose the price trades up to $50. As long as the price remains above $50 per share, your shares would then be sold at the next best av...
[1, 0, 0, 0, 0, ...]
- Loss: PListMLELoss with these parameters:
  { "lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight", "activation_fct": "torch.nn.modules.linear.Identity", "mini_batch_size": null, "respect_input_order": true }
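Conceptually, PListMLE maximizes the likelihood of the label-sorted permutation under a Plackett-Luce model, down-weighting later positions. A minimal pure-Python sketch of that objective; the 1/(rank+1) position weight here is an illustrative assumption, not necessarily what PListMLELambdaWeight implements:

```python
import math

def plistmle_loss(scores, labels):
    # Target permutation: documents sorted by descending relevance label.
    # Python's sort is stable, so ties keep their input order
    # (cf. "respect_input_order": true above).
    order = sorted(range(len(scores)), key=lambda i: -labels[i])
    loss = 0.0
    for rank, idx in enumerate(order):
        # Plackett-Luce probability of selecting this document next,
        # softmaxed over the documents not yet placed.
        denom = sum(math.exp(scores[j]) for j in order[rank:])
        position_weight = 1.0 / (rank + 1)  # illustrative assumption
        loss -= position_weight * math.log(math.exp(scores[idx]) / denom)
    return loss

# Scores that agree with the labels yield a lower loss than inverted scores
good = plistmle_loss([3.0, 1.0, -1.0], [1, 0, 0])
bad = plistmle_loss([-1.0, 1.0, 3.0], [1, 0, 0])
print(good < bad)  # True
```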
Evaluation Dataset
ms_marco
- Dataset: ms_marco at a47ee7a
- Size: 1,000 evaluation samples
- Columns: query, docs, and labels
- Approximate statistics based on the first 1000 samples:
Statistic | query | docs | labels |
---|---|---|---|
type | string | list | list |
min | 11 characters | 2 elements | 2 elements |
mean | 33.46 characters | 6.00 elements | 6.00 elements |
max | 108 characters | 10 elements | 10 elements |
- Samples:
query: how much to spend on wordpress hosting
['Shared server. This will cost as little as $3 per month to around $10 per month depending on how you want to pay for it (by the month or by the year). The performance of your site will suffer from shared hosting. It’s a good choice for a personal blog or for getting you started. 1 Courses – You can take courses online that are free for extremely basic information or spend up to $200 or more for mid to advanced topics. 2 Some courses cost from $20 – $50 for a monthly subscription. 3 This allows you to pay for as much training as you want. 4 This can still cost hundreds of dollars', 'You may also choose to invest in customization, SEO or other factors along the way. If your interest is simply to start a blog on WordPress, you can start with a minimal cost of $60 for unlimited hosting and free domain with Bluehost. You can learn all about which hosting service is best for WordPress here. Domain: Cost – $10. The first element you need to shop for is a domain. Having a domain name is g...
[1, 1, 0, 0, 0, ...]
query: what type blood is the universal donor
["At one time, type O negative blood was considered the universal blood donor type. This implied that anyone — regardless of blood type — could receive type O negative blood without risking a transfusion reaction. Even then, small samples of the recipient's and donor's blood are mixed to check compatibility in a process known as crossmatching. In an emergency, however, type O negative red blood cells may be given to anyone — especially if the situation is life-threatening or the matching blood type is in short supply.", 'People with type O Rh D negative blood are often called universal donors. O Rh D negative is the universal donor because it does not contain any antigens (markers). When you … get donated blood that has antigens that are not the same as those of the recipient the blood will clot in the body. AB is a universal acceptor because RBC (red blood cells) contain the A and B antigen (simply put, it is a marker on the cell) so the body a … ccepts any blood type because it recog...
[1, 0, 0, 0, 0, ...]
query: dental crown costs average
['The prices for dental crowns range from $500 to $2,500 per crown and are dependent upon the materials used, location of tooth and geographic location. The average cost of a crown is $825, with or without dental insurance coverage. The cheapest cost of a dental crown is $500 for a simple metal crown. Dental crowns are specifically shaped shells that fit over damaged or broken teeth for either cosmetic or structural purposes. 1 People with insurance typically paid $520 – $1,140 out of pocket with an average of $882 per crown. 2 Those without insurance generally paid between $830 and $2,465 per crown with an average cost of $1,350.', '1 All-porcelain crowns require a higher level of skill and take more time to install than metal or porcelain-fused-to-metal crowns, and can cost $800-$3,000 or more per tooth. 2 CostHelper readers without insurance report paying $860-$3,000, at an average cost of $1,430. 1 CostHelper readers without insurance report paying $860-$3,000, at an average cost...
[1, 0, 0, 0, 0, ...]
- Loss: PListMLELoss with these parameters:
  { "lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight", "activation_fct": "torch.nn.modules.linear.Identity", "mini_batch_size": null, "respect_input_order": true }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- learning_rate: 2e-05
- num_train_epochs: 1
- warmup_ratio: 0.1
- seed: 12
- bf16: True
- load_best_model_at_end: True
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 12
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10 | NanoBEIR_R100_mean_ndcg@10 |
---|---|---|---|---|---|---|---|
-1 | -1 | - | - | 0.0375 (-0.5029) | 0.2604 (-0.0646) | 0.0219 (-0.4788) | 0.1066 (-0.3488) |
0.0002 | 1 | 1713.1071 | - | - | - | - | - |
0.0508 | 250 | 1833.4537 | - | - | - | - | - |
0.1016 | 500 | 1790.301 | 1707.9830 | 0.1182 (-0.4222) | 0.2072 (-0.1178) | 0.3276 (-0.1730) | 0.2177 (-0.2377) |
0.1525 | 750 | 1775.4549 | - | - | - | - | - |
0.2033 | 1000 | 1716.7897 | 1638.4917 | 0.5203 (-0.0201) | 0.3349 (+0.0099) | 0.6145 (+0.1138) | 0.4899 (+0.0345) |
0.2541 | 1250 | 1734.1811 | - | - | - | - | - |
0.3049 | 1500 | 1707.1166 | 1619.5133 | 0.5134 (-0.0270) | 0.3245 (-0.0005) | 0.6225 (+0.1218) | 0.4868 (+0.0314) |
0.3558 | 1750 | 1715.8994 | - | - | - | - | - |
0.4066 | 2000 | 1682.5393 | 1630.9360 | 0.5278 (-0.0127) | 0.3434 (+0.0184) | 0.5907 (+0.0900) | 0.4873 (+0.0319) |
0.4574 | 2250 | 1705.7818 | - | - | - | - | - |
0.5082 | 2500 | 1650.1962 | 1599.1906 | 0.5778 (+0.0374) | 0.3660 (+0.0410) | 0.6325 (+0.1319) | 0.5254 (+0.0701) |
0.5591 | 2750 | 1651.8559 | - | - | - | - | - |
0.6099 | 3000 | 1677.6405 | 1594.7935 | 0.5657 (+0.0253) | 0.3514 (+0.0263) | 0.6304 (+0.1298) | 0.5158 (+0.0605) |
0.6607 | 3250 | 1690.9901 | - | - | - | - | - |
0.7115 | 3500 | 1647.8661 | 1597.9960 | 0.5553 (+0.0149) | 0.3582 (+0.0331) | 0.6342 (+0.1335) | 0.5159 (+0.0605) |
0.7624 | 3750 | 1657.8038 | - | - | - | - | - |
0.8132 | 4000 | 1670.0114 | 1591.1512 | 0.5429 (+0.0025) | 0.3617 (+0.0367) | 0.6377 (+0.1370) | 0.5141 (+0.0587) |
0.8640 | 4250 | 1678.4298 | - | - | - | - | - |
0.9148 | 4500 | 1687.3654 | 1587.0916 | 0.5427 (+0.0023) | 0.3549 (+0.0299) | 0.6317 (+0.1310) | 0.5098 (+0.0544) |
0.9656 | 4750 | 1645.7461 | - | - | - | - | - |
-1 | -1 | - | - | 0.5778 (+0.0374) | 0.3660 (+0.0410) | 0.6325 (+0.1319) | 0.5254 (+0.0701) |
- The saved checkpoint corresponds to the row at step 2500 (epoch 0.5082), whose metrics match the final evaluation row.
Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.4.0
- Tokenizers: 0.21.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
PListMLELoss
@inproceedings{lan2014position,
title={Position-Aware ListMLE: A Sequential Learning Process for Ranking.},
author={Lan, Yanyan and Zhu, Yadong and Guo, Jiafeng and Niu, Shuzi and Cheng, Xueqi},
booktitle={UAI},
volume={14},
pages={449--458},
year={2014}
}