
CrossEncoder based on microsoft/MiniLM-L12-H384-uncased

This is a Cross Encoder model finetuned from microsoft/MiniLM-L12-H384-uncased on the ms_marco dataset using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

Model Details

Model Description

  • Model Type: Cross Encoder
  • Base model: microsoft/MiniLM-L12-H384-uncased
  • Maximum Sequence Length: 512 tokens
  • Number of Output Labels: 1 label
  • Number of Parameters: 33.4M (F32)
  • Training Dataset: ms_marco
  • Language: en

Model Sources

  • Documentation: https://www.sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Model page: https://huggingface.co/yjoonjang/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-normalize-temperature-2

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("yjoonjang/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-normalize-temperature-2")
# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
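
In a full semantic search pipeline, this cross encoder is typically used to rerank candidates returned by a first-stage retriever. The sketch below shows that pattern; the bi-encoder checkpoint (sentence-transformers/all-MiniLM-L6-v2) and the three-passage corpus are illustrative choices only, not part of this model's setup.

from sentence_transformers import SentenceTransformer, CrossEncoder, util

# Illustrative first-stage retriever; any bi-encoder (or BM25) works here
retriever = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
reranker = CrossEncoder("yjoonjang/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-normalize-temperature-2")

corpus = [
    "There are on average between 55 and 80 calories in an egg depending on its size.",
    "Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.",
    "Most of the calories in an egg come from the yellow yolk in the center.",
]
query = "How many calories in an egg"

# Stage 1: retrieve candidate passages with the bi-encoder
corpus_embeddings = retriever.encode(corpus, convert_to_tensor=True)
query_embedding = retriever.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=3)[0]

# Stage 2: rerank the candidates with this cross encoder
candidates = [corpus[hit["corpus_id"]] for hit in hits]
scores = reranker.predict([(query, passage) for passage in candidates])
for passage, score in sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True):
    print(f"{score:.4f}  {passage}")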

Evaluation

Metrics

Cross Encoder Reranking

  • Datasets: NanoMSMARCO_R100, NanoNFCorpus_R100 and NanoNQ_R100
  • Evaluated with CrossEncoderRerankingEvaluator with these parameters:
    {
        "at_k": 10,
        "always_rerank_positives": true
    }
    
Metric NanoMSMARCO_R100 NanoNFCorpus_R100 NanoNQ_R100
map 0.5257 (+0.0362) 0.3387 (+0.0777) 0.5581 (+0.1385)
mrr@10 0.5139 (+0.0364) 0.5921 (+0.0923) 0.5648 (+0.1381)
ndcg@10 0.5778 (+0.0374) 0.3660 (+0.0410) 0.6325 (+0.1319)

Cross Encoder Nano BEIR

  • Dataset: NanoBEIR_R100_mean
  • Evaluated with CrossEncoderNanoBEIREvaluator with these parameters:
    {
        "dataset_names": [
            "msmarco",
            "nfcorpus",
            "nq"
        ],
        "rerank_k": 100,
        "at_k": 10,
        "always_rerank_positives": true
    }
    
Metric Value
map 0.4742 (+0.0841)
mrr@10 0.5569 (+0.0889)
ndcg@10 0.5254 (+0.0701)
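
As a rough sketch, the NanoBEIR numbers above could be reproduced with the CrossEncoderNanoBEIREvaluator from Sentence Transformers using the parameters listed in this card; exact values may vary slightly with hardware and library versions.

from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

model = CrossEncoder("yjoonjang/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-normalize-temperature-2")

# Same evaluator configuration as reported above
evaluator = CrossEncoderNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    rerank_k=100,
    at_k=10,
    always_rerank_positives=True,
)
results = evaluator(model)
print(results)  # per-dataset and mean MAP, MRR@10 and NDCG@10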

Training Details

Training Dataset

ms_marco

  • Dataset: ms_marco at a47ee7a
  • Size: 78,704 training samples
  • Columns: query, docs, and labels
  • Approximate statistics based on the first 1000 samples:
    • query (string): min 12, mean 33.99, max 98 characters
    • docs (list): min 3, mean 6.50, max 10 elements
    • labels (list): min 3, mean 6.50, max 10 elements
  • Samples:
    query docs labels
    what does a business development consultant do ['Duties and Responsibilities. An organizational development consultant is a person called in to a company, be it a large corporation or a small business, to evaluate how it operates and make recommendations for improvement.', 'Many sales businesses use business development consultants to help generate leads and show them how to do so. In a business such as sales, lead generation can make or break a company. Having someone show a business owner how to successfully acquire this key piece of information is very important.', 'Development of a marketing strategy is another area covered by a business development consultant. Many businesses struggle with devising ways to effectively market their business to prospective clients.', 'A Good Business Consultant Has Extensive Experience. A good Business Consultant has experience working in and working with a broad range of businesses. It is the accumulated business history of a Business Consultant which makes the consultant valuable.', "A busines... [1, 0, 0, 0, 0, ...]
    did soren kjeldsen ever play in the masters ["Recent News. Soeren Søren kjeldsen stalled on the back nine and finished joint-runner up in The British masters At woburn with a-2-under par-32=37. 69 The, dane who Captured'may S Irish, open looked good for his second win of the season when shooting four birdies against a single bogey on The' marquess'course s front. nine", "Latest News. Soeren Søren kjeldsen stalled on the back nine and finished joint-runner up in The British masters At woburn with a-2-under par-32=37. 69 The, dane who Captured'may S Irish, open looked good for his second win of the season when shooting four birdies against a single bogey on The' marquess'course s front. nine", "Soren Kjeldsen of Denmark (L) celebrates winning the Irish Open with Austria's Bernd Wiesberger …. It is amazing to be holding the trophy but then I felt good coming into the tournament, he said. I played well in my last two tournaments and while I was not in contention I had the chance today to change all that.", "Denmark. Soeren Søren kje... [1, 0, 0, 0, 0, ...]
    what is a sell limit order ["Sell Limit Order. A Sell Limit Order is an order to sell a specified number of shares of a stock that you own at a designated price or higher, at a price that is above the current market price. This is your limit price, in other words, the minimum price you are willing to accept to sell your shares. The main benefit of a Sell Limit Order is that you may be able to sell the shares that you own at a minimum price that you specify IF the stock's price raises to that price. Sell Limit Orders are great for maximizing profit-taking.", 'Stop-Limit Order. A stop-limit order is an order to buy or sell a stock that combines the features of a stop order and a limit order. Once the stop price is reached, a stop-limit order becomes a limit order that will be executed at a specified price (or better)', "You place a Sell Limit Order @ $50 on 100 shares of TGT. Now suppose the price trades up to $50. As long as the price remains above $50 per share, your shares would then be sold at the next best av... [1, 0, 0, 0, 0, ...]
  • Loss: PListMLELoss with these parameters:
    {
        "lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight",
        "activation_fct": "torch.nn.modules.linear.Identity",
        "mini_batch_size": null,
        "respect_input_order": true
    }
    

Evaluation Dataset

ms_marco

  • Dataset: ms_marco at a47ee7a
  • Size: 1,000 evaluation samples
  • Columns: query, docs, and labels
  • Approximate statistics based on the first 1000 samples:
    • query (string): min 11, mean 33.46, max 108 characters
    • docs (list): min 2, mean 6.00, max 10 elements
    • labels (list): min 2, mean 6.00, max 10 elements
  • Samples:
    query docs labels
    how much to spend on wordpress hosting ['Shared server. This will cost as little as $3 per month to around $10 per month depending on how you want to pay for it (by the month or by the year). The performance of your site will suffer from shared hosting. It’s a good choice for a personal blog or for getting you started. 1 Courses – You can take courses online that are free for extremely basic information or spend up to $200 or more for mid to advanced topics. 2 Some courses cost from $20 – $50 for a monthly subscription. 3 This allows you to pay for as much training as you want. 4 This can still cost hundreds of dollars', 'You may also choose to invest in customization, SEO or other factors along the way. If your interest is simply to start a blog on WordPress, you can start with a minimal cost of $60 for unlimited hosting and free domain with Bluehost. You can learn all about which hosting service is best for WordPress here. Domain: Cost – $10. The first element you need to shop for is a domain. Having a domain name is g... [1, 1, 0, 0, 0, ...]
    what type blood is the universal donor ["At one time, type O negative blood was considered the universal blood donor type. This implied that anyone — regardless of blood type — could receive type O negative blood without risking a transfusion reaction. Even then, small samples of the recipient's and donor's blood are mixed to check compatibility in a process known as crossmatching. In an emergency, however, type O negative red blood cells may be given to anyone — especially if the situation is life-threatening or the matching blood type is in short supply.", 'People with type O Rh D negative blood are often called universal donors. O Rh D negative is the universal donor because it does not contain any antigens (markers). When you … get donated blood that has antigens that are not the same as those of the recipient the blood will clot in the body. AB is a universal acceptor because RBC (red blood cells) contain the A and B antigen (simply put, it is a marker on the cell) so the body a … ccepts any blood type because it recog... [1, 0, 0, 0, 0, ...]
    dental crown costs average ['The prices for dental crowns range from $500 to $2,500 per crown and are dependent upon the materials used, location of tooth and geographic location. The average cost of a crown is $825, with or without dental insurance coverage. The cheapest cost of a dental crown is $500 for a simple metal crown. Dental crowns are specifically shaped shells that fit over damaged or broken teeth for either cosmetic or structural purposes. 1 People with insurance typically paid $520 – $1,140 out of pocket with an average of $882 per crown. 2 Those without insurance generally paid between $830 and $2,465 per crown with an average cost of $1,350.', '1 All-porcelain crowns require a higher level of skill and take more time to install than metal or porcelain-fused-to-metal crowns, and can cost $800-$3,000 or more per tooth. 2 CostHelper readers without insurance report paying $860-$3,000, at an average cost of $1,430. 1 CostHelper readers without insurance report paying $860-$3,000, at an average cost... [1, 0, 0, 0, 0, ...]
  • Loss: PListMLELoss with these parameters:
    {
        "lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight",
        "activation_fct": "torch.nn.modules.linear.Identity",
        "mini_batch_size": null,
        "respect_input_order": true
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • seed: 12
  • bf16: True
  • load_best_model_at_end: True
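
To reproduce a comparable fine-tuning run, the sketch below wires the base model, the PListMLELoss and the non-default hyperparameters above into the CrossEncoderTrainer API. The tiny in-memory dataset is only a stand-in for the ms_marco split described earlier and mirrors its query / docs / labels column layout; evaluation and checkpoint selection are omitted for brevity.

from datasets import Dataset
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import PListMLELoss

# Base model with a single relevance-score output head
model = CrossEncoder("microsoft/MiniLM-L12-H384-uncased", num_labels=1)

# Stand-in for the ms_marco training split (same columns: query, docs, labels)
train_dataset = Dataset.from_dict({
    "query": ["How many calories in an egg"],
    "docs": [[
        "There are on average between 55 and 80 calories in an egg depending on its size.",
        "Most of the calories in an egg come from the yellow yolk in the center.",
    ]],
    "labels": [[1, 0]],
})

# Listwise loss; see the loss parameters listed with the training dataset above
loss = PListMLELoss(model)

args = CrossEncoderTrainingArguments(
    output_dir="reranker-msmarco-minilm-plistmle",
    num_train_epochs=1,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,  # as in this card; disable on hardware without bfloat16 support
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()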

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 12
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss NanoMSMARCO_R100_ndcg@10 NanoNFCorpus_R100_ndcg@10 NanoNQ_R100_ndcg@10 NanoBEIR_R100_mean_ndcg@10
-1 -1 - - 0.0375 (-0.5029) 0.2604 (-0.0646) 0.0219 (-0.4788) 0.1066 (-0.3488)
0.0002 1 1713.1071 - - - - -
0.0508 250 1833.4537 - - - - -
0.1016 500 1790.301 1707.9830 0.1182 (-0.4222) 0.2072 (-0.1178) 0.3276 (-0.1730) 0.2177 (-0.2377)
0.1525 750 1775.4549 - - - - -
0.2033 1000 1716.7897 1638.4917 0.5203 (-0.0201) 0.3349 (+0.0099) 0.6145 (+0.1138) 0.4899 (+0.0345)
0.2541 1250 1734.1811 - - - - -
0.3049 1500 1707.1166 1619.5133 0.5134 (-0.0270) 0.3245 (-0.0005) 0.6225 (+0.1218) 0.4868 (+0.0314)
0.3558 1750 1715.8994 - - - - -
0.4066 2000 1682.5393 1630.9360 0.5278 (-0.0127) 0.3434 (+0.0184) 0.5907 (+0.0900) 0.4873 (+0.0319)
0.4574 2250 1705.7818 - - - - -
0.5082 2500 1650.1962 1599.1906 0.5778 (+0.0374) 0.3660 (+0.0410) 0.6325 (+0.1319) 0.5254 (+0.0701)
0.5591 2750 1651.8559 - - - - -
0.6099 3000 1677.6405 1594.7935 0.5657 (+0.0253) 0.3514 (+0.0263) 0.6304 (+0.1298) 0.5158 (+0.0605)
0.6607 3250 1690.9901 - - - - -
0.7115 3500 1647.8661 1597.9960 0.5553 (+0.0149) 0.3582 (+0.0331) 0.6342 (+0.1335) 0.5159 (+0.0605)
0.7624 3750 1657.8038 - - - - -
0.8132 4000 1670.0114 1591.1512 0.5429 (+0.0025) 0.3617 (+0.0367) 0.6377 (+0.1370) 0.5141 (+0.0587)
0.8640 4250 1678.4298 - - - - -
0.9148 4500 1687.3654 1587.0916 0.5427 (+0.0023) 0.3549 (+0.0299) 0.6317 (+0.1310) 0.5098 (+0.0544)
0.9656 4750 1645.7461 - - - - -
-1 -1 - - 0.5778 (+0.0374) 0.3660 (+0.0410) 0.6325 (+0.1319) 0.5254 (+0.0701)
  • The row at epoch 0.5082 (step 2500), whose metrics match the final row, corresponds to the saved checkpoint.

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.5.0.dev0
  • Transformers: 4.49.0
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.5.2
  • Datasets: 3.4.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

PListMLELoss

@inproceedings{lan2014position,
  title={Position-Aware ListMLE: A Sequential Learning Process for Ranking.},
  author={Lan, Yanyan and Zhu, Yadong and Guo, Jiafeng and Niu, Shuzi and Cheng, Xueqi},
  booktitle={UAI},
  volume={14},
  pages={449--458},
  year={2014}
}