CrossEncoder based on microsoft/MiniLM-L12-H384-uncased

This is a Cross Encoder model finetuned from microsoft/MiniLM-L12-H384-uncased on the ms_marco dataset using the sentence-transformers library. It computes scores for pairs of texts, which can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Cross Encoder
  • Base model: microsoft/MiniLM-L12-H384-uncased
  • Training Dataset: ms_marco
  • Output: a single relevance score per (query, passage) pair

Model Sources

  • Documentation: Sentence Transformers documentation (sbert.net)
  • Repository: sentence-transformers on GitHub

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("Studeni/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-lambdaloss")
# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
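Under the hood, rank is a thin wrapper around predict: it scores every (query, document) pair and returns the documents sorted by descending score. A minimal standalone sketch of that behavior, with a hypothetical keyword-overlap scorer standing in for the model:

```python
from typing import Callable, Dict, List

def rerank(query: str, documents: List[str],
           scorer: Callable[[List[List[str]]], List[float]]) -> List[Dict]:
    """Score each (query, doc) pair and sort by descending score,
    mirroring the output format of CrossEncoder.rank."""
    pairs = [[query, doc] for doc in documents]
    scores = scorer(pairs)
    return sorted(
        ({"corpus_id": i, "score": s} for i, s in enumerate(scores)),
        key=lambda r: r["score"],
        reverse=True,
    )

def toy_scorer(pairs: List[List[str]]) -> List[float]:
    # Fraction of query words found in the document; stands in for
    # model.predict, which returns one relevance score per pair.
    return [len(set(q.lower().split()) & set(d.lower().split()))
            / max(len(set(q.split())), 1) for q, d in pairs]

ranks = rerank(
    "How many calories in an egg",
    ["Eggs are laid by hens.",
     "There are between 55 and 80 calories in an egg"],
    toy_scorer,
)
print([r["corpus_id"] for r in ranks])  # [1, 0]
```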

Evaluation

Metrics

Cross Encoder Reranking

Metric NanoMSMARCO NanoNFCorpus NanoNQ
map 0.5185 (+0.0289) 0.3307 (+0.0603) 0.5630 (+0.1423)
mrr@10 0.5102 (+0.0327) 0.5466 (+0.0468) 0.5730 (+0.1464)
ndcg@10 0.5876 (+0.0472) 0.3699 (+0.0449) 0.6260 (+0.1253)
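For reference, MRR@10 and NDCG@10 can be reproduced in a few lines. This is a generic sketch for binary relevance labels (the function names are my own), not the evaluator that produced the numbers above:

```python
import math
from typing import List

def mrr_at_k(relevance: List[int], k: int = 10) -> float:
    """Reciprocal rank of the first relevant document in the top k."""
    for i, rel in enumerate(relevance[:k]):
        if rel > 0:
            return 1.0 / (i + 1)
    return 0.0

def dcg_at_k(relevance: List[int], k: int = 10) -> float:
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevance[:k]))

def ndcg_at_k(relevance: List[int], k: int = 10) -> float:
    """DCG of the given ranking divided by DCG of the ideal ranking."""
    idcg = dcg_at_k(sorted(relevance, reverse=True), k)
    return dcg_at_k(relevance, k) / idcg if idcg > 0 else 0.0

# Relevance labels of the returned documents, best-ranked first:
ranking = [0, 1, 0, 1, 0]
print(mrr_at_k(ranking))             # 0.5 (first relevant doc at rank 2)
print(round(ndcg_at_k(ranking), 4))  # 0.6509
```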

Cross Encoder Nano BEIR

Each value is the unweighted mean over the three Nano BEIR datasets above (NanoMSMARCO, NanoNFCorpus, NanoNQ).

Metric Value
map 0.4707 (+0.0772)
mrr@10 0.5433 (+0.0753)
ndcg@10 0.5278 (+0.0725)

Training Details

Training Dataset

ms_marco

  • Dataset: ms_marco at a47ee7a
  • Size: 82,326 training samples
  • Columns: query, docs, and labels
  • Approximate statistics based on the first 1000 samples:
    • query: string (min 9, mean 34.34, max 91 characters)
    • docs: list of 10 elements
    • labels: list of 10 elements
  • Samples:
    query docs labels
    what does tolterodine do ['Tolterodine (Detrol, Detrusitol) is an antimuscarinic drug that is used for symptomatic treatment of urinary incontinence. It is marketed by Pfizer in Canada and the United States by its brand name Detrol. In Egypt it is also found under the trade names Tolterodine by Sabaa and Incont L.A. by Adwia. Detrusor overactivity (DO, contraction of the muscular bladder wall) is the most common form of UI in older adults. It is characterized by uninhibited bladder contractions causing an uncontrollable urge to void. Urinary frequency, urge incontinence and nocturnal incontinence occur.', 'Tolterodine reduces spasms of the bladder muscles. Tolterodine is used to treat overactive bladder with symptoms of urinary frequency, urgency, and incontinence. Tolterodine may also be used for purposes not listed in this medication guide. You should not take this medication if you are allergic to tolterodine or fesoterodine (Toviaz), if you have untreated or uncontrolled narrow-angle glaucoma, or if you ha... [1, 0, 0, 0, 0, ...]
    why no dairy when taking ciprofloxacin ['Do not take ciprofloxacin with dairy products such as milk or yogurt, or with calcium-fortified juice. You may eat or drink these products as part of a regular meal, but do not use them alone when taking ciprofloxacin. They could make the medication less effective.', 'If your healthcare provider prescribes this medication, it is important to understand some precautions for using this drug. For instance, you should not take ciprofloxacin with dairy products alone (such as milk or yogurt) or with calcium-fortified juices (such as orange juice).', 'Do not take this medicine alone with milk, yogurt, or other dairy products. Do not drink any juice with calcium added when you take this medicine. It is okay to have dairy products or juice as part of a larger meal', 'Do not take ciprofloxacin with dairy products or calcium-fortified juice alone; you can, however, take ciprofloxacin with a meal that includes these...', 'You should not use ciprofloxacin if: 1 you are also taking tizanidine (Z... [1, 0, 0, 0, 0, ...]
    standard depth of countertops overhang ['Overhang. Countertops extend out from the face frame of the cabinets and just over the cabinet doors. This is called the overhang. Standard cabinet frames are 24 inches deep with 3/4 inch to 1 inch thick doors. Most countertops have a 1 inch overhang to make a standard depth of 25 inches. While there are many different materials to use for countertops, most come in a standard thickness of 1 1/2 inches.', 'Hanging Out on an Island. The standard overhang of an island countertop -- on the side designed to sit at and tuck stools underneath -- is 12 inches. If you plan to extend the counter farther, you need to add supports such as legs, or wood corbels or metal L-brackets that extend half the overhang’s distance.', 'The standard vanity counter top depth. Usually countertops overhang the doors by about one half of an inch. So, if your finished box size, including the door is twenty one and three quarters inches deep, then your finished top will be 22 1/4” in depth. The cut size should be ... [1, 0, 0, 0, 0, ...]
  • Loss: LambdaLoss with these parameters:
    {
        "weighing_scheme": "LambdaRankScheme",
        "k": 10,
        "sigma": 1.0,
        "eps": 1e-10,
        "pad_value": -1,
        "reduction": "mean",
        "reduction_log": "binary",
        "activation_fct": null
    }
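The LambdaRank weighting scheme named above weights each pair of differently-labeled documents by the absolute NDCG@k change that swapping them would cause, so mistakes near the top of the list are penalized more. A rough standalone sketch of that pair weight (illustrative only, not the library's implementation):

```python
import math
from typing import List

def delta_ndcg(labels: List[int], i: int, j: int, k: int = 10) -> float:
    """|ΔNDCG@k| from swapping the documents at ranks i and j
    (0-indexed), given relevance labels in current ranking order."""
    def gain(rel):  # standard exponential gain
        return 2 ** rel - 1
    def discount(rank):  # log discount, zero outside the top k
        return 1.0 / math.log2(rank + 2) if rank < k else 0.0
    ideal = sorted(labels, reverse=True)
    idcg = sum(gain(r) * discount(p) for p, r in enumerate(ideal))
    if idcg == 0:
        return 0.0
    return abs((gain(labels[i]) - gain(labels[j]))
               * (discount(i) - discount(j))) / idcg

labels = [0] * 9 + [1]  # one relevant doc per 10, here ranked last
print(round(delta_ndcg(labels, 0, 9), 4))  # 0.7109: moving it to the top matters a lot
print(round(delta_ndcg(labels, 8, 9), 4))  # 0.012: a one-position fix matters little
```

Pairs with equal labels get zero weight, so the loss concentrates on swaps that actually change the metric.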
    

Evaluation Dataset

ms_marco

  • Dataset: ms_marco at a47ee7a
  • Size: 82,326 evaluation samples
  • Columns: query, docs, and labels
  • Approximate statistics based on the first 1000 samples:
    • query: string (min 11, mean 33.63, max 99 characters)
    • docs: list of 10 elements
    • labels: list of 10 elements
  • Samples:
    query docs labels
    define monogenic trait ['An allele is a version of a gene. For example, in fruitflies there is a gene which determines eye colour: one allele gives red eyes, and another gives white eyes; it is the same gene, just different versions of that gene. A monogenic trait is one which is encoded by a single gene. e.g. - cystic fibrosis in humans. There is a single gene which determines this trait: the wild-type allele is healthy, while the disease allele gives you cystic fibrosis', 'Abstract. Monogenic inheritance refers to genetic control of a phenotype or trait by a single gene. For a monogenic trait, mutations in one (dominant) or both (recessive) copies of the gene are sufficient for the trait to be expressed. Digenic inheritance refers to mutation on two genes interacting to cause a genetic phenotype or disease. Triallelic inheritance is a special case of digenic inheritance that requires homozygous mutations at one locus and heterozygous mutations at a second locus to express a phenotype.', 'A trait that is ... [1, 1, 0, 0, 0, ...]
    behavioral theory definition ["Not to be confused with Behavioralism. Behaviorism (or behaviourism) is an approach to psychology that focuses on an individual's behavior. It combines elements of philosophy, methodology, and psychological theory", 'The initial assumption is that behavior can be explained and further described using behavioral theories. For instance, John Watson and B.F. Skinner advocate the theory that behavior can be acquired through conditioning. Also known as general behavior theory. BEHAVIOR THEORY: Each behavioral theory is an advantage to learning, because it provides teachers with a new and different approach.. No related posts. ', 'behaviorism. noun be·hav·ior·ism. : a school of psychology that takes the objective evidence of behavior (as measured responses to stimuli) as the only concern of its research and the only basis of its theory without reference to conscious experience—compare cognitive psychology. : a school of psychology that takes the objective evidence of behavior (as measured ... [1, 0, 0, 0, 0, ...]
    What is a disease that is pleiotropic? ['Unsourced material may be challenged and removed. (September 2013). Pleiotropy occurs when one gene influences two or more seemingly unrelated phenotypic traits, an example being phenylketonuria, which is a human disease that affects multiple systems but is caused by one gene defect. Consequently, a mutation in a pleiotropic gene may have an effect on some or all traits simultaneously. The underlying mechanism is that the gene codes for a product that is, for example, used by various cells, or has a signaling function on various targets. A classic example of pleiotropy is the human disease phenylketonuria (PKU).', 'Pleiotropic, autosomal dominant disorder affecting connective tissue: Related Diseases. Pleiotropic, autosomal dominant disorder affecting connective tissue: Pleiotropic, autosomal dominant disorder affecting connective tissue is listed as a type of (or associated with) the following medical conditions in our database: 1 Heart conditions. Office of Rare Diseases (ORD) of ... [1, 0, 0, 0, 0, ...]
  • Loss: LambdaLoss with these parameters:
    {
        "weighing_scheme": "LambdaRankScheme",
        "k": 10,
        "sigma": 1.0,
        "eps": 1e-10,
        "pad_value": -1,
        "reduction": "mean",
        "reduction_log": "binary",
        "activation_fct": null
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 6
  • per_device_eval_batch_size: 6
  • torch_empty_cache_steps: 2000
  • learning_rate: 2e-05
  • warmup_ratio: 0.1
  • seed: 12
  • bf16: True
  • load_best_model_at_end: True
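A warmup_ratio of 0.1 with the linear scheduler means the learning rate climbs from 0 to the 2e-05 peak over the first 10% of training steps, then decays linearly back to 0. A small sketch of that schedule (the step counts below are illustrative, not this run's actual totals):

```python
def linear_warmup_lr(step: int, total_steps: int,
                     peak_lr: float = 2e-05, warmup_ratio: float = 0.1) -> float:
    """Linear warmup to peak_lr over warmup_ratio of training,
    then linear decay to zero (the 'linear' scheduler shape)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * (step / max(warmup_steps, 1))
    return peak_lr * ((total_steps - step) / max(total_steps - warmup_steps, 1))

total = 30_000  # illustrative total step count
print(linear_warmup_lr(0, total))       # 0.0
print(linear_warmup_lr(3_000, total))   # 2e-05 (peak, at the end of warmup)
print(linear_warmup_lr(30_000, total))  # 0.0
```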

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 6
  • per_device_eval_batch_size: 6
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: 2000
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 12
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss NanoMSMARCO_ndcg@10 NanoNFCorpus_ndcg@10 NanoNQ_ndcg@10 NanoBEIR_mean_ndcg@10
-1 -1 - - 0.1127 (-0.4278) 0.2057 (-0.1193) 0.0150 (-0.4857) 0.1111 (-0.3443)
0.0001 1 0.0767 - - - - -
0.0430 500 0.0864 - - - - -
0.0861 1000 0.0931 - - - - -
0.1291 1500 0.0896 - - - - -
0.1721 2000 0.0832 0.0786 0.4801 (-0.0603) 0.3282 (+0.0031) 0.5660 (+0.0654) 0.4581 (+0.0028)
0.2152 2500 0.0803 - - - - -
0.2582 3000 0.0776 - - - - -
0.3013 3500 0.0775 - - - - -
0.3443 4000 0.0761 0.0729 0.5320 (-0.0084) 0.3207 (-0.0043) 0.6709 (+0.1702) 0.5079 (+0.0525)
0.3873 4500 0.0769 - - - - -
0.4304 5000 0.0736 - - - - -
0.4734 5500 0.0733 - - - - -
0.5164 6000 0.0728 0.0717 0.5413 (+0.0009) 0.3416 (+0.0165) 0.6304 (+0.1297) 0.5044 (+0.0491)
0.5595 6500 0.0742 - - - - -
0.6025 7000 0.0716 - - - - -
0.6456 7500 0.0729 - - - - -
0.6886 8000 0.0717 0.0726 0.5766 (+0.0362) 0.3229 (-0.0021) 0.5439 (+0.0433) 0.4811 (+0.0258)
0.7316 8500 0.0724 - - - - -
0.7747 9000 0.0723 - - - - -
0.8177 9500 0.0696 - - - - -
0.8607 10000 0.0703 0.0688 0.5840 (+0.0436) 0.3482 (+0.0231) 0.6047 (+0.1040) 0.5123 (+0.0569)
0.9038 10500 0.0718 - - - - -
0.9468 11000 0.0709 - - - - -
0.9898 11500 0.0704 - - - - -
1.0329 12000 0.0666 0.0694 0.5643 (+0.0238) 0.3048 (-0.0202) 0.5767 (+0.0761) 0.4819 (+0.0266)
1.0759 12500 0.0665 - - - - -
1.1190 13000 0.0658 - - - - -
1.1620 13500 0.0655 - - - - -
1.2050 14000 0.0657 0.0698 0.5976 (+0.0572) 0.3538 (+0.0287) 0.6231 (+0.1224) 0.5248 (+0.0695)
1.2481 14500 0.0644 - - - - -
1.2911 15000 0.065 - - - - -
1.3341 15500 0.066 - - - - -
1.3772 16000 0.0649 0.0680 0.5993 (+0.0589) 0.3362 (+0.0112) 0.6127 (+0.1120) 0.5161 (+0.0607)
1.4202 16500 0.0655 - - - - -
1.4632 17000 0.0638 - - - - -
1.5063 17500 0.0676 - - - - -
1.5493 18000 0.0645 0.0672 0.5703 (+0.0299) 0.3530 (+0.0280) 0.5643 (+0.0637) 0.4959 (+0.0405)
1.5924 18500 0.0646 - - - - -
1.6354 19000 0.0636 - - - - -
1.6784 19500 0.0671 - - - - -
1.7215 20000 0.0646 0.0678 0.6072 (+0.0667) 0.3586 (+0.0335) 0.5840 (+0.0834) 0.5166 (+0.0612)
1.7645 20500 0.0656 - - - - -
1.8075 21000 0.0623 - - - - -
1.8506 21500 0.0649 - - - - -
1.8936 22000 0.0636 0.0672 0.5940 (+0.0536) 0.3503 (+0.0252) 0.5898 (+0.0891) 0.5114 (+0.0560)
1.9367 22500 0.0632 - - - - -
1.9797 23000 0.0646 - - - - -
2.0227 23500 0.0614 - - - - -
2.0658 24000 0.0572 0.0692 0.5824 (+0.0420) 0.3678 (+0.0428) 0.5803 (+0.0796) 0.5102 (+0.0548)
2.1088 24500 0.0568 - - - - -
2.1518 25000 0.0577 - - - - -
2.1949 25500 0.0575 - - - - -
2.2379 26000 0.0579 0.0704 0.5830 (+0.0425) 0.3662 (+0.0411) 0.5855 (+0.0849) 0.5116 (+0.0562)
2.2809 26500 0.0583 - - - - -
2.3240 27000 0.0572 - - - - -
2.3670 27500 0.058 - - - - -
2.4101 28000 0.0581 0.069 0.5876 (+0.0472) 0.3699 (+0.0449) 0.6260 (+0.1253) 0.5278 (+0.0725)
2.4531 28500 0.0563 - - - - -
2.4961 29000 0.0564 - - - - -
2.5392 29500 0.057 - - - - -
2.5822 30000 0.0568 0.0696 0.5862 (+0.0458) 0.3753 (+0.0502) 0.5947 (+0.0940) 0.5187 (+0.0634)
2.6252 30500 0.0574 - - - - -
2.6683 31000 0.0579 - - - - -
2.7113 31500 0.0577 - - - - -
2.7543 32000 0.056 0.0700 0.5598 (+0.0194) 0.3712 (+0.0462) 0.5826 (+0.0819) 0.5045 (+0.0492)
2.7974 32500 0.0579 - - - - -
2.8404 33000 0.0575 - - - - -
2.8835 33500 0.0567 - - - - -
2.9265 34000 0.0548 0.0700 0.5856 (+0.0452) 0.3734 (+0.0484) 0.5875 (+0.0869) 0.5155 (+0.0601)
2.9695 34500 0.059 - - - - -
-1 -1 - - 0.5876 (+0.0472) 0.3699 (+0.0449) 0.6260 (+0.1253) 0.5278 (+0.0725)
  • The saved checkpoint is the step 28000 row (epoch 2.41), whose metrics match the final evaluation row and the Metrics section above.

Framework Versions

  • Python: 3.10.13
  • Sentence Transformers: 3.5.0.dev0
  • Transformers: 4.48.1
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

LambdaLoss

@article{wang2018lambdaloss,
    title={The LambdaLoss Framework for Ranking Metric Optimization},
    author={Wang, Xuanhui and Li, Cheng and Golbandi, Nadav and Bendersky, Michael and Najork, Marc},
    journal={Proceedings of the 27th ACM International Conference on Information and Knowledge Management},
    pages={1313--1322},
    year={2018}
}
Model size: 33.4M parameters (F32, Safetensors)