CrossEncoder based on microsoft/MiniLM-L12-H384-uncased

This is a Cross Encoder model finetuned from microsoft/MiniLM-L12-H384-uncased on the ms_marco dataset using the sentence-transformers library. It computes scores for pairs of texts, which can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Cross Encoder
  • Base model: microsoft/MiniLM-L12-H384-uncased
  • Training Dataset: ms_marco
  • Number of Parameters: 33.4M (F32 safetensors)

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Model on Hugging Face: https://huggingface.co/Studeni/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-lambdaloss

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("Studeni/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-lambdaloss")
# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
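
Each entry returned by rank holds the index of the passage (corpus_id) and its relevance score, sorted from most to least relevant, so the candidates can be reordered directly. A minimal continuation of the example above:

documents = [
    'There are on average between 55 and 80 calories in an egg depending on its size.',
    'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
    'Most of the calories in an egg come from the yellow yolk in the center.',
]
ranks = model.rank('How many calories in an egg', documents)

# Print the passages from most to least relevant; corpus_id indexes into `documents`
for entry in ranks:
    print(f"{entry['score']:.4f}\t{documents[entry['corpus_id']]}")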

Evaluation

Metrics

Cross Encoder Reranking

Metric NanoMSMARCO NanoNFCorpus NanoNQ
map 0.4843 (-0.0053) 0.3225 (+0.0521) 0.5992 (+0.1785)
mrr@10 0.4736 (-0.0039) 0.4868 (-0.0130) 0.6139 (+0.1872)
ndcg@10 0.5463 (+0.0059) 0.3447 (+0.0196) 0.6496 (+0.1490)

Cross Encoder Nano BEIR

Metric Value
map 0.4687 (+0.0751)
mrr@10 0.5248 (+0.0568)
ndcg@10 0.5135 (+0.0582)
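
The NanoBEIR numbers above come from the cross-encoder reranking evaluators bundled with Sentence Transformers. The snippet below is a rough sketch of how to recompute them; the evaluator class and its dataset_names argument follow the current sentence-transformers cross-encoder API and are an assumption, not the exact script used for this card:

from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

model = CrossEncoder("Studeni/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-lambdaloss")

# Rerank the NanoMSMARCO, NanoNFCorpus and NanoNQ subsets and report MAP, MRR@10 and NDCG@10
evaluator = CrossEncoderNanoBEIREvaluator(dataset_names=["msmarco", "nfcorpus", "nq"])
results = evaluator(model)
print(results)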

Training Details

Training Dataset

ms_marco

  • Dataset: ms_marco at a47ee7a
  • Size: 82,326 training samples
  • Columns: query, docs, and labels
  • Approximate statistics based on the first 1000 samples:
    • query: string (min: 9 characters, mean: 34.34 characters, max: 91 characters)
    • docs: list (size: 10 elements)
    • labels: list (size: 10 elements)
  • Samples:
    query docs labels
    what does tolterodine do ['Tolterodine (Detrol, Detrusitol) is an antimuscarinic drug that is used for symptomatic treatment of urinary incontinence. It is marketed by Pfizer in Canada and the United States by its brand name Detrol. In Egypt it is also found under the trade names Tolterodine by Sabaa and Incont L.A. by Adwia. Detrusor overactivity (DO, contraction of the muscular bladder wall) is the most common form of UI in older adults. It is characterized by uninhibited bladder contractions causing an uncontrollable urge to void. Urinary frequency, urge incontinence and nocturnal incontinence occur.', 'Tolterodine reduces spasms of the bladder muscles. Tolterodine is used to treat overactive bladder with symptoms of urinary frequency, urgency, and incontinence. Tolterodine may also be used for purposes not listed in this medication guide. You should not take this medication if you are allergic to tolterodine or fesoterodine (Toviaz), if you have untreated or uncontrolled narrow-angle glaucoma, or if you ha... [1, 0, 0, 0, 0, ...]
    why no dairy when taking ciprofloxacin ['Do not take ciprofloxacin with dairy products such as milk or yogurt, or with calcium-fortified juice. You may eat or drink these products as part of a regular meal, but do not use them alone when taking ciprofloxacin. They could make the medication less effective.', 'If your healthcare provider prescribes this medication, it is important to understand some precautions for using this drug. For instance, you should not take ciprofloxacin with dairy products alone (such as milk or yogurt) or with calcium-fortified juices (such as orange juice).', 'Do not take this medicine alone with milk, yogurt, or other dairy products. Do not drink any juice with calcium added when you take this medicine. It is okay to have dairy products or juice as part of a larger meal', 'Do not take ciprofloxacin with dairy products or calcium-fortified juice alone; you can, however, take ciprofloxacin with a meal that includes these...', 'You should not use ciprofloxacin if: 1 you are also taking tizanidine (Z... [1, 0, 0, 0, 0, ...]
    standard depth of countertops overhang ['Overhang. Countertops extend out from the face frame of the cabinets and just over the cabinet doors. This is called the overhang. Standard cabinet frames are 24 inches deep with 3/4 inch to 1 inch thick doors. Most countertops have a 1 inch overhang to make a standard depth of 25 inches. While there are many different materials to use for countertops, most come in a standard thickness of 1 1/2 inches.', 'Hanging Out on an Island. The standard overhang of an island countertop -- on the side designed to sit at and tuck stools underneath -- is 12 inches. If you plan to extend the counter farther, you need to add supports such as legs, or wood corbels or metal L-brackets that extend half the overhang’s distance.', 'The standard vanity counter top depth. Usually countertops overhang the doors by about one half of an inch. So, if your finished box size, including the door is twenty one and three quarters inches deep, then your finished top will be 22 1/4” in depth. The cut size should be ... [1, 0, 0, 0, 0, ...]
  • Loss: LambdaLoss with these parameters:
    {
        "weighing_scheme": "NDCGLoss2PPScheme",
        "k": 10,
        "sigma": 1.0,
        "eps": 1e-10,
        "pad_value": -1,
        "reduction": "mean",
        "reduction_log": "binary",
        "activation_fct": null
    }
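
For reference, the snippet below sketches how a loss with this configuration could be constructed using the cross-encoder losses in Sentence Transformers. The class and argument names (in particular the weighing scheme and the num_labels setting) mirror the configuration dump above but are assumptions and may differ slightly between library versions:

from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses import LambdaLoss, NDCGLoss2PPScheme

# Base model with a single relevance output, as used for this reranker
model = CrossEncoder("microsoft/MiniLM-L12-H384-uncased", num_labels=1)

# Listwise LambdaLoss with the NDCGLoss2++ weighing scheme and the parameters listed above
loss = LambdaLoss(
    model=model,
    weighing_scheme=NDCGLoss2PPScheme(),
    k=10,
    sigma=1.0,
    eps=1e-10,
)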
    

Evaluation Dataset

ms_marco

  • Dataset: ms_marco at a47ee7a
  • Size: 82,326 evaluation samples
  • Columns: query, docs, and labels
  • Approximate statistics based on the first 1000 samples:
    • query: string (min: 11 characters, mean: 33.63 characters, max: 99 characters)
    • docs: list (size: 10 elements)
    • labels: list (size: 10 elements)
  • Samples:
    query docs labels
    define monogenic trait ['An allele is a version of a gene. For example, in fruitflies there is a gene which determines eye colour: one allele gives red eyes, and another gives white eyes; it is the same gene, just different versions of that gene. A monogenic trait is one which is encoded by a single gene. e.g. - cystic fibrosis in humans. There is a single gene which determines this trait: the wild-type allele is healthy, while the disease allele gives you cystic fibrosis', 'Abstract. Monogenic inheritance refers to genetic control of a phenotype or trait by a single gene. For a monogenic trait, mutations in one (dominant) or both (recessive) copies of the gene are sufficient for the trait to be expressed. Digenic inheritance refers to mutation on two genes interacting to cause a genetic phenotype or disease. Triallelic inheritance is a special case of digenic inheritance that requires homozygous mutations at one locus and heterozygous mutations at a second locus to express a phenotype.', 'A trait that is ... [1, 1, 0, 0, 0, ...]
    behavioral theory definition ["Not to be confused with Behavioralism. Behaviorism (or behaviourism) is an approach to psychology that focuses on an individual's behavior. It combines elements of philosophy, methodology, and psychological theory", 'The initial assumption is that behavior can be explained and further described using behavioral theories. For instance, John Watson and B.F. Skinner advocate the theory that behavior can be acquired through conditioning. Also known as general behavior theory. BEHAVIOR THEORY: Each behavioral theory is an advantage to learning, because it provides teachers with a new and different approach.. No related posts. ', 'behaviorism. noun be·hav·ior·ism. : a school of psychology that takes the objective evidence of behavior (as measured responses to stimuli) as the only concern of its research and the only basis of its theory without reference to conscious experience—compare cognitive psychology. : a school of psychology that takes the objective evidence of behavior (as measured ... [1, 0, 0, 0, 0, ...]
    What is a disease that is pleiotropic? ['Unsourced material may be challenged and removed. (September 2013). Pleiotropy occurs when one gene influences two or more seemingly unrelated phenotypic traits, an example being phenylketonuria, which is a human disease that affects multiple systems but is caused by one gene defect. Consequently, a mutation in a pleiotropic gene may have an effect on some or all traits simultaneously. The underlying mechanism is that the gene codes for a product that is, for example, used by various cells, or has a signaling function on various targets. A classic example of pleiotropy is the human disease phenylketonuria (PKU).', 'Pleiotropic, autosomal dominant disorder affecting connective tissue: Related Diseases. Pleiotropic, autosomal dominant disorder affecting connective tissue: Pleiotropic, autosomal dominant disorder affecting connective tissue is listed as a type of (or associated with) the following medical conditions in our database: 1 Heart conditions. Office of Rare Diseases (ORD) of ... [1, 0, 0, 0, 0, ...]
  • Loss: LambdaLoss with these parameters:
    {
        "weighing_scheme": "NDCGLoss2PPScheme",
        "k": 10,
        "sigma": 1.0,
        "eps": 1e-10,
        "pad_value": -1,
        "reduction": "mean",
        "reduction_log": "binary",
        "activation_fct": null
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 12
  • per_device_eval_batch_size: 64
  • torch_empty_cache_steps: 4000
  • learning_rate: 2e-05
  • num_train_epochs: 20
  • warmup_ratio: 0.1
  • seed: 12
  • bf16: True
  • load_best_model_at_end: True
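
A minimal sketch of how these non-default values translate into the cross-encoder training arguments; CrossEncoderTrainingArguments mirrors transformers.TrainingArguments, and the output directory below is a placeholder rather than the one used originally:

from sentence_transformers.cross_encoder import CrossEncoderTrainingArguments

args = CrossEncoderTrainingArguments(
    output_dir="reranker-msmarco-minilm-lambdaloss",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=12,
    per_device_eval_batch_size=64,
    torch_empty_cache_steps=4000,
    learning_rate=2e-5,
    num_train_epochs=20,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,
    load_best_model_at_end=True,
)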

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 12
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: 4000
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 20
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 12
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss NanoMSMARCO_ndcg@10 NanoNFCorpus_ndcg@10 NanoNQ_ndcg@10 NanoBEIR_mean_ndcg@10
-1 -1 - - 0.0622 (-0.4782) 0.2527 (-0.0724) 0.0100 (-0.4906) 0.1083 (-0.3471)
0.0002 1 0.4667 - - - - -
0.1721 1000 0.5941 - - - - -
0.3443 2000 0.5638 - - - - -
0.5164 3000 0.5211 - - - - -
0.6886 4000 0.5016 0.4713 0.4979 (-0.0425) 0.3085 (-0.0165) 0.5577 (+0.0571) 0.4547 (-0.0007)
0.8607 5000 0.4797 - - - - -
1.0329 6000 0.475 - - - - -
1.2050 7000 0.4607 - - - - -
1.3772 8000 0.4513 0.4492 0.5270 (-0.0134) 0.3316 (+0.0065) 0.6101 (+0.1095) 0.4896 (+0.0342)
1.5493 9000 0.4553 - - - - -
1.7215 10000 0.4537 - - - - -
1.8936 11000 0.4487 - - - - -
2.0658 12000 0.4363 0.4386 0.5697 (+0.0292) 0.3350 (+0.0099) 0.5967 (+0.0961) 0.5005 (+0.0451)
2.2379 13000 0.4234 - - - - -
2.4101 14000 0.4311 - - - - -
2.5822 15000 0.4229 - - - - -
2.7543 16000 0.4274 0.4376 0.5451 (+0.0046) 0.3277 (+0.0026) 0.5990 (+0.0983) 0.4906 (+0.0352)
2.9265 17000 0.4261 - - - - -
3.0986 18000 0.4108 - - - - -
3.2708 19000 0.3939 - - - - -
3.4429 20000 0.3906 0.451 0.5463 (+0.0059) 0.3447 (+0.0196) 0.6496 (+0.1490) 0.5135 (+0.0582)
3.6151 21000 0.3945 - - - - -
3.7872 22000 0.3957 - - - - -
3.9594 23000 0.4019 - - - - -
4.1315 24000 0.3621 0.4659 0.5391 (-0.0013) 0.3107 (-0.0144) 0.5793 (+0.0786) 0.4764 (+0.0210)
4.3037 25000 0.3567 - - - - -
4.4758 26000 0.3634 - - - - -
4.6480 27000 0.3676 - - - - -
4.8201 28000 0.3629 0.4475 0.5385 (-0.0020) 0.3258 (+0.0008) 0.5658 (+0.0651) 0.4767 (+0.0213)
4.9923 29000 0.3642 - - - - -
5.1644 30000 0.3205 - - - - -
5.3365 31000 0.3165 - - - - -
5.5087 32000 0.32 0.4795 0.4801 (-0.0603) 0.3407 (+0.0157) 0.5787 (+0.0781) 0.4665 (+0.0111)
5.6808 33000 0.3265 - - - - -
5.8530 34000 0.3262 - - - - -
6.0251 35000 0.3236 - - - - -
6.1973 36000 0.2724 0.5155 0.5205 (-0.0199) 0.3358 (+0.0108) 0.5770 (+0.0764) 0.4778 (+0.0224)
6.3694 37000 0.2786 - - - - -
6.5416 38000 0.2801 - - - - -
6.7137 39000 0.2893 - - - - -
6.8859 40000 0.2897 0.5202 0.4652 (-0.0752) 0.3174 (-0.0077) 0.5646 (+0.0639) 0.4491 (-0.0063)
7.0580 41000 0.2725 - - - - -
7.2302 42000 0.2358 - - - - -
7.4023 43000 0.2476 - - - - -
7.5745 44000 0.2462 0.5478 0.4854 (-0.0550) 0.3359 (+0.0108) 0.5429 (+0.0423) 0.4547 (-0.0006)
7.7466 45000 0.2528 - - - - -
7.9187 46000 0.2511 - - - - -
8.0909 47000 0.2319 - - - - -
8.2630 48000 0.208 0.6023 0.5022 (-0.0383) 0.3442 (+0.0191) 0.5476 (+0.0470) 0.4647 (+0.0093)
8.4352 49000 0.208 - - - - -
8.6073 50000 0.2135 - - - - -
8.7795 51000 0.2166 - - - - -
8.9516 52000 0.2191 0.5834 0.4787 (-0.0617) 0.3086 (-0.0165) 0.5523 (+0.0516) 0.4465 (-0.0088)
9.1238 53000 0.1904 - - - - -
9.2959 54000 0.1772 - - - - -
9.4681 55000 0.1819 - - - - -
9.6402 56000 0.1884 0.6514 0.4547 (-0.0857) 0.3002 (-0.0249) 0.5211 (+0.0205) 0.4253 (-0.0301)
9.8124 57000 0.1858 - - - - -
9.9845 58000 0.1923 - - - - -
10.1567 59000 0.1532 - - - - -
10.3288 60000 0.1545 0.6739 0.4811 (-0.0593) 0.3173 (-0.0077) 0.4580 (-0.0426) 0.4188 (-0.0365)
10.5009 61000 0.1555 - - - - -
10.6731 62000 0.1593 - - - - -
10.8452 63000 0.1595 - - - - -
11.0174 64000 0.1618 0.7444 0.4996 (-0.0408) 0.3049 (-0.0202) 0.5172 (+0.0165) 0.4406 (-0.0148)
11.1895 65000 0.129 - - - - -
11.3617 66000 0.1363 - - - - -
11.5338 67000 0.1354 - - - - -
11.7060 68000 0.1376 0.7677 0.4343 (-0.1061) 0.3166 (-0.0084) 0.5041 (+0.0035) 0.4184 (-0.0370)
11.8781 69000 0.1395 - - - - -
12.0503 70000 0.1308 - - - - -
12.2224 71000 0.1153 - - - - -
12.3946 72000 0.1193 0.7996 0.5052 (-0.0352) 0.3280 (+0.0029) 0.5315 (+0.0309) 0.4549 (-0.0005)
12.5667 73000 0.1231 - - - - -
12.7389 74000 0.1204 - - - - -
12.9110 75000 0.1206 - - - - -
13.0831 76000 0.1109 0.8810 0.4419 (-0.0985) 0.3067 (-0.0183) 0.5233 (+0.0227) 0.4240 (-0.0314)
13.2553 77000 0.1011 - - - - -
13.4274 78000 0.1038 - - - - -
13.5996 79000 0.104 - - - - -
13.7717 80000 0.1049 0.8477 0.4585 (-0.0819) 0.3006 (-0.0245) 0.5301 (+0.0294) 0.4297 (-0.0256)
13.9439 81000 0.1077 - - - - -
14.1160 82000 0.0965 - - - - -
14.2882 83000 0.0893 - - - - -
14.4603 84000 0.0944 0.8906 0.4366 (-0.1038) 0.3126 (-0.0125) 0.5211 (+0.0204) 0.4234 (-0.0319)
14.6325 85000 0.0917 - - - - -
14.8046 86000 0.0976 - - - - -
14.9768 87000 0.0951 - - - - -
15.1489 88000 0.0844 0.9619 0.4702 (-0.0703) 0.3029 (-0.0221) 0.5251 (+0.0245) 0.4327 (-0.0226)
15.3211 89000 0.0817 - - - - -
15.4932 90000 0.0813 - - - - -
15.6653 91000 0.0827 - - - - -
15.8375 92000 0.0846 0.9694 0.4621 (-0.0783) 0.2904 (-0.0346) 0.5199 (+0.0192) 0.4241 (-0.0312)
16.0096 93000 0.085 - - - - -
16.1818 94000 0.0765 - - - - -
16.3539 95000 0.0741 - - - - -
16.5261 96000 0.0759 0.9809 0.4631 (-0.0773) 0.3001 (-0.0250) 0.4871 (-0.0136) 0.4167 (-0.0386)
16.6982 97000 0.0771 - - - - -
16.8704 98000 0.0779 - - - - -
17.0425 99000 0.0728 - - - - -
17.2147 100000 0.0654 1.0386 0.4475 (-0.0929) 0.3072 (-0.0179) 0.5297 (+0.0291) 0.4281 (-0.0272)
17.3868 101000 0.0698 - - - - -
17.5590 102000 0.0654 - - - - -
17.7311 103000 0.0703 - - - - -
17.9033 104000 0.0667 1.0807 0.4435 (-0.0970) 0.2996 (-0.0254) 0.5155 (+0.0149) 0.4195 (-0.0358)
18.0754 105000 0.0695 - - - - -
18.2475 106000 0.0637 - - - - -
18.4197 107000 0.0614 - - - - -
18.5918 108000 0.0641 1.0612 0.4309 (-0.1096) 0.2955 (-0.0296) 0.5077 (+0.0070) 0.4113 (-0.0440)
18.7640 109000 0.0647 - - - - -
18.9361 110000 0.0589 - - - - -
19.1083 111000 0.0619 - - - - -
19.2804 112000 0.0612 1.0888 0.4514 (-0.0890) 0.2999 (-0.0251) 0.5098 (+0.0092) 0.4204 (-0.0350)
19.4526 113000 0.0611 - - - - -
19.6247 114000 0.0591 - - - - -
19.7969 115000 0.0574 - - - - -
19.9690 116000 0.0597 1.0934 0.4511 (-0.0893) 0.2993 (-0.0257) 0.5122 (+0.0115) 0.4209 (-0.0345)
-1 -1 - - 0.5463 (+0.0059) 0.3447 (+0.0196) 0.6496 (+0.1490) 0.5135 (+0.0582)
  • The row at epoch 3.4429 (step 20000) denotes the saved checkpoint.

Framework Versions

  • Python: 3.10.13
  • Sentence Transformers: 3.5.0.dev0
  • Transformers: 4.48.1
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

LambdaLoss

@inproceedings{wang2018lambdaloss,
    title={The LambdaLoss Framework for Ranking Metric Optimization},
    author={Wang, Xuanhui and Li, Cheng and Golbandi, Nadav and Bendersky, Michael and Najork, Marc},
    booktitle={Proceedings of the 27th ACM International Conference on Information and Knowledge Management},
    pages={1313--1322},
    year={2018}
}