Biomedical MRL

This is a sentence-transformers model trained with Matryoshka Representation Learning (MRL) on biomedical question-passage pairs. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Number of Parameters: ~109M (F32)
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
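
The trailing Normalize() module L2-normalizes every embedding, so dot-product and cosine similarity coincide for this model. A minimal sketch to verify (the example phrase is arbitrary):

from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("potsu-potsu/bge-base-mrl-train40k")
# Every embedding has unit L2 norm thanks to the Normalize() module.
embedding = model.encode(["renal cell carcinoma"])
print(np.linalg.norm(embedding, axis=1))  # ~[1.0]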

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("potsu-potsu/bge-base-mrl-train40k")
# Run inference
sentences = [
    'What is known about the Digit Ratio (2D:4D) cancer?',
    'BACKGROUND: The ratio of the lengths of index and ring fingers (2D:4D) is a \nmarker of prenatal exposure to sex hormones, with low 2D:4D being indicative of \nhigh prenatal androgen action. Recent studies have reported a strong association \nbetween 2D:4D and risk of prostate cancer.\nMETHODS: A total of 6258 men participating in the Melbourne Collaborative Cohort \nStudy had 2D:4D assessed. Of these men, we identified 686 incident prostate \ncancer cases. Hazard ratios (HRs) and confidence intervals (CIs) were estimated \nfor a standard deviation increase in 2D:4D.\nRESULTS: No association was observed between 2D:4D and prostate cancer risk \noverall (HRs 1.00; 95% CIs, 0.92-1.08 for right, 0.93-1.08 for left). We \nobserved a weak inverse association between 2D:4D and risk of prostate cancer \nfor age <60, however 95% CIs included unity for all observed ages.\nCONCLUSION: Our results are not consistent with an association between 2D:4D and \noverall prostate cancer risk, but we cannot exclude a weak inverse association \nbetween 2D:4D and early onset prostate cancer risk.',
    "Proteins undergo conformational changes during their biological function. As \nsuch, a high-resolution structure of a protein's resting conformation provides a \nstarting point for elucidating its reaction mechanism, but provides no direct \ninformation concerning the protein's conformational dynamics. Several X-ray \nmethods have been developed to elucidate those conformational changes that occur \nduring a protein's reaction, including time-resolved Laue diffraction and \nintermediate trapping studies on three-dimensional protein crystals, and \ntime-resolved wide-angle X-ray scattering and X-ray absorption studies on \nproteins in the solution phase. This review emphasizes the scope and limitations \nof these complementary experimental approaches when seeking to understand \nprotein conformational dynamics. These methods are illustrated using a limited \nset of examples including myoglobin and haemoglobin in complex with carbon \nmonoxide, the simple light-driven proton pump bacteriorhodopsin, and the \nsuperoxide scavenger superoxide reductase. In conclusion, likely future \ndevelopments of these methods at synchrotron X-ray sources and the potential \nimpact of emerging X-ray free-electron laser facilities are speculated upon.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
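
Because the model was trained with MatryoshkaLoss, embeddings can also be truncated to 512, 256, 128, or 64 dimensions at a modest cost in quality (see the Evaluation section below). A minimal sketch using the truncate_dim argument from Sentence Transformers:

from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings.
model_256 = SentenceTransformer("potsu-potsu/bge-base-mrl-train40k", truncate_dim=256)
embeddings = model_256.encode(sentences)  # `sentences` as defined above
print(embeddings.shape)
# [3, 256]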

Evaluation

Metrics

The five tables below report InformationRetrievalEvaluator results at each Matryoshka embedding dimension (768, 512, 256, 128, and 64, in that order), matching the dim_768 through dim_64 columns in the Training Logs.

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.7397
cosine_accuracy@3 0.8472
cosine_accuracy@5 0.8925
cosine_accuracy@10 0.9293
cosine_precision@1 0.7397
cosine_precision@3 0.6058
cosine_precision@5 0.5296
cosine_precision@10 0.411
cosine_recall@1 0.2276
cosine_recall@3 0.3939
cosine_recall@5 0.4954
cosine_recall@10 0.6262
cosine_ndcg@10 0.7037
cosine_mrr@10 0.8042
cosine_map@100 0.65

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.7327
cosine_accuracy@3 0.843
cosine_accuracy@5 0.8883
cosine_accuracy@10 0.9151
cosine_precision@1 0.7327
cosine_precision@3 0.5964
cosine_precision@5 0.5279
cosine_precision@10 0.4099
cosine_recall@1 0.2192
cosine_recall@3 0.3867
cosine_recall@5 0.4915
cosine_recall@10 0.623
cosine_ndcg@10 0.6971
cosine_mrr@10 0.7969
cosine_map@100 0.6403

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.7228
cosine_accuracy@3 0.8373
cosine_accuracy@5 0.8769
cosine_accuracy@10 0.9109
cosine_precision@1 0.7228
cosine_precision@3 0.5893
cosine_precision@5 0.5132
cosine_precision@10 0.4048
cosine_recall@1 0.2165
cosine_recall@3 0.3844
cosine_recall@5 0.4707
cosine_recall@10 0.6082
cosine_ndcg@10 0.6857
cosine_mrr@10 0.7889
cosine_map@100 0.6255

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.7072
cosine_accuracy@3 0.8076
cosine_accuracy@5 0.8458
cosine_accuracy@10 0.8967
cosine_precision@1 0.7072
cosine_precision@3 0.5606
cosine_precision@5 0.4877
cosine_precision@10 0.3819
cosine_recall@1 0.2132
cosine_recall@3 0.3572
cosine_recall@5 0.4428
cosine_recall@10 0.5764
cosine_ndcg@10 0.652
cosine_mrr@10 0.7681
cosine_map@100 0.5861

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.6436
cosine_accuracy@3 0.7666
cosine_accuracy@5 0.8048
cosine_accuracy@10 0.8416
cosine_precision@1 0.6436
cosine_precision@3 0.5116
cosine_precision@5 0.4501
cosine_precision@10 0.3511
cosine_recall@1 0.1851
cosine_recall@3 0.3181
cosine_recall@5 0.3926
cosine_recall@10 0.5118
cosine_ndcg@10 0.5894
cosine_mrr@10 0.7115
cosine_map@100 0.5197
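
Each table can be reproduced with the InformationRetrievalEvaluator at the corresponding truncation dimension. A minimal sketch; the queries, corpus, and relevant_docs shown here are hypothetical stand-ins for the actual evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("potsu-potsu/bge-base-mrl-train40k")

# Hypothetical evaluation data: query/document ids mapped to texts, plus relevance judgments.
queries = {"q1": "What is known about the Digit Ratio (2D:4D) cancer?"}
corpus = {"d1": "BACKGROUND: The ratio of the lengths of index and ring fingers (2D:4D) ..."}
relevant_docs = {"q1": {"d1"}}

# One evaluator per Matryoshka dimension, mirroring the dim_* evaluator names in the logs.
for dim in [768, 512, 256, 128, 64]:
    evaluator = InformationRetrievalEvaluator(
        queries, corpus, relevant_docs, name=f"dim_{dim}", truncate_dim=dim
    )
    print(evaluator(model))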

Training Details

Training Dataset

Unnamed Dataset

  • Size: 40,482 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min 6, mean 16.0, max 32 tokens
    • positive: string; min 4, mean 287.89, max 512 tokens
  • Samples:
    • anchor: What is the implication of histone lysine methylation in medulloblastoma?
      positive: Aberrant patterns of H3K4, H3K9, and H3K27 histone lysine methylation were shown to result in histone code alterations, which induce changes in gene expression, and affect the proliferation rate of cells in medulloblastoma.
    • anchor: What is the implication of histone lysine methylation in medulloblastoma?
      positive: Recent studies showed frequent mutations in histone H3 lysine 27 (H3K27) demethylases in medulloblastomas of Group 3 and Group 4, suggesting a role for H3K27 methylation in these tumors. Indeed, trimethylated H3K27 (H3K27me3) levels were shown to be higher in Group 3 and 4 tumors compared to WNT and SHH medulloblastomas, also in tumors without detectable mutations in demethylases. Here, we report that polycomb genes, required for H3K27 methylation, are consistently upregulated in Group 3 and 4 tumors. These tumors show high expression of the homeobox transcription factor OTX2. Silencing of OTX2 in D425 medulloblastoma cells resulted in downregulation of polycomb genes such as EZH2, EED, SUZ12 and RBBP4 and upregulation of H3K27 demethylases KDM6A, KDM6B, JARID2 and KDM7A. This was accompanied by decreased H3K27me3 and increased H3K27me1 levels in promoter regions. Strikingly, the decrease of H3K27me3 was most prominent in promoters that bind OTX2. OTX2-bound promoters showe...
    • anchor: What is the implication of histone lysine methylation in medulloblastoma?
      positive: We used high-resolution SNP genotyping to identify regions of genomic gain and loss in the genomes of 212 medulloblastomas, malignant pediatric brain tumors. We found focal amplifications of 15 known oncogenes and focal deletions of 20 known tumor suppressor genes (TSG), most not previously implicated in medulloblastoma. Notably, we identified previously unknown amplifications and homozygous deletions, including recurrent, mutually exclusive, highly focal genetic events in genes targeting histone lysine methylation, particularly that of histone 3, lysine 9 (H3K9). Post-translational modification of histone proteins is critical for regulation of gene expression, can participate in determination of stem cell fates and has been implicated in carcinogenesis. Consistent with our genetic data, restoration of expression of genes controlling H3K9 methylation greatly diminishes proliferation of medulloblastoma in vitro. Copy number aberrations of genes with critical roles in writing...
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
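
This configuration corresponds to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss, so the in-batch contrastive loss is applied at every truncation dimension with equal weight. A minimal sketch (the base checkpoint name is a placeholder; this card does not state which model was fine-tuned):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("base-model-name")  # placeholder for the unstated base checkpoint
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # train on all dimensions at every step
)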
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
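
Expressed in code, these non-default values map onto SentenceTransformerTrainingArguments roughly as follows (a sketch; output_dir is a placeholder):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    eval_strategy="epoch",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no duplicate samples within a batch
)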

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_768_cosine_ndcg@10 dim_512_cosine_ndcg@10 dim_256_cosine_ndcg@10 dim_128_cosine_ndcg@10 dim_64_cosine_ndcg@10
0.1264 10 65.1116 - - - - -
0.2528 20 52.0541 - - - - -
0.3791 30 36.0158 - - - - -
0.5055 40 26.0258 - - - - -
0.6319 50 24.2254 - - - - -
0.7583 60 21.8763 - - - - -
0.8847 70 18.0685 - - - - -
1.0 80 17.7443 0.7094 0.7054 0.6895 0.6487 0.5783
1.1264 90 14.5363 - - - - -
1.2528 100 14.1097 - - - - -
1.3791 110 13.5251 - - - - -
1.5055 120 13.3574 - - - - -
1.6319 130 13.3079 - - - - -
1.7583 140 12.926 - - - - -
1.8847 150 12.0388 - - - - -
2.0 160 10.9161 0.7063 0.7005 0.6880 0.6514 0.5886
2.1264 170 10.7059 - - - - -
2.2528 180 10.1178 - - - - -
2.3791 190 10.4664 - - - - -
2.5055 200 10.4824 - - - - -
2.6319 210 10.2784 - - - - -
2.7583 220 9.2031 - - - - -
2.8847 230 8.9788 - - - - -
3.0 240 7.5905 0.7027 0.6964 0.6855 0.6515 0.5881
3.1264 250 8.4637 - - - - -
3.2528 260 9.4921 - - - - -
3.3791 270 9.0615 - - - - -
3.5055 280 9.0181 - - - - -
3.6319 290 8.6193 - - - - -
3.7583 300 8.3741 - - - - -
3.8847 310 8.9504 - - - - -
4.0 320 7.4761 0.7037 0.6971 0.6857 0.652 0.5894
  • The epoch 4.0 (step 320) row denotes the saved checkpoint; its cosine_ndcg@10 values match the Evaluation section above.

Framework Versions

  • Python: 3.12.6
  • Sentence Transformers: 4.1.0
  • Transformers: 4.52.4
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.7.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}