MedEmbed Biomedical MRL

This is a sentence-transformers model trained with Matryoshka Representation Learning on 36,470 biomedical question-passage pairs. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
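
The Pooling module uses the CLS token embedding (pooling_mode_cls_token), and the final Normalize() module rescales each vector to unit length, so cosine similarity and dot product yield the same rankings. A minimal sketch to confirm the normalization (the model ID is taken from the Usage section below):

from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("potsu-potsu/medembed-base-mrl-train36k")
emb = model.encode(["CLS pooling and normalization check"])

# Normalize() makes every embedding unit-length, so the L2 norm is ~1.0
# and cosine similarity reduces to a plain dot product.
print(emb.shape)                 # (1, 768)
print(np.linalg.norm(emb[0]))    # ~1.0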

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("potsu-potsu/medembed-base-mrl-train36k")
# Run inference
sentences = [
    'What is Pseudomelanosis duodeni?',
    'Pseudomelanosis duodeni is a rare condition in which dark pigment accumulates in \nmacrophages located in the lamina propria of the duodenal mucosa. Three cases \nare reported here and the literature is reviewed. No clinical association can be \nfound that points clearly to the underlying etiology. Electron probe x-ray \nmicroanalysis was used to study the pigment in macrophage granules in 2 of our \npatients and demonstrated high iron and sulfur content. Iron accumulation in \nferritinlike particles was detected in absorptive cell lysosomes. A possible \nmechanism for the accumulation of absorbed iron by macrophages is considered.',
    'This year marks the 100th anniversary of the deadliest event in human history. \nIn 1918-1919, pandemic influenza appeared nearly simultaneously around the globe \nand caused extraordinary mortality (an estimated 50-100 million deaths) \nassociated with unexpected clinical and epidemiological features. The \ndescendants of the 1918 virus remain today; as endemic influenza viruses, they \ncause significant mortality each year. Although the ability to predict influenza \npandemics remains no better than it was a century ago, numerous scientific \nadvances provide an important head start in limiting severe disease and death \nfrom both current and future influenza viruses: identification and substantial \ncharacterization of the natural history and pathogenesis of the 1918 causative \nvirus itself, as well as hundreds of its viral descendants; development of \nmoderately effective vaccines; improved diagnosis and treatment of \ninfluenza-associated pneumonia; and effective prevention and control measures. \nRemaining challenges include development of vaccines eliciting significantly \nbroader protection (against antigenically different influenza viruses) that can \nprevent or significantly downregulate viral replication; more complete \ncharacterization of natural history and pathogenesis emphasizing the protective \nrole of mucosal immunity; and biomarkers of impending influenza-associated \npneumonia.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
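
Because the model is trained with MatryoshkaLoss (see Training Details), its embeddings can also be truncated to 512, 256, 128, or 64 dimensions, trading a modest drop in retrieval quality (see Evaluation) for smaller vectors. A minimal sketch using the truncate_dim argument; 256 is just one of the trained dimensions:

from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns only the first 256 dimensions
model = SentenceTransformer("potsu-potsu/medembed-base-mrl-train36k", truncate_dim=256)

embeddings = model.encode([
    "What is Pseudomelanosis duodeni?",
    "Pseudomelanosis duodeni is a rare condition in which dark pigment accumulates in macrophages of the duodenal mucosa.",
])
print(embeddings.shape)
# (2, 256)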

Evaluation

Metrics

Information Retrieval

Metrics are reported at each Matryoshka embedding dimension (cosine similarity on embeddings truncated to the first N dimensions):

Metric dim_768 dim_512 dim_256 dim_128 dim_64
cosine_accuracy@1 0.7341 0.7284 0.7100 0.6931 0.6294
cosine_accuracy@3 0.8515 0.8359 0.8331 0.8091 0.7638
cosine_accuracy@5 0.8911 0.8868 0.8755 0.8458 0.7935
cosine_accuracy@10 0.9279 0.9180 0.9066 0.8911 0.8458
cosine_precision@1 0.7341 0.7284 0.7100 0.6931 0.6294
cosine_precision@3 0.6073 0.5997 0.5860 0.5582 0.5144
cosine_precision@5 0.5270 0.5267 0.5092 0.4885 0.4450
cosine_precision@10 0.4123 0.4078 0.4023 0.3767 0.3487
cosine_recall@1 0.2179 0.2181 0.2135 0.2057 0.1813
cosine_recall@3 0.3935 0.3851 0.3792 0.3584 0.3196
cosine_recall@5 0.4890 0.4866 0.4684 0.4425 0.3905
cosine_recall@10 0.6294 0.6179 0.6041 0.5665 0.5098
cosine_ndcg@10 0.7018 0.6935 0.6795 0.6445 0.5848
cosine_mrr@10 0.8017 0.7944 0.7786 0.7607 0.7038
cosine_map@100 0.6443 0.6375 0.6206 0.5817 0.5145
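
These five metric columns come from evaluating the same retrieval task at each Matryoshka dimension. A hedged sketch of how such figures can be reproduced with sentence-transformers' InformationRetrievalEvaluator; the queries, corpus, and relevance judgments below are tiny placeholders, not the actual evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Evaluate at a single truncated dimension (repeat with 512/256/128/64)
model = SentenceTransformer("potsu-potsu/medembed-base-mrl-train36k", truncate_dim=768)

# Placeholder data: query id -> text, doc id -> text, query id -> set of relevant doc ids
queries = {"q1": "What is Pseudomelanosis duodeni?"}
corpus = {
    "d1": "Pseudomelanosis duodeni is a rare condition in which dark pigment accumulates ...",
    "d2": "This year marks the 100th anniversary of the deadliest event in human history ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_768")
results = evaluator(model)
print(results)  # includes cosine_ndcg@10, cosine_mrr@10, cosine_map@100, ...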

Training Details

Training Dataset

Unnamed Dataset

  • Size: 36,470 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6 tokens, mean 15.86 tokens, max 32 tokens
    • positive (string): min 31 tokens, mean 316.54 tokens, max 512 tokens
  • Samples:
    anchor: What is the implication of histone lysine methylation in medulloblastoma?
    positive: Recent studies showed frequent mutations in histone H3 lysine 27 (H3K27)
    demethylases in medulloblastomas of Group 3 and Group 4, suggesting a role for
    H3K27 methylation in these tumors. Indeed, trimethylated H3K27 (H3K27me3) levels
    were shown to be higher in Group 3 and 4 tumors compared to WNT and SHH
    medulloblastomas, also in tumors without detectable mutations in demethylases.
    Here, we report that polycomb genes, required for H3K27 methylation, are
    consistently upregulated in Group 3 and 4 tumors. These tumors show high
    expression of the homeobox transcription factor OTX2. Silencing of OTX2 in D425
    medulloblastoma cells resulted in downregulation of polycomb genes such as EZH2,
    EED, SUZ12 and RBBP4 and upregulation of H3K27 demethylases KDM6A, KDM6B, JARID2
    and KDM7A. This was accompanied by decreased H3K27me3 and increased H3K27me1
    levels in promoter regions. Strikingly, the decrease of H3K27me3 was most
    prominent in promoters that bind OTX2. OTX2-bound promoters showe...
    anchor: What is the implication of histone lysine methylation in medulloblastoma?
    positive: We used high-resolution SNP genotyping to identify regions of genomic gain and
    loss in the genomes of 212 medulloblastomas, malignant pediatric brain tumors.
    We found focal amplifications of 15 known oncogenes and focal deletions of 20
    known tumor suppressor genes (TSG), most not previously implicated in
    medulloblastoma. Notably, we identified previously unknown amplifications and
    homozygous deletions, including recurrent, mutually exclusive, highly focal
    genetic events in genes targeting histone lysine methylation, particularly that
    of histone 3, lysine 9 (H3K9). Post-translational modification of histone
    proteins is critical for regulation of gene expression, can participate in
    determination of stem cell fates and has been implicated in carcinogenesis.
    Consistent with our genetic data, restoration of expression of genes controlling
    H3K9 methylation greatly diminishes proliferation of medulloblastoma in vitro.
    Copy number aberrations of genes with critical roles in writing...
    anchor: What is the implication of histone lysine methylation in medulloblastoma?
    positive: Recent sequencing efforts have described the mutational landscape of the
    pediatric brain tumor medulloblastoma. Although MLL2 is among the most frequent
    somatic single nucleotide variants (SNV), the clinical and biological
    significance of these mutations remains uncharacterized. Through targeted
    re-sequencing, we identified mutations of MLL2 in 8 % (14/175) of MBs, the
    majority of which were loss of function. Notably, we also report mutations
    affecting the MLL2-binding partner KDM6A, in 4 % (7/175) of tumors. While MLL2
    mutations were independent of age, gender, histological subtype, M-stage or
    molecular subgroup, KDM6A mutations were most commonly identified in Group 4
    MBs, and were mutually exclusive with MLL2 mutations. Immunohistochemical
    staining for H3K4me3 and H3K27me3, the chromatin effectors of MLL2 and KDM6A
    activity, respectively, demonstrated alterations of the histone code in 24 %
    (53/220) of MBs across all subgroups. Correlating these MLL2- and KDM6A-driven
    h...
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
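
A hedged sketch of how an anchor/positive dataset and this loss configuration map onto the sentence-transformers API; the two rows and the model ID are illustrative, and the base checkpoint actually used for fine-tuning is not named in this card:

from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("potsu-potsu/medembed-base-mrl-train36k")

# Abbreviated anchor/positive rows in the same two-column format as the dataset above
train_dataset = Dataset.from_dict({
    "anchor": [
        "What is the implication of histone lysine methylation in medulloblastoma?",
        "What is Pseudomelanosis duodeni?",
    ],
    "positive": [
        "Recent studies showed frequent mutations in histone H3 lysine 27 (H3K27) demethylases ...",
        "Pseudomelanosis duodeni is a rare condition in which dark pigment accumulates ...",
    ],
})

# MultipleNegativesRankingLoss treats the other in-batch positives as negatives;
# MatryoshkaLoss applies it at each truncated dimension with equal weight.
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[768, 512, 256, 128, 64])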
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
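
A hedged sketch of how these non-default values translate into SentenceTransformerTrainingArguments; the output_dir is a placeholder, and save_strategy="epoch" is an assumption added so that load_best_model_at_end has matching save and eval strategies:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/medembed-base-mrl",      # placeholder path
    eval_strategy="epoch",
    save_strategy="epoch",                      # assumed; required by load_best_model_at_end
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate texts within a batch
)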

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_768_cosine_ndcg@10 dim_512_cosine_ndcg@10 dim_256_cosine_ndcg@10 dim_128_cosine_ndcg@10 dim_64_cosine_ndcg@10
0.1404 10 74.2852 - - - - -
0.2807 20 53.9914 - - - - -
0.4211 30 37.5251 - - - - -
0.5614 40 28.6889 - - - - -
0.7018 50 23.8582 - - - - -
0.8421 60 21.6883 - - - - -
0.9825 70 19.7715 - - - - -
1.0 72 - 0.7030 0.6976 0.6855 0.6427 0.5803
1.1123 80 16.5108 - - - - -
1.2526 90 16.5154 - - - - -
1.3930 100 14.3628 - - - - -
1.5333 110 15.1679 - - - - -
1.6737 120 13.5316 - - - - -
1.8140 130 12.5184 - - - - -
1.9544 140 13.1961 - - - - -
2.0 144 - 0.7011 0.6923 0.6803 0.6459 0.5873
2.0842 150 11.1752 - - - - -
2.2246 160 10.771 - - - - -
2.3649 170 11.0394 - - - - -
2.5053 180 10.0241 - - - - -
2.6456 190 10.862 - - - - -
2.7860 200 10.39 - - - - -
2.9263 210 10.6967 - - - - -
3.0 216 - 0.7014 0.6936 0.6805 0.6450 0.5824
3.0561 220 9.2254 - - - - -
3.1965 230 9.7925 - - - - -
3.3368 240 9.6484 - - - - -
3.4772 250 9.4891 - - - - -
3.6175 260 9.5589 - - - - -
3.7579 270 8.773 - - - - -
3.8982 280 9.3302 - - - - -
4.0 288 - 0.7018 0.6935 0.6795 0.6445 0.5848
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.12.6
  • Sentence Transformers: 4.1.0
  • Transformers: 4.52.4
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.7.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}