SentenceTransformer based on NeuML/pubmedbert-base-embeddings

This is a sentence-transformers model finetuned from NeuML/pubmedbert-base-embeddings on the geo_70k_multiplets_natural_language_annotation dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: NeuML/pubmedbert-base-embeddings
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: geo_70k_multiplets_natural_language_annotation

Model Sources

  • Documentation: https://www.sbert.net
  • Model: https://huggingface.co/jo-mengr/mmcontext-pubmedbert-geneformer-70k

Full Model Architecture

SentenceTransformer(
  (0): MMContextEncoder(
    (text_encoder): BertModel(
      (embeddings): BertEmbeddings(
        (word_embeddings): Embedding(30522, 768, padding_idx=0)
        (position_embeddings): Embedding(512, 768)
        (token_type_embeddings): Embedding(2, 768)
        (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
        (dropout): Dropout(p=0.1, inplace=False)
      )
      (encoder): BertEncoder(
        (layer): ModuleList(
          (0-11): 12 x BertLayer(
            (attention): BertAttention(
              (self): BertSdpaSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): BertSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): BertIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): BertOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
        )
      )
      (pooler): BertPooler(
        (dense): Linear(in_features=768, out_features=768, bias=True)
        (activation): Tanh()
      )
    )
    (text_adapter): AdapterModule(
      (net): Sequential(
        (0): Linear(in_features=768, out_features=512, bias=True)
        (1): ReLU(inplace=True)
        (2): Linear(in_features=512, out_features=1024, bias=True)
        (3): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (pooling): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
    (omics_adapter): AdapterModule(
      (net): Sequential(
        (0): Linear(in_features=512, out_features=512, bias=True)
        (1): ReLU(inplace=True)
        (2): Linear(in_features=512, out_features=1024, bias=True)
        (3): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (omics_encoder): MiniOmicsModel(
      (embeddings): Embedding(68813, 512, padding_idx=0)
    )
  )
)
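
The 768-dimensional PubMedBERT text encoder is projected to 1024 dimensions by the text_adapter, with mean-token pooling (per the Pooling config above) producing the final sentence embedding; omics samples, referenced via sample_idx strings, are looked up in the MiniOmicsModel embedding table and projected to the same 1024-dimensional space by the omics_adapter. A minimal sketch to verify the output dimensionality after loading:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-geneformer-70k")
print(model)                                     # prints the architecture shown above
print(model.get_sentence_embedding_dimension())  # 1024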

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-geneformer-70k")
# Run inference
sentences = [
    'sample_idx:SRX3989281',
    'This measurement was conducted with Illumina HiSeq 2500. 2435 is a cell line of coronary artery smooth muscle cells cultured in vitro.',
    'This measurement was conducted with Illumina HiSeq 2500. UM-UC18 bladder cancer cell line, a type of urinary bladder cancer cell line, cultured for study of bladder disease, cancer cell proliferation, and neoplasm.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.4324, 0.4722],
#         [0.4324, 1.0000, 0.9796],
#         [0.4722, 0.9796, 1.0000]])
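
Because anchors are sample_idx strings that the omics encoder resolves to omics embeddings, the model supports cross-modal retrieval: embed a sample index as the query and rank candidate text annotations by cosine similarity. A minimal sketch, reusing the sentences above as an illustrative candidate pool:

import torch
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-geneformer-70k")

# Query by omics sample reference; candidates are annotation strings.
query = "sample_idx:SRX3989281"
candidates = [
    "This measurement was conducted with Illumina HiSeq 2500. 2435 is a cell line of coronary artery smooth muscle cells cultured in vitro.",
    "This measurement was conducted with Illumina HiSeq 2500. UM-UC18 bladder cancer cell line, a type of urinary bladder cancer cell line, cultured for study of bladder disease, cancer cell proliferation, and neoplasm.",
]

query_emb = model.encode([query])
cand_emb = model.encode(candidates)

scores = model.similarity(query_emb, cand_emb)  # 1 x 2 tensor of cosine similarities
best = int(torch.argmax(scores, dim=1).item())
print(candidates[best])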

Evaluation

Metrics

Triplet

  • Dataset: geo_70k_multiplets_natural_language_annotation_cell_sentence_1
  • Evaluated with TripletEvaluator
    Metric           Value
    cosine_accuracy  0.4553
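
The accuracy above is the fraction of triplets in which the anchor is closer (by cosine similarity) to the positive than to the negative. A sketch of how such a score can be computed with TripletEvaluator, using a single illustrative triplet drawn from the evaluation samples below (the reported 0.4553 was computed over the full cell_sentence_1 split):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-geneformer-70k")

# One illustrative triplet; the real evaluation uses the whole split.
evaluator = TripletEvaluator(
    anchors=["sample_idx:SRX2244363"],
    positives=["This measurement was conducted with Illumina HiSeq 2000. 15-year-old male HepG2 immortalized cell line with hepatocellular carcinoma, transiently expressing shRNA targeting PKM2 for RNA-seq study."],
    negatives=["This measurement was conducted with Illumina HiSeq 2000. 15-year-old male patient with hepatocellular carcinoma; HNRNPC knocked down via shRNA in HepG2 (immortalized cell line) for RNA-seq analysis."],
    name="geo_70k_demo",
)
print(evaluator(model))  # {'geo_70k_demo_cosine_accuracy': ...}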

Training Details

Training Dataset

geo_70k_multiplets_natural_language_annotation

  • Dataset: geo_70k_multiplets_natural_language_annotation at 4c62cd1
  • Size: 61,911 training samples
  • Columns: anchor, positive, negative_1, and negative_2
  • Approximate statistics based on the first 1000 samples:

    Column      Type    Min            Mean              Max
    anchor      string  20 characters  20.29 characters  21 characters
    positive    string  16 tokens      39.49 tokens      188 tokens
    negative_1  string  18 tokens      37.59 tokens      114 tokens
    negative_2  string  20 characters  20.1 characters   21 characters
  • Samples:
    • anchor: sample_idx:SRX083304
      positive: This measurement was conducted with Illumina HiSeq 2000. 5-day HeLa cell line with ELAVL1/HuR siRNA1 knockdown, 120 hours post-transfection.
      negative_1: This measurement was conducted with Illumina HiSeq 2000. BJ fibroblast cells in a proliferative stage, with polyA RNA subtype.
      negative_2: sample_idx:SRX105303
    • anchor: sample_idx:SRX105302
      positive: This measurement was conducted with Illumina HiSeq 2000. BJ fibroblast cells in a proliferative stage, with polyA RNA subtype.
      negative_1: This measurement was conducted with Illumina HiSeq 2000. 5-day HeLa cell line with ELAVL1/HuR siRNA1 knockdown, 120 hours post-transfection.
      negative_2: sample_idx:SRX105303
    • anchor: sample_idx:SRX105303
      positive: This measurement was conducted with Illumina HiSeq 2000. BJ fibroblast cells at a confluent growth stage, with polyA RNA subtype.
      negative_1: This measurement was conducted with Illumina HiSeq 2000. 5-day HeLa cell line with ELAVL1/HuR siRNA1 knockdown, 120 hours post-transfection.
      negative_2: sample_idx:SRX105302
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    
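The dataset and loss above can be reconstructed as follows. This is a sketch: the dataset repository name jo-mengr/geo_70k_multiplets_natural_language_annotation is an assumption inferred from the model namespace, while the revision and loss parameters are those listed above.

from datasets import load_dataset
from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-geneformer-70k")

# Assumed dataset repository (revision 4c62cd1 as listed above); adjust if hosted elsewhere.
train_ds = load_dataset(
    "jo-mengr/geo_70k_multiplets_natural_language_annotation",
    split="train",
    revision="4c62cd1",
)

# Loss with the parameters listed above (cosine similarity, scale 20.0).
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)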

Evaluation Dataset

geo_70k_multiplets_natural_language_annotation

  • Dataset: geo_70k_multiplets_natural_language_annotation at 4c62cd1
  • Size: 6,901 evaluation samples
  • Columns: anchor, positive, negative_1, and negative_2
  • Approximate statistics based on the first 1000 samples:

    Column      Type    Min            Mean              Max
    anchor      string  20 characters  21.35 characters  22 characters
    positive    string  16 tokens      41.2 tokens       210 tokens
    negative_1  string  19 tokens      44.09 tokens      178 tokens
    negative_2  string  21 characters  21.04 characters  22 characters
  • Samples:
    • anchor: sample_idx:SRX2244363
      positive: This measurement was conducted with Illumina HiSeq 2000. 15-year-old male HepG2 immortalized cell line with hepatocellular carcinoma, transiently expressing shRNA targeting PKM2 for RNA-seq study.
      negative_1: This measurement was conducted with Illumina HiSeq 2000. 15-year-old male patient with hepatocellular carcinoma; HNRNPC knocked down via shRNA in HepG2 (immortalized cell line) for RNA-seq analysis.
      negative_2: sample_idx:SRX5457055
    • anchor: sample_idx:SRX3136447
      positive: This measurement was conducted with Illumina HiSeq 2000. 16-year-old female's T cells from a control group, stimulated with ag85 at timepoint 0, and primary cells.
      negative_1: This measurement was conducted with Illumina HiSeq 2000. 17-year-old male's monocytes stimulated with mTb, taken at 180 days post-stimulation, as part of the control group in a study.
      negative_2: sample_idx:SRX3137689
    • anchor: sample_idx:SRX2734845
      positive: This measurement was conducted with Illumina HiSeq 2500. UM-UC18 bladder cancer cell line, a type of urinary bladder cancer cell line, cultured for study of bladder disease, cancer cell proliferation, and neoplasm.
      negative_1: This measurement was conducted with NextSeq 500. HeLa cells with PARP knockdown treatment.
      negative_2: sample_idx:SRX3130770
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 256
  • learning_rate: 0.05
  • num_train_epochs: 4
  • warmup_ratio: 0.1
  • bf16: True
  • gradient_checkpointing: True
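
The non-default values above map directly onto SentenceTransformerTrainingArguments. A minimal sketch setting only those values (output_dir is a placeholder); together with the dataset and loss shown earlier, these arguments would be passed to a SentenceTransformerTrainer to reproduce the run:

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="mmcontext-pubmedbert-geneformer-70k",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    learning_rate=0.05,
    num_train_epochs=4,
    warmup_ratio=0.1,
    bf16=True,
    gradient_checkpointing=True,
)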

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 256
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 0.05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: True
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch   Step  Training Loss  Eval Loss  Cosine Accuracy
0.4132  100   7.4235         13.0773    0.2663
0.8264  200   5.7300         10.2578    0.3405
1.2397  300   5.5955         10.9942    0.5121
1.6529  400   5.4931         11.7546    0.5454
2.0661  500   5.3751         12.2888    0.5856
2.4793  600   5.3351         13.9381    0.5372
2.8926  700   5.2961         12.3787    0.5634
3.3058  800   5.2609         13.7257    0.5086
3.7190  900   5.2377         14.9069    0.4553

(Eval loss is the geo_70k_multiplets_natural_language_annotation validation loss; cosine accuracy is the geo_70k_multiplets_natural_language_annotation_cell_sentence_1 triplet accuracy.)

Framework Versions

  • Python: 3.11.6
  • Sentence Transformers: 5.0.0
  • Transformers: 4.55.0.dev0
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.9.0
  • Datasets: 2.19.1
  • Tokenizers: 0.21.4

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}