SentenceTransformer based on NeuML/pubmedbert-base-embeddings

This is a sentence-transformers model finetuned from NeuML/pubmedbert-base-embeddings on the cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer (MMContextEncoder pairing a BERT text encoder with an omics embedding branch; see Full Model Architecture below)
  • Base model: NeuML/pubmedbert-base-embeddings
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): MMContextEncoder(
    (text_encoder): BertModel(
      (embeddings): BertEmbeddings(
        (word_embeddings): Embedding(30522, 768, padding_idx=0)
        (position_embeddings): Embedding(512, 768)
        (token_type_embeddings): Embedding(2, 768)
        (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
        (dropout): Dropout(p=0.1, inplace=False)
      )
      (encoder): BertEncoder(
        (layer): ModuleList(
          (0-11): 12 x BertLayer(
            (attention): BertAttention(
              (self): BertSdpaSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): BertSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): BertIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): BertOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
        )
      )
      (pooler): BertPooler(
        (dense): Linear(in_features=768, out_features=768, bias=True)
        (activation): Tanh()
      )
    )
    (text_adapter): AdapterModule(
      (net): Sequential(
        (0): Linear(in_features=768, out_features=512, bias=True)
        (1): ReLU(inplace=True)
        (2): Linear(in_features=512, out_features=1024, bias=True)
        (3): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (pooling): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
    (omics_adapter): AdapterModule(
      (net): Sequential(
        (0): Linear(in_features=512, out_features=512, bias=True)
        (1): ReLU(inplace=True)
        (2): Linear(in_features=512, out_features=1024, bias=True)
        (3): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (omics_encoder): MiniOmicsModel(
      (embeddings): Embedding(90155, 512, padding_idx=0)
    )
  )
)
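
This module hierarchy can be reproduced by loading the model and printing it:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-geneformer-mixed_no_bio_100k")
print(model)  # prints the nested module structure shown above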

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-geneformer-mixed_no_bio_100k")
# Run inference
sentences = [
    'EEF1A1 TMSB4X MALAT1 CD74 H3-3B FAU TPT1 ACTB FTH1 PTMA EIF1 ZFP36L1 UBA52 NPM1 PPIA HSP90AA1 RGS2 SAT1 TSC22D3 EEF2 HMGB1 GRN STK17B COTL1 EDF1 CD83 PRDX1 ZFP36 COX4I1 ANP32B EML4 TAF1D UQCRH NACA RACK1 ENO1 RBM3 PFN1 PARK7 IRF1 SNRPD2 SNRPB COX7C KLF2 ATP6V1F ZNF331 BTF3 EIF3H HNRNPDL UQCRB EIF4A2 TAGLN2 ARPC2 YWHAB SF1 EIF3F ZFAS1 H4C3 TMSB10 HERPUD1 SLC2A3 WNK1 MEF2A ARHGAP15',
    "This measurement was conducted with 10x 5' v1. Plasmablast cell sample from a 3-year-old male, taken from the tonsil tissue, expressing IgM isotype, with IGH_IN_FRAME, IGH_FUNCTIONAL, IGH_JUNCTION_LENGTH 48.0, IGH_J_CALL IGHJ3*02, IGH_V_CALL_GENOTYPED IGHV4-39*01, IGK_C_Gene IGKC, IGK_FullLength 2, IGK_Productive 2, IGK_VDJ_Gene IGKV3-20 None IGKJ1.",
    "This measurement was conducted with 10x 5' v1. Memory B cell derived from a 3-year-old male human tonsil tissue, expressing IGHJ4*02, IGHV4-59*01, IGKV3-20, IGKJ2, and IgG1 isotype.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, -0.0410, -0.1469],
#         [-0.0410,  1.0000,  0.9250],
#         [-0.1469,  0.9250,  1.0000]])
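
The embeddings also support lightweight semantic search. Below is a minimal sketch using util.semantic_search; the corpus and query strings are illustrative placeholders, not taken from the training data:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-geneformer-mixed_no_bio_100k")

# Illustrative corpus of cell-level annotations (placeholders).
corpus = [
    "This measurement was conducted with 10x 5' v1. Memory B cell derived from a 3-year-old male human tonsil tissue.",
    "This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male thalamic complex.",
]
query = "B cell from tonsil tissue"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# For each query, semantic_search returns the top-k corpus entries
# ranked by cosine similarity.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")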

Evaluation

Metrics

Triplet

  • Datasets: cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1 and cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_2
  • Evaluated with TripletEvaluator
Metric            cell_sentence_1   cell_sentence_2
cosine_accuracy   0.4943            0.7481

(Column headers abbreviate the full evaluator names listed above.)
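
These accuracies are produced by the TripletEvaluator. A minimal, self-contained sketch of such an evaluation, using toy triplets for illustration (the reported numbers were computed on the full evaluation split):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-geneformer-mixed_no_bio_100k")

# Toy triplets for illustration only.
anchors = ["MALAT1 GRIK1 SYT1 PCDH9 RORA NRG1 CADPS ZFPM2"]
positives = ["This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male thalamic complex."]
negatives = ["This measurement was conducted with 10x 5' v1. Memory B cell derived from a 3-year-old male human tonsil tissue."]

evaluator = TripletEvaluator(anchors=anchors, positives=positives, negatives=negatives, name="demo")
print(evaluator(model))  # {'demo_cosine_accuracy': ...}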

Training Details

Training Dataset

cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation

  • Dataset: cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation at b141493
  • Size: 81,143 training samples
  • Columns: anchor, positive, negative_1, and negative_2
  • Approximate statistics based on the first 1000 samples:
            anchor           positive        negative_1      negative_2
    type    string           string          string          string
    min     135 tokens       22 tokens       23 tokens       135 tokens
    mean    152.0 tokens     48.4 tokens     48.65 tokens    151.82 tokens
    max     185 tokens       159 tokens      158 tokens      181 tokens
  • Samples:
    Sample 1
      anchor: TMSB4X TMSB10 ACTB MALAT1 GNLY NKG7 IFITM2 LGALS1 GZMA EEF1A1 PFN1 HMGB2 FTH1 PTMA HSP90AA1 GZMB ARHGDIB HNRNPA2B1 PLAAT4 FAU CMC1 VIM MYL12A CBX3 ATP5F1E HCST IFI44L KLRF1 H3-3A COX6C ARL6IP1 CFL1 ISG15 HMGB1 S100A4 ATP5MF RORA MYL6 CORO1A OAZ1 KLRB1 ID2 HMGN3 CCNI RBM39 CAP1 SERF2 ELOC FCER1G S100A9 IFI16 YWHAZ EIF1 CALR HMGN2 SKAP2 SLC25A5 ZZZ3 YBX1 NUCB2 CDC42 GSTP1 FTL ATP5F1D
      positive: This measurement was conducted with 10x 3' v2. A proliferating lymphocyte cell sample, obtained from a 34-year-old female Asian individual, derived from peripheral blood mononuclear cells.
      negative_1: This measurement was conducted with 10x 3' v2. Sample is a 25-year-old female with European ethnicity, having CD8-positive, alpha-beta T cell type. This cell type exhibits elevated expression of type 1 interferon-stimulated genes (ISGs) in monocytes, reduction of naïve CD4+ T cells correlating with monocyte ISG expression, and expansion of repertoire-restricted cytotoxic GZMH+ CD8+ T cells.
      negative_2: MALAT1 TMSB4X EEF1A1 CD74 BTG1 PTMA TMSB10 TPT1 FAU EIF1 FTH1 FTL CXCR4 TSC22D3 DUSP1 UBA52 ACTB CD37 CD52 NACA RACK1 EZR CD69 LAPTM5 H3-3A FOS ISG20 YBX1 CIRBP EIF3E OAZ1 COX7C SAT1 COX4I1 H3-3B SH3BGRL3 UBC UBB JUNB COMMD6 VIM CYBA KLF6 STK17B FUS HNRNPC MYL6 GADD45B LGALS1 EIF3L SRSF5 NFKBIA ANKRD12 CORO1A TLE5 NOP53 CHCHD2 PFN1 DDX5 ARPC3 COX7A2 YPEL5 ARL4A SRGN
    Sample 2
      anchor: EEF1A1 MALAT1 FTH1 JUNB TPT1 FOS TMSB10 BTG1 TMSB4X ZFP36L2 NACA PABPC1 ACTB FAU VIM H3-3B EIF1 ZFP36 SARAF PTMA IL7R JUN RACK1 EEF2 UBA52 GAPDH FTL FXYD5 DUSP1 S100A4 CD69 CXCR4 UBC TSC22D3 CFL1 KLF6 ARHGDIB KLF2 BTG2 CITED2 IER2 TUBB4B CD3E EEF1G SLC2A3 NFKBIA PFN1 SRGN SNX9 COX4I1 DNAJB1 SERF2 CD8A PCBP2 IL32 BIRC3 SMAP2 FUS GADD45B MYL12A OAZ1 ATP5F1E TUBA4A PNRC1
      positive: This measurement was conducted with 10x 5' v1. Sample is a cell from the omentum tissue, specifically an effector memory CD4-positive, alpha-beta T cell, from a female in her sixth decade.
      negative_1: This measurement was conducted with 10x 5' v2. Conventional dendritic cell from the jejunal epithelium of a female in her eighth decade.
      negative_2: CD74 MALAT1 EEF1A1 FOS TPT1 TMSB4X TMSB10 ACTB FAU JUN CD37 DUSP1 RACK1 JUNB EIF1 PTMA FTL DNAJB1 H3-3B CD52 NACA BTG1 TSC22D3 FTH1 PABPC1 EEF2 UBA52 EEF1G HSP90AA1 LAPTM5 CYBA PPP1R15A HSP90AB1 CD69 ARHGDIB ZFP36 SERF2 UBC H3-3A PCBP2 HLA-DRB5 KLF6 PFN1 DDX5 HSPA8 ARPC3 CD83 CCNI CXCR4 ATP5F1E SARAF TUBA1A ZFP36L1 TOMM7 HERPUD1 YBX1 RHOA MEF2C FXYD5 MYL6 SRSF5 MYL12A CORO1A OAZ1
    Sample 3
      anchor: MALAT1 GRIK1 SYT1 PCDH9 RORA NRG1 CADPS ZFPM2 LRRC4C LINGO2 RALYL PTPRD SPHKAP CNTNAP5 SLC8A1 CCSER1 HDAC9 CELF2 R3HDM1 CNTN4 RBMS3 PCDH7 GALNT13 UNC5D ROBO1 SYNPR SNAP25 GPM6A ANK3 FRMPD4 CHRM2 RYR2 KHDRBS2 CADM1 CACNA1D RGS6 PDE4D DOCK4 UNC13C CDH18 FAT3 MEG3 NR2F2-AS1 HMCN1 GULP1 CAMK2D ZEB1 SYN2 DYNC1I1 OXR1 DPP10 OSBPL6 FRAS1 PPP3CA ZNF385D ZMAT4 PCBP3 HS6ST3 ERC2 PLEKHA5 CDK14 MAP2 NCOA1 ATP8A2
      positive: This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male, specifically from the thalamic complex, specifically the thalamus (THM) - posterior nuclear complex of thalamus (PoN) - medial geniculate nuclei (MG).
      negative_1: This measurement was conducted with 10x 3' v3. Neuron from the thalamic complex (thalamus, posterior nuclear complex of thalamus, medial geniculate nuclei) of a 42-year-old male, identified as a midbrain-derived inhibitory neuron.
      negative_2: MALAT1 PCDH9 PTPRD NRG1 SYT1 DPP10 ROBO1 TENM2 LRRC4C RBMS3 CNTNAP5 LINGO2 CDH18 SLC8A1 DMD PDE4D RYR2 ATP1B1 RGS6 PTPRT CHRM3 ADGRL2 NOVA1 NTNG1 PCDH7 TAFA2 CCSER1 ANK3 MEG3 MAP2 PLCB4 CACNA2D1 PRKG1 LINC03000 RMST RORA FOXP2 LHFPL3 MEG8 TNRC6A DAB1 KCTD8 RALYL GNAS INPP4B OLFM3 CNTN4 FRMD4A LINC00632 GAPDH ENOX1 AHI1 GPM6A EBF1 LRFN5 PCSK1N SEMA5A KIAA1217 CALY MAP1B SNAP25 GABRB2 CDH8 GRIP1
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
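
These parameters correspond to constructing the loss roughly as follows (a sketch; util.cos_sim is the library's cosine-similarity function):

from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-geneformer-mixed_no_bio_100k")
# scale=20.0 and cosine similarity match the parameters listed above.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)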
    

Evaluation Dataset

cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation

  • Dataset: cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation at b141493
  • Size: 9,011 evaluation samples
  • Columns: anchor, positive, negative_1, and negative_2
  • Approximate statistics based on the first 1000 samples:
            anchor           positive        negative_1      negative_2
    type    string           string          string          string
    min     134 tokens       23 tokens       21 tokens       137 tokens
    mean    152.94 tokens    47.49 tokens    48.43 tokens    152.7 tokens
    max     178 tokens       157 tokens      206 tokens      178 tokens
  • Samples:
    Sample 1
      anchor: MALAT1 EEF1A1 FTH1 TMSB4X ACTB FTL RTN4 ATP6V0B TPT1 FAU S100A6 NDUFA4 ATP5F1E COX7C ITM2B IGFBP7 EIF1 C12orf75 CD9 COX7B SERF2 ATP1B1 COX8A TXNIP NDUFB2 MYL6 PPDPF COX6B1 UQCR11 APOE COX4I1 CALM2 UQCRB S100A11 UQCRQ COX6C ATP5MG BSG ATP6AP2 UQCR10 PTMA NACA UBL5 UBA52 TMSB10 ADGRF5 HSP90AA1 GSTP1 ATP5F1D CHCHD2 GAPDH COX7A2 SKP1 HSPE1 PRDX1 CYSTM1 LGALS3 CD63 ATP5MJ CKB NDUFS5 ATP5ME UBB MAL
      positive: This measurement was conducted with 10x 3' v3. Cell sample from the cortex of kidney, taken from a 43-year-old male of European ethnicity with a reported history of kidney cancer. The cell type is identified as a kidney collecting duct intercalated cell.
      negative_1: This measurement was conducted with 10x 3' v3. Kidney collecting duct intercalated cell from a 43-year old European male with kidney cancer, taken from the cortex of kidney and cryopreserved for further analysis.
      negative_2: MALAT1 EEF1A1 CRYAB S100A6 ITM2B ACTB TPT1 PTMA FTL PEBP1 H3-3B GSTP1 ADIRF IGFBP7 S100A10 HIPK2 MYL6 SERF2 TPM1 FAU FTH1 ID4 EIF1 TMSB10 HSP90AA1 SKP1 IGFBP2 IGFBP5 PRDX1 MYL12B CYSTM1 CLU ATP5F1E AHNAK PPDPF DSTN ID1 COX7C JUND SRP14 ATP1B1 HINT1 NDUFA4 PPIA NACA TMA7 NEAT1 CD9 SYNE2 LAPTM4A GNAS CIRBP ATP5F1D DDX17 EDF1 CCND1 LDHB RTN4 TMEM59 NR4A1 KTN1 SAT1 TMBIM6 APP
    Sample 2
      anchor: MALAT1 KCND2 NRXN1 CDH18 NRXN3 ZNF385D CADM2 RALYL NKAIN2 CADPS2 RIMS1 FSTL5 GRID2 TRPM3 CHN2 DPP6 JMJD1C RORA PDE1A UNC13C TIAM1 NRG1 SNAP25 ZFPM2 CALN1 LSAMP CNTN1 ABLIM1 SYNE1 ANK3 CA10 NFIA ZBTB20 NTM CADM1 OPCML RELN DNM3 NEBL ERC1 SCN2A PPP3CA CACNA1A GALNT13 LRRC4C GPM6A RABGAP1L RIT2 CAMK4 GRIA4 PTPRD RBFOX3 MCTP1 LHFPL6 PCLO MEG3 PDE10A NOVA1 RTN1 ZNF385B CNTN4 GABRB2 SPOCK1 OXR1
      positive: This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male cerebellum, specifically from the Cerebellar Vermis - CBV region, with European self-reported ethnicity, analyzed at the nucleus level.
      negative_1: This measurement was conducted with 10x 3' v3. Endothelial cells derived from the cerebellum (specifically, cerebellar vermis) of a 42-year-old male, classified under the vascular supercluster term.
      negative_2: MALAT1 ATP10A COBLL1 GPCPD1 PTPRG SLC39A10 FLT1 FLI1 TSPAN5 THSD4 RUNDC3B CCNY IGFBP7 ST6GALNAC3 PRKCH ST6GAL1 MECOM ESYT2 TBC1D4 IGF1R TACC1 HERC4 CDH2 TCF4 ABCB1 DOCK9 SORBS2 USP54 CBFA2T2 TSC22D1 QKI EPAS1 APP NFIB AOPEP ELMO1 ZNF704 PTPRM NET1 A2M FGD6 EPHA3 NEBL RAPGEF2 ACVR1 SPTBN1 BBS9 KLF2 MKLN1 EXOC6 LEF1 PPP3CA RBMS3 LRMDA WDFY3 BCL2L1 TTC3 SIPA1L1 CFLAR ADGRF5 MAP4K4 SCARB1 RAPGEF4 ABLIM1
    Sample 3
      anchor: EEF1A1 ACTB GAPDH HMGN2 PTMA SERF2 TMSB4X CD74 PABPC1 FTH1 TMSB10 FAU PFN1 HMGN1 OAZ1 HMGB1 TPT1 PPIA NACA BTF3 MALAT1 MYL6 ATP5MG CFL1 RACK1 ODC1 ATP5F1E TMA7 SLC25A5 ELOB ARPC3 NPM1 COX7C ANP32B C4orf3 EIF1 PCBP2 KLF6 LAPTM5 COX8A RHOA HSPA8 H3-3B PTP4A2 UBA52 OST4 CIRBP LGALS1 EIF3L STMN1 PPDPF COX4I1 RAN EIF3F PPP1CC COMMD6 NDUFA4 YBX1 PEBP1 COTL1 COX7A2 HSPE1 CCNI TRIR
      positive: This measurement was conducted with 10x 5' v1. Cell sample from the tonsil of a 9-year-old female with recurrent tonsillitis, characterized as a centroblast B cell with IGLC2, IGLV7-43, IGLJ3 immunoglobulin genes expressed.
      negative_1: This measurement was conducted with 10x 5' v1. Centroblast cells derived from a 3-year-old male human tonsil sample, with obstructive sleep apnea and recurrent tonsillitis, undergoing affinity maturation and differentiation into memory or plasma cells.
      negative_2: CD74 MALAT1 EEF1A1 ACTB TMSB4X LAPTM5 PTMA TPT1 TMSB10 CXCR4 FAU BTG1 TXNIP PABPC1 FTH1 NACA FTL IRF1 RBM3 CD83 CCNI SARAF BTF3 HNRNPA3 HLA-DRB5 UBA52 MEF2C CORO1A UBE2D3 ATP5F1E PDIA6 UBC GABARAP CFL1 CALR RACK1 HSPA5 EIF4B RHOA HNRNPC SRSF5 PFN1 HSPA8 CNOT2 IFT57 HNRNPA2B1 COX7C ITM2B SH3BGRL3 PNRC1 PDIA3 EEF2 UBB PARP14 SNX2 LAP3 SLC25A5 POU2F2 ADAM28 ZNF800 CYBA GDI2 STK17B EIF3I
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 256
  • learning_rate: 0.05
  • num_train_epochs: 4
  • warmup_ratio: 0.1
  • bf16: True
  • gradient_checkpointing: True
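
For context, these values map onto a SentenceTransformerTrainingArguments configuration along these lines (a sketch; output_dir is a placeholder):

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",            # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    learning_rate=0.05,
    num_train_epochs=4,
    warmup_ratio=0.1,
    bf16=True,                      # bfloat16 mixed precision
    gradient_checkpointing=True,    # trades extra compute for lower memory
)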

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 256
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 0.05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: True
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch    Step   Training Loss   Validation Loss   cell_sentence_1 cosine_accuracy   cell_sentence_2 cosine_accuracy
(Validation loss is the cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation loss; the accuracy columns abbreviate the evaluator names listed under Metrics.)
0.1577 50 5.1803 19.7916 0.4968 0.6344
0.3155 100 4.2707 18.6945 0.4957 0.6765
0.4732 150 4.0462 19.5027 0.4992 0.6883
0.6309 200 3.8315 21.2863 0.4924 0.6853
0.7886 250 3.6913 22.0394 0.4951 0.6898
0.9464 300 3.6189 21.9135 0.4955 0.6959
1.1041 350 3.4915 19.9717 0.4987 0.7175
1.2618 400 3.4386 20.4141 0.4983 0.7146
1.4196 450 3.356 19.3673 0.4920 0.7213
1.5773 500 3.296 17.6550 0.4912 0.7299
1.7350 550 3.2573 18.1930 0.4951 0.7256
1.8927 600 3.2169 17.0915 0.4956 0.7418
2.0505 650 3.211 16.8919 0.4925 0.7402
2.2082 700 3.242 16.5824 0.4905 0.7435
2.3659 750 3.0709 17.1403 0.4912 0.7438
2.5237 800 3.0 17.3579 0.4936 0.7430
2.6814 850 3.0125 17.4030 0.4933 0.7428
2.8391 900 2.9719 17.2877 0.4926 0.7452
2.9968 950 2.9421 17.1169 0.4927 0.7473
3.1546 1000 2.9269 17.1022 0.4932 0.7479
3.3123 1050 2.9255 17.2456 0.4934 0.7476
3.4700 1100 2.9256 17.2210 0.4938 0.7473
3.6278 1150 2.9058 17.2008 0.4950 0.7475
3.7855 1200 2.9191 17.2417 0.4940 0.7478
3.9432 1250 2.9345 17.2633 0.4943 0.7481

Framework Versions

  • Python: 3.11.6
  • Sentence Transformers: 5.0.0
  • Transformers: 4.55.0.dev0
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.9.0
  • Datasets: 2.19.1
  • Tokenizers: 0.21.4
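
To approximate this environment, the released versions can be pinned as below (a sketch; the Transformers version listed is a development build, so an exact match may require installing it from source):

pip install sentence-transformers==5.0.0 accelerate==1.9.0 datasets==2.19.1 tokenizers==0.21.4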

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
