ModernBERT DAPT Embed DAPT Math

This is a sentence-transformers model finetuned from answerdotai/ModernBERT-base. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: answerdotai/ModernBERT-base
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions
  • Number of Parameters: ~149M (F32 safetensors)
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
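
The Pooling and Normalize modules correspond to mean-pooling the token embeddings over the attention mask and L2-normalizing the result. Below is a minimal sketch of the equivalent computation with plain transformers, assuming the checkpoint is loaded directly through AutoModel (for illustration only; the SentenceTransformer usage below is the intended path):

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Master-thesis-NAP/ModernBERT-base-finetuned")
encoder = AutoModel.from_pretrained("Master-thesis-NAP/ModernBERT-base-finetuned")

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, max_length=8192, return_tensors="pt")
    with torch.no_grad():
        token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 768)
    mask = batch["attention_mask"].unsqueeze(-1).float()        # (batch, seq_len, 1)
    # Mean pooling over non-padding tokens (Pooling module), then L2 normalization (Normalize module).
    sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
    return torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)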

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Master-thesis-NAP/ModernBERT-base-finetuned")
# Run inference
sentences = [
    'What is the meaning of the identity containment $1_x:x\\to x$ in the context of the bond system?',
    "A \\emph{bond system} is a tuple $(B,C,s,t,1,\\cdot)$, where $B$ is a set of \\emph{bonds}, $C$ is a set of \\emph{content} relations, and $s,t:C\\to B$ are \\emph{source} and \\emph{target} functions. For $c\\in C$ with $s(c)=x$ and $t(c)=y$, we write $x\\xrightarrow{c}y$ or $c:x\\to y$, indicating that $x$ \\emph{contains} $y$. Each bond $x\\in B$ has an \\emph{identity} containment $1_x:x\\to x$, meaning every bond trivially contains itself. For $c:x\\to y$ and $c':y\\to z$, their composition is $cc':x\\to z$. These data must satisfy:\n    \\begin{enumerate}\n        \\item Identity laws: For each $c:x\\to y$, $1_x c= c=c1_y$\n        \\item Associativity: For $c:x\\to y$, $c':y\\to z$, $c'':z\\to w$, $c(c'c'')=(cc')c''$\n        \\item Anti-symmetry: For $c:x\\to y$ and $c':y\\to x$, $x=y$\n        \\item Left cancellation: For $c,c':x\\to y$ and $c'':y\\to z$, if $cc''=c'c''$, then $c=c'$\n    \\end{enumerate}",
    '\\label{lem:opt_lin}\nConsider the optimization problem\n\\begin{equation}\\label{eq:max_tr_lem}\n\\begin{aligned}\n    \\max_{\\bs{U}}&\\;\\; \\Re\\{\\mrm{tr}(\\bs{U}^\\mrm{H}\\bs{B}) \\}\\\\\n    \\mrm{s.t. \\;\\;}& \\bs{U}\\in \\mathcal{U}(N),\n\\end{aligned}\n\\end{equation}\nwhere $\\bs{B}$ may be an arbitrary $N\\times N$ matrix with singular value decomposition (SVD) $\\bs{B}=\\bs{U}_{\\bs{B}}\\bs{S}_{\\bs{B}}\\bs{V}_{\\bs{B}}^\\mrm{H}$. The solution to \\eqref{eq:max_tr_lem} is given by\n\\begin{equation}\\label{eq:sol_max}\n    \\bs{U}_\\mrm{opt} = \\bs{U}_{\\bs{B}}^\\mrm{H}\\bs{V}_{\\bs{B}}.\n\\end{equation}\n\\begin{skproof}\n    A formal proof, which may be included in the extended version, can be obtained by defining the Riemannian gradient over the unitary group and finding the stationary point where it vanishes. However, an intuitive argument is that the solution to \\eqref{eq:max_tr_lem} is obtained by positively combining the singular values of $\\bs{B}$, leading to \\eqref{eq:sol_max}.\n\\end{skproof}',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
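
Since the model is evaluated on information retrieval over mathematical text, a typical follow-up is ranking a small corpus against a query by cosine similarity. A minimal sketch continuing from the snippet above (the query and corpus strings are illustrative):

query = "What does the identity containment $1_x: x \\to x$ mean in a bond system?"
corpus = [
    "A bond system is a tuple $(B, C, s, t, 1, \\cdot)$, where $B$ is a set of bonds ...",
    "Consider the optimization problem over the unitary group $\\mathcal{U}(N)$ ...",
]

query_embedding = model.encode(query)
corpus_embeddings = model.encode(corpus)

# Embeddings are L2-normalized, so cosine similarity directly ranks the corpus for the query.
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, len(corpus)]
best = scores.argmax().item()
print(best, scores[0, best].item())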

Evaluation

Metrics

Information Retrieval

  • cosine_accuracy@1: 0.8649
  • cosine_accuracy@3: 0.9143
  • cosine_accuracy@5: 0.9287
  • cosine_accuracy@10: 0.9458
  • cosine_precision@1: 0.8649
  • cosine_precision@3: 0.6053
  • cosine_precision@5: 0.4861
  • cosine_precision@10: 0.3406
  • cosine_recall@1: 0.0418
  • cosine_recall@3: 0.0822
  • cosine_recall@5: 0.1057
  • cosine_recall@10: 0.1394
  • cosine_ndcg@10: 0.4427
  • cosine_mrr@10: 0.8928
  • cosine_map@100: 0.1599
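
These metrics follow the sentence-transformers InformationRetrievalEvaluator conventions (the evaluator name in the training logs is TESTING). A minimal sketch of running that style of evaluation; the queries, corpus, and relevance judgments below are placeholders, not the actual test split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Master-thesis-NAP/ModernBERT-base-finetuned")

# Placeholder data: query id -> question, corpus id -> passage, query id -> set of relevant corpus ids.
queries = {"q1": "What is the identity containment in a bond system?"}
corpus = {"d1": "A bond system is a tuple (B, C, s, t, 1, .) where ...", "d2": "An unrelated passage."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="TESTING")
results = evaluator(model)
print(results)  # keys include TESTING_cosine_accuracy@k, TESTING_cosine_ndcg@10, TESTING_cosine_mrr@10, ...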

Training Details

Training Dataset

Unnamed Dataset

  • Size: 79,876 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min 9, mean 38.48, max 142 tokens
    • positive: string; min 5, mean 210.43, max 924 tokens
  • Samples:
    • Sample 1
      • anchor: What is the limit of the proportion of 1's in the sequence $a_n$ as $n$ approaches infinity, given that $0 \leq 3g_n -2n \leq 4$?
      • positive: Let $g_n$ be the number of $1$'s in the sequence $a_1 a_2 \cdots a_n$. Then
        \begin{equation}
        0 \leq 3g_n -2n \leq 4
        \label{star}
        \end{equation}
        for all $n$, and hence $\lim_{n \rightarrow \infty} g_n/n = 2/3$.
        \label{thm1}
    • Sample 2
      • anchor: Does the statement of \textbf{ThmConjAreTrue} imply that the maximum genus of a locally Cohen-Macaulay curve in $\mathbb{P}^3_{\mathbb{C}}$ of degree $d$ that does not lie on a surface of degree $s-1$ is always equal to $g(d,s)$?
      • positive: \label{ThmConjAreTrue} Conjectures \ref{Conj1} and \ref{Conj2} are true. As a consequence, if either $d=s \geq 1$ or $d \geq 2s+1 \geq 3$, the maximum genus of a locally Cohen-Macaulay curve in $\mathbb{P}^3_{\mathbb{C}}$ of degree $d$ that does not lie on a surface of degree $s-1$ is equal to $g(d,s)$.
    • Sample 3
      • anchor: \emph{Is the statement \emph{If $X$ is a compact Hausdorff space, then $X$ is normal}, proven in the first isomorphism theorem for topological groups, or is it a well-known result in topology?}
      • positive: \newcommand{\ep}{
  • Loss: MultipleNegativesRankingLoss (in-batch negatives; see the training sketch after this list) with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

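With (anchor, positive) pairs, MultipleNegativesRankingLoss treats every other positive in the same batch as a negative, which is why the no_duplicates batch sampler is used. A minimal training sketch, assuming the pairs are available as a Hugging Face dataset with anchor and positive columns (the rows below are illustrative; the real dataset has 79,876 pairs):

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("answerdotai/ModernBERT-base")

train_dataset = Dataset.from_dict({
    "anchor": [
        "What is the identity containment in a bond system?",
        "What is the limit of the proportion of 1's in the sequence $a_n$?",
    ],
    "positive": [
        "A bond system is a tuple $(B, C, s, t, 1, \\cdot)$ ...",
        "Let $g_n$ be the number of $1$'s in the sequence $a_1 a_2 \\cdots a_n$ ...",
    ],
})

# scale=20.0 and cosine similarity match the loss parameters listed above.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
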
Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 8
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
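
The values above map directly onto SentenceTransformerTrainingArguments. A minimal sketch of that configuration (the output directory is a placeholder, and save_strategy="epoch" is an assumption needed so that load_best_model_at_end can match the epoch-level eval_strategy):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-base-finetuned",      # placeholder path
    eval_strategy="epoch",
    save_strategy="epoch",                       # assumption: must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,   # no duplicate texts within a batch (important for in-batch negatives)
)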

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 8
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss TESTING_cosine_ndcg@10
0.0160 10 20.3071 -
0.0320 20 19.7465 -
0.0481 30 19.0342 -
0.0641 40 17.755 -
0.0801 50 14.9942 -
0.0961 60 10.9013 -
0.1122 70 7.043 -
0.1282 80 4.5936 -
0.1442 90 3.5183 -
0.1602 100 2.3748 -
0.1762 110 1.5711 -
0.1923 120 1.1523 -
0.2083 130 1.0827 -
0.2243 140 1.0009 -
0.2403 150 0.9493 -
0.2564 160 0.9231 -
0.2724 170 0.9511 -
0.2884 180 0.7963 -
0.3044 190 0.6874 -
0.3204 200 0.6039 -
0.3365 210 0.6957 -
0.3525 220 0.5578 -
0.3685 230 0.7451 -
0.3845 240 0.6107 -
0.4006 250 0.644 -
0.4166 260 0.4977 -
0.4326 270 0.5744 -
0.4486 280 0.5136 -
0.4647 290 0.4888 -
0.4807 300 0.4784 -
0.4967 310 0.475 -
0.5127 320 0.4976 -
0.5287 330 0.415 -
0.5448 340 0.415 -
0.5608 350 0.492 -
0.5768 360 0.5054 -
0.5928 370 0.4836 -
0.6089 380 0.4985 -
0.6249 390 0.4503 -
0.6409 400 0.3757 -
0.6569 410 0.3236 -
0.6729 420 0.403 -
0.6890 430 0.3306 -
0.7050 440 0.4443 -
0.7210 450 0.3631 -
0.7370 460 0.3318 -
0.7531 470 0.3811 -
0.7691 480 0.3179 -
0.7851 490 0.3291 -
0.8011 500 0.264 -
0.8171 510 0.3179 -
0.8332 520 0.2381 -
0.8492 530 0.4066 -
0.8652 540 0.3693 -
0.8812 550 0.329 -
0.8973 560 0.3047 -
0.9133 570 0.3337 -
0.9293 580 0.285 -
0.9453 590 0.4788 -
0.9613 600 0.3036 -
0.9774 610 0.366 -
0.9934 620 0.2561 -
1.0 625 - 0.4194
1.0080 630 0.3096 -
1.0240 640 0.2129 -
1.0401 650 0.1873 -
1.0561 660 0.1672 -
1.0721 670 0.1529 -
1.0881 680 0.1929 -
1.1041 690 0.156 -
1.1202 700 0.2057 -
1.1362 710 0.1691 -
1.1522 720 0.1902 -
1.1682 730 0.2281 -
1.1843 740 0.1805 -
1.2003 750 0.1673 -
1.2163 760 0.2152 -
1.2323 770 0.1149 -
1.2483 780 0.1583 -
1.2644 790 0.1441 -
1.2804 800 0.1711 -
1.2964 810 0.2311 -
1.3124 820 0.2057 -
1.3285 830 0.1312 -
1.3445 840 0.1055 -
1.3605 850 0.1858 -
1.3765 860 0.1483 -
1.3925 870 0.1335 -
1.4086 880 0.1716 -
1.4246 890 0.1133 -
1.4406 900 0.126 -
1.4566 910 0.1402 -
1.4727 920 0.1974 -
1.4887 930 0.1582 -
1.5047 940 0.1771 -
1.5207 950 0.1583 -
1.5368 960 0.1219 -
1.5528 970 0.1388 -
1.5688 980 0.196 -
1.5848 990 0.1474 -
1.6008 1000 0.2324 -
1.6169 1010 0.1499 -
1.6329 1020 0.1359 -
1.6489 1030 0.1597 -
1.6649 1040 0.1636 -
1.6810 1050 0.1395 -
1.6970 1060 0.187 -
1.7130 1070 0.1424 -
1.7290 1080 0.1971 -
1.7450 1090 0.139 -
1.7611 1100 0.184 -
1.7771 1110 0.1212 -
1.7931 1120 0.1127 -
1.8091 1130 0.1308 -
1.8252 1140 0.1648 -
1.8412 1150 0.1225 -
1.8572 1160 0.1262 -
1.8732 1170 0.1247 -
1.8892 1180 0.1462 -
1.9053 1190 0.1529 -
1.9213 1200 0.1835 -
1.9373 1210 0.1672 -
1.9533 1220 0.1245 -
1.9694 1230 0.172 -
1.9854 1240 0.1443 -
2.0 1250 0.1285 0.4340
2.0160 1260 0.0587 -
2.0320 1270 0.0631 -
2.0481 1280 0.069 -
2.0641 1290 0.0685 -
2.0801 1300 0.062 -
2.0961 1310 0.0505 -
2.1122 1320 0.0757 -
2.1282 1330 0.1258 -
2.1442 1340 0.0549 -
2.1602 1350 0.0625 -
2.1762 1360 0.0568 -
2.1923 1370 0.0664 -
2.2083 1380 0.0759 -
2.2243 1390 0.0608 -
2.2403 1400 0.0519 -
2.2564 1410 0.0818 -
2.2724 1420 0.0722 -
2.2884 1430 0.0791 -
2.3044 1440 0.0575 -
2.3204 1450 0.0456 -
2.3365 1460 0.0564 -
2.3525 1470 0.0574 -
2.3685 1480 0.0675 -
2.3845 1490 0.0525 -
2.4006 1500 0.0517 -
2.4166 1510 0.0492 -
2.4326 1520 0.0535 -
2.4486 1530 0.0602 -
2.4647 1540 0.0598 -
2.4807 1550 0.0497 -
2.4967 1560 0.0603 -
2.5127 1570 0.0932 -
2.5287 1580 0.0559 -
2.5448 1590 0.0403 -
2.5608 1600 0.0764 -
2.5768 1610 0.0558 -
2.5928 1620 0.0691 -
2.6089 1630 0.0602 -
2.6249 1640 0.0795 -
2.6409 1650 0.0846 -
2.6569 1660 0.0582 -
2.6729 1670 0.0413 -
2.6890 1680 0.062 -
2.7050 1690 0.0697 -
2.7210 1700 0.0655 -
2.7370 1710 0.0647 -
2.7531 1720 0.0591 -
2.7691 1730 0.045 -
2.7851 1740 0.0666 -
2.8011 1750 0.0532 -
2.8171 1760 0.0455 -
2.8332 1770 0.0603 -
2.8492 1780 0.0893 -
2.8652 1790 0.0498 -
2.8812 1800 0.053 -
2.8973 1810 0.0481 -
2.9133 1820 0.0556 -
2.9293 1830 0.0643 -
2.9453 1840 0.0825 -
2.9613 1850 0.1111 -
2.9774 1860 0.0655 -
2.9934 1870 0.0432 -
3.0 1875 - 0.4412
3.0080 1880 0.0502 -
3.0240 1890 0.0269 -
3.0401 1900 0.0277 -
3.0561 1910 0.0535 -
3.0721 1920 0.0465 -
3.0881 1930 0.0548 -
3.1041 1940 0.0498 -
3.1202 1950 0.0394 -
3.1362 1960 0.0286 -
3.1522 1970 0.0434 -
3.1682 1980 0.0316 -
3.1843 1990 0.0346 -
3.2003 2000 0.0402 -
3.2163 2010 0.0368 -
3.2323 2020 0.0501 -
3.2483 2030 0.061 -
3.2644 2040 0.0445 -
3.2804 2050 0.0381 -
3.2964 2060 0.0367 -
3.3124 2070 0.0485 -
3.3285 2080 0.0567 -
3.3445 2090 0.0512 -
3.3605 2100 0.0329 -
3.3765 2110 0.0442 -
3.3925 2120 0.032 -
3.4086 2130 0.0387 -
3.4246 2140 0.0397 -
3.4406 2150 0.0258 -
3.4566 2160 0.039 -
3.4727 2170 0.0432 -
3.4887 2180 0.0382 -
3.5047 2190 0.0467 -
3.5207 2200 0.0334 -
3.5368 2210 0.0365 -
3.5528 2220 0.0553 -
3.5688 2230 0.0483 -
3.5848 2240 0.0456 -
3.6008 2250 0.0397 -
3.6169 2260 0.037 -
3.6329 2270 0.0414 -
3.6489 2280 0.0431 -
3.6649 2290 0.0416 -
3.6810 2300 0.0532 -
3.6970 2310 0.0304 -
3.7130 2320 0.0376 -
3.7290 2330 0.0417 -
3.7450 2340 0.0519 -
3.7611 2350 0.0371 -
3.7771 2360 0.0481 -
3.7931 2370 0.0374 -
3.8091 2380 0.0322 -
3.8252 2390 0.0493 -
3.8412 2400 0.0337 -
3.8572 2410 0.0288 -
3.8732 2420 0.0383 -
3.8892 2430 0.0341 -
3.9053 2440 0.028 -
3.9213 2450 0.0328 -
3.9373 2460 0.0403 -
3.9533 2470 0.0366 -
3.9694 2480 0.0322 -
3.9854 2490 0.0288 -
4.0 2500 0.0423 0.4427
  • The saved checkpoint corresponds to the final row (epoch 4.0, step 2500, TESTING_cosine_ndcg@10 = 0.4427), which matches the evaluation metrics reported above.

Framework Versions

  • Python: 3.11.12
  • Sentence Transformers: 4.1.0
  • Transformers: 4.52.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.6.0
  • Datasets: 2.14.4
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}