SentenceTransformer based on Shuu12121/CodeModernBERT-Owl-5.2

This is a sentence-transformers model finetuned from Shuu12121/CodeModernBERT-Owl-5.2. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Shuu12121/CodeModernBERT-Owl-5.2
  • Maximum Sequence Length: 1024 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
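
The Pooling module above averages token embeddings over all non-padding tokens (pooling_mode_mean_tokens). For illustration, here is a minimal sketch of the same pipeline built directly on transformers; loading this checkpoint via AutoModel/AutoTokenizer is an assumption of the sketch, not something this card documents:

import torch
from transformers import AutoModel, AutoTokenizer

model_id = "Shuu12121/CodeSearch-ModernBERT-Owl-5.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

texts = ["def add(a, b):\n    return a + b", "Add two numbers and return the sum."]
batch = tokenizer(texts, padding=True, truncation=True, max_length=1024, return_tensors="pt")

with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling over non-padding tokens, mirroring pooling_mode_mean_tokens=True.
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(embeddings.shape)  # torch.Size([2, 768])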

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Shuu12121/CodeSearch-ModernBERT-Owl-5.2")
# Run inference
sentences = [
    "WalkShallow reads the entries in the named directory and calls fn for each.\nIt does not recurse into subdirectories.\n\nIf fn returns an error, iteration stops and WalkShallow returns that value.\n\nOn Linux, WalkShallow does not allocate, so long as certain methods on the\nWalkFunc's DirEntry are not called which necessarily allocate.",
    'func WalkShallow(dirName mem.RO, fn WalkFunc) error {\n\tif f := osWalkShallow; f != nil {\n\t\treturn f(dirName, fn)\n\t}\n\tof, err := os.Open(dirName.StringCopy())\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer of.Close()\n\tfor {\n\t\tfis, err := of.ReadDir(100)\n\t\tfor _, de := range fis {\n\t\t\tif err := fn(mem.S(de.Name()), de); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t\tif err != nil {\n\t\t\tif err == io.EOF {\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\treturn err\n\t\t}\n\t}\n}',
    '@Before\n  public void setupMockCluster() throws IOException {\n    Configuration conf = new HdfsConfiguration();\n    conf.setDouble(DFSConfigKeys.DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_KEY,\n        THRESHOLD);\n    conf.setInt(DFSConfigKeys.DFS_NAMENODE_SAFEMODE_EXTENSION_KEY,\n        EXTENSION);\n    conf.setInt(DFSConfigKeys.DFS_NAMENODE_SAFEMODE_MIN_DATANODES_KEY,\n        DATANODE_NUM);\n\n    fsn = mock(FSNamesystem.class);\n    doReturn(true).when(fsn).hasWriteLock();\n    doReturn(true).when(fsn).hasReadLock();\n    doReturn(true).when(fsn).hasWriteLock(RwLockMode.BM);\n    doReturn(true).when(fsn).hasReadLock(RwLockMode.BM);\n    doReturn(true).when(fsn).isRunning();\n    NameNode.initMetrics(conf, NamenodeRole.NAMENODE);\n\n    bm = spy(new BlockManager(fsn, false, conf));\n    doReturn(true).when(bm).isGenStampInFuture(any(Block.class));\n    dn = spy(bm.getDatanodeManager());\n    Whitebox.setInternalState(bm, "datanodeManager", dn);\n    // the datanode threshold is always met\n    when(dn.getNumLiveDataNodes()).thenReturn(DATANODE_NUM);\n\n    bmSafeMode = new BlockManagerSafeMode(bm, fsn, false, conf);\n  }',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.9081, 0.0440],
#         [0.9081, 1.0000, 0.0554],
#         [0.0440, 0.0554, 1.0000]])
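
Because docstrings and code are embedded into the same space, the model can also be used for code search by ranking code snippets against a natural-language query. A short sketch (the query and corpus strings below are illustrative, not from the card):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Shuu12121/CodeSearch-ModernBERT-Owl-5.2")

query = "Read the entries of a directory without recursing into subdirectories."
corpus = [
    "func WalkShallow(dirName mem.RO, fn WalkFunc) error { ... }",
    "def fibonacci(n): return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)",
]

# Encode the query and every candidate, then rank by cosine similarity.
query_embedding = model.encode([query])
corpus_embeddings = model.encode(corpus)
scores = model.similarity(query_embedding, corpus_embeddings)  # shape: [1, len(corpus)]
best = scores[0].argmax().item()
print(corpus[best])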

Training Details

Training Dataset

Unnamed Dataset

  • Size: 4,000,000 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min 8 tokens, mean 76.07 tokens, max 1024 tokens
    • sentence_1: string; min 13 tokens, mean 150.67 tokens, max 1024 tokens
    • label: float; min 1.0, mean 1.0, max 1.0
  • Samples (sentence_0 is a docstring/comment, sentence_1 the matching code; every label is 1.0):

    Sample 1
      sentence_0:
        Set the column title

        @param column - column number (first column is: 0)
        @param title - new column title
      sentence_1:
        setHeader = function(column, newValue) {
            const obj = this;

            if (obj.headers[column]) {
                const oldValue = obj.headers[column].textContent;
                const onchangeheaderOldValue = (obj.options.columns && obj.options.columns[column] && obj.options.columns[column].title)
      label: 1.0

    Sample 2
      sentence_0:
        Elsewhere this is known as a "Weak Value Map". Whereas a std JS WeakMap
        is weak on its keys, this map is weak on its values. It does not retain these
        values strongly. If a given value disappears, then the entries for it
        disappear from every weak-value-map that holds it as a value.

        Just as a WeakMap only allows gc-able values as keys, a weak-value-map
        only allows gc-able values as values.

        Unlike a WeakMap, a weak-value-map unavoidably exposes the non-determinism of
        gc to its clients. Thus, both the ability to create one, as well as each
        created one, must be treated as dangerous capabilities that must be closely
        held. A program with access to these can read side channels though gc that do
        *not* rely on the ability to measure duration. This is a separate, and bad,
        timing-independent side channel.

        This non-determinism also enables code to escape deterministic replay. In a
        blockchain context, this could cause validators to differ from each other,
        preventing consensus, and thus preventing ...
      sentence_1:
        makeFinalizingMap = (finalizer, opts) => {
            const { weakValues = false } = opts
      label: 1.0

    Sample 3
      sentence_0:
        Creates a function that memoizes the result of func. If resolver is
        provided, it determines the cache key for storing the result based on the
        arguments provided to the memoized function. By default, the first argument
        provided to the memoized function is used as the map cache key. The func
        is invoked with the this binding of the memoized function.

        Note: The cache is exposed as the cache property on the memoized
        function. Its creation may be customized by replacing the _.memoize.Cache
        constructor with one whose instances implement the Map
        method interface of delete, get, has, and set.

        @static
        @memberOf _
        @since 0.1.0
        @category Function
        @param {Function} func The function to have its output memoized.
        @param {Function} [resolver] The function to resolve the cache key.
        @returns {Function} Returns the new memoized function.
        @example

        var object = { 'a': 1, 'b': 2 };
        var othe...
      sentence_1:
        function memoize(func, resolver) {
            if (typeof func != 'function'
      label: 1.0
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
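
For reference, a minimal sketch of what this loss computes per batch: each sentence_1 is the positive for its own sentence_0 and an in-batch negative for every other one, and the scaled cosine-similarity matrix is scored with cross-entropy against the diagonal. This is an illustration of the idea, not the library's implementation:

import torch
import torch.nn.functional as F

def mnrl(anchors: torch.Tensor, candidates: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    # sims[i, j] = cosine similarity between anchor i and candidate j.
    sims = F.cosine_similarity(anchors.unsqueeze(1), candidates.unsqueeze(0), dim=-1)
    # The matching pair for anchor i is candidate i, i.e. the diagonal.
    labels = torch.arange(anchors.size(0))
    return F.cross_entropy(sims * scale, labels)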
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 200
  • per_device_eval_batch_size: 200
  • fp16: True
  • multi_dataset_batch_sampler: round_robin
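
A minimal sketch of a comparable fine-tuning run with these non-default hyperparameters; the tiny placeholder dataset and output directory are illustrative only (the real run used 4,000,000 pairs):

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("Shuu12121/CodeModernBERT-Owl-5.2")

# Placeholder pairs; the actual dataset holds 4M (docstring, code) rows.
train_dataset = Dataset.from_dict({
    "sentence_0": ["Add two numbers."],
    "sentence_1": ["def add(a, b):\n    return a + b"],
})

loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    num_train_epochs=3,
    per_device_train_batch_size=200,
    per_device_eval_batch_size=200,
    fp16=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()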

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 200
  • per_device_eval_batch_size: 200
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss
0.025 500 0.3908
0.05 1000 0.1332
0.075 1500 0.1201
0.1 2000 0.1078
0.125 2500 0.1017
0.15 3000 0.0962
0.175 3500 0.0899
0.2 4000 0.0855
0.225 4500 0.0837
0.25 5000 0.0779
0.275 5500 0.0753
0.3 6000 0.0698
0.325 6500 0.072
0.35 7000 0.0669
0.375 7500 0.0656
0.4 8000 0.0641
0.425 8500 0.0624
0.45 9000 0.0598
0.475 9500 0.0566
0.5 10000 0.0554
0.525 10500 0.0544
0.55 11000 0.0524
0.575 11500 0.0528
0.6 12000 0.0503
0.625 12500 0.0479
0.65 13000 0.0475
0.675 13500 0.047
0.7 14000 0.0462
0.725 14500 0.0432
0.75 15000 0.043
0.775 15500 0.042
0.8 16000 0.041
0.825 16500 0.0389
0.85 17000 0.0401
0.875 17500 0.0399
0.9 18000 0.039
0.925 18500 0.0383
0.95 19000 0.0363
0.975 19500 0.0361
1.0 20000 0.0345
1.025 20500 0.0165
1.05 21000 0.0152
1.075 21500 0.0154
1.1 22000 0.0151
1.125 22500 0.0149
1.15 23000 0.0151
1.175 23500 0.0152
1.2 24000 0.0151
1.225 24500 0.0164
1.25 25000 0.0155
1.275 25500 0.0158
1.3 26000 0.0159
1.325 26500 0.0158
1.35 27000 0.0148
1.375 27500 0.0151
1.4 28000 0.0156
1.425 28500 0.0144
1.45 29000 0.0144
1.475 29500 0.0151
1.5 30000 0.0141
1.525 30500 0.0137
1.55 31000 0.0145
1.575 31500 0.0138
1.6 32000 0.0132
1.625 32500 0.0138
1.65 33000 0.0138
1.675 33500 0.0132
1.7 34000 0.0133
1.725 34500 0.0133
1.75 35000 0.0135
1.775 35500 0.0129
1.8 36000 0.0124
1.825 36500 0.0127
1.85 37000 0.0128
1.875 37500 0.0122
1.9 38000 0.012
1.925 38500 0.012
1.95 39000 0.0123
1.975 39500 0.0111
2.0 40000 0.0124
2.025 40500 0.0054
2.05 41000 0.0054
2.075 41500 0.0052
2.1 42000 0.005
2.125 42500 0.0049
2.15 43000 0.0048
2.175 43500 0.0051
2.2 44000 0.0049
2.225 44500 0.0047
2.25 45000 0.0047
2.275 45500 0.0048
2.3 46000 0.0048
2.325 46500 0.0048
2.35 47000 0.0049
2.375 47500 0.0044
2.4 48000 0.0047
2.425 48500 0.0044
2.45 49000 0.0044
2.475 49500 0.0048
2.5 50000 0.0046
2.525 50500 0.0045
2.55 51000 0.0043
2.575 51500 0.0047
2.6 52000 0.0044
2.625 52500 0.0042
2.65 53000 0.0043
2.675 53500 0.004
2.7 54000 0.0042
2.725 54500 0.004
2.75 55000 0.0041
2.775 55500 0.0039
2.8 56000 0.0041
2.825 56500 0.0041
2.85 57000 0.0039
2.875 57500 0.0037
2.9 58000 0.004
2.925 58500 0.0038
2.95 59000 0.0041
2.975 59500 0.0039
3.0 60000 0.0038

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 5.0.0
  • Transformers: 4.53.1
  • PyTorch: 2.7.0+cu128
  • Accelerate: 1.7.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.2
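
To approximate this environment, the listed versions can be pinned at install time (a convenience suggestion; the PyTorch wheel to use depends on your CUDA setup):

pip install "sentence-transformers==5.0.0" "transformers==4.53.1" "accelerate==1.7.0" "datasets==3.6.0" "tokenizers==0.21.2"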

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}