SentenceTransformer based on Shuu12121/CodeModernBERT-Owl-5.0-Pre
This is a sentence-transformers model finetuned from Shuu12121/CodeModernBERT-Owl-5.0-Pre. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Shuu12121/CodeModernBERT-Owl-5.0-Pre
- Maximum Sequence Length: 1024 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
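Concretely, the embedding is a masked mean over the final-layer token states. For reference, a minimal sketch of that pooling with the plain transformers library (the repo id here is the one used in the usage example below; treat it as an assumption):

import torch
from transformers import AutoTokenizer, AutoModel

model_id = "Shuu12121/CodeSearch-ModernBERT-Owl-5.0-Pre"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

def mean_pool(hidden, mask):
    # pooling_mode_mean_tokens: average token embeddings, ignoring padding
    mask = mask.unsqueeze(-1).to(hidden.dtype)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

batch = tokenizer(["def add(a, b): return a + b"], padding=True,
                  truncation=True, max_length=1024, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**batch).last_hidden_state
emb = mean_pool(hidden, batch["attention_mask"])
print(emb.shape)  # torch.Size([1, 768])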
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Shuu12121/CodeSearch-ModernBERT-Owl-5.0-Pre")
# Run inference
sentences = [
"WalkShallow reads the entries in the named directory and calls fn for each.\nIt does not recurse into subdirectories.\n\nIf fn returns an error, iteration stops and WalkShallow returns that value.\n\nOn Linux, WalkShallow does not allocate, so long as certain methods on the\nWalkFunc's DirEntry are not called which necessarily allocate.",
'func WalkShallow(dirName mem.RO, fn WalkFunc) error {\n\tif f := osWalkShallow; f != nil {\n\t\treturn f(dirName, fn)\n\t}\n\tof, err := os.Open(dirName.StringCopy())\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer of.Close()\n\tfor {\n\t\tfis, err := of.ReadDir(100)\n\t\tfor _, de := range fis {\n\t\t\tif err := fn(mem.S(de.Name()), de); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t\tif err != nil {\n\t\t\tif err == io.EOF {\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\treturn err\n\t\t}\n\t}\n}',
'pub fn map_status<T>(\n result: &Result<T, NetworksError>,\n ) -> Option<(NetworkCommissioningStatusEnum, Option<i32>)> {\n match result {\n Ok(_) => Some((NetworkCommissioningStatusEnum::Success, None)),\n Err(NetworksError::NetworkIdNotFound) => {\n Some((NetworkCommissioningStatusEnum::NetworkIDNotFound, None))\n }\n Err(NetworksError::DuplicateNetworkId) => {\n Some((NetworkCommissioningStatusEnum::DuplicateNetworkID, None))\n }\n Err(NetworksError::OutOfRange) => {\n Some((NetworkCommissioningStatusEnum::OutOfRange, None))\n }\n Err(NetworksError::BoundsExceeded) => {\n Some((NetworkCommissioningStatusEnum::BoundsExceeded, None))\n }\n Err(NetworksError::Other(_)) => None,\n }\n }',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
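Since the training pairs are docstring/code pairs, a natural application is natural-language-to-code retrieval: embed a query and a pool of code snippets, then rank by cosine similarity. A minimal sketch reusing the model loaded above (the query and candidate snippets are illustrative):

query = "Walk a directory's entries without recursing into subdirectories"
candidates = [
    "func WalkShallow(dirName mem.RO, fn WalkFunc) error { ... }",
    "function memoize(func, resolver) { ... }",
]
query_emb = model.encode([query])           # shape [1, 768]
cand_embs = model.encode(candidates)        # shape [2, 768]
scores = model.similarity(query_emb, cand_embs)  # cosine similarities, shape [1, 2]
best = int(scores.argmax())
print(candidates[best], float(scores[0, best]))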
Training Details
Training Dataset
Unnamed Dataset
- Size: 4,000,000 training samples
- Columns: sentence_0, sentence_1, and label
- Approximate statistics based on the first 1000 samples:
Statistic | sentence_0 | sentence_1 | label |
---|---|---|---|
type | string | string | float |
min | 8 tokens | 13 tokens | 1.0 |
mean | 76.07 tokens | 150.67 tokens | 1.0 |
max | 1024 tokens | 1024 tokens | 1.0 |
- Samples (first rows shown, truncated; per the statistics above, every label is 1.0):

Sample 1
- sentence_0 (docstring):
  Set the column title
  @param column - column number (first column is: 0)
  @param title - new column title
- sentence_1 (code):
  setHeader = function(column, newValue) {
      const obj = this;
      if (obj.headers[column]) {
          const oldValue = obj.headers[column].textContent;
          const onchangeheaderOldValue = (obj.options.columns && obj.options.columns[column] && obj.options.columns[column].title) ...
- label: 1.0

Sample 2
- sentence_0 (docstring):
  Elsewhere this is known as a "Weak Value Map". Whereas a std JS WeakMap is weak on its keys, this map is weak on its values. It does not retain these values strongly. If a given value disappears, then the entries for it disappear from every weak-value-map that holds it as a value.
  Just as a WeakMap only allows gc-able values as keys, a weak-value-map only allows gc-able values as values.
  Unlike a WeakMap, a weak-value-map unavoidably exposes the non-determinism of gc to its clients. Thus, both the ability to create one, as well as each created one, must be treated as dangerous capabilities that must be closely held. A program with access to these can read side channels through gc that do not rely on the ability to measure duration. This is a separate, and bad, timing-independent side channel.
  This non-determinism also enables code to escape deterministic replay. In a blockchain context, this could cause validators to differ from each other, preventing consensus, and thus preventing ...
- sentence_1 (code):
  makeFinalizingMap = (finalizer, opts) => {
      const { weakValues = false } = opts ...
- label: 1.0

Sample 3
- sentence_0 (docstring):
  Creates a function that memoizes the result of func. If resolver is provided, it determines the cache key for storing the result based on the arguments provided to the memoized function. By default, the first argument provided to the memoized function is used as the map cache key. The func is invoked with the this binding of the memoized function.
  Note: The cache is exposed as the cache property on the memoized function. Its creation may be customized by replacing the _.memoize.Cache constructor with one whose instances implement the Map method interface of delete, get, has, and set.
  @static
  @memberOf _
  @since 0.1.0
  @category Function
  @param {Function} func The function to have its output memoized.
  @param {Function} [resolver] The function to resolve the cache key.
  @returns {Function} Returns the new memoized function.
  @example
  var object = { 'a': 1, 'b': 2 };
  var othe...
- sentence_1 (code):
  function memoize(func, resolver) {
      if (typeof func != 'function' ...
- label: 1.0

- Loss: MultipleNegativesRankingLoss with these parameters:

  { "scale": 20.0, "similarity_fct": "cos_sim" }
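This loss treats the other pairs in a batch as in-batch negatives, which is one reason the large batch size (200) below matters. A minimal sketch of constructing it in sentence-transformers with the parameters above:

from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Loading the base checkpoint; sentence-transformers adds mean pooling
# automatically when given a plain transformer model.
model = SentenceTransformer("Shuu12121/CodeModernBERT-Owl-5.0-Pre")
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)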
Training Hyperparameters
Non-Default Hyperparameters
- per_device_train_batch_size: 200
- per_device_eval_batch_size: 200
- fp16: True
- multi_dataset_batch_sampler: round_robin
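A sketch of how these values map onto the sentence-transformers trainer, continuing from the loss sketch above; the two-row train_dataset here is an illustrative stand-in for the 4M-pair dataset described earlier:

from datasets import Dataset
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

# Toy stand-in for the real (sentence_0, sentence_1) pairs
train_dataset = Dataset.from_dict({
    "sentence_0": ["Set the column title", "Memoize the result of func"],
    "sentence_1": ["setHeader = function(column, newValue) { /* ... */ }",
                   "function memoize(func, resolver) { /* ... */ }"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",
    num_train_epochs=3,
    per_device_train_batch_size=200,
    per_device_eval_batch_size=200,
    fp16=True,
    # No-op with a single dataset, but matches the logged configuration
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
trainer = SentenceTransformerTrainer(
    model=model,  # from the loss sketch above
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()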
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 200
- per_device_eval_batch_size: 200
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 3
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
Epoch | Step | Training Loss |
---|---|---|
0.025 | 500 | 0.403 |
0.05 | 1000 | 0.136 |
0.075 | 1500 | 0.1222 |
0.1 | 2000 | 0.1102 |
0.125 | 2500 | 0.1034 |
0.15 | 3000 | 0.0987 |
0.175 | 3500 | 0.0925 |
0.2 | 4000 | 0.0869 |
0.225 | 4500 | 0.0837 |
0.25 | 5000 | 0.0808 |
0.275 | 5500 | 0.077 |
0.3 | 6000 | 0.0727 |
0.325 | 6500 | 0.0727 |
0.35 | 7000 | 0.0683 |
0.375 | 7500 | 0.068 |
0.4 | 8000 | 0.0641 |
0.425 | 8500 | 0.0629 |
0.45 | 9000 | 0.0618 |
0.475 | 9500 | 0.0572 |
0.5 | 10000 | 0.0575 |
0.525 | 10500 | 0.0553 |
0.55 | 11000 | 0.0543 |
0.575 | 11500 | 0.0539 |
0.6 | 12000 | 0.051 |
0.625 | 12500 | 0.049 |
0.65 | 13000 | 0.0484 |
0.675 | 13500 | 0.048 |
0.7 | 14000 | 0.0477 |
0.725 | 14500 | 0.0445 |
0.75 | 15000 | 0.0433 |
0.775 | 15500 | 0.0438 |
0.8 | 16000 | 0.0415 |
0.825 | 16500 | 0.0405 |
0.85 | 17000 | 0.0417 |
0.875 | 17500 | 0.0411 |
0.9 | 18000 | 0.0409 |
0.925 | 18500 | 0.0395 |
0.95 | 19000 | 0.0381 |
0.975 | 19500 | 0.0374 |
1.0 | 20000 | 0.0367 |
1.025 | 20500 | 0.0169 |
1.05 | 21000 | 0.0163 |
1.075 | 21500 | 0.016 |
1.1 | 22000 | 0.016 |
1.125 | 22500 | 0.0156 |
1.15 | 23000 | 0.0161 |
1.175 | 23500 | 0.0157 |
1.2 | 24000 | 0.0156 |
1.225 | 24500 | 0.0167 |
1.25 | 25000 | 0.0162 |
1.275 | 25500 | 0.0169 |
1.3 | 26000 | 0.0162 |
1.325 | 26500 | 0.0162 |
1.35 | 27000 | 0.0152 |
1.375 | 27500 | 0.0161 |
1.4 | 28000 | 0.0153 |
1.425 | 28500 | 0.0152 |
1.45 | 29000 | 0.0151 |
1.475 | 29500 | 0.0152 |
1.5 | 30000 | 0.0138 |
1.525 | 30500 | 0.0146 |
1.55 | 31000 | 0.0152 |
1.575 | 31500 | 0.014 |
1.6 | 32000 | 0.0143 |
1.625 | 32500 | 0.0146 |
1.65 | 33000 | 0.0141 |
1.675 | 33500 | 0.0134 |
1.7 | 34000 | 0.0137 |
1.725 | 34500 | 0.0135 |
1.75 | 35000 | 0.014 |
1.775 | 35500 | 0.0132 |
1.8 | 36000 | 0.0129 |
1.825 | 36500 | 0.0129 |
1.85 | 37000 | 0.013 |
1.875 | 37500 | 0.0127 |
1.9 | 38000 | 0.0129 |
1.925 | 38500 | 0.0127 |
1.95 | 39000 | 0.0127 |
1.975 | 39500 | 0.0114 |
2.0 | 40000 | 0.0125 |
2.025 | 40500 | 0.0055 |
2.05 | 41000 | 0.0055 |
2.075 | 41500 | 0.0054 |
2.1 | 42000 | 0.0052 |
2.125 | 42500 | 0.005 |
2.15 | 43000 | 0.0051 |
2.175 | 43500 | 0.0054 |
2.2 | 44000 | 0.005 |
2.225 | 44500 | 0.0049 |
2.25 | 45000 | 0.005 |
2.275 | 45500 | 0.0047 |
2.3 | 46000 | 0.0051 |
2.325 | 46500 | 0.0051 |
2.35 | 47000 | 0.0049 |
2.375 | 47500 | 0.0044 |
2.4 | 48000 | 0.0047 |
2.425 | 48500 | 0.0047 |
2.45 | 49000 | 0.0047 |
2.475 | 49500 | 0.0048 |
2.5 | 50000 | 0.0046 |
2.525 | 50500 | 0.0046 |
2.55 | 51000 | 0.0047 |
2.575 | 51500 | 0.0047 |
2.6 | 52000 | 0.0044 |
2.625 | 52500 | 0.0046 |
2.65 | 53000 | 0.0044 |
2.675 | 53500 | 0.0042 |
2.7 | 54000 | 0.0045 |
2.725 | 54500 | 0.0041 |
2.75 | 55000 | 0.0043 |
2.775 | 55500 | 0.0041 |
2.8 | 56000 | 0.0045 |
2.825 | 56500 | 0.0042 |
2.85 | 57000 | 0.0041 |
2.875 | 57500 | 0.004 |
2.9 | 58000 | 0.0041 |
2.925 | 58500 | 0.0039 |
2.95 | 59000 | 0.0041 |
2.975 | 59500 | 0.0041 |
3.0 | 60000 | 0.004 |
Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.1.0
- Transformers: 4.53.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.8.1
- Datasets: 3.6.0
- Tokenizers: 0.21.2
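To approximate this environment, a pinned install along these lines should work (versions copied from the list above; the torch build used here was the CUDA 12.4 variant, 2.6.0+cu124):

pip install "sentence-transformers==4.1.0" "transformers==4.53.2" "torch==2.6.0" "accelerate==1.8.1" "datasets==3.6.0" "tokenizers==0.21.2"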
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}