# SentenceTransformer based on redis/langcache-embed-v1
This is a sentence-transformers model finetuned from redis/langcache-embed-v1 on the triplet dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [redis/langcache-embed-v1](https://huggingface.co/redis/langcache-embed-v1)
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
  - triplet
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
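For reference, this two-module stack (a ModernBERT encoder followed by CLS-token pooling) can be assembled by hand from sentence-transformers building blocks. The snippet below is only an illustrative sketch; loading the finetuned checkpoint directly, as shown under Usage, is the normal path:

```python
from sentence_transformers import SentenceTransformer, models

# Transformer module wrapping the ModernBERT backbone (8192-token context)
word_embedding = models.Transformer("redis/langcache-embed-v1", max_seq_length=8192)

# CLS-token pooling over the 768-dimensional token embeddings,
# matching pooling_mode_cls_token=True in the config above
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(),  # 768
    pooling_mode="cls",
)

model = SentenceTransformer(modules=[word_embedding, pooling])
```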
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("waris-gill/langcache-embed-v2-local")
# Run inference
sentences = [
    'What are some examples of crimes understood as a moral turpitude?',
    'What are some examples of crimes of moral turpitude?',
    'What are some examples of crimes understood as a legal aptitude?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
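As the langcache name suggests, a natural application is semantic caching, where a new query is served from cache if it is close enough to a previously answered one. Continuing from the snippet above, here is a minimal sketch of that check; the 0.9 cutoff is an illustrative assumption, not a tuned threshold:

```python
# Hedged sketch of a cache-hit test; the threshold is an assumption
query = "What are some examples of crimes of moral turpitude?"
cached = "What are some examples of crimes understood as a moral turpitude?"

query_emb, cached_emb = model.encode([query, cached])
score = model.similarity(query_emb, cached_emb).item()

CACHE_HIT_THRESHOLD = 0.9  # illustrative value, tune on your own data
if score >= CACHE_HIT_THRESHOLD:
    print(f"cache hit (similarity={score:.3f})")
else:
    print(f"cache miss (similarity={score:.3f})")
```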
## Training Details
### Training Dataset
#### triplet
- **Dataset:** triplet
- **Size:** 36,864 training samples
- **Columns:** `anchor`, `positive`, `negative_1`, `negative_2`, and `negative_3`
- **Approximate statistics based on the first 1000 samples:**

  |         | anchor | positive | negative_1 | negative_2 | negative_3 |
  |:--------|:-------|:---------|:-----------|:-----------|:-----------|
  | type    | string | string | string | string | string |
  | details | min: 6 tokens<br>mean: 13.88 tokens<br>max: 54 tokens | min: 5 tokens<br>mean: 13.89 tokens<br>max: 45 tokens | min: 6 tokens<br>mean: 18.68 tokens<br>max: 118 tokens | min: 5 tokens<br>mean: 19.26 tokens<br>max: 117 tokens | min: 6 tokens<br>mean: 18.07 tokens<br>max: 108 tokens |
- **Samples:**

  | anchor | positive | negative_1 | negative_2 | negative_3 |
  |:-------|:---------|:-----------|:-----------|:-----------|
  | Is life really what I make of it? | Life is what you make it? | Is life hardly what I take of it? | Life is not entirely what I make of it. | Is life not what I make of it? |
  | When you visit a website, can a person running the website see your IP address? | Does every website I visit knows my public ip address? | When you avoid a website, can a person hiding the website see your MAC address? | When you send an email, can the recipient see your physical location? | When you visit a website, a person running the website cannot see your IP address. |
  | What are some cool features about iOS 10? | What are the best new features of iOS 10? | iOS 10 received criticism for its initial bugs and performance issues, and some users found the redesigned apps less intuitive compared to previous versions. | What are the drawbacks of using Android 14? | iOS 10 was widely criticized for its bugs, removal of beloved features, and generally being a downgrade from previous versions. |
- **Loss:** `MatryoshkaLoss` with these parameters:

  ```json
  {
      "loss": "CachedMultipleNegativesRankingLoss",
      "matryoshka_dims": [768, 512, 256, 128, 64],
      "matryoshka_weights": [1, 1, 1, 1, 1],
      "n_dims_per_step": -1
  }
  ```
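Because MatryoshkaLoss optimizes the same embedding at all five nested sizes, the 768-dimensional output can be truncated to any of the listed dimensions at inference time with only a modest quality trade-off. A minimal sketch using the `truncate_dim` argument of `SentenceTransformer` (available in recent sentence-transformers releases):

```python
from sentence_transformers import SentenceTransformer

# Truncate embeddings to 256 dims, one of the sizes the loss was trained with
model = SentenceTransformer("waris-gill/langcache-embed-v2-local", truncate_dim=256)

embeddings = model.encode([
    "What are some examples of crimes of moral turpitude?",
    "What are some examples of crimes understood as a moral turpitude?",
])
print(embeddings.shape)
# (2, 256)
```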
### Evaluation Dataset
#### triplet
- **Dataset:** triplet
- **Size:** 7,267 evaluation samples
- **Columns:** `anchor`, `positive`, `negative_1`, `negative_2`, and `negative_3`
- **Approximate statistics based on the first 1000 samples:**

  |         | anchor | positive | negative_1 | negative_2 | negative_3 |
  |:--------|:-------|:---------|:-----------|:-----------|:-----------|
  | type    | string | string | string | string | string |
  | details | min: 6 tokens<br>mean: 13.62 tokens<br>max: 46 tokens | min: 6 tokens<br>mean: 13.58 tokens<br>max: 39 tokens | min: 6 tokens<br>mean: 18.32 tokens<br>max: 107 tokens | min: 6 tokens<br>mean: 18.1 tokens<br>max: 174 tokens | min: 6 tokens<br>mean: 18.26 tokens<br>max: 172 tokens |
- **Samples:**

  | anchor | positive | negative_1 | negative_2 | negative_3 |
  |:-------|:---------|:-----------|:-----------|:-----------|
  | How do I make friends in office? | How can I make friends in office? | How do I lose friends in office? | How do I lose enemies in office? | I already have plenty of friends at work. |
  | Is it good to do MBA after Engineering? | Is it necessary to do MBA after Engineering? | Is learning to code essential for a successful marketing career? | Not necessarily; an MBA isn't always the best next step after engineering; practical experience or specialized master's degrees can be more valuable depending on career goals. | Is it bad to do MBA after Engineering? |
  | How I should fix my computer while it is showing no boot device found? | How do I fix the "Boot device not found" problem? | My computer is booting normally and does not have any issues with the boot device. | I should not fix my computer while it is showing no boot device found. | When will I break my phone while it is showing full boot device found? |
- **Loss:** `MatryoshkaLoss` with these parameters:

  ```json
  {
      "loss": "CachedMultipleNegativesRankingLoss",
      "matryoshka_dims": [768, 512, 256, 128, 64],
      "matryoshka_weights": [1, 1, 1, 1, 1],
      "n_dims_per_step": -1
  }
  ```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 1024
- `learning_rate`: 1e-05
- `num_train_epochs`: 1
- `lr_scheduler_type`: constant
- `warmup_steps`: 10
- `gradient_checkpointing`: True
- `torch_compile`: True
- `torch_compile_backend`: inductor
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 1024
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: constant
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 10
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: True
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: True
- `torch_compile_backend`: inductor
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>
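Taken together, these settings correspond roughly to the hedged training sketch below. The dataset file name is hypothetical (the triplet data behind this card is not published), and `eval_steps=10` is inferred from the single evaluation in the logs that follow:

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss, MatryoshkaLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("redis/langcache-embed-v1")

# Hypothetical file: the actual triplet dataset is not published
dataset = load_dataset("json", data_files="triplet.jsonl", split="train")
split = dataset.train_test_split(test_size=7267, seed=42)  # 7,267 eval samples, as above

# The cached loss chunks in-batch negatives so the 2048-sample batches fit in
# memory; MatryoshkaLoss then applies it at five nested embedding sizes
inner_loss = CachedMultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="langcache-embed-v2-local",
    num_train_epochs=1,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=1024,
    learning_rate=1e-5,
    lr_scheduler_type="constant",
    warmup_steps=10,
    gradient_checkpointing=True,
    torch_compile=True,
    torch_compile_backend="inductor",
    eval_strategy="steps",
    eval_steps=10,  # assumption: matches the single eval at step 10 in the logs
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate in-batch negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=split["train"],
    eval_dataset=split["test"],
    loss=loss,
)
trainer.train()
```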
### Training Logs
| Epoch  | Step | Training Loss | triplet loss |
|:------:|:----:|:-------------:|:------------:|
| 0.0556 | 1    | 6.4636        | -            |
| 0.1111 | 2    | 6.1076        | -            |
| 0.1667 | 3    | 5.8323        | -            |
| 0.2222 | 4    | 5.6861        | -            |
| 0.2778 | 5    | 5.5694        | -            |
| 0.3333 | 6    | 5.2121        | -            |
| 0.3889 | 7    | 5.0695        | -            |
| 0.4444 | 8    | 4.81          | -            |
| 0.5    | 9    | 4.6698        | -            |
| 0.5556 | 10   | 4.3546        | 1.2224       |
| 0.6111 | 11   | 4.1922        | -            |
| 0.6667 | 12   | 4.1434        | -            |
| 0.7222 | 13   | 3.9918        | -            |
| 0.7778 | 14   | 3.702         | -            |
| 0.8333 | 15   | 3.6501        | -            |
| 0.8889 | 16   | 3.6641        | -            |
| 0.9444 | 17   | 3.3196        | -            |
| 1.0    | 18   | 2.7108        | -            |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
    title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
    author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
    year={2021},
    eprint={2101.06983},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```