SentenceTransformer based on redis/langcache-embed-v1

This is a sentence-transformers model finetuned from redis/langcache-embed-v1 on the triplet dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: redis/langcache-embed-v1
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • triplet

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
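
A quick sanity check of these properties after loading the model (a minimal sketch; both accessors are part of the public Sentence Transformers API):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("waris-gill/langcache-embed-v2-local")
print(model.max_seq_length)                      # 8192
print(model.get_sentence_embedding_dimension())  # 768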

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("waris-gill/langcache-embed-v2-local")
# Run inference
sentences = [
    'What are some examples of crimes understood as a moral turpitude?',
    'What are some examples of crimes of moral turpitude?',
    'What are some examples of crimes understood as a legal aptitude?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
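
Since the card lists semantic search among the intended uses, here is a minimal retrieval-style sketch built only on the encode and similarity calls shown above; the 0.8 threshold is an arbitrary illustration, not a value recommended by the model authors:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("waris-gill/langcache-embed-v2-local")

query = "What are some examples of crimes of moral turpitude?"
candidates = [
    "What are some examples of crimes understood as a moral turpitude?",
    "What are some examples of crimes understood as a legal aptitude?",
]

# Embed the query and the candidates, then score every pair with the
# model's similarity function (cosine similarity for this model).
query_embedding = model.encode([query])
candidate_embeddings = model.encode(candidates)
scores = model.similarity(query_embedding, candidate_embeddings)  # shape [1, 2]

# Keep the best match only if it clears the (illustrative) threshold.
best = scores[0].argmax().item()
if scores[0, best] > 0.8:
    print(f"Closest paraphrase: {candidates[best]}")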

Training Details

Training Dataset

triplet

  • Dataset: triplet
  • Size: 36,864 training samples
  • Columns: anchor, positive, negative_1, negative_2, and negative_3
  • Approximate statistics based on the first 1000 samples (all five columns are strings):
    • anchor: min 6, mean 13.88, max 54 tokens
    • positive: min 5, mean 13.89, max 45 tokens
    • negative_1: min 6, mean 18.68, max 118 tokens
    • negative_2: min 5, mean 19.26, max 117 tokens
    • negative_3: min 6, mean 18.07, max 108 tokens
  • Samples:
    • anchor: Is life really what I make of it?
      positive: Life is what you make it?
      negative_1: Is life hardly what I take of it?
      negative_2: Life is not entirely what I make of it.
      negative_3: Is life not what I make of it?
    • anchor: When you visit a website, can a person running the website see your IP address?
      positive: Does every website I visit knows my public ip address?
      negative_1: When you avoid a website, can a person hiding the website see your MAC address?
      negative_2: When you send an email, can the recipient see your physical location?
      negative_3: When you visit a website, a person running the website cannot see your IP address.
    • anchor: What are some cool features about iOS 10?
      positive: What are the best new features of iOS 10?
      negative_1: iOS 10 received criticism for its initial bugs and performance issues, and some users found the redesigned apps less intuitive compared to previous versions.
      negative_2: What are the drawbacks of using Android 14?
      negative_3: iOS 10 was widely criticized for its bugs, removal of beloved features, and generally being a downgrade from previous versions.
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "CachedMultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
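
The configuration above can be reconstructed roughly as follows (a sketch of the loss wiring only, assuming the training script used the standard Sentence Transformers v3+ API; dataset loading and the trainer are omitted):

from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("redis/langcache-embed-v1")

# Wrap the cached in-batch-negatives loss in MatryoshkaLoss so that the
# first 768, 512, 256, 128, and 64 dimensions of each embedding are all
# trained as usable sub-embeddings, with equal weight (the default).
inner_loss = losses.CachedMultipleNegativesRankingLoss(model)
loss = losses.MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
)

A practical consequence of this loss: at inference time the embeddings can be truncated to any of the trained sizes (for example via the truncate_dim argument of the SentenceTransformer constructor) to trade a little accuracy for lower memory and faster search.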
    

Evaluation Dataset

triplet

  • Dataset: triplet
  • Size: 7,267 evaluation samples
  • Columns: anchor, positive, negative_1, negative_2, and negative_3
  • Approximate statistics based on the first 1000 samples (all five columns are strings):
    • anchor: min 6, mean 13.62, max 46 tokens
    • positive: min 6, mean 13.58, max 39 tokens
    • negative_1: min 6, mean 18.32, max 107 tokens
    • negative_2: min 6, mean 18.1, max 174 tokens
    • negative_3: min 6, mean 18.26, max 172 tokens
  • Samples:
    • anchor: How do I make friends in office?
      positive: How can I make friends in office?
      negative_1: How do I lose friends in office?
      negative_2: How do I lose enemies in office?
      negative_3: I already have plenty of friends at work.
    • anchor: Is it good to do MBA after Engineering?
      positive: Is it necessary to do MBA after Engineering?
      negative_1: Is learning to code essential for a successful marketing career?
      negative_2: Not necessarily; an MBA isn't always the best next step after engineering – practical experience or specialized master's degrees can be more valuable depending on career goals.
      negative_3: Is it bad to do MBA after Engineering?
    • anchor: How I should fix my computer while it is showing no boot device found?
      positive: How do I fix the "Boot device not found" problem?
      negative_1: My computer is booting normally and does not have any issues with the boot device.
      negative_2: I should not fix my computer while it is showing no boot device found.
      negative_3: When will I break my phone while it is showing full boot device found?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "CachedMultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 2048
  • per_device_eval_batch_size: 1024
  • learning_rate: 1e-05
  • num_train_epochs: 1
  • lr_scheduler_type: constant
  • warmup_steps: 10
  • gradient_checkpointing: True
  • torch_compile: True
  • torch_compile_backend: inductor
  • batch_sampler: no_duplicates
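
Mapped onto the Sentence Transformers v3+ training API, these non-default values correspond to roughly the following arguments (a sketch; output_dir is a placeholder, not the path actually used):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="langcache-embed-v2-local",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=1024,
    learning_rate=1e-5,
    num_train_epochs=1,
    lr_scheduler_type="constant",
    warmup_steps=10,
    gradient_checkpointing=True,
    torch_compile=True,
    torch_compile_backend="inductor",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)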

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 2048
  • per_device_eval_batch_size: 1024
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 1e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: constant
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 10
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: True
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: True
  • torch_compile_backend: inductor
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch  | Step | Training Loss | triplet loss
0.0556 |    1 | 6.4636        | -
0.1111 |    2 | 6.1076        | -
0.1667 |    3 | 5.8323        | -
0.2222 |    4 | 5.6861        | -
0.2778 |    5 | 5.5694        | -
0.3333 |    6 | 5.2121        | -
0.3889 |    7 | 5.0695        | -
0.4444 |    8 | 4.8100        | -
0.5000 |    9 | 4.6698        | -
0.5556 |   10 | 4.3546        | 1.2224
0.6111 |   11 | 4.1922        | -
0.6667 |   12 | 4.1434        | -
0.7222 |   13 | 3.9918        | -
0.7778 |   14 | 3.7020        | -
0.8333 |   15 | 3.6501        | -
0.8889 |   16 | 3.6641        | -
0.9444 |   17 | 3.3196        | -
1.0000 |   18 | 2.7108        | -

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 4.1.0
  • Transformers: 4.51.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.6.0
  • Datasets: 3.5.1
  • Tokenizers: 0.21.1
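
To reproduce this environment, the listed versions can be pinned directly (assuming pip and a CUDA 12.4 build of PyTorch, per the +cu124 tag):

pip install sentence-transformers==4.1.0 transformers==4.51.3 torch==2.6.0 accelerate==1.6.0 datasets==3.5.1 tokenizers==0.21.1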

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

CachedMultipleNegativesRankingLoss

@misc{gao2021scaling,
    title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
    author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
    year={2021},
    eprint={2101.06983},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}