ColBERT based on nreimers/MiniLM-L6-H384-uncased

This is a ColBERT model trained with the PyLate library on top of Sentence Transformers, finetuned from nreimers/MiniLM-L6-H384-uncased. Rather than producing a single sentence vector, it maps every token of a sentence or paragraph to a 128-dimensional dense vector and scores query-document pairs by late interaction (MaxSim), making it suited to semantic search and reranking.

Model Details

Model Description

  • Model Type: ColBERT (late-interaction model trained with PyLate)
  • Base model: nreimers/MiniLM-L6-H384-uncased
  • Maximum Sequence Length: 31 tokens
  • Output Dimensionality: 128 dimensions per token
  • Similarity Function: MaxSim (late interaction over cosine similarity between token embeddings)
  • Model Size: 22.7M parameters (F32, Safetensors)

Full Model Architecture

ColBERT(
  (0): Transformer({'max_seq_length': 31, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Dense({'in_features': 384, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
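
Concretely, the stack runs each input through the 6-layer MiniLM encoder (384-dimensional hidden states) and projects every token's hidden state down to 128 dimensions with a bias-free linear layer. Below is a minimal sketch of the equivalent computation, assuming the standard Hugging Face transformers API; the freshly initialized projection is illustrative, since the trained weights live in this checkpoint's Dense module.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nreimers/MiniLM-L6-H384-uncased")
encoder = AutoModel.from_pretrained("nreimers/MiniLM-L6-H384-uncased")
# Illustrative stand-in for the Dense module above; the real 384 -> 128
# weights are stored in this checkpoint rather than initialized here.
projection = torch.nn.Linear(384, 128, bias=False)

inputs = tokenizer("are emg pickups any good?", return_tensors="pt")
token_states = encoder(**inputs).last_hidden_state  # (1, seq_len, 384)
token_embeddings = projection(token_states)         # (1, seq_len, 128), one vector per token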

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("ayushexel/colbert-MiniLM-L6-H384-1-epoch-gooaq-1995000")
# Run inference
sentences = [
    'are emg pickups any good?',
    "EMGs are a one trick pony, and only sound good for high gain applications. Sort of, they definitely aren't as flexible as most passive options, but most metal oriented passive pickups have the same issue. A lot of guitarists forget that EMG makes more pickups than just the 81/85 set.",
    "Among guitar and bass accessories, the company sells active humbucker pickups, such as the EMG 81, the EMG 85, the EMG 60, and the EMG 89. They also produce passive pickups such as the EMG-HZ Series, which include SRO-OC1's and SC Sets.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 128]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
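
Note that the snippet above comes from the dense-model card template: a late-interaction model encodes each text as a bag of per-token vectors rather than a single sentence vector, so the [3, 128] shape may not hold here. Since this checkpoint was trained with PyLate (see the pylate.losses.contrastive.Contrastive loss below), the more natural path is the PyLate loader. A minimal sketch, assuming the PyLate API (pip install pylate); the document ids are illustrative:

from pylate import models, rank

model = models.ColBERT(
    model_name_or_path="ayushexel/colbert-MiniLM-L6-H384-1-epoch-gooaq-1995000"
)

# Queries and documents are encoded separately; is_query switches the ColBERT prefix.
queries_embeddings = model.encode(["are emg pickups any good?"], is_query=True)
documents_embeddings = model.encode(
    [[
        "EMGs are a one trick pony, and only sound good for high gain applications.",
        "Among guitar and bass accessories, the company sells active humbucker pickups.",
    ]],
    is_query=False,
)

# Rerank the candidate documents for each query by their late-interaction scores.
reranked = rank.rerank(
    documents_ids=[["doc-1", "doc-2"]],
    queries_embeddings=queries_embeddings,
    documents_embeddings=documents_embeddings,
)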

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,893,949 training samples
  • Columns: question, answer, and negative
  • Approximate statistics based on the first 1000 samples:
    • question: string (min: 9 tokens, mean: 13.01 tokens, max: 27 tokens)
    • answer: string (min: 16 tokens, mean: 31.78 tokens, max: 32 tokens)
    • negative: string (min: 14 tokens, mean: 31.66 tokens, max: 32 tokens)
  • Samples:
    • question: what is the relationship between humility and thankfulness?
      answer: how gratitude can influence humility and vice versa. Humility is characterized by low self-focus, secure sense of self, and increased valuation of others. Gratitude is marked by a sense that one has benefited from the actions of another.
      negative: -hum-, root. -hum- comes from Latin, where it has the meaning "ground." This meaning is found in such words as: exhume, humble, humiliate, humility, humus, posthumous.
    • question: what is the difference between usb a b c?
      answer: The USB-A has a much larger physical connector than the Type C, Type C is around the same size as a micro-USB connector. Unlike, Type A, you won't need to try and insert it, flip it over and then flip it over once more just to find the right orientation when trying to make a connection.
      negative: First the transfer rates: USB 2.0 offers transfer rates of 480 Mbps and USB 3.0 offers transfer rates of 4.8 Gbps - that's 10 times faster. ... USB 2.0 provided up to 500 mA whereas USB 3.0 provides up to 900 mA, allowing power hungry devices to now be bus powered.
    • question: how hyaluronic acid is made?
      answer: Hyaluronic acid is a substance that is naturally present in the human body. It is found in the highest concentrations in fluids in the eyes and joints. The hyaluronic acid that is used as medicine is extracted from rooster combs or made by bacteria in the laboratory.
      negative: Hyaluronic acid helps your skin hang on to the moisture. 2. ... Hyaluronic acid by itself is non-comedogenic (doesn't clog pores), but you should be careful when choosing a hyaluronic acid serum that the ingredient list doesn't contain any sneaky pore-clogging ingredients you're not expecting.
  • Loss: pylate.losses.contrastive.Contrastive
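
For reference, the score this contrastive loss compares across positives and negatives is the standard ColBERT MaxSim: each query token is matched with its most similar document token and the per-token maxima are summed. A hedged sketch of that scoring rule, not PyLate's exact implementation:

import torch

def maxsim_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    # query_emb: (num_query_tokens, 128); doc_emb: (num_doc_tokens, 128).
    # Both are assumed L2-normalized, so the dot product is a cosine similarity.
    sim = query_emb @ doc_emb.T          # (num_query_tokens, num_doc_tokens)
    return sim.max(dim=1).values.sum()   # best document token per query token, summed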

Evaluation Dataset

Unnamed Dataset

  • Size: 5,000 evaluation samples
  • Columns: question, answer, and negative_1
  • Approximate statistics based on the first 1000 samples:
    • question: string (min: 9 tokens, mean: 12.96 tokens, max: 22 tokens)
    • answer: string (min: 19 tokens, mean: 31.7 tokens, max: 32 tokens)
    • negative_1: string (min: 14 tokens, mean: 31.43 tokens, max: 32 tokens)
  • Samples:
    • question: are tefal ingenio pans suitable for induction hobs?
      answer: Tefal Ingenio is a revolutionary concept that brings a whole new take on versatility. ... The frying pans also feature Tefal's iconic Thermo-Spot which lets you know when the pan has reached optimal cooking temperature. The Ingenio Induction range is compatible with all hobs and is also dishwasher safe.
      negative_1: Tefal Ingenio is a revolutionary concept that brings a whole new take on versatility. ... The frying pans also feature Tefal's iconic Thermo-Spot which lets you know when the pan has reached optimal cooking temperature. The Ingenio Induction range is compatible with all hobs and is also dishwasher safe.
    • question: how many continuing education hours is acls?
      answer: The ACLS, PALS, and NRP certification courses are approved for 8 CEUs/CMEs, and recertification courses are approved for 4 CEUs/CMEs. The BLS certification course is approved for 4 CEUs/CMEs and the recertification course is approved for 2 CEUs/CMEs. For more information, please visit our Accreditation page.
      negative_1: The foremost difference between the two is their advancement level. Essentially, ACLS is a sophisticated and more advanced course and builds upon the major fundamentals developed during BLS. The main purpose of BLS and ACLS certification are well explained in this article.
    • question: what are the health benefits of drinking peppermint tea?
      answer: ['Makes you Stress Free. When it comes to relieving stress and anxiety, peppermint tea is one of the best allies. ... ', 'Sleep-Friendly. ... ', 'Aids in Weight Loss. ... ', 'Cure for an Upset Stomach. ... ', 'Improves Digestion. ... ', 'Boosts Immune System. ... ', 'Fights Bad Breath.']
      negative_1: Peppermint tea is a popular herbal tea that is naturally calorie- and caffeine-free. Some research has suggested that the oils in peppermint may have a number of other health benefits, such as fresher breath, better digestion, and reduced pain from headaches. Peppermint tea also has antibacterial properties.
  • Loss: pylate.losses.contrastive.Contrastive
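
In sketch, the Contrastive objective is a cross-entropy over such MaxSim scores, with each question's answer as the positive and the negative_1 passage (typically alongside in-batch negatives) as the alternatives. This illustrates the general recipe rather than PyLate's exact code:

import torch
import torch.nn.functional as F

def contrastive_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    # pos_scores: (batch,) MaxSim score of each question against its answer.
    # neg_scores: (batch, num_negatives) MaxSim scores against negative passages.
    logits = torch.cat([pos_scores.unsqueeze(1), neg_scores], dim=1)
    labels = torch.zeros(logits.size(0), dtype=torch.long)  # the positive sits at index 0
    return F.cross_entropy(logits, labels)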

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 256
  • learning_rate: 3e-06
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • seed: 12
  • bf16: True
  • dataloader_num_workers: 12
  • load_best_model_at_end: True
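
For reproduction, these values map directly onto the Sentence Transformers training arguments. A hedged sketch assuming the standard trainer API; output_dir is illustrative:

from sentence_transformers.training_args import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # illustrative
    eval_strategy="steps",
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    learning_rate=3e-6,
    num_train_epochs=1,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,
    dataloader_num_workers=12,
    load_best_model_at_end=True,
)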

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 256
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 3e-06
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 12
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 12
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss
0.0001 1 10.8061
0.0270 200 8.9391
0.0541 400 5.1795
0.0811 600 2.3951
0.1081 800 1.6927
0.1352 1000 1.404
0.1622 1200 1.2496
0.1892 1400 1.1613
0.2162 1600 1.0843
0.2433 1800 1.0427
0.2703 2000 1.0005
0.2973 2200 0.9695
0.3244 2400 0.9325
0.3514 2600 0.9122
0.3784 2800 0.8832
0.4055 3000 0.8689
0.4325 3200 0.8626
0.4595 3400 0.8452
0.4866 3600 0.8329
0.5136 3800 0.8132
0.5406 4000 0.8111
0.5676 4200 0.7952
0.5947 4400 0.7892
0.6217 4600 0.7772
0.6487 4800 0.7793
0.6758 5000 0.7705
0.7028 5200 0.7692
0.7298 5400 0.7625
0.7569 5600 0.7595
0.7839 5800 0.7405
0.8109 6000 0.7513
0.8380 6200 0.7396
0.8650 6400 0.7312
0.8920 6600 0.7325
0.9190 6800 0.7371
0.9461 7000 0.7422
0.9731 7200 0.7296

Framework Versions

  • Python: 3.11.0
  • Sentence Transformers: 4.0.1
  • Transformers: 4.50.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.5.2
  • Datasets: 3.5.0
  • Tokenizers: 0.21.1
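
To approximate this environment, pinning the listed versions should suffice; the PyTorch 2.6.0+cu124 build may require the matching CUDA wheel index for your platform:

pip install sentence-transformers==4.0.1 transformers==4.50.3 torch==2.6.0 accelerate==1.5.2 datasets==3.5.0 tokenizers==0.21.1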

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}