---
tags:
  - ColBERT
  - PyLate
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:850000
  - loss:Contrastive
base_model: jhu-clsp/ettin-encoder-17m
pipeline_tag: sentence-similarity
library_name: PyLate
metrics:
  - accuracy
model-index:
  - name: PyLate model based on jhu-clsp/ettin-encoder-17m
    results:
      - task:
          type: col-berttriplet
          name: Col BERTTriplet
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: accuracy
            value: 0.7892000079154968
            name: Accuracy
---

PyLate model based on jhu-clsp/ettin-encoder-17m

This is a PyLate model finetuned from jhu-clsp/ettin-encoder-17m. It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.
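
Concretely, MaxSim matches every query token embedding against its most similar document token embedding and sums these maxima into a single relevance score. A minimal sketch of the operator, with random tensors standing in for real embeddings (shapes follow the model's 32-token queries, 256-token documents, and 128-dimensional outputs):

import torch

# Stand-in embeddings: 32 query tokens and 256 document tokens, 128 dims each
query_embeddings = torch.randn(32, 128)
document_embeddings = torch.randn(256, 128)

# Token-level similarity matrix of shape (query_tokens, document_tokens)
similarities = query_embeddings @ document_embeddings.T

# MaxSim: take the best-matching document token for each query token, then sum
score = similarities.max(dim=1).values.sum()
print(score)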

Model Details

Model Description

  • Model Type: PyLate model
  • Base model: jhu-clsp/ettin-encoder-17m
  • Document Length: 256 tokens
  • Query Length: 32 tokens
  • Output Dimensionality: 128 dimensions
  • Similarity Function: MaxSim
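
These settings mean every encoded text becomes a matrix with one 128-dimensional row per token, with documents truncated to 256 tokens and queries to 32. A quick sanity check of the shapes (a sketch; the exact container type of the returned embeddings may vary by PyLate version):

from pylate import models

model = models.ColBERT(model_name_or_path="yosefw/colbert-ettin-17m")

embeddings = model.encode(["a short example document"], is_query=False)
print(embeddings[0].shape)  # (num_tokens, 128)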

Model Sources

  • Documentation: PyLate Documentation (https://lightonai.github.io/pylate/)
  • Repository: PyLate on GitHub (https://github.com/lightonai/pylate)

Full Model Architecture

ColBERT(
  (0): Transformer({'max_seq_length': 255, 'do_lower_case': False}) with Transformer model: ModernBertModel 
  (1): Dense({'in_features': 256, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)

Usage

First install the PyLate library:

pip install -U pylate

Retrieval

PyLate provides a streamlined interface to index and retrieve documents using ColBERT models. The index leverages the Voyager HNSW index to efficiently handle document embeddings and enable fast retrieval.

Indexing documents

First, load the ColBERT model and initialize the Voyager index, then encode and index your documents:

from pylate import indexes, models, retrieve

# Step 1: Load the ColBERT model
model = models.ColBERT(
    model_name_or_path="yosefw/colbert-ettin-17m",
)

# Step 2: Initialize the Voyager index
index = indexes.Voyager(
    index_folder="pylate-index",
    index_name="index",
    override=True,  # This overwrites the existing index if any
)

# Step 3: Encode the documents
documents_ids = ["1", "2", "3"]
documents = ["document 1 text", "document 2 text", "document 3 text"]

documents_embeddings = model.encode(
    documents,
    batch_size=32,
    is_query=False,  # Ensure that it is set to False to indicate that these are documents, not queries
    show_progress_bar=True,
)

# Step 4: Add document embeddings to the index by providing embeddings and corresponding ids
index.add_documents(
    documents_ids=documents_ids,
    documents_embeddings=documents_embeddings,
)

Note that you do not have to recreate the index and encode the documents every time. Once you have created an index and added the documents, you can re-use the index later by loading it:

# To load an index, simply instantiate it with the correct folder/name and without overriding it
index = indexes.Voyager(
    index_folder="pylate-index",
    index_name="index",
)

Retrieving top-k documents for queries

Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries. To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries, and then retrieve the top-k documents to get the ids and relevance scores of the top matches:

# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)

# Step 2: Encode the queries
queries_embeddings = model.encode(
    ["query for document 3", "query for document 1"],
    batch_size=32,
    is_query=True,  # Ensure that it is set to True to indicate that these are queries, not documents
    show_progress_bar=True,
)

# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
    queries_embeddings=queries_embeddings,
    k=10,  # Retrieve the top 10 matches for each query
)
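
The returned scores hold one ranked hit list per query; assuming each hit is a dict with "id" and "score" keys (as in recent PyLate versions), the results can be read back like this:

# Assumes each hit is a dict with "id" and "score" keys (recent PyLate versions)
for query_index, query_hits in enumerate(scores):
    for hit in query_hits:
        print(query_index, hit["id"], hit["score"])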

Reranking

If you only want to use the ColBERT model to rerank the output of a first-stage retrieval pipeline without building an index, you can simply use the rank function and pass it the queries and documents to rerank:

from pylate import rank, models

queries = [
    "query A",
    "query B",
]

documents = [
    ["document A", "document B"],
    ["document 1", "document C", "document B"],
]

documents_ids = [
    [1, 2],
    [1, 3, 2],
]

model = models.ColBERT(
    model_name_or_path="yosefw/colbert-ettin-17m",
)

queries_embeddings = model.encode(
    queries,
    is_query=True,
)

documents_embeddings = model.encode(
    documents,
    is_query=False,
)

reranked_documents = rank.rerank(
    documents_ids=documents_ids,
    queries_embeddings=queries_embeddings,
    documents_embeddings=documents_embeddings,
)
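
The reranked output mirrors the retrieval results: one list per query, with the candidate document ids re-ordered by MaxSim score. A hedged sketch for inspecting it, again assuming each entry is a dict with "id" and "score" keys:

# Each entry corresponds to one query, with candidates re-ordered by MaxSim score
for query, hits in zip(queries, reranked_documents):
    print(query)
    for hit in hits:
        print("   ", hit["id"], hit["score"])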

Evaluation

Metrics

ColBERTTriplet

  • Evaluated with pylate.evaluation.colbert_triplet.ColBERTTripletEvaluator

| Metric   | Value  |
|:---------|:-------|
| accuracy | 0.7892 |
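
Accuracy here is the fraction of evaluation triplets in which the positive document outscores the negative under MaxSim. A hedged usage sketch, assuming the evaluator follows the sentence-transformers TripletEvaluator convention of parallel anchor/positive/negative lists (the texts below are hypothetical):

from pylate import models
from pylate.evaluation.colbert_triplet import ColBERTTripletEvaluator

model = models.ColBERT(model_name_or_path="yosefw/colbert-ettin-17m")

# Hypothetical toy triplets, for illustration only
evaluator = ColBERTTripletEvaluator(
    anchors=["are freight charges taxable in arizona"],
    positives=["Shipping is not taxable if separately stated ..."],
    negatives=["The average gas price in Hawaii is ..."],
)
print(evaluator(model))  # e.g. {"accuracy": ...}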

Training Details

Training Dataset

Unnamed Dataset

  • Size: 850,000 training samples
  • Columns: query, positive, negative_1, negative_2, and negative_3
  • Approximate statistics based on the first 1000 samples:
    |      | query        | positive     | negative_1   | negative_2   | negative_3   |
    |:-----|:-------------|:-------------|:-------------|:-------------|:-------------|
    | type | string       | string       | string       | string       | string       |
    | min  | 6 tokens     | 18 tokens    | 24 tokens    | 21 tokens    | 14 tokens    |
    | mean | 10.39 tokens | 31.91 tokens | 31.95 tokens | 31.95 tokens | 31.96 tokens |
    | max  | 32 tokens    | 32 tokens    | 32 tokens    | 32 tokens    | 32 tokens    |
  • Samples:
    query positive negative_1 negative_2 negative_3
    are freight charges taxable in arizona Shipping is Not Taxable if Separately Stated These states say shipping charges are not taxable if you show the charge separately from the selling price of the item. They are taxable if you include the charge as part of the price of the item. 1 Alabama. 2 Arizona. They have nexus in Arizona, and the merchandise ships from California via a 3rd party shipper. The invoice states a merchandise cost and a freight/handling cost in two separate boxes. But they are charging sales tax on the sum of the merchandise plus the freight/handling cost. Tax on freight generally follows the transaction the freight relates to. If the goods you sell are exempt and you charge your customer for freight or delivery charges or handling charges, then these charges are exempt. 1 Freight is by common carrier-you should not use your own truck. 2 To do so indicates shipping is a necessary part of the sales transaction and this makes the freight taxable in many states; 3 Show charges for freight separate from everything else-Do not mark it up and do not combine handling charges with freight charges.
    are freight charges taxable in arizona Shipping is Not Taxable if Separately Stated These states say shipping charges are not taxable if you show the charge separately from the selling price of the item. They are taxable if you include the charge as part of the price of the item. 1 Alabama. 2 Arizona. From the Arizona Department of Revenue website: When included in the retail sales price, a shipping charge from the manufacturer or wholesaler to the retailer is considered a cost of doing business and is included in the taxable proceeds from the sale. 1 Show charges for freight separate from everything else-Do not mark it up and do not combine handling charges with freight charges. 2 Most states charge tax on handling charges, so combining them with freight charges (e.g. S&H) will make both taxable. Summary of Sales Tax on Shipping in Arizona. Many states are confusing on shipping or even simply don’t take into account, making you play guessing games. Arizona, though, is nice and straightforward. If the shipping costs are separately stated on the invoice, you shouldn’t have to collect.
    average gas price oahu HONOLULU (HawaiiNewsNow) -. Even though gas prices have sunk below $3 at some locations on Oahu for the first time in seven years, the average gas price in Hawaii is still the highest in the country. Hawaii News Now found the lowest-priced gas at Costco in Iwilei Friday, at $2.99 a gallon.ven though gas prices have sunk below $3 at some locations on Oahu for the first time in seven years, the average gas price in Hawaii is still the highest in the country. Even though gas prices have sunk below $3 at some locations on Oahu for the first time in seven years, the average gas price in Hawaii is still the highest in the country.Hawaii News Now found the lowest-priced gas at Costco in Iwilei Friday, at $2.99 a gallon.ven though gas prices have sunk below $3 at some locations on Oahu for the first time in seven years, the average gas price in Hawaii is still the highest in the country. yesterday. $2.73update. There are 48 Regular gas price reports in the past 5 days in Honolulu, HI. The average Regular gas price in Honolulu, HI is $2.72, which is $0.07 lower than U.S. national average Regular gas price $2.79.The lowest Regular gas price is $2.43 of CGES located at 400 Sand Island Pkwy, Honolulu, HI 96819.2.73update. There are 48 Regular gas price reports in the past 5 days in Honolulu, HI. The average Regular gas price in Honolulu, HI is $2.72, which is $0.07 lower than U.S. national average Regular gas price $2.79. The lowest Regular gas price is $2.43 of CGES located at 400 Sand Island Pkwy, Honolulu, HI 96819. Diesel Fuel Prices: The national average price for diesel in September 2015 was $2.521 per gallon which was a decrease of $1.250 (33.1%) from the same month a year ago, and a decrease of 12.4 cents (4.7%) from August 2015.asoline Consumption: Hawaii’s statewide consumption for gasoline in July 2015, as measured by the gasoline tax base, was 39.6 million gallons, which increased 1.0 million gallons (2.5%) from the same month a year ago.
  • Loss: pylate.losses.contrastive.Contrastive

Evaluation Dataset

Unnamed Dataset

  • Size: 20,000 evaluation samples
  • Columns: query, positive, negative_1, negative_2, and negative_3
  • Approximate statistics based on the first 1000 samples:
    |      | query        | positive     | negative_1   | negative_2   | negative_3   |
    |:-----|:-------------|:-------------|:-------------|:-------------|:-------------|
    | type | string       | string       | string       | string       | string       |
    | min  | 6 tokens     | 25 tokens    | 24 tokens    | 20 tokens    | 18 tokens    |
    | mean | 10.12 tokens | 31.98 tokens | 31.94 tokens | 31.93 tokens | 31.95 tokens |
    | max  | 24 tokens    | 32 tokens    | 32 tokens    | 32 tokens    | 32 tokens    |
  • Samples:
    query positive negative_1 negative_2 negative_3
    heart specialists in ridgeland ms Dr. George Reynolds Jr, MD is a cardiology specialist in Ridgeland, MS and has been practicing for 35 years. He graduated from Vanderbilt University School Of Medicine in 1977 and specializes in cardiology and internal medicine. Dr. James Kramer is a Internist in Ridgeland, MS. Find Dr. Kramer's phone number, address and more. General internal medicine physicians, or internists, are primary-care doctors who perform physical exams and treat a wide spectrum of common illnesses in adult men and women. One of every four physicians in the U.S. is an internist, many of whom are certified in one of 19 subspecialties, including cardiology, infectious disease and medical oncology. Chronic Pulmonary Heart Diseases (incl. Pulmonary Hypertension) Coarctation of the Aorta; Congenital Aortic Valve Disorders; Congenital Heart Defects; Congenital Heart Disease; Congestive Heart Failure; Coronary Artery Disease (CAD) Endocarditis; Heart Attack (Acute Myocardial Infarction) Heart Disease; Heart Murmur; Heart Palpitations; Hyperlipidemia; Hypertension
    heart specialists in ridgeland ms Dr. George Reynolds Jr, MD is a cardiology specialist in Ridgeland, MS and has been practicing for 35 years. He graduated from Vanderbilt University School Of Medicine in 1977 and specializes in cardiology and internal medicine. Dr. James Kramer is an internist in Ridgeland, Mississippi. He received his medical degree from Loma Linda University School of Medicine and has been in practice for more than 20 years. Dr. James Kramer's Details Congenital Heart Defects; Congenital Heart Disease; Congestive Heart Failure; Coronary Artery Disease (CAD) Endocarditis; Heart Attack (Acute Myocardial Infarction) Heart Disease; Heart Murmur; Heart Palpitations; Hyperlipidemia; Hypertension; Hypertensive Chronic Kidney Disease; Hypertensive Heart and Chronic Kidney Disease; Hypertensive Heart Disease; Hypotension Aneurysm and Dissection of Heart; Aneurysm of Heart; Angina and Acute Coronary Syndrome; Aortic Aneurysm; Aortic Dissection; Aortic Ectasia; Aortic Stenosis; Aortic Valve Disease; Aortic Valve Regurgitation; Arrhythmias; Atrial Fibrillation; Atrial Flutter; Autonomic Disorders; Benign Tumor; Cardiomegaly; Cardiomyopathy; Carotid Artery Disease; Chest Pain
    does baytril otic require a prescription Baytril Otic Ear Drops-Enrofloxacin/Silver Sulfadiazine-Prices & Information. A prescription is required for this item. A prescription is required for this item. Brand medication is not available at this time. RX required for this item. Click here for our full Prescription Policy and Form. Baytril Otic (enrofloxacin/silver sulfadiazine) Emulsion from Bayer is the first fluoroquinolone approved by the Food and Drug Administration for the topical treatment of canine otitis externa. Baytril Otic is indicated as a treatment for canine otitis externa (ear infection) complicated by certain bacterial and fungal organisms. Baytril Otic is an emulsion containing 5 mg enrofloxacin and 10 mg silver sulfadiazine per ml. Baytril Otic Drops are administered into the outer ear. As a general guide, dogs weighing 35 lbs. or less are given 5-10 drops per treatment; dogs over 35 lbs. are given 10-15 drops per treatment. Following treatment, gently massage the ear to ensure distribution. May be applied twice daily for up to 14 days, or as indicated. Enrofloxacin and silver sulfadiazine. Baytril for dogs is an antibiotic often prescribed for bacterial infections, particularly those involving the ears. Ear infections are rare in many animals, but quite common in dogs. This is particularly true for dogs with long droopy ears, where it will stay very warm and moist.
  • Loss: pylate.losses.contrastive.Contrastive

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.05
  • fp16: True
  • push_to_hub: True
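
These hyperparameters correspond to a standard PyLate contrastive training run. A hedged sketch of how such a run is typically assembled (the one-row dataset is a placeholder, not the actual 850k-sample training data):

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from pylate import losses, models, utils

model = models.ColBERT(model_name_or_path="jhu-clsp/ettin-encoder-17m")

# Placeholder dataset mirroring the (query, positive, negative_*) columns above
train_dataset = Dataset.from_dict({
    "query": ["example query"],
    "positive": ["relevant passage"],
    "negative_1": ["irrelevant passage"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="colbert-ettin-17m",
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    fp16=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=losses.Contrastive(model=model),
    data_collator=utils.ColBERTCollator(tokenize_fn=model.tokenize),
)
trainer.train()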

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.05
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: True
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch | Step   | Training Loss | Validation Loss | accuracy |
|:------|:-------|:--------------|:----------------|:---------|
| 1.0   | 26563  | 1.1462        | -               | -        |
| 0     | 0      | -             | -               | 0.8005   |
| 1.0   | 26563  | -             | 0.9766          | -        |
| 2.0   | 53126  | 0.7825        | -               | -        |
| 0     | 0      | -             | -               | 0.8016   |
| 2.0   | 53126  | -             | 1.0002          | -        |
| 3.0   | 79689  | 0.5814        | -               | -        |
| 0     | 0      | -             | -               | 0.7923   |
| 3.0   | 79689  | -             | 1.0741          | -        |
| 4.0   | 106252 | 0.4569        | -               | -        |
| 0     | 0      | -             | -               | 0.7892   |
| 4.0   | 106252 | -             | 1.1220          | -        |
| 0     | 0      | -             | -               | 0.7892   |

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 4.0.2
  • PyLate: 1.2.0
  • Transformers: 4.48.2
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.5.2
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1
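
To reproduce this environment, the versions above can be pinned at install time, e.g.:

pip install pylate==1.2.0 sentence-transformers==4.0.2 transformers==4.48.2 torch==2.6.0 accelerate==1.5.2 datasets==3.6.0 tokenizers==0.21.1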

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084"
}

PyLate

@misc{PyLate,
    title = "PyLate: Flexible Training and Retrieval for Late Interaction Models",
    author = "Chaffin, Antoine and Sourty, Raphaël",
    url = "https://github.com/lightonai/pylate",
    year = "2024"
}