SPLADE-BERT-Mini
This is a SPLADE Sparse Encoder model finetuned from prajjwal1/bert-mini using the sentence-transformers library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
Model Details
Model Description
- Model Type: SPLADE Sparse Encoder
- Base model: prajjwal1/bert-mini
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 30522 dimensions
- Similarity Function: Dot Product
- Language: en
- License: mit
Model Sources
- Documentation: Sentence Transformers Documentation
- Documentation: Sparse Encoder Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sparse Encoders on Hugging Face
Full Model Architecture
```
SparseEncoder(
  (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
```
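The SpladePooling module turns the MLM head's per-token vocabulary logits into a single sparse vector by applying a log-saturated ReLU and max-pooling over the token axis. A minimal NumPy sketch of that pooling step (toy shapes; illustrative only, not the library's implementation):

```python
import numpy as np

def splade_pool(logits):
    """Max-pool SPLADE activations over the token axis.

    logits: (num_tokens, vocab_size) MLM logits for one text.
    Returns a (vocab_size,) non-negative vector where dimension j is
    max_i log(1 + relu(logits[i, j])).
    """
    activations = np.log1p(np.maximum(logits, 0.0))  # log(1 + relu(x))
    return activations.max(axis=0)

# Toy example: 4 tokens, vocabulary of 6 "words".
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 6))
vec = splade_pool(logits)
print(vec.shape)  # (6,)
```

Because ReLU zeroes out negative logits, most vocabulary dimensions end up exactly zero, which is what makes the output usable with an inverted index.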
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("rasyosef/SPLADE-BERT-Mini")

# Run inference
queries = [
    "cantigny gardens cost",
]
documents = [
    'The fee for a ceremony ranges from $400 to $2,500 with reception rental or $3,000 for a ceremony-only wedding. Please inquire about discounted rates for ceremony guest counts under 75. The average wedding cost at Cantigny Park is estimated at between $12,881 and $22,238 for a ceremony & reception for 100 guests.',
    'Nestled in a serene setting, Cantigny Park is a scenic realm where you will create a unique wedding, the memories of which you will always cherish. This expansive estate encompasses 500 acres of beautiful gardens, colorful botanicals and tranquil water features, creating an idyllic background for this ideal day.',
    'Cantigny Park. Cantigny is a 500-acre (2.0 km2) park in Wheaton, Illinois, 30 miles west of Chicago. It is the former estate of Joseph Medill and his grandson Colonel Robert R. McCormick, publishers of the Chicago Tribune, and is open to the public.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[18.8703, 13.8253, 13.4587]])
```
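Each of the 30522 output dimensions corresponds to one entry of the model's WordPiece vocabulary, so an embedding can be read as a weighted bag of tokens. A hypothetical sketch with a toy vocabulary (the `top_tokens` helper and the six-word vocab are illustrative, not part of the library):

```python
import numpy as np

def top_tokens(embedding, vocab, k=3):
    """Return the k highest-weighted (token, weight) pairs of a sparse vector."""
    idx = np.argsort(embedding)[::-1][:k]
    return [(vocab[i], float(embedding[i])) for i in idx if embedding[i] > 0]

vocab = ["garden", "cost", "park", "wedding", "fee", "chicago"]
embedding = np.array([1.2, 0.9, 0.0, 0.4, 0.7, 0.0])
print(top_tokens(embedding, vocab))
# [('garden', 1.2), ('cost', 0.9), ('fee', 0.7)]
```

With the real model, the vocab would come from the model's tokenizer and the embedding from `encode_query` or `encode_document`.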
Evaluation
Metrics
Sparse Information Retrieval
- Evaluated with SparseInformationRetrievalEvaluator
Metric | Value |
---|---|
dot_accuracy@1 | 0.6303 |
dot_accuracy@3 | 0.7972 |
dot_accuracy@5 | 0.851 |
dot_accuracy@10 | 0.9055 |
dot_precision@1 | 0.6303 |
dot_precision@3 | 0.2657 |
dot_precision@5 | 0.1702 |
dot_precision@10 | 0.0905 |
dot_recall@1 | 0.6303 |
dot_recall@3 | 0.7972 |
dot_recall@5 | 0.851 |
dot_recall@10 | 0.9055 |
dot_ndcg@10 | 0.769 |
dot_mrr@10 | 0.7251 |
dot_map@100 | 0.7288 |
query_active_dims | 26.4352 |
query_sparsity_ratio | 0.9991 |
corpus_active_dims | 326.676 |
corpus_sparsity_ratio | 0.9893 |
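The sparsity ratios above follow directly from the active-dimension counts: ratio = 1 − active_dims / 30522. A quick arithmetic check:

```python
VOCAB_SIZE = 30522  # output dimensionality of the model

def sparsity_ratio(active_dims, vocab_size=VOCAB_SIZE):
    """Fraction of output dimensions that are zero on average."""
    return 1.0 - active_dims / vocab_size

print(round(sparsity_ratio(26.4352), 4))  # 0.9991 (queries)
print(round(sparsity_ratio(326.676), 4))  # 0.9893 (documents)
```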
Training Details
Training Dataset
Unnamed Dataset
- Size: 250,000 training samples
- Columns: query, positive, negative_1, and negative_2
- Approximate statistics based on the first 1000 samples:

| | query | positive | negative_1 | negative_2 |
|---|---|---|---|---|
| type | string | string | string | string |
| details | min: 4 tokens, mean: 8.87 tokens, max: 31 tokens | min: 20 tokens, mean: 82.54 tokens, max: 218 tokens | min: 20 tokens, mean: 79.98 tokens, max: 252 tokens | min: 19 tokens, mean: 80.55 tokens, max: 211 tokens |
- Samples:

| query | positive | negative_1 | negative_2 |
|---|---|---|---|
| how do automotive technicians get paid | 104 months ago. The amount of pay from company to company does not vary too much, but you do have a wide variety of compensation methods. There are various combinations of hourly and commission pay rates, which depending on what type of work you specialize in can vary your bottom line considerably.04 months ago. The amount of pay from company to company does not vary too much, but you do have a wide variety of compensation methods. There are various combinations of hourly and commission pay rates, which depending on what type of work you specialize in can vary your bottom line considerably. | Bureau of Labor Statistics figures indicate that automotive technicians earned an average annual salary of $38,560 and an average hourly wage of $18.54 as of May 2011.Half of auto technicians reported annual salaries of between $26,850 and $47,540 and hourly wages of between $12.91 and $22.86.The 10 percent of automotive techs who earned the lowest made $20,620 or less per year, and the top 10 percent of earners made $59,600 or more per year.ver one-third of all automotive technicians employed as of May 2011 worked in the automotive repair and maintenance industry, where they earned an average of $35,090 per year. | It really depends on what automaker your working for, how much experience you have, and how long you've been in the industry. Obviously if you're working for a highend company(BMW,Mercedes,Ferrari) you can expect to be paid more per hour. And automotive technicians don't get paid by the hour.We get paid per FLAT RATE hour. Which basically means that we get paid by the job. Which could range from 0.2 of an hour for replacing a headlight bulb to 10hours for a transmission overhaul. Then there's a difference between warranty jobs and cash jobs.ut I won't get into too much detail. Automotive technicians get paid around $12-$15/hr at entry level. But can make around $18-$26/hr with much more experience. Which means you can expect to make 30,000 to 60,000/year. Though most technicians don't see past 45,000 a year. |
| how far is steamboat springs from golden? | The distance between Steamboat Springs and Golden in a straight line is 100 miles or 160.9 Kilometers. Driving Directions & Drive Times from Steamboat Springs to Golden can be found further down the page. | Steamboat Springs Vacation Rentals Steamboat Springs Vacations Steamboat Springs Restaurants Things to Do in Steamboat Springs Steamboat Springs Travel Forum Steamboat Springs Photos Steamboat Springs Map Steamboat Springs Travel Guide All Steamboat Springs Hotels; Steamboat Springs Hotel Deals; Last Minute Hotels in Steamboat Springs; By Hotel Type Steamboat Springs Family Hotels | There are 98.92 miles from Golden to Steamboat Springs in northwest direction and 143 miles (230.14 kilometers) by car, following the US-40 route. Golden and Steamboat Springs are 3 hours 20 mins far apart, if you drive non-stop. This is the fastest route from Golden, CO to Steamboat Springs, CO. The halfway point is Heeney, CO. Golden, CO and Steamboat Springs, CO are in the same time zone (MDT). Current time in both locations is 1:26 pm. |
| incoming wire routing number for california bank and trust | Please call California Bank And Trust representative at (888) 315-2271 for more information. 1 Routing Number: 122003396. 2 250 EAST FIRST STREET # 700. LOS ANGELES, CA 90012-0000. 3 Phone Number: (888) 315-2271. | When asked to provide a routing number for incoming wire transfers to Union Bank accounts, the routing number to use is: 122000496. back to top What options do I have to send wires? | Business Contracting Officers (BCO) have access to Online Banking wires. Simply sign on to Online Banking, click “Send Wires”, and then complete the required information. This particular service is limited to sending wires to U.S. banks only. |
- Loss: SpladeLoss with these parameters:

```json
{
    "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
    "document_regularizer_weight": 0.001,
    "query_regularizer_weight": 0.002
}
```
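SpladeLoss adds a FLOPS regularizer (Paria et al., cited below) on the query and document activations to the ranking loss, penalizing the squared mean activation per vocabulary dimension so that embeddings stay sparse. A minimal NumPy sketch of the FLOPS term, assuming a batch of non-negative SPLADE activations (illustrative, not the library's implementation):

```python
import numpy as np

def flops_loss(embeddings):
    """FLOPS regularizer: sum over vocab dims of the squared mean activation.

    embeddings: (batch_size, vocab_size) non-negative sparse activations.
    """
    return float((embeddings.mean(axis=0) ** 2).sum())

batch = np.array([[0.0, 2.0, 0.0],
                  [0.0, 0.0, 1.0]])
print(flops_loss(batch))  # per-dim means [0, 1, 0.5] -> 0 + 1 + 0.25 = 1.25
```

The document_regularizer_weight and query_regularizer_weight above scale this term for documents and queries respectively.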
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 64
- learning_rate: 6e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.025
- fp16: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 64
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 6e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.025
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
- router_mapping: {}
- learning_rate_mapping: {}
Training Logs
Epoch | Step | Training Loss | dot_ndcg@10 |
---|---|---|---|
1.0 | 3907 | 23.8846 | 0.7509 |
2.0 | 7814 | 0.785 | 0.7670 |
3.0 | 11721 | 0.6873 | 0.7685 |
4.0 | 15628 | 0.6283 | 0.7690 |
-1 | -1 | - | 0.7690 |
Framework Versions
- Python: 3.11.13
- Sentence Transformers: 5.0.0
- Transformers: 4.53.1
- PyTorch: 2.6.0+cu124
- Accelerate: 1.8.1
- Datasets: 3.6.0
- Tokenizers: 0.21.2
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
SpladeLoss
@misc{formal2022distillationhardnegativesampling,
title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
year={2022},
eprint={2205.04733},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2205.04733},
}
SparseMultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
FlopsLoss
@article{paria2020minimizing,
title={Minimizing flops to learn efficient sparse representations},
author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
journal={arXiv preprint arXiv:2004.05665},
year={2020}
}