BGE base Financial Matryoshka
This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-base-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset:
- json
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation (https://www.sbert.net)
- Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
- Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
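The same stack can be verified after loading the model; a minimal sketch:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("cristiano-sartori/bge-base-financial-matryoshka")
print(model)                                     # Transformer -> Pooling (CLS) -> Normalize, as above
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 768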
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("cristiano-sartori/bge-base-financial-matryoshka")
# Run inference
sentences = [
'How does a phased array antenna system work?',
'A phased array antenna system consists of multiple antennas whose signals are phase-shifted and combined to steer the beam direction electronically. This allows the antenna system to change its beam direction quickly without physical movement, widely used in radar systems, satellite communications, and wireless communications.',
'A thermocouple works on the Seebeck effect, where a voltage is generated at the junction of two different metals when exposed to temperature differences. This voltage change is proportional to the temperature change, allowing the thermocouple to measure temperature accurately.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
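Because the model was trained with MatryoshkaLoss (see Training Details), its embeddings can be truncated to a smaller dimensionality with only a small quality drop, as the evaluation table below shows. A minimal sketch using the truncate_dim option of recent Sentence Transformers versions:

from sentence_transformers import SentenceTransformer

# Load the same model, keeping only the first 256 embedding dimensions
model_256 = SentenceTransformer(
    "cristiano-sartori/bge-base-financial-matryoshka", truncate_dim=256
)
embeddings = model_256.encode(["How does a phased array antenna system work?"])
print(embeddings.shape)
# [1, 256]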
Evaluation
Metrics
Information Retrieval
- Datasets: dim_768, dim_512, dim_256, dim_128 and dim_64
- Evaluated with InformationRetrievalEvaluator
Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
---|---|---|---|---|---|
cosine_accuracy@1 | 0.9298 | 0.9386 | 0.9298 | 0.9211 | 0.9123 |
cosine_accuracy@3 | 1.0 | 1.0 | 0.9912 | 0.9912 | 0.9737 |
cosine_accuracy@5 | 1.0 | 1.0 | 1.0 | 1.0 | 0.9825 |
cosine_accuracy@10 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
cosine_precision@1 | 0.9298 | 0.9386 | 0.9298 | 0.9211 | 0.9123 |
cosine_precision@3 | 0.3333 | 0.3333 | 0.3304 | 0.3304 | 0.3246 |
cosine_precision@5 | 0.2 | 0.2 | 0.2 | 0.2 | 0.1965 |
cosine_precision@10 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
cosine_recall@1 | 0.9298 | 0.9386 | 0.9298 | 0.9211 | 0.9123 |
cosine_recall@3 | 1.0 | 1.0 | 0.9912 | 0.9912 | 0.9737 |
cosine_recall@5 | 1.0 | 1.0 | 1.0 | 1.0 | 0.9825 |
cosine_recall@10 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
cosine_ndcg@10 | 0.9741 | 0.9773 | 0.9720 | 0.9664 | 0.9608 |
cosine_mrr@10 | 0.9649 | 0.9693 | 0.9623 | 0.9550 | 0.9479 |
cosine_map@100 | 0.9649 | 0.9693 | 0.9623 | 0.9550 | 0.9479 |
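These metrics come from the Sentence Transformers InformationRetrievalEvaluator. A minimal sketch of how such an evaluation can be run (the queries, corpus, and relevant_docs below are illustrative placeholders, not the actual evaluation split):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("cristiano-sartori/bge-base-financial-matryoshka")

# Placeholder data: query IDs and corpus IDs mapped to texts,
# plus each query ID mapped to its set of relevant corpus IDs
queries = {"q1": "How does a phased array antenna system work?"}
corpus = {"d1": "A phased array antenna system consists of multiple antennas ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_768")
results = evaluator(model)
print(results)  # dict of accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100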
Training Details
Training Dataset
json
- Dataset: json
- Size: 1,017 training samples
- Columns: anchor and positive
- Approximate statistics based on the first 1000 samples:
Statistic | anchor | positive |
---|---|---|
type | string | string |
min tokens | 7 | 18 |
mean tokens | 16.87 | 73.06 |
max tokens | 30 | 349 |
- Samples:
anchor | positive |
---|---|
Why are transformers used in long-distance power transmission? | Transformers are used in long-distance power transmission to step up the voltage for efficient transmission over power lines, reducing energy loss due to resistance in the wires. At the destination, transformers step down the voltage for safe usage in homes and businesses. |
How do automated teller machines (ATMs) use magnetic stripe readers? | Automated teller machines (ATMs) use magnetic stripe readers to read the information encoded on the magnetic stripe of a debit or credit card. The reader decodes the data stored in the stripe, which is necessary for transaction processing and account access. |
How do optical encoders work in measuring rotational position? | Optical encoders work by emitting a light beam through a rotating disk with transparent and opaque segments. The light is detected by a photodiode array, which generates a digital signal corresponding to the rotation. This allows for precise measurement of the angular position and speed of a rotating shaft. |
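Pairs in this layout can be loaded with the 🤗 Datasets library; a minimal sketch (pairs.json is a placeholder name for a JSON Lines file of {"anchor": ..., "positive": ...} records):

from datasets import load_dataset

# Placeholder file: one {"anchor": ..., "positive": ...} object per line
train_dataset = load_dataset("json", data_files="pairs.json", split="train")
print(train_dataset.column_names)  # ['anchor', 'positive']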
- Loss: MatryoshkaLoss with these parameters:
{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [768, 512, 256, 128, 64],
    "matryoshka_weights": [1, 1, 1, 1, 1],
    "n_dims_per_step": -1
}
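The same loss can be reconstructed in code; a minimal sketch, assuming the base model is loaded first:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Apply the in-batch-negatives ranking loss at every Matryoshka dimension
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[768, 512, 256, 128, 64])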
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- bf16: True
- tf32: True
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates
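These settings map directly onto SentenceTransformerTrainingArguments; a minimal sketch (output_dir is a placeholder):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # placeholder output path
    eval_strategy="epoch",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no duplicate texts within a batch
)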
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: True
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
---|---|---|---|---|---|---|
1.0 | 2 | 0.9676 | 0.9741 | 0.9756 | 0.9557 | 0.9281 |
2.0 | 4 | 0.9741 | 0.9773 | 0.9752 | 0.9662 | 0.9511 |
3.0 | 6 | 0.9741 | 0.9773 | 0.9720 | 0.9664 | 0.9608 |
4.0 | 8 | 0.9741 | 0.9773 | 0.9720 | 0.9664 | 0.9608 |
- The saved checkpoint (selected via load_best_model_at_end) is the one whose metrics are reported in the Evaluation section above.
Framework Versions
- Python: 3.12.8
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}