BGE base Financial Matryoshka
This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-base-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset:
- json
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
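The three modules correspond to a BERT encoder, CLS-token pooling, and L2 normalization. The sketch below shows roughly what this pipeline does using plain transformers; it assumes the underlying BertModel weights are loadable from the repository root (as Sentence Transformers checkpoints normally expose them) and is meant for illustration, not as a replacement for the usage example below.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

repo = "cristiano-sartori/bge-base-en-v1.5_finetuned"
tokenizer = AutoTokenizer.from_pretrained(repo)
bert = AutoModel.from_pretrained(repo)

texts = ["What is Shannon entropy?", "Entropy is a function of the distribution."]
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    hidden = bert(**batch).last_hidden_state  # (batch, seq_len, 768)
cls = hidden[:, 0]                            # Pooling: take the [CLS] token
embeddings = F.normalize(cls, p=2, dim=1)     # Normalize: unit-length vectors
print(embeddings.shape)                       # torch.Size([2, 768])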
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("cristiano-sartori/bge-base-en-v1.5_finetuned")
# Run inference
sentences = [
"I have the following code in my microcontroler program:\n\nint analogValue = ADCH; // ADC Data Register\n\n//\n// Simple analog comparator. \n// If analogValue lower than threshold then toggle output high,\n// Otherwise toggle it low.\n//\nif ( analogValue > 128 ) {\n PORTB = 0; // Port B Data Register\n} else {\n PORTB = _BS( outputPin ); // Port B Data Register\n}\n\n\nWhere: \n\n\nADCH is the register that contains the value from the ADC\nPORTB is a dital output port that toggles an LED\n\n\nLooking at the resulting assembly code, I noticed that it is doing a 16 bit compare (lines 40-44) where strictly speaking only 8 bits would have been sufficient:\n\n40: 90 e0 ldi r25, 0x00 ; 0\n42: 81 38 cpi r24, 0x81 ; 129\n44: 91 05 cpc r25, r1\n46: 14 f0 brlt .+4 ; 0x4c <__SREG__+0xd>\n48: 18 ba out 0x18, r1 ; PORTB\n4a: f5 cf rjmp .-22 ; 0x36 <__CCP__+0x2>\n4c: 28 bb out 0x18, r18 ; PORTB\n4e: f3 cf rjmp .-26 ; 0x36 <__CCP__+0x2>\n\n\nI realize I declared analogValue as int, which indeed is 16 bit on AVR, but ...\n\nHow can I instruct the compiler to use 8 bit comparison? The Arduino IDE allows me to use byte, but avr-gcc by default doesn't.\n\nCheck this page for the complete program and its disassembled resulting code.\n\nEDIT1:\n\nChanging int to char changes the assembly code to:\n\n14: 11 24 eor r1, r1 ; r1 = 0\n3e: 18 ba out 0x18, r1 ; PORTB\n\n\nBasically skipping the test entirely.\n\nEDIT2: (Thnx: Wouter van Ooijen)\n\nChanging int to unsigned char changes the assembly code to:\n\n3c: 85 b1 in r24, 0x05 ; ADCH\n3e: ...\n40: 87 fd sbrc r24, 7 ; compare < 128 (well optimized)\n42: 02 c0 rjmp .+4 ; 0x48 <__SREG__+0x9>\n44: 18 ba out 0x18, r1 ; 24\n46: f7 cf rjmp .-18 ; 0x36 <__CCP__+0x2>\n48: 98 bb out 0x18, r25 ; 24\n4a: f5 cf rjmp .-22 ; 0x36 <__CCP__+0x2>\n\n",
'I actually think a better practice that avoids this architectural ambiguity is to include <stdint.h> then use declarative types like:\n\n\nuint8_t for unsigned 8-bit integers\nint8_t for signed 8-bit integers\nuint16_t for unsigned 16-bit integers\nuint32_t for unsigned 32-bit integers\n\n\nand so on...\n',
'Entropy is a function of the distribution. That is, the process used to generate a byte stream is what has entropy, not the byte stream itself. If I give you the bits 1011, that could have anywhere from 0 to 4 bits of entropy; you have no way of knowing that value.\n\nHere is the definition of Shannon entropy. Let $X$ be a random variable that takes on the values $x_1,x_2,x_3,\\dots,x_n$. Then the Shannon entropy is defined as\n\n$$H(X) = -\\sum_{i=1}^{n} \\operatorname{Pr}[x_i] \\cdot \\log_2\\left(\\operatorname{Pr}[x_i]\\right)$$\n\nwhere $\\operatorname{Pr}[\\cdot]$ represents probability. Note that the definition is a function of a random variable (i.e., a distribution), not a particular value!\n\nSo what is the entropy in a single flip of a coin? Let $F$ be a random variable representing such. There are two events, heads and tails, each with probability $0.5$. So, the Shannon entropy of $F$ is:\n\n$$H(F) = -(0.5\\cdot\\log_2 0.5 + 0.5\\cdot\\log_2 0.5) = -(-0.5 + -0.5) = 1.$$\n\nThus, $F$ has exactly one bit of entropy, what we expected. \n\nSo, to find how much entropy is present in a byte stream, you need to know how the byte stream is generated and the entropy of any inputs (in the case of PRNGs). Recall that a deterministic algorithm cannot add entropy to an input, only take it away, so the entropy of all inputs to a deterministic algorithm is the maximum entropy possible in the output. \n\nIf you\'re using a hardware RNG, then you need to know the probabilities associated with the data it gives you, else you cannot formally find the Shannon entropy (though you could give it a lower bound if you know the probabilities of some, but not all, events). \n\nBut note that in any case, you are dependent on the knowledge of the distribution associated with the byte stream. You can do statistical tests, like you mention, to verify that the output "looks random" (from a certain perspective). But you\'ll never be able to say any more than "it looks pretty uniformly distributed to me!". You\'ll never be able to look at a bitstream without knowing the distribution and say "there are X bits of entropy here."\n',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
Evaluation
Metrics
Information Retrieval
- Datasets: dim_768, dim_512, dim_256, dim_128 and dim_64
- Evaluated with InformationRetrievalEvaluator
Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
---|---|---|---|---|---|
cosine_accuracy@1 | 0.4595 | 0.4538 | 0.437 | 0.4098 | 0.3445 |
cosine_accuracy@3 | 0.623 | 0.6178 | 0.599 | 0.5605 | 0.4933 |
cosine_accuracy@5 | 0.682 | 0.6787 | 0.6541 | 0.6138 | 0.5508 |
cosine_accuracy@10 | 0.7415 | 0.74 | 0.719 | 0.6861 | 0.617 |
cosine_precision@1 | 0.4595 | 0.4538 | 0.437 | 0.4098 | 0.3445 |
cosine_precision@3 | 0.2077 | 0.2059 | 0.1997 | 0.1868 | 0.1644 |
cosine_precision@5 | 0.1364 | 0.1357 | 0.1308 | 0.1228 | 0.1102 |
cosine_precision@10 | 0.0742 | 0.074 | 0.0719 | 0.0686 | 0.0617 |
cosine_recall@1 | 0.4595 | 0.4538 | 0.437 | 0.4098 | 0.3445 |
cosine_recall@3 | 0.623 | 0.6178 | 0.599 | 0.5605 | 0.4933 |
cosine_recall@5 | 0.682 | 0.6787 | 0.6541 | 0.6138 | 0.5508 |
cosine_recall@10 | 0.7415 | 0.74 | 0.719 | 0.6861 | 0.617 |
cosine_ndcg@10 | 0.6 | 0.5958 | 0.5766 | 0.5443 | 0.4765 |
cosine_mrr@10 | 0.5547 | 0.5498 | 0.5312 | 0.4994 | 0.432 |
cosine_map@100 | 0.5602 | 0.555 | 0.537 | 0.5058 | 0.4398 |
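The per-dimension columns come from running the evaluator once per Matryoshka dimension. The sketch below shows how such an evaluation could be set up; the queries, corpus and relevant_docs dictionaries here are placeholders, not the actual evaluation split.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator, SequentialEvaluator

model = SentenceTransformer("cristiano-sartori/bge-base-en-v1.5_finetuned")

# Placeholder data: ids mapped to texts, and query ids mapped to relevant corpus ids
queries = {"q1": "Why are solderless protoboards called breadboards?"}
corpus = {
    "d1": "This terminology goes back to the days of vacuum tubes ...",
    "d2": "Entropy is a function of the distribution.",
}
relevant_docs = {"q1": {"d1"}}

evaluators = [
    InformationRetrievalEvaluator(
        queries=queries,
        corpus=corpus,
        relevant_docs=relevant_docs,
        name=f"dim_{dim}",
        truncate_dim=dim,  # score retrieval using only the first `dim` dimensions
    )
    for dim in [768, 512, 256, 128, 64]
]
results = SequentialEvaluator(evaluators)(model)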
Training Details
Training Dataset
json
- Dataset: json
- Size: 35,129 training samples
- Columns: anchor and positive
- Approximate statistics based on the first 1000 samples:
 | anchor | positive |
---|---|---|
type | string | string |
details | min: 16 tokens, mean: 182.52 tokens, max: 512 tokens | min: 14 tokens, mean: 321.62 tokens, max: 512 tokens |
- Samples:
  - Sample 1
    - anchor: Are there any common expectations from perspective employers when they hire a Perl developer? For a student who likes Perl and Linux and would like to get a job as Perl developer, what would you recommend to learn? I am looking for things that are generic and applicable to most/all Perl positions, as opposed to specific details of a given company's requirements. In other words, what are the things I should be able to to/know to become more attractive to ANY company looking for a Perl developer.
    - positive: Some points: As a Perl developer, pretty much any company will expect you to know MORE than Perl. Even in pure Perl shop, you need to know (ideally) JavaScript/overall web development; and SQL for back-end work. And most companies have a mix of languages, so you should be prepared to be Perl/C++ or Perl/Java or whatever else is needed. Much as the fact grates on me, there aren't all that many good "Perl-only" shops I'm aware of. As with any language, a company would expect you to use the language effectively. This has several facets, some are more important in Perl. Available libraries. This is a MAJOR point for Perl, of course. Great familiarity with CPAN and knowing which libraries are considered "state of the art"/"most common" for specific common tasks is a must. Can you rattle off - without asking SO - the "standard" library for loading a CSV file? For parsing data out of HTML document? For writing unit tests? For mocking objects? For generating JSON data? For reading simple ...
  - Sample 2
    - anchor: I have performed a repeated measures ANOVA in R, as follows: aov_velocity = aov(Velocity ~ Material + Error(Subject/(Material)), data=scrd) summary(aov_velocity) What syntax in R can be used to perform a post hoc test after an ANOVA with repeated measures? Would Tukey's test with Bonferroni correction be appropriate? If so, how could this be done in R?
    - positive: What you could do is specify the model with lme and then use glht from the multcomp package to do what you want. However, lme gives slightly different F-values than a standard ANOVA (see also my recent questions here). lme_velocity = lme(Velocity ~ Material, data=scrd, random = ~1 ...
  - Sample 3
    - anchor: Why are solderless protoboards called "breadboards"? I've used the term for decades but couldn't answer a student's question about the name.
    - positive: This terminology goes waaaaay back to the days of vacuum tubes. Generally, you would mount a number of tube-sockets on standoffs to a piece of wood (the actual "breadboard"), and do all the wiring with point-point wire and the components just hanging between the various devices. If you needed additional connection points, you would use a solder-lug terminal strip. Image credit: Random googling. The story goes that an engineer had an idea for a vacuum tube device late one night. Looking around the house, the only base for his prototype that he found was indeed his wife's breadboard, from the breadbox. Now, I'm not endorsing actually using a real breadboard. It's your marital strife if you do. I've actually constructed a tube project using the breadboard technique. It works very well.
- Loss: MatryoshkaLoss with these parameters:
  { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [768, 512, 256, 128, 64], "matryoshka_weights": [1, 1, 1, 1, 1], "n_dims_per_step": -1 }
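In code, this configuration corresponds roughly to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss (a sketch, assuming model is a loaded SentenceTransformer as in the usage example above):
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)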
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- bf16: True
- tf32: True
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates
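These settings map onto the Sentence Transformers v3 training API roughly as follows. This is a sketch, not the exact training script: the output_dir and save_strategy values are assumptions (save_strategy must match eval_strategy for load_best_model_at_end), and dataset loading, loss and evaluator setup are omitted.
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-en-v1.5_finetuned",    # assumed output path
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    optim="adamw_torch_fused",
    eval_strategy="epoch",
    save_strategy="epoch",                      # assumed, to allow load_best_model_at_end
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate samples within a batch
)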
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: True
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- tp_size: 0
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
---|---|---|---|---|---|---|---|
0.1457 | 10 | 52.8168 | - | - | - | - | - |
0.2914 | 20 | 41.1964 | - | - | - | - | - |
0.4372 | 30 | 35.2022 | - | - | - | - | - |
0.5829 | 40 | 34.046 | - | - | - | - | - |
0.7286 | 50 | 32.9287 | - | - | - | - | - |
0.8743 | 60 | 29.2281 | - | - | - | - | - |
0.9909 | 68 | - | 0.5965 | 0.5903 | 0.5725 | 0.5365 | 0.4658 |
1.0291 | 70 | 32.0647 | - | - | - | - | - |
1.1749 | 80 | 26.1728 | - | - | - | - | - |
1.3206 | 90 | 26.8629 | - | - | - | - | - |
1.4663 | 100 | 25.7405 | - | - | - | - | - |
1.6120 | 110 | 25.7913 | - | - | - | - | - |
1.7577 | 120 | 25.4369 | - | - | - | - | - |
1.9035 | 130 | 25.8973 | - | - | - | - | - |
1.9909 | 136 | - | 0.5996 | 0.5952 | 0.5750 | 0.5424 | 0.4737 |
2.0583 | 140 | 24.895 | - | - | - | - | - |
2.2040 | 150 | 22.9015 | - | - | - | - | - |
2.3497 | 160 | 21.4723 | - | - | - | - | - |
2.4954 | 170 | 22.3792 | - | - | - | - | - |
2.6412 | 180 | 21.1436 | - | - | - | - | - |
2.7869 | 190 | 24.041 | - | - | - | - | - |
2.9326 | 200 | 22.6244 | - | - | - | - | - |
2.9909 | 204 | - | 0.5999 | 0.5958 | 0.5761 | 0.5442 | 0.4756 |
3.0874 | 210 | 24.6957 | - | - | - | - | - |
3.2332 | 220 | 21.462 | - | - | - | - | - |
3.3789 | 230 | 20.3116 | - | - | - | - | - |
3.5246 | 240 | 20.362 | - | - | - | - | - |
3.6703 | 250 | 21.9136 | - | - | - | - | - |
3.8160 | 260 | 22.2618 | - | - | - | - | - |
3.9617 | 270 | 20.3973 | - | - | - | - | - |
**3.9909** | **272** | **-** | **0.6** | **0.5958** | **0.5766** | **0.5443** | **0.4765** |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.12.8
- Sentence Transformers: 3.4.1
- Transformers: 4.51.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}