Sparse CSR model trained on GooAQ

This is a CSR Sparse Encoder model finetuned from mixedbread-ai/mxbai-embed-large-v1 on the gooaq dataset using the sentence-transformers library. It maps sentences & paragraphs to a 4096-dimensional sparse vector space and can be used for semantic search and sparse retrieval.

Model Details

Model Description

  • Model Type: CSR Sparse Encoder
  • Base model: mixedbread-ai/mxbai-embed-large-v1
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 4096 dimensions
  • Similarity Function: Dot Product
  • Training Dataset: gooaq
  • Language: en
  • License: apache-2.0

Full Model Architecture

SparseEncoder(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): CSRSparsity({'input_dim': 1024, 'hidden_dim': 4096, 'k': 256, 'k_aux': 512, 'normalize': False, 'dead_threshold': 30})
)
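
The CSRSparsity module is what makes the output sparse: it projects the 1024-dimensional pooled embedding into a 4096-dimensional latent space and keeps only the k = 256 strongest activations (the k_aux = 512 latents feed an auxiliary loss that revives latents left inactive beyond the dead_threshold). Below is a minimal sketch of the top-k step alone, not the library's actual implementation:

import torch

def topk_sparsify(latent: torch.Tensor, k: int = 256) -> torch.Tensor:
    """Keep only the k largest activations per row; zero out the rest."""
    values, indices = torch.topk(latent, k=k, dim=-1)
    sparse = torch.zeros_like(latent)
    sparse.scatter_(-1, indices, values)
    return sparse

# Illustration with a batch of 3 hypothetical 4096-d latent codes.
latent = torch.randn(3, 4096)
print((topk_sparsify(latent) != 0).sum(dim=-1))  # tensor([256, 256, 256])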

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/csr-mxbai-embed-large-v1-gooaq-2e-4")
# Run inference
sentences = [
    'are you human korean novela?',
    "Are You Human? (Korean: 너도 인간이니; RR: Neodo Inganini; lit. Are You Human Too?) is a 2018 South Korean television series starring Seo Kang-jun and Gong Seung-yeon. It aired on KBS2's Mondays and Tuesdays at 22:00 (KST) time slot, from June 4 to August 7, 2018.",
    'A relative of European pear varieties like Bartlett and Anjou, the Asian pear is great used in recipes or simply eaten out of hand. It retains a crispness that works well in slaws and salads, and it holds its shape better than European pears when baked and cooked.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 4096]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
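
Only k = 256 of the 4096 dimensions are active in each embedding, which you can verify directly. A small follow-up sketch, assuming encode returns torch tensors (they may be sparse COO tensors depending on the library version):

# Densify defensively before counting the active dimensions per row.
dense = embeddings.to_dense() if embeddings.is_sparse else embeddings
active_dims = (dense != 0).sum(dim=-1)
print(active_dims)  # expected: tensor([256, 256, 256])
print(1.0 - active_dims.float().mean().item() / dense.shape[-1])  # ~0.9375 sparsity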

Evaluation

Metrics

Sparse Information Retrieval

Metric NanoMSMARCO_128 NanoNFCorpus_128 NanoNQ_128
dot_accuracy@1 0.42 0.28 0.46
dot_accuracy@3 0.64 0.46 0.62
dot_accuracy@5 0.68 0.58 0.7
dot_accuracy@10 0.8 0.66 0.82
dot_precision@1 0.42 0.28 0.46
dot_precision@3 0.2133 0.2867 0.2067
dot_precision@5 0.136 0.28 0.14
dot_precision@10 0.08 0.246 0.082
dot_recall@1 0.42 0.0101 0.44
dot_recall@3 0.64 0.0497 0.58
dot_recall@5 0.68 0.0768 0.65
dot_recall@10 0.8 0.1079 0.76
dot_ndcg@10 0.6079 0.2711 0.5977
dot_mrr@10 0.5469 0.3952 0.5692
dot_map@100 0.5547 0.1088 0.5513
row_non_zero_mean_query 128.0 128.0 128.0
row_sparsity_mean_query 0.9688 0.9688 0.9688
row_non_zero_mean_corpus 128.0 128.0 128.0
row_sparsity_mean_corpus 0.9688 0.9688 0.9688

Sparse Nano BEIR

  • Dataset: NanoBEIR_mean_128
  • Evaluated with SparseNanoBEIREvaluator with these parameters:
    {
        "dataset_names": [
            "msmarco",
            "nfcorpus",
            "nq"
        ],
        "max_active_dims": 128
    }
    
Metric Value
dot_accuracy@1 0.3867
dot_accuracy@3 0.5733
dot_accuracy@5 0.6533
dot_accuracy@10 0.76
dot_precision@1 0.3867
dot_precision@3 0.2356
dot_precision@5 0.1853
dot_precision@10 0.136
dot_recall@1 0.29
dot_recall@3 0.4232
dot_recall@5 0.4689
dot_recall@10 0.556
dot_ndcg@10 0.4922
dot_mrr@10 0.5038
dot_map@100 0.405
row_non_zero_mean_query 128.0
row_sparsity_mean_query 0.9688
row_non_zero_mean_corpus 128.0
row_sparsity_mean_corpus 0.9688
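
The _128 scores above were obtained by capping the number of active dimensions at 128, half the model's trained k = 256, trading some accuracy for double the sparsity. A hedged sketch of doing the same at inference time, assuming encode exposes the same max_active_dims option that the evaluator configuration in this card uses:

# Hypothetical usage: cap active dimensions at encoding time, mirroring the
# max_active_dims=128 evaluator setting above; if your installed version does
# not accept this argument on encode(), treat this as illustrative only.
query_embeddings = model.encode(
    ["are you human korean novela?"],
    max_active_dims=128,  # keep only the 128 strongest dims (0.9688 sparsity)
)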

Sparse Information Retrieval

Metric NanoMSMARCO_256 NanoNFCorpus_256 NanoNQ_256
dot_accuracy@1 0.42 0.32 0.42
dot_accuracy@3 0.7 0.56 0.64
dot_accuracy@5 0.76 0.62 0.68
dot_accuracy@10 0.84 0.7 0.84
dot_precision@1 0.42 0.32 0.42
dot_precision@3 0.2333 0.32 0.22
dot_precision@5 0.152 0.316 0.14
dot_precision@10 0.084 0.262 0.088
dot_recall@1 0.42 0.0304 0.4
dot_recall@3 0.7 0.0717 0.6
dot_recall@5 0.76 0.0931 0.63
dot_recall@10 0.84 0.1333 0.79
dot_ndcg@10 0.6326 0.3071 0.5943
dot_mrr@10 0.5661 0.4525 0.5506
dot_map@100 0.5727 0.143 0.533
row_non_zero_mean_query 256.0 256.0 256.0
row_sparsity_mean_query 0.9375 0.9375 0.9375
row_non_zero_mean_corpus 256.0 256.0 256.0
row_sparsity_mean_corpus 0.9375 0.9375 0.9375

Sparse Nano BEIR

  • Dataset: NanoBEIR_mean_256
  • Evaluated with SparseNanoBEIREvaluator with these parameters:
    {
        "dataset_names": [
            "msmarco",
            "nfcorpus",
            "nq"
        ],
        "max_active_dims": 256
    }
    
Metric Value
dot_accuracy@1 0.3867
dot_accuracy@3 0.6333
dot_accuracy@5 0.6867
dot_accuracy@10 0.7933
dot_precision@1 0.3867
dot_precision@3 0.2578
dot_precision@5 0.2027
dot_precision@10 0.1447
dot_recall@1 0.2835
dot_recall@3 0.4572
dot_recall@5 0.4944
dot_recall@10 0.5878
dot_ndcg@10 0.5113
dot_mrr@10 0.5231
dot_map@100 0.4163
row_non_zero_mean_query 256.0
row_sparsity_mean_query 0.9375
row_non_zero_mean_corpus 256.0
row_sparsity_mean_corpus 0.9375
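
Both NanoBEIR summaries can be reproduced with the evaluator named above. A sketch assuming the sentence_transformers.sparse_encoder.evaluation import path; the class name and its parameters are taken from the configs in this section:

from sentence_transformers.sparse_encoder.evaluation import SparseNanoBEIREvaluator

evaluator = SparseNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    max_active_dims=256,  # use 128 for the NanoBEIR_mean_128 numbers
)
results = evaluator(model)
# Key name inferred from the training-log columns below.
print(results["NanoBEIR_mean_256_dot_ndcg@10"])  # ~0.5113 for this checkpoint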

Training Details

Training Dataset

gooaq

  • Dataset: gooaq at b089f72
  • Size: 3,011,496 training samples
  • Columns: question and answer
  • Approximate statistics based on the first 1000 samples:
              question       answer
    type      string         string
    min       8 tokens       14 tokens
    mean      11.87 tokens   60.09 tokens
    max       23 tokens      201 tokens
  • Samples:
    question: what is the difference between clay and mud mask?
    answer: The main difference between the two is that mud is a skin-healing agent, while clay is a cosmetic, drying agent. Clay masks are most useful for someone who has oily skin and is prone to breakouts of acne and blemishes.

    question: myki how much on card?
    answer: A full fare myki card costs $6 and a concession, seniors or child myki costs $3. For more information about how to use your myki, visit ptv.vic.gov.au or call 1800 800 007.

    question: how to find out if someone blocked your phone number on iphone?
    answer: If you get a notification like "Message Not Delivered" or you get no notification at all, that's a sign of a potential block. Next, you could try calling the person. If the call goes right to voicemail or rings once (or a half ring) then goes to voicemail, that's further evidence you may have been blocked.
  • Loss: CSRLoss with these parameters:
    {
        "beta": 0.1,
        "gamma": 1.0,
        "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')"
    }
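
In code, this configuration corresponds to roughly the following sketch. The import path and signature are assumptions based on the sentence-transformers sparse-encoder API; per the JSON above, the inner ranking loss defaults to SparseMultipleNegativesRankingLoss with scale=1.0 and dot-product similarity:

from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.losses import CSRLoss

model = SparseEncoder("mixedbread-ai/mxbai-embed-large-v1")
# beta weighs the auxiliary (dead-latent) reconstruction term and
# gamma weighs the ranking term, matching the parameters above.
loss = CSRLoss(model, beta=0.1, gamma=1.0)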
    

Evaluation Dataset

gooaq

  • Dataset: gooaq at b089f72
  • Size: 1,000 evaluation samples
  • Columns: question and answer
  • Approximate statistics based on the first 1000 samples:
              question       answer
    type      string         string
    min       8 tokens       14 tokens
    mean      11.88 tokens   61.03 tokens
    max       22 tokens      127 tokens
  • Samples:
    question: how do i program my directv remote with my tv?
    answer: ['Press MENU on your remote.', 'Select Settings & Help > Settings > Remote Control > Program Remote.', 'Choose the device (TV, audio, DVD) you wish to program. ... ', 'Follow the on-screen prompts to complete programming.']

    question: are rodrigues fruit bats nocturnal?
    answer: Before its numbers were threatened by habitat destruction, storms, and hunting, some of those groups could number 500 or more members. Sunrise, sunset. Rodrigues fruit bats are most active at dawn, at dusk, and at night.

    question: why does your heart rate increase during exercise bbc bitesize?
    answer: During exercise there is an increase in physical activity and muscle cells respire more than they do when the body is at rest. The heart rate increases during exercise. The rate and depth of breathing increases - this makes sure that more oxygen is absorbed into the blood, and more carbon dioxide is removed from it.
  • Loss: CSRLoss with these parameters:
    {
        "beta": 0.1,
        "gamma": 1.0,
        "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • learning_rate: 0.0002
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • bf16: True
  • load_best_model_at_end: True
  • batch_sampler: no_duplicates
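
Put together, a condensed sketch of this training run. The trainer class names and import paths assume the sentence-transformers sparse-encoder API, the dataset id is assumed to be sentence-transformers/gooaq (revision b089f72, as listed above), and the evaluation dataset and evaluators are omitted for brevity:

from datasets import load_dataset
from sentence_transformers.sparse_encoder import (
    SparseEncoderTrainer,
    SparseEncoderTrainingArguments,
)
from sentence_transformers.training_args import BatchSamplers

# `model` and `loss` as constructed in the previous sections.
train_dataset = load_dataset("sentence-transformers/gooaq", split="train", revision="b089f72")

args = SparseEncoderTrainingArguments(
    output_dir="csr-mxbai-embed-large-v1-gooaq-2e-4",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=2e-4,
    num_train_epochs=1,
    warmup_ratio=0.1,
    bf16=True,
    eval_strategy="steps",
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no duplicate texts per batch, for the in-batch negatives
)

trainer = SparseEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()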

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 0.0002
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss NanoMSMARCO_128_dot_ndcg@10 NanoNFCorpus_128_dot_ndcg@10 NanoNQ_128_dot_ndcg@10 NanoBEIR_mean_128_dot_ndcg@10 NanoMSMARCO_256_dot_ndcg@10 NanoNFCorpus_256_dot_ndcg@10 NanoNQ_256_dot_ndcg@10 NanoBEIR_mean_256_dot_ndcg@10
-1 -1 - - 0.6175 0.2875 0.5432 0.4827 0.6158 0.3234 0.5929 0.5107
0.0064 300 0.3621 - - - - - - - - -
0.0128 600 0.3319 - - - - - - - - -
0.0191 900 0.3212 - - - - - - - - -
0.0255 1200 0.3154 - - - - - - - - -
0.0319 1500 0.3129 - - - - - - - - -
0.0383 1800 0.309 - - - - - - - - -
0.0446 2100 0.317 - - - - - - - - -
0.0510 2400 0.2997 - - - - - - - - -
0.0574 2700 0.3409 - - - - - - - - -
0.0638 3000 0.3251 0.3136 0.6049 0.2393 0.5583 0.4675 0.5950 0.2559 0.5555 0.4688
0.0701 3300 0.3291 - - - - - - - - -
0.0765 3600 0.3366 - - - - - - - - -
0.0829 3900 0.3286 - - - - - - - - -
0.0893 4200 0.3264 - - - - - - - - -
0.0956 4500 0.3413 - - - - - - - - -
0.1020 4800 0.3352 - - - - - - - - -
0.1084 5100 0.3323 - - - - - - - - -
0.1148 5400 0.3308 - - - - - - - - -
0.1211 5700 0.3127 - - - - - - - - -
0.1275 6000 0.3224 0.2949 0.5445 0.2155 0.5394 0.4331 0.5911 0.2340 0.5365 0.4539
0.1339 6300 0.3216 - - - - - - - - -
0.1403 6600 0.3202 - - - - - - - - -
0.1466 6900 0.3296 - - - - - - - - -
0.1530 7200 0.3171 - - - - - - - - -
0.1594 7500 0.3141 - - - - - - - - -
0.1658 7800 0.3202 - - - - - - - - -
0.1721 8100 0.3088 - - - - - - - - -
0.1785 8400 0.304 - - - - - - - - -
0.1849 8700 0.3105 - - - - - - - - -
0.1913 9000 0.307 0.2849 0.6038 0.2258 0.5471 0.4589 0.6241 0.2449 0.5498 0.4730
0.1976 9300 0.3043 - - - - - - - - -
0.2040 9600 0.3035 - - - - - - - - -
0.2104 9900 0.3069 - - - - - - - - -
0.2168 10200 0.3174 - - - - - - - - -
0.2231 10500 0.3111 - - - - - - - - -
0.2295 10800 0.295 - - - - - - - - -
0.2359 11100 0.2892 - - - - - - - - -
0.2423 11400 0.3012 - - - - - - - - -
0.2486 11700 0.3061 - - - - - - - - -
0.2550 12000 0.2863 0.2631 0.6190 0.2720 0.5379 0.4763 0.6056 0.2898 0.5419 0.4791
0.2614 12300 0.3008 - - - - - - - - -
0.2678 12600 0.2849 - - - - - - - - -
0.2741 12900 0.2876 - - - - - - - - -
0.2805 13200 0.2963 - - - - - - - - -
0.2869 13500 0.2926 - - - - - - - - -
0.2933 13800 0.2855 - - - - - - - - -
0.2996 14100 0.2868 - - - - - - - - -
0.3060 14400 0.294 - - - - - - - - -
0.3124 14700 0.3008 - - - - - - - - -
0.3188 15000 0.293 0.2745 0.5538 0.2847 0.5422 0.4602 0.5615 0.2976 0.5588 0.4726
0.3252 15300 0.2776 - - - - - - - - -
0.3315 15600 0.2906 - - - - - - - - -
0.3379 15900 0.2874 - - - - - - - - -
0.3443 16200 0.2834 - - - - - - - - -
0.3507 16500 0.2718 - - - - - - - - -
0.3570 16800 0.2834 - - - - - - - - -
0.3634 17100 0.2833 - - - - - - - - -
0.3698 17400 0.281 - - - - - - - - -
0.3762 17700 0.2922 - - - - - - - - -
0.3825 18000 0.279 0.2623 0.5851 0.2696 0.5097 0.4548 0.5849 0.2776 0.5570 0.4732
0.3889 18300 0.2894 - - - - - - - - -
0.3953 18600 0.283 - - - - - - - - -
0.4017 18900 0.2824 - - - - - - - - -
0.4080 19200 0.2758 - - - - - - - - -
0.4144 19500 0.2893 - - - - - - - - -
0.4208 19800 0.278 - - - - - - - - -
0.4272 20100 0.2814 - - - - - - - - -
0.4335 20400 0.278 - - - - - - - - -
0.4399 20700 0.2783 - - - - - - - - -
0.4463 21000 0.2803 0.2510 0.5880 0.2664 0.5664 0.4736 0.6115 0.2734 0.5465 0.4772
0.4527 21300 0.2668 - - - - - - - - -
0.4590 21600 0.2828 - - - - - - - - -
0.4654 21900 0.2815 - - - - - - - - -
0.4718 22200 0.2778 - - - - - - - - -
0.4782 22500 0.271 - - - - - - - - -
0.4845 22800 0.2696 - - - - - - - - -
0.4909 23100 0.2698 - - - - - - - - -
0.4973 23400 0.2768 - - - - - - - - -
0.5037 23700 0.2626 - - - - - - - - -
0.5100 24000 0.2611 0.2414 0.6078 0.2635 0.5668 0.4794 0.6231 0.2942 0.5944 0.5039
0.5164 24300 0.2736 - - - - - - - - -
0.5228 24600 0.2695 - - - - - - - - -
0.5292 24900 0.2673 - - - - - - - - -
0.5355 25200 0.2746 - - - - - - - - -
0.5419 25500 0.2681 - - - - - - - - -
0.5483 25800 0.2676 - - - - - - - - -
0.5547 26100 0.2686 - - - - - - - - -
0.5610 26400 0.2652 - - - - - - - - -
0.5674 26700 0.2596 - - - - - - - - -
0.5738 27000 0.2677 0.2494 0.6018 0.2460 0.5280 0.4586 0.6238 0.2775 0.5673 0.4895
0.5802 27300 0.2621 - - - - - - - - -
0.5865 27600 0.2558 - - - - - - - - -
0.5929 27900 0.251 - - - - - - - - -
0.5993 28200 0.2601 - - - - - - - - -
0.6057 28500 0.2612 - - - - - - - - -
0.6120 28800 0.2695 - - - - - - - - -
0.6184 29100 0.2662 - - - - - - - - -
0.6248 29400 0.2589 - - - - - - - - -
0.6312 29700 0.2602 - - - - - - - - -
0.6376 30000 0.2698 0.2507 0.5892 0.2996 0.5386 0.4758 0.6102 0.2941 0.5535 0.4860
0.6439 30300 0.2625 - - - - - - - - -
0.6503 30600 0.2598 - - - - - - - - -
0.6567 30900 0.2594 - - - - - - - - -
0.6631 31200 0.2618 - - - - - - - - -
0.6694 31500 0.2556 - - - - - - - - -
0.6758 31800 0.2591 - - - - - - - - -
0.6822 32100 0.2544 - - - - - - - - -
0.6886 32400 0.2589 - - - - - - - - -
0.6949 32700 0.2522 - - - - - - - - -
0.7013 33000 0.2521 0.2535 0.6053 0.2650 0.5329 0.4677 0.6115 0.2925 0.6057 0.5032
0.7077 33300 0.2576 - - - - - - - - -
0.7141 33600 0.2582 - - - - - - - - -
0.7204 33900 0.2567 - - - - - - - - -
0.7268 34200 0.2577 - - - - - - - - -
0.7332 34500 0.2568 - - - - - - - - -
0.7396 34800 0.254 - - - - - - - - -
0.7459 35100 0.2489 - - - - - - - - -
0.7523 35400 0.2545 - - - - - - - - -
0.7587 35700 0.2476 - - - - - - - - -
0.7651 36000 0.2637 0.2397 0.6138 0.2726 0.5627 0.4831 0.6056 0.2889 0.5745 0.4897
0.7714 36300 0.2508 - - - - - - - - -
0.7778 36600 0.2569 - - - - - - - - -
0.7842 36900 0.2419 - - - - - - - - -
0.7906 37200 0.2453 - - - - - - - - -
0.7969 37500 0.2456 - - - - - - - - -
0.8033 37800 0.2497 - - - - - - - - -
0.8097 38100 0.2556 - - - - - - - - -
0.8161 38400 0.252 - - - - - - - - -
0.8224 38700 0.2423 - - - - - - - - -
0.8288 39000 0.2545 0.2301 0.5927 0.2895 0.5553 0.4792 0.5979 0.2987 0.5587 0.4851
0.8352 39300 0.2482 - - - - - - - - -
0.8416 39600 0.2429 - - - - - - - - -
0.8479 39900 0.2463 - - - - - - - - -
0.8543 40200 0.2354 - - - - - - - - -
0.8607 40500 0.2466 - - - - - - - - -
0.8671 40800 0.2484 - - - - - - - - -
0.8734 41100 0.2448 - - - - - - - - -
0.8798 41400 0.2448 - - - - - - - - -
0.8862 41700 0.2515 - - - - - - - - -
0.8926 42000 0.2428 0.2392 0.6001 0.2826 0.5857 0.4895 0.6208 0.3019 0.6010 0.5079
0.8989 42300 0.2497 - - - - - - - - -
0.9053 42600 0.2415 - - - - - - - - -
0.9117 42900 0.2408 - - - - - - - - -
0.9181 43200 0.242 - - - - - - - - -
0.9245 43500 0.2412 - - - - - - - - -
0.9308 43800 0.2472 - - - - - - - - -
0.9372 44100 0.2408 - - - - - - - - -
0.9436 44400 0.2374 - - - - - - - - -
0.9500 44700 0.2312 - - - - - - - - -
0.9563 45000 0.2412 0.2379 0.6079 0.2711 0.5977 0.4922 0.6326 0.3071 0.5943 0.5113
0.9627 45300 0.2381 - - - - - - - - -
0.9691 45600 0.2456 - - - - - - - - -
0.9755 45900 0.2418 - - - - - - - - -
0.9818 46200 0.2355 - - - - - - - - -
0.9882 46500 0.2424 - - - - - - - - -
0.9946 46800 0.2389 - - - - - - - - -
  • The step 45000 row (epoch 0.9563) is the saved checkpoint: its NanoBEIR dot_ndcg@10 values (0.4922 at 128 and 0.5113 at 256 active dims) match the Evaluation section above.

Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Energy Consumed: 1.202 kWh
  • Carbon Emitted: 0.467 kg of CO2
  • Hours Used: 3.125 hours

Training Hardware

  • On Cloud: No
  • GPU Model: 1 x NVIDIA GeForce RTX 3090
  • CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
  • RAM Size: 31.78 GB

Framework Versions

  • Python: 3.11.6
  • Sentence Transformers: 4.2.0.dev0
  • Transformers: 4.49.0
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.5.1
  • Datasets: 2.21.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CSRLoss

@misc{wen2025matryoshkarevisitingsparsecoding,
      title={Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation},
      author={Tiansheng Wen and Yifei Wang and Zequn Zeng and Zhong Peng and Yudi Su and Xinyang Liu and Bo Chen and Hongwei Liu and Stefanie Jegelka and Chenyu You},
      year={2025},
      eprint={2503.01776},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2503.01776},
}

SparseMultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}