SentenceTransformer

This is a sentence-transformers model. It maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Maximum Sequence Length: 1024 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
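
The three modules correspond to running the XLM-RoBERTa encoder, taking the CLS token embedding as the sentence representation (pooling_mode_cls_token=True), and L2-normalizing the result so that cosine similarity reduces to a dot product. As a rough illustration, the sketch below reproduces that pipeline by hand with the transformers library; the checkpoint path is a placeholder, and in practice you would simply load the model through Sentence Transformers as shown in the Usage section below.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Placeholder path; the actual weights ship inside the Sentence Transformers model directory.
tokenizer = AutoTokenizer.from_pretrained("path/to/checkpoint")
encoder = AutoModel.from_pretrained("path/to/checkpoint")  # XLMRobertaModel

sentences = ["A first example sentence.", "A second example sentence."]
batch = tokenizer(sentences, padding=True, truncation=True, max_length=1024, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 1024)
cls_embeddings = token_embeddings[:, 0]                     # Pooling: CLS token only
embeddings = F.normalize(cls_embeddings, p=2, dim=1)        # Normalize: unit-length vectors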

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub (the model ID below is a placeholder; replace it with this model's repository ID)
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'When were bluebonnets named the state flower of Texas?',
    'Bluebonnet is a name given to any number of blue-flowered species of the genus "Lupinus" predominantly found in southwestern United States and is collectively the state flower of Texas. The shape of the petals on the flower resembles the bonnet worn by pioneer women to shield them from the sun.\nSpecies often called bluebonnets include:On March 7, 1901, "Lupinus subcarnosus" became the only species of bluebonnet recognized as the state flower of Texas; however, "Lupinus texensis" emerged as the favorite of most Texans. So, in 1971, the Texas Legislature made any similar species of "Lupinus" that could be found in Texas the state flower.',
    'The second major festival hosted in Ennis is the Bluebonnet Trails Festival, celebrating the state flower of Texas and the vibrant bloom of wildflowers in the surrounding countryside. The event attracts tens of thousands of tourists each year to events including sightseeing excursions and a festival in downtown. The festival is held on the third weekend of April, and the Bluebonnet Trails are hosted for the entire month. First hosted along the Kachina Prairie Park\'s historic mile-long trail system in 1938, the Bluebonnet Trails have since expanded into a route map of several dozen miles along rural farm roads throughout the surrounding countryside east and northeast of the city. The routes for these sightseeing excursions have been officially hosted and mapped out by the Ennis Garden Club since 1951. To commemorate the popularity of the Bluebonnet Trails Festival and the efforts made to celebrate and preserve the state flower of Texas, Ennis was designated by the 1997 Texas State Legislature as the "Official Bluebonnet City of Texas" and home to the "Official Bluebonnet Trail of Texas."',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
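
The same encode/similarity calls also cover the semantic-search use case mentioned above: encode a query and a corpus, then rank the corpus by similarity. A minimal sketch (the corpus strings are shortened paraphrases used only for illustration, and the model ID is again a placeholder):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")
query = "When were bluebonnets named the state flower of Texas?"
corpus = [
    "Lupinus subcarnosus became the state flower of Texas on March 7, 1901.",
    "The Bluebonnet Trails Festival is held in Ennis on the third weekend of April.",
]
query_embedding = model.encode([query])
corpus_embeddings = model.encode(corpus)
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, len(corpus)]
best = int(scores.argmax())
print(corpus[best], float(scores[0, best]))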

Training Details

Training Dataset

Unnamed Dataset

  • Size: 105,064 training samples
  • Columns: anchor, positive, negative, negative_2, negative_3, and negative_4
  • Approximate statistics based on the first 1000 samples:
    Column      Type    Min tokens  Mean tokens  Max tokens
    anchor      string  6           11.81        26
    positive    string  17          169.73       986
    negative    string  12          181.7        759
    negative_2  string  22          184.76       817
    negative_3  string  14          186.03       859
    negative_4  string  12          179.54       759
  • Samples:
    anchor positive negative negative_2 negative_3 negative_4
    When was quantum field theory developed? The third thread in the development of quantum field theory was the need to handle the statistics of many-particle systems consistently and with ease. In 1927, Pascual Jordan tried to extend the canonical quantization of fields to the many-body wave functions of identical particles using a formalism which is known as statistical transformation theory; this procedure is now sometimes called second quantization. In 1928, Jordan and Eugene Wigner found that the quantum field describing electrons, or other fermions, had to be expanded using anti-commuting creation and annihilation operators due to the Pauli exclusion principle (see Jordan–Wigner transformation). This thread of development was incorporated into many-body theory and strongly influenced condensed matter physics and nuclear physics. The application of the new quantum theory to electromagnetism resulted in quantum field theory, which was developed starting around 1930. Quantum field theory has driven the development of more sophisticated formulations of quantum mechanics, of which the ones presented here are simple special cases. Two classic text-books from the 1960s, James D. Bjorken, Sidney David Drell, "Relativistic Quantum Mechanics" (1964) and J. J. Sakurai, "Advanced Quantum Mechanics" (1967), thoroughly developed the Feynman graph expansion techniques using physically intuitive and practical methods following from the correspondence principle, without worrying about the technicalities involved in deriving the Feynman rules from the superstructure of quantum field theory itself. Although both Feynman's heuristic and pictorial style of dealing with the infinities, as well as the formal methods of Tomonaga and Schwinger, worked extremely well, and gave spectacularly accurate answers, the true analytical nature of the question of "renormalizability", that is, whether ANY theory formulated as a "quantum field theory" would give finite answers, was not worked-out until much later, when the urgency of trying to formulate finite theories for the strong and electro-weak (and gravitational interactions) demanded i... It was evident from the beginning that a proper quantum treatment of the electromagnetic field had to somehow incorporate Einstein's relativity theory, which had grown out of the study of classical electromagnetism. This need to put together relativity and quantum mechanics was the second major motivation in the development of quantum field theory. Pascual Jordan and Wolfgang Pauli showed in 1928 that quantum fields could be made to behave in the way predicted by special relativity during coordinate transformations (specifically, they showed that the field commutators were Lorentz invariant). A further boost for quantum field theory came with the discovery of the Dirac equation, which was originally formulated and interpreted as a single-particle equation analogous to the Schrödinger equation, but unlike the Schrödinger equation, the Dirac equation satisfies both the Lorentz invariance, that is, the requirements of special relativity, and the rules of quantum mechanics.
    The Dirac equa...
    Through the works of Born, Heisenberg, and Pascual Jordan in 1925-1926, a quantum theory of the free electromagnetic field (one with no interactions with matter) was developed via canonical quantization by treating the electromagnetic field as a set of quantum harmonic oscillators. With the exclusion of interactions, however, such a theory was yet incapable of making quantitative predictions about the real world.
    Was there a year 0? Cassini gave the following reasons for using a year 0:
    Fred Espanak of NASA lists 50 phases of the moon within year 0, showing that it is a full year, not an instant in time. Jean Meeus gives the following explanation:
    Although he used the usual French terms "avant J.-C." (before Jesus Christ) and "après J.-C." (after Jesus Christ) to label years elsewhere in his book, the Byzantine historian Venance Grumel used negative years (identified by a minus sign, −) to label BC years and unsigned positive years to label AD years in a table. He did so possibly to save space and put no year 0 between them.
    Games Def Interceptions Fumbles Sacks & Tackles
    Year Age Tm Pos No. G GS Int Yds TD Lng PD FF Fmb FR Yds TD Sk Tkl Ast Sfty AV
    2004 23 NWE ss 42 13 2 0 0 0 0 2 1 0 1 0 0 15 8 2
    2005 24 IND 36 16 0 1 0 1 0 0 8 2 1
    2006 25 IND ss 36 10 1 0 0 0 0 2 11 0 1
    Career 39 3 0 0 0 0 4 2 0 2 0 0 34 10 4
    2 yrs IND 26 1 0 0 0 0 2 1 0 1 0 0 19 2 2
    1 yr NWE 13 2 0 0 0 0 2 1 0 1 0 0 15 8 2
    After pleading guilty in January 2008 to drug charges in Virginia Beach, VA stemming from a March 2007 incident, Reid was initially sentenced to two years in prison for possessing marijuana with the intent to distribute but had the sentence suspended with the agreement he would stay out of trouble for two years. His license was also suspended for six months and ordered to attend drug treatment and counseling.
    This enzyme belongs to the family of oxidoreductases, specifically those acting on paired donors, with O2 as oxidant and incorporation or reduction of oxygen. The oxygen incorporated need not be derived from O2 with 2-oxoglutarate as one donor, and incorporation of one atom o oxygen into each donor. The systematic name of this enzyme class is N6,N6,N6-trimethyl-L-lysine,2-oxoglutarate:oxygen oxidoreductase (3-hydroxylating). Other names in common use include trimethyllysine alpha-ketoglutarate dioxygenase, TML-alpha-ketoglutarate dioxygenase, TML hydroxylase, 6-N,6-N,6-N-trimethyl-L-lysine,2-oxoglutarate:oxygen oxidoreductase, and (3-hydroxylating). This enzyme participates in lysine degradation and L-carnitine biosynthesis and requires the presence of iron and ascorbate. ㅜ is one of the Korean hangul. The Unicode for ㅜ is U+315C. ㅌ is one of the Korean hangul. The Unicode for ㅌ is U+314C.
    When is the dialectical method used? The Dialect Test was created by A.J. Ellis in February 1879, and was used in the fieldwork for his work "On Early English Pronunciation". It stands as one of the earliest methods of identifying vowel sounds and features of speech. The aim was to capture the main vowel sounds of an individual dialect by listening to the reading of a short passage. All the categories of West Saxon words and vowels were included in the test so that comparisons could be made with the historic West Saxon speech as well as with various other dialects. Karl Popper has attacked the dialectic repeatedly. In 1937, he wrote and delivered a paper entitled "What Is Dialectic?" in which he attacked the dialectical method for its willingness "to put up with contradictions". Popper concluded the essay with these words: "The whole development of dialectic should be a warning against the dangers inherent in philosophical system-building. It should remind us that philosophy should not be made a basis for any sort of scientific system and that philosophers should be much more modest in their claims. One task which they can fulfill quite usefully is the study of the critical methods of science" (Ibid., p. 335). He was one of the first to apply Labovian methods in Britain with his research in 1970-1 on the speech of Bradford, Halifax and Huddersfield. He concluded that the speech detailed in most of dialectology (e.g. A. J. Ellis, the Survey of English Dialects) had virtually disappeared, having found only one speaker out of his sample of 106 speakers who regularly used dialect. However, he found that differences in speech persisted as an indicator of social class, age and gender. This PhD dissertation was later adapted into a book, "Dialect and Accent in Industrial West Yorkshire". The work was criticised by Graham Shorrocks on the grounds that the sociolinguistic methods used were inappropriate for recording the traditional vernacular and that there was an inadequate basis for comparison with earlier dialect studies in West Yorkshire. The Institute also attempted to reformulate dialectics as a concrete method. The use of such a dialectical method can be traced back to the philosophy of Hegel, who conceived dialectic as the tendency of a notion to pass over into its own negation as the result of conflict between its inherent contradictory aspects. In opposition to previous modes of thought, which viewed things in abstraction, each by itself and as though endowed with fixed properties, Hegelian dialectic has the ability to consider ideas according to their movement and change in time, as well as according to their interrelations and interactions. For Marx, dialectics is not a formula for generating predetermined outcomes but is a method for the empirical study of social processes in terms of interrelations, development, and transformation. In his introduction to the Penguin edition of Marx's "Capital", Ernest Mandel writes, "When the dialectical method is applied to the study of economic problems, economic phenomena are not viewed separately from each other, by bits and pieces, but in their inner connection as an integrated totality, structured around, and by, a basic predominant mode of production."
  • Loss: CachedGISTEmbedLoss with these parameters:
    {'guide': SentenceTransformer(
      (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
      (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
      (2): Normalize()
    ), 'temperature': 0.01}
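
A minimal sketch of how this loss could be instantiated; the model IDs are placeholders, and the guide simply reuses the same checkpoint, matching the configuration above.

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CachedGISTEmbedLoss

model = SentenceTransformer("sentence_transformers_model_id")  # model being trained (placeholder ID)
guide = SentenceTransformer("sentence_transformers_model_id")  # guide model used to filter false negatives
loss = CachedGISTEmbedLoss(model, guide=guide, temperature=0.01)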
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 1024
  • learning_rate: 3e-05
  • weight_decay: 0.01
  • num_train_epochs: 8
  • warmup_ratio: 0.05
  • bf16: True
  • batch_sampler: no_duplicates
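
A minimal sketch of how these non-default values could be passed via SentenceTransformerTrainingArguments; the output directory is a placeholder, and the resulting arguments would be handed to a SentenceTransformerTrainer together with the model, the training dataset, and the CachedGISTEmbedLoss instance sketched above.

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",                        # placeholder output directory
    per_device_train_batch_size=1024,
    learning_rate=3e-5,
    weight_decay=0.01,
    num_train_epochs=8,
    warmup_ratio=0.05,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)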

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 1024
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 3e-05
  • weight_decay: 0.01
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 8
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.05
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: True
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss
0.04 1 0.1495
0.08 2 0.1625
0.12 3 0.1622
0.16 4 0.1877
0.2 5 0.1561
0.24 6 0.1495
0.28 7 0.1502
0.32 8 0.1634
0.36 9 0.1592
0.4 10 0.1744
0.44 11 0.1503
0.48 12 0.1618
0.52 13 0.1863
0.56 14 0.1782
0.6 15 0.1599
0.64 16 0.1513
0.68 17 0.1608
0.72 18 0.1771
0.76 19 0.1595
0.8 20 0.1701
0.84 21 0.1426
0.88 22 0.1749
0.92 23 0.1591
0.96 24 0.1735
1.0 25 0.174
1.04 26 0.1246
1.08 27 0.114
1.12 28 0.1176
1.16 29 0.1206
1.2 30 0.1202
1.24 31 0.1197
1.28 32 0.1134
1.32 33 0.1155
1.3600 34 0.0978
1.4 35 0.1197
1.44 36 0.1038
1.48 37 0.1254
1.52 38 0.1083
1.56 39 0.1192
1.6 40 0.1026
1.6400 41 0.1041
1.6800 42 0.1139
1.72 43 0.1045
1.76 44 0.0997
1.8 45 0.1183
1.8400 46 0.0952
1.88 47 0.0941
1.92 48 0.1075
1.96 49 0.1093
2.0 50 0.0975
2.04 51 0.0839
2.08 52 0.0795
2.12 53 0.0809
2.16 54 0.0798
2.2 55 0.0698
2.24 56 0.0878
2.2800 57 0.0807
2.32 58 0.0748
2.36 59 0.0796
2.4 60 0.0846
2.44 61 0.0821
2.48 62 0.0831
2.52 63 0.0826
2.56 64 0.0667
2.6 65 0.0792
2.64 66 0.0688
2.68 67 0.0774
2.7200 68 0.077
2.76 69 0.0746
2.8 70 0.0738
2.84 71 0.0772
2.88 72 0.0853
2.92 73 0.0643
2.96 74 0.0775
3.0 75 0.0686
3.04 76 0.0499
3.08 77 0.056
3.12 78 0.0607
3.16 79 0.0616
3.2 80 0.0528
3.24 81 0.0585
3.2800 82 0.0597
3.32 83 0.0655
3.36 84 0.0634
3.4 85 0.0568
3.44 86 0.06
3.48 87 0.0581
3.52 88 0.0499
3.56 89 0.0524
3.6 90 0.0593
3.64 91 0.0558
3.68 92 0.0497
3.7200 93 0.057
3.76 94 0.0526
3.8 95 0.0615
3.84 96 0.0532
3.88 97 0.0514
3.92 98 0.0569
3.96 99 0.053
4.0 100 0.0546
4.04 101 0.0457
4.08 102 0.0445
4.12 103 0.0466
4.16 104 0.0485
4.2 105 0.0434
4.24 106 0.0474
4.28 107 0.0495
4.32 108 0.0443
4.36 109 0.0471
4.4 110 0.0429
4.44 111 0.0511
4.48 112 0.037
4.52 113 0.047
4.5600 114 0.0466
4.6 115 0.0451
4.64 116 0.0466
4.68 117 0.0358
4.72 118 0.0386
4.76 119 0.0474
4.8 120 0.0417
4.84 121 0.0433
4.88 122 0.0477
4.92 123 0.0513
4.96 124 0.0468
5.0 125 0.0387
5.04 126 0.0425
5.08 127 0.0393
5.12 128 0.0418
5.16 129 0.0414
5.2 130 0.0355
5.24 131 0.0423
5.28 132 0.0369
5.32 133 0.0319
5.36 134 0.0395
5.4 135 0.0417
5.44 136 0.0366
5.48 137 0.0419
5.52 138 0.0382
5.5600 139 0.0379
5.6 140 0.0382
5.64 141 0.0382
5.68 142 0.0365
5.72 143 0.0377
5.76 144 0.0362
5.8 145 0.0311
5.84 146 0.0408
5.88 147 0.0367
5.92 148 0.0386
5.96 149 0.039
6.0 150 0.0402
6.04 151 0.038
6.08 152 0.0395
6.12 153 0.0351
6.16 154 0.0377
6.2 155 0.0387
6.24 156 0.0306
6.28 157 0.038
6.32 158 0.0404
6.36 159 0.0356
6.4 160 0.0256
6.44 161 0.0336
6.48 162 0.0332
6.52 163 0.0324
6.5600 164 0.0345
6.6 165 0.0374
6.64 166 0.0335
6.68 167 0.0313
6.72 168 0.0348
6.76 169 0.0386
6.8 170 0.035
6.84 171 0.0354
6.88 172 0.0319
6.92 173 0.0303
6.96 174 0.0312
7.0 175 0.0368
7.04 176 0.0297
7.08 177 0.031
7.12 178 0.0315
7.16 179 0.034
7.2 180 0.0415
7.24 181 0.0338
7.28 182 0.0296
7.32 183 0.0299
7.36 184 0.0305
7.4 185 0.0318
7.44 186 0.0303
7.48 187 0.0302
7.52 188 0.0323
7.5600 189 0.031
7.6 190 0.0343
7.64 191 0.0344
7.68 192 0.0407
7.72 193 0.0332
7.76 194 0.0298
7.8 195 0.0301
7.84 196 0.0296
7.88 197 0.0342
7.92 198 0.0316
7.96 199 0.0307
8.0 200 0.034

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.4.1
  • Transformers: 4.49.0
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.4.0
  • Datasets: 2.21.0
  • Tokenizers: 0.21.0
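
To reproduce this environment, the library versions above can be pinned at install time; PyTorch 2.5.1 with the matching CUDA 12.4 build is typically installed separately following the official PyTorch instructions.

pip install sentence-transformers==3.4.1 transformers==4.49.0 accelerate==1.4.0 datasets==2.21.0 tokenizers==0.21.0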

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}