rasyosef committed
Commit 42728a5 · verified · 1 Parent(s): e00d850

Add new SparseEncoder model

Files changed (4):
  1. README.md +104 -110
  2. model.safetensors +1 -1
  3. special_tokens_map.json +35 -5
  4. tokenizer_config.json +8 -1
README.md CHANGED
@@ -8,41 +8,32 @@ tags:
  - sparse
  - splade
  - generated_from_trainer
- - dataset_size:250000
  - loss:SpladeLoss
  - loss:SparseMultipleNegativesRankingLoss
  - loss:FlopsLoss
  base_model: prajjwal1/bert-mini
  widget:
- - text: icd medication reaction
- - text: >-
- Report Abuse. An egg lives for around 12 hours after ovulation. Sperm can
- live for about five days inside the uterus, so providing those two time
- frames collide, it can be pretty soon after sex. Hours, usually.
- Implantation occurs between 6 - 12 days.
- - text: >-
- A warm-up is important for many reasons. Some of these reasons include: -
- Facilitates transition from rest to exercise-Stretches postural
- muscles-Augments blood flow-Elevates body temperature-Allows body to
- adjust to changing physiologic, biomechanical and bioenergetic demands
- placed on it during conditioning phase. warm-up helps your body prepare for
- any physical activity. Without a warm-up, your muscles will be cold and
- stiff, the oxygen won't be flowing to your muscles and joints and you will
- not perform the activity well. Also, when you do a warm up your recovery
- from exercising will be more comfortable and shorter.
- - text: >-
- First, you need to have a Kindle Fire HD or HDX as these are the Kindle
- Fires that have bluetooth. The very first generation doesn't have this
- capability. (If you don't know which tablet you have, see this article.).
- Second, you need to have a bluetooth keyboard or other device, like
- headphones, speakers, or earbuds. This is a picture of a Jawbone earpiece
- I've successfully paired to my Kindle Fire and been able to listen to music
- with.
- - text: >-
- Cantigny Park. Cantigny is a 500-acre (2.0 km2) park in Wheaton, Illinois,
- 30 miles west of Chicago. It is the former estate of Joseph Medill and his
- grandson Colonel Robert R. McCormick, publishers of the Chicago Tribune, and
- is open to the public.
  pipeline_tag: feature-extraction
  library_name: sentence-transformers
  metrics:
@@ -76,64 +67,62 @@ model-index:
  type: unknown
  metrics:
  - type: dot_accuracy@1
- value: 0.63028
  name: Dot Accuracy@1
  - type: dot_accuracy@3
- value: 0.79716
  name: Dot Accuracy@3
  - type: dot_accuracy@5
- value: 0.85096
  name: Dot Accuracy@5
  - type: dot_accuracy@10
- value: 0.90548
  name: Dot Accuracy@10
  - type: dot_precision@1
- value: 0.63028
  name: Dot Precision@1
  - type: dot_precision@3
- value: 0.26571999999999996
  name: Dot Precision@3
  - type: dot_precision@5
- value: 0.170192
  name: Dot Precision@5
  - type: dot_precision@10
- value: 0.09054800000000002
  name: Dot Precision@10
  - type: dot_recall@1
- value: 0.63028
  name: Dot Recall@1
  - type: dot_recall@3
- value: 0.79716
  name: Dot Recall@3
  - type: dot_recall@5
- value: 0.85096
  name: Dot Recall@5
  - type: dot_recall@10
- value: 0.90548
  name: Dot Recall@10
  - type: dot_ndcg@10
- value: 0.7689713558276354
  name: Dot Ndcg@10
  - type: dot_mrr@10
- value: 0.7250807142857304
  name: Dot Mrr@10
  - type: dot_map@100
- value: 0.728785316630622
  name: Dot Map@100
  - type: query_active_dims
- value: 26.435239791870117
  name: Query Active Dims
  - type: query_sparsity_ratio
- value: 0.9991338955575693
  name: Query Sparsity Ratio
  - type: corpus_active_dims
- value: 326.6760399121094
  name: Corpus Active Dims
  - type: corpus_sparsity_ratio
- value: 0.9892970303416517
  name: Corpus Sparsity Ratio
- datasets:
- - microsoft/ms_marco
  ---
 
  # SPLADE-BERT-Mini
@@ -182,15 +171,15 @@ Then you can load this model and run inference.
  from sentence_transformers import SparseEncoder
 
  # Download from the 🤗 Hub
- model = SparseEncoder("rasyosef/SPLADE-BERT-Mini")
  # Run inference
  queries = [
- "cantigny gardens cost",
  ]
  documents = [
- 'The fee for a ceremony ranges from $400 to $2,500 with reception rental or $3,000 for a ceremony-only wedding. Please inquire about discounted rates for ceremony guest counts under 75. The average wedding cost at Cantigny Park is estimated at between $12,881 and $22,238 for a ceremony & reception for 100 guests.',
- 'Nestled in a serene setting, Cantigny Park is a scenic realm where you will create a unique wedding, the memories of which you will always cherish. This expansive estate encompasses 500 acres of beautiful gardens, colorful botanicals and tranquil water features, creating an idyllic background for this ideal day.',
- 'Cantigny Park. Cantigny is a 500-acre (2.0 km2) park in Wheaton, Illinois, 30 miles west of Chicago. It is the former estate of Joseph Medill and his grandson Colonel Robert R. McCormick, publishers of the Chicago Tribune, and is open to the public.',
  ]
  query_embeddings = model.encode_query(queries)
  document_embeddings = model.encode_document(documents)
@@ -200,7 +189,7 @@ print(query_embeddings.shape, document_embeddings.shape)
  # Get the similarity scores for the embeddings
  similarities = model.similarity(query_embeddings, document_embeddings)
  print(similarities)
- # tensor([[18.8703, 13.8253, 13.4587]])
  ```
 
  <!--
@@ -235,27 +224,27 @@ You can finetune this model on your own dataset.
 
  * Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator)
 
- | Metric | Value |
- |:----------------------|:----------|
- | dot_accuracy@1 | 0.6303 |
- | dot_accuracy@3 | 0.7972 |
- | dot_accuracy@5 | 0.851 |
- | dot_accuracy@10 | 0.9055 |
- | dot_precision@1 | 0.6303 |
- | dot_precision@3 | 0.2657 |
- | dot_precision@5 | 0.1702 |
- | dot_precision@10 | 0.0905 |
- | dot_recall@1 | 0.6303 |
- | dot_recall@3 | 0.7972 |
- | dot_recall@5 | 0.851 |
- | dot_recall@10 | 0.9055 |
- | **dot_ndcg@10** | **0.769** |
- | dot_mrr@10 | 0.7251 |
- | dot_map@100 | 0.7288 |
- | query_active_dims | 26.4352 |
- | query_sparsity_ratio | 0.9991 |
- | corpus_active_dims | 326.676 |
- | corpus_sparsity_ratio | 0.9893 |
 
  <!--
  ## Bias, Risks and Limitations
@@ -275,25 +264,25 @@ You can finetune this model on your own dataset.
 
  #### Unnamed Dataset
 
- * Size: 250,000 training samples
  * Columns: <code>query</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
  * Approximate statistics based on the first 1000 samples:
- | | query | positive | negative_1 | negative_2 |
- |:--------|:------|:---------|:-----------|:-----------|
- | type | string | string | string | string |
- | details | <ul><li>min: 4 tokens</li><li>mean: 8.87 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 82.54 tokens</li><li>max: 218 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 79.98 tokens</li><li>max: 252 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 80.55 tokens</li><li>max: 211 tokens</li></ul> |
  * Samples:
- | query | positive | negative_1 | negative_2 |
- |:------|:---------|:-----------|:-----------|
- | <code>how do automotive technicians get paid</code> | <code>104 months ago. The amount of pay from company to company does not vary too much, but you do have a wide variety of compensation methods. There are various combinations of hourly and commission pay rates, which depending on what type of work you specialize in can vary your bottom line considerably.04 months ago. The amount of pay from company to company does not vary too much, but you do have a wide variety of compensation methods. There are various combinations of hourly and commission pay rates, which depending on what type of work you specialize in can vary your bottom line considerably.</code> | <code>Bureau of Labor Statistics figures indicate that automotive technicians earned an average annual salary of $38,560 and an average hourly wage of $18.54 as of May 2011.Half of auto technicians reported annual salaries of between $26,850 and $47,540 and hourly wages of between $12.91 and $22.86.The 10 percent of automotive techs who earned the lowest made $20,620 or less per year, and the top 10 percent of earners made $59,600 or more per year.ver one-third of all automotive technicians employed as of May 2011 worked in the automotive repair and maintenance industry, where they earned an average of $35,090 per year.</code> | <code>It really depends on what automaker your working for, how much experience you have, and how long you've been in the industry. Obviously if you're working for a highend company(BMW,Mercedes,Ferrari) you can expect to be paid more per hour. And automotive technicians don't get paid by the hour.We get paid per FLAT RATE hour. Which basically means that we get paid by the job. Which could range from 0.2 of an hour for replacing a headlight bulb to 10hours for a transmission overhaul. Then there's a difference between warranty jobs and cash jobs.ut I won't get into too much detail. Automotive technicians get paid around $12-$15/hr at entry level. But can make around $18-$26/hr with much more experience. Which means you can expect to make 30,000 to 60,000/year. Though most technicians don't see past 45,000 a year.</code> |
- | <code>how far is steamboat springs from golden?</code> | <code>The distance between Steamboat Springs and Golden in a straight line is 100 miles or 160.9 Kilometers. Driving Directions & Drive Times from Steamboat Springs to Golden can be found further down the page.</code> | <code>Steamboat Springs Vacation Rentals Steamboat Springs Vacations Steamboat Springs Restaurants Things to Do in Steamboat Springs Steamboat Springs Travel Forum Steamboat Springs Photos Steamboat Springs Map Steamboat Springs Travel Guide All Steamboat Springs Hotels; Steamboat Springs Hotel Deals; Last Minute Hotels in Steamboat Springs; By Hotel Type Steamboat Springs Family Hotels</code> | <code>There are 98.92 miles from Golden to Steamboat Springs in northwest direction and 143 miles (230.14 kilometers) by car, following the US-40 route. Golden and Steamboat Springs are 3 hours 20 mins far apart, if you drive non-stop. This is the fastest route from Golden, CO to Steamboat Springs, CO. The halfway point is Heeney, CO. Golden, CO and Steamboat Springs, CO are in the same time zone (MDT). Current time in both locations is 1:26 pm.</code> |
- | <code>incoming wire routing number for california bank and trust</code> | <code>Please call California Bank And Trust representative at (888) 315-2271 for more information. 1 Routing Number: 122003396. 2 250 EAST FIRST STREET # 700. LOS ANGELES, CA 90012-0000. 3 Phone Number: (888) 315-2271.</code> | <code>When asked to provide a routing number for incoming wire transfers to Union Bank accounts, the routing number to use is: 122000496. back to top What options do I have to send wires?</code> | <code>Business Contracting Officers (BCO) have access to Online Banking wires. Simply sign on to Online Banking, click “Send Wires”, and then complete the required information. This particular service is limited to sending wires to U.S. banks only.</code> |
  * Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
  ```json
  {
  "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
- "document_regularizer_weight": 0.001,
- "query_regularizer_weight": 0.002
  }
  ```
@@ -301,14 +290,17 @@ You can finetune this model on your own dataset.
  #### Non-Default Hyperparameters
 
  - `eval_strategy`: epoch
- - `per_device_train_batch_size`: 64
- - `per_device_eval_batch_size`: 64
  - `learning_rate`: 6e-05
- - `num_train_epochs`: 4
  - `lr_scheduler_type`: cosine
  - `warmup_ratio`: 0.025
  - `fp16`: True
  - `optim`: adamw_torch_fused
  - `batch_sampler`: no_duplicates
 
  #### All Hyperparameters
@@ -318,11 +310,11 @@ You can finetune this model on your own dataset.
  - `do_predict`: False
  - `eval_strategy`: epoch
  - `prediction_loss_only`: True
- - `per_device_train_batch_size`: 64
- - `per_device_eval_batch_size`: 64
  - `per_gpu_train_batch_size`: None
  - `per_gpu_eval_batch_size`: None
- - `gradient_accumulation_steps`: 1
  - `eval_accumulation_steps`: None
  - `torch_empty_cache_steps`: None
  - `learning_rate`: 6e-05
@@ -331,7 +323,7 @@ You can finetune this model on your own dataset.
  - `adam_beta2`: 0.999
  - `adam_epsilon`: 1e-08
  - `max_grad_norm`: 1.0
- - `num_train_epochs`: 4
  - `max_steps`: -1
  - `lr_scheduler_type`: cosine
  - `lr_scheduler_kwargs`: {}
@@ -371,7 +363,7 @@ You can finetune this model on your own dataset.
  - `disable_tqdm`: False
  - `remove_unused_columns`: True
  - `label_names`: None
- - `load_best_model_at_end`: False
  - `ignore_data_skip`: False
  - `fsdp`: []
  - `fsdp_min_num_params`: 0
@@ -392,7 +384,7 @@ You can finetune this model on your own dataset.
  - `dataloader_persistent_workers`: False
  - `skip_memory_metrics`: True
  - `use_legacy_prediction_loop`: False
- - `push_to_hub`: False
  - `resume_from_checkpoint`: None
  - `hub_model_id`: None
  - `hub_strategy`: every_save
@@ -435,23 +427,25 @@ You can finetune this model on your own dataset.
  </details>
 
  ### Training Logs
- | Epoch | Step | Training Loss | dot_ndcg@10 |
- |:-----:|:-----:|:-------------:|:-----------:|
- | 1.0 | 3907 | 23.8846 | 0.7509 |
- | 2.0 | 7814 | 0.785 | 0.7670 |
- | 3.0 | 11721 | 0.6873 | 0.7685 |
- | 4.0 | 15628 | 0.6283 | 0.7690 |
- | -1 | -1 | - | 0.7690 |
 
  ### Framework Versions
- - Python: 3.11.13
  - Sentence Transformers: 5.0.0
  - Transformers: 4.53.1
  - PyTorch: 2.6.0+cu124
- - Accelerate: 1.8.1
  - Datasets: 3.6.0
- - Tokenizers: 0.21.2
 
  ## Citation
 
@@ -521,4 +515,4 @@ You can finetune this model on your own dataset.
  ## Model Card Contact
 
  *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
- -->
  - sparse
  - splade
  - generated_from_trainer
+ - dataset_size:500000
  - loss:SpladeLoss
  - loss:SparseMultipleNegativesRankingLoss
  - loss:FlopsLoss
  base_model: prajjwal1/bert-mini
  widget:
+ - text: When to File. Generally, the estate tax return is due nine months after the
+ date of death. A six month extension is available if requested prior to the due
+ date and the estimated correct amount of tax is paid before the due date. The
+ gift tax return is due on April 15th following the year in which the gift is made.
+ Where to File.
+ - text: what is a vermouth
+ - text: Stomach ulcers are the most visible sign of peptic ulcer disease. They occur
+ when the thick layer of mucus that protects your stomach from digestive juices
+ is reduced, thus enabling the digestive acids to eat away at the lining tissues
+ of the stomach.
+ - text: 'acronym - a word formed from the initial letters of the several words in
+ the name. 1 snafu - an acronym often used by soldiers in World War II: situation
+ normal all fucked up.'
+ - text: Estrogens, in females, are produced primarily by the ovaries, and during pregnancy,
+ the placenta. Follicle-stimulating hormone (FSH) stimulates the ovarian production
+ of estrogens by the granulosa cells of the ovarian follicles and corpora lutea.strogen
+ or oestrogen (see spelling differences) is the primary female sex hormone and
+ is responsible for development and regulation of the female reproductive system
+ and secondary sex characteristics. Estrogen may also refer to any substance, natural
+ or synthetic that mimics the effects of the natural hormone.
  pipeline_tag: feature-extraction
  library_name: sentence-transformers
  metrics:
 
  type: unknown
  metrics:
  - type: dot_accuracy@1
+ value: 0.4748
  name: Dot Accuracy@1
  - type: dot_accuracy@3
+ value: 0.7852
  name: Dot Accuracy@3
  - type: dot_accuracy@5
+ value: 0.882
  name: Dot Accuracy@5
  - type: dot_accuracy@10
+ value: 0.9418
  name: Dot Accuracy@10
  - type: dot_precision@1
+ value: 0.4748
  name: Dot Precision@1
  - type: dot_precision@3
+ value: 0.2687333333333333
  name: Dot Precision@3
  - type: dot_precision@5
+ value: 0.18272
  name: Dot Precision@5
  - type: dot_precision@10
+ value: 0.09860000000000001
  name: Dot Precision@10
  - type: dot_recall@1
+ value: 0.4596833333333333
  name: Dot Recall@1
  - type: dot_recall@3
+ value: 0.772
  name: Dot Recall@3
  - type: dot_recall@5
+ value: 0.871
  name: Dot Recall@5
  - type: dot_recall@10
+ value: 0.9357166666666666
  name: Dot Recall@10
  - type: dot_ndcg@10
+ value: 0.7128845623564422
  name: Dot Ndcg@10
  - type: dot_mrr@10
+ value: 0.6443483333333355
  name: Dot Mrr@10
  - type: dot_map@100
+ value: 0.6400603704296839
  name: Dot Map@100
  - type: query_active_dims
+ value: 27.214799880981445
  name: Query Active Dims
  - type: query_sparsity_ratio
+ value: 0.999108354633347
  name: Query Sparsity Ratio
  - type: corpus_active_dims
+ value: 153.67085411618822
  name: Corpus Active Dims
  - type: corpus_sparsity_ratio
+ value: 0.9949652429684753
  name: Corpus Sparsity Ratio
  ---
 
  # SPLADE-BERT-Mini
 
  from sentence_transformers import SparseEncoder
 
  # Download from the 🤗 Hub
+ model = SparseEncoder("yosefw/SPLADE-BERT-Mini-v2")
  # Run inference
  queries = [
+ "where is oestrogen produced",
  ]
  documents = [
+ 'Estrogens, in females, are produced primarily by the ovaries, and during pregnancy, the placenta. Follicle-stimulating hormone (FSH) stimulates the ovarian production of estrogens by the granulosa cells of the ovarian follicles and corpora lutea.strogen or oestrogen (see spelling differences) is the primary female sex hormone and is responsible for development and regulation of the female reproductive system and secondary sex characteristics. Estrogen may also refer to any substance, natural or synthetic that mimics the effects of the natural hormone.',
+ "Making the world better, one answer at a time. Estrogen is produced in the ovaries, primarily the theca (wall) of developing follicles in the ovary, though also to a lesser extent the corpus luteum (remaining out 'shell' which previously contained an egg) and, during certain stages of pregnancy, the placenta.he production of the estrogen in the ovaries is stimulated by the lutenizing hormone. Some estrogens are produced in smaller quantities by liver adrenal glands and brests. Estrogen is produced in the ovaries but if you wish to go back further than that is is based on the cholesterol molecule. ovary.",
+ 'The pituitary gland secretes a hormone which induces the production of estrogen in the ovaries. Estrogens are primarily produced by (and released from) the follicles in the ovaries (the corpus luterum) and the placenta (the organ that connects the developing fetus to the uterine wall).The production of the estrogen in the ovaries is stimulated by the lutenizing hormone.Some estrogens are produced in smaller quantities by liver adrenal glands and brests. Estrogen is produced in the ovaries but if you wish to go back further than that is is based on the cholesterol molecule. ovary.he production of the estrogen in the ovaries is stimulated by the lutenizing hormone. Some estrogens are produced in smaller quantities by liver adrenal glands and brests. Estrogen is produced in the ovaries but if you wish to go back further than that is is based on the cholesterol molecule. ovary.',
  ]
  query_embeddings = model.encode_query(queries)
  document_embeddings = model.encode_document(documents)
 
  # Get the similarity scores for the embeddings
  similarities = model.similarity(query_embeddings, document_embeddings)
  print(similarities)
+ # tensor([[17.0112, 13.5808, 13.2221]])
  ```
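The similarity scores in the example above are plain dot products over vocabulary-space sparse vectors. A minimal sketch of that idea, independent of sentence-transformers and with made-up toy dimension ids, treating each sparse embedding as a map from active dimensions to weights:

```python
# Illustrative only: SPLADE similarity is a dot product in vocabulary space.
# Only dimensions active in BOTH vectors contribute to the score; the toy
# dimension ids and weights below are invented for the example.

def sparse_dot(query: dict, doc: dict) -> float:
    """Dot product of two sparse vectors stored as {dim: weight} dicts."""
    # Iterate over the smaller vector for efficiency.
    small, large = (query, doc) if len(query) <= len(doc) else (doc, query)
    return sum(w * large.get(dim, 0.0) for dim, w in small.items())

# Dims 101 and 7099 overlap; dims 2054 and 30000 contribute nothing.
q = {101: 1.5, 2054: 0.8, 7099: 2.0}
d = {101: 1.0, 7099: 1.2, 30000: 0.5}
score = sparse_dot(q, d)  # 1.5*1.0 + 2.0*1.2 = 3.9
```

This is why ranking only needs the few hundred active dimensions per text rather than the full 30k-dimensional dense vector.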
 
  <!--
 
 
  * Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator)
 
+ | Metric | Value |
+ |:----------------------|:-----------|
+ | dot_accuracy@1 | 0.4748 |
+ | dot_accuracy@3 | 0.7852 |
+ | dot_accuracy@5 | 0.882 |
+ | dot_accuracy@10 | 0.9418 |
+ | dot_precision@1 | 0.4748 |
+ | dot_precision@3 | 0.2687 |
+ | dot_precision@5 | 0.1827 |
+ | dot_precision@10 | 0.0986 |
+ | dot_recall@1 | 0.4597 |
+ | dot_recall@3 | 0.772 |
+ | dot_recall@5 | 0.871 |
+ | dot_recall@10 | 0.9357 |
+ | **dot_ndcg@10** | **0.7129** |
+ | dot_mrr@10 | 0.6443 |
+ | dot_map@100 | 0.6401 |
+ | query_active_dims | 27.2148 |
+ | query_sparsity_ratio | 0.9991 |
+ | corpus_active_dims | 153.6709 |
+ | corpus_sparsity_ratio | 0.995 |
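The sparsity-ratio rows follow directly from the active-dims rows, assuming the 30,522-entry WordPiece vocabulary of the bert-mini backbone (SPLADE embeddings live in vocabulary space, so that is the embedding dimension):

```python
# sparsity_ratio = fraction of embedding dimensions that are zero on average.
# VOCAB_SIZE is an assumption: the standard BERT uncased vocabulary used by
# prajjwal1/bert-mini.

VOCAB_SIZE = 30522

def sparsity_ratio(active_dims: float, vocab_size: int = VOCAB_SIZE) -> float:
    return 1.0 - active_dims / vocab_size

# Reproduces the reported ratios from the reported active dims:
query_ratio = sparsity_ratio(27.214799880981445)   # ~0.9991
corpus_ratio = sparsity_ratio(153.67085411618822)  # ~0.9950
```

So queries touch roughly 27 of 30,522 dimensions and documents roughly 154, which is what makes inverted-index retrieval with this model cheap.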
 
  <!--
  ## Bias, Risks and Limitations
 
 
  #### Unnamed Dataset
 
+ * Size: 500,000 training samples
  * Columns: <code>query</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
  * Approximate statistics based on the first 1000 samples:
+ | | query | positive | negative_1 | negative_2 |
+ |:--------|:------|:---------|:-----------|:-----------|
+ | type | string | string | string | string |
+ | details | <ul><li>min: 4 tokens</li><li>mean: 9.01 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 78.72 tokens</li><li>max: 230 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 76.0 tokens</li><li>max: 251 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 76.42 tokens</li><li>max: 222 tokens</li></ul> |
  * Samples:
+ | query | positive | negative_1 | negative_2 |
+ |:------|:---------|:-----------|:-----------|
+ | <code>what is download upload speed</code> | <code>Almost every speed test site tests for download speed, upload speed, and the ping rate. The upload rate is always lower than the download rate. This is a configuration set by the local cable carrier it is not dependent on the user’s bandwidth or Internet speed.he Difference. There is none. Download speed is the rate at which data is transferred from the Internet to the user’s computer. The upload speed is the rate that data is transferred from the user’s computer to the Internet.</code> | <code>Speed Limits. The download speed is typically much faster than the upload speed. The price you pay for Internet access with most devices is based on the maximum number of bytes per second the service provides, although cellular carriers charge by the total bytes transmitted.hristopher Robbins/Photodisc/Getty Images. Internet speed refers to the speed at which you send or receive data from your computer, phone or other device. Download speed is the rate your connection receives data. Upload speed is the number of bytes per second you can send.</code> | <code>If you find that your download or upload speed is not equal to what your Internet service provider promised, there are a couple of easy fixes you can perform. Use a wired connection to the router instead of wireless. Performing a speed test across a wireless connection will always give slower results.he Difference. There is none. Download speed is the rate at which data is transferred from the Internet to the user’s computer. The upload speed is the rate that data is transferred from the user’s computer to the Internet.</code> |
+ | <code>what is sdn</code> | <code>CompanyCase Studies. Software-defined networking (SDN) is an approach to network virtualization that seeks to optimize network resources and quickly adapt networks to changing business needs, applications, and traffic.</code> | <code>Historically, networking has been performed through two abstractions, a Data plane and a Control plane. The data plane rapidly processes packets: it looks at the state and packet header, then makes a forwarding decision. The control plane is what puts that forwarding state there.</code> | <code>(Learn how and when to remove these template messages) Software-defined networking (SDN) is an approach to computer networking that allows network administrators to programmatically initialize, control, change, and manage network behavior dynamically via open interfaces and abstraction of lower-level functionality.</code> |
+ | <code>can vacuuming every day lessen fleas</code> | <code>Thoroughly and regularly clean areas where you find adult fleas, flea larvae, and flea eggs. Vacuum floors, rugs, carpets, upholstered furniture, and crevices around baseboards and cabinets daily or every other day to remove flea eggs, larvae, and adults.</code> | <code>LIFE CYCLE. Unlike most fleas, adult cat fleas remain on the host where feeding, mating, and egg laying occur. Females lay about 20 to 50 eggs per day. Cat flea eggs are pearly white, oval, and about 1/32 inch long (Figure 3).</code> | <code>I wash my sheets every day , vacuum , shampoo , and even wash the pets , with different shampoo every time and use different sprays every time as I learned fleas become resistant if you constantly use the same all the time . I’m at wits end and I am scared to even enter my house. December 13, 2015 at 12:57 PM #44900.</code> |
  * Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
  ```json
  {
  "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
+ "document_regularizer_weight": 0.003,
+ "query_regularizer_weight": 0.005
  }
  ```
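A rough sketch, not the sentence-transformers implementation, of how SpladeLoss combines the ranking loss with FLOPS sparsity regularizers weighted by the JSON parameters above. The FLOPS term is the sum over vocabulary dimensions of the squared mean absolute activation across the batch, which pushes rarely useful dimensions toward zero:

```python
# Sketch under the assumption that activations arrive as dense per-text rows.
# flops() implements the FLOPS regularizer; splade_loss() mirrors how the
# query/document regularizer weights from the config above would be applied.

def flops(batch):
    """Sum over dims of the squared mean absolute activation in the batch."""
    n = len(batch)
    dims = len(batch[0])
    return sum((sum(abs(row[j]) for row in batch) / n) ** 2 for j in range(dims))

def splade_loss(ranking_loss, query_acts, doc_acts,
                query_regularizer_weight=0.005,
                document_regularizer_weight=0.003):
    # Total loss = ranking loss + weighted sparsity penalties.
    return (ranking_loss
            + query_regularizer_weight * flops(query_acts)
            + document_regularizer_weight * flops(doc_acts))
```

The raised weights (0.003/0.005 versus 0.001/0.002 in the previous run) trade a little ranking quality for much sparser document vectors, consistent with corpus_active_dims dropping from ~327 to ~154.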
 
 
  #### Non-Default Hyperparameters
 
  - `eval_strategy`: epoch
+ - `per_device_train_batch_size`: 16
+ - `per_device_eval_batch_size`: 16
+ - `gradient_accumulation_steps`: 8
  - `learning_rate`: 6e-05
+ - `num_train_epochs`: 6
  - `lr_scheduler_type`: cosine
  - `warmup_ratio`: 0.025
  - `fp16`: True
+ - `load_best_model_at_end`: True
  - `optim`: adamw_torch_fused
+ - `push_to_hub`: True
  - `batch_sampler`: no_duplicates
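Relative to the previous run (per-device batch 64, no accumulation), the new settings trade per-device batch size for gradient accumulation; a quick check of the effective samples per optimizer step, assuming a single device:

```python
# Effective batch size = per-device batch x accumulation steps x devices.
# num_devices=1 is an assumption; the card does not state the device count.

def effective_batch_size(per_device, grad_accum, num_devices=1):
    return per_device * grad_accum * num_devices

previous_run = effective_batch_size(64, 1)  # 64
this_run = effective_batch_size(16, 8)      # 128
```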
 
  #### All Hyperparameters
 
  - `do_predict`: False
  - `eval_strategy`: epoch
  - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 16
+ - `per_device_eval_batch_size`: 16
  - `per_gpu_train_batch_size`: None
  - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 8
  - `eval_accumulation_steps`: None
  - `torch_empty_cache_steps`: None
  - `learning_rate`: 6e-05
 
  - `adam_beta2`: 0.999
  - `adam_epsilon`: 1e-08
  - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 6
  - `max_steps`: -1
  - `lr_scheduler_type`: cosine
  - `lr_scheduler_kwargs`: {}
 
  - `disable_tqdm`: False
  - `remove_unused_columns`: True
  - `label_names`: None
+ - `load_best_model_at_end`: True
  - `ignore_data_skip`: False
  - `fsdp`: []
  - `fsdp_min_num_params`: 0
 
  - `dataloader_persistent_workers`: False
  - `skip_memory_metrics`: True
  - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: True
  - `resume_from_checkpoint`: None
  - `hub_model_id`: None
  - `hub_strategy`: every_save
 
  </details>

  ### Training Logs
+ | Epoch | Step | Training Loss | dot_ndcg@10 |
+ |:-------:|:---------:|:-------------:|:-----------:|
+ | 1.0 | 3907 | 19.5833 | 0.7041 |
+ | 2.0 | 7814 | 0.7032 | 0.7125 |
+ | 3.0 | 11721 | 0.6323 | 0.7149 |
+ | **4.0** | **15628** | **0.5691** | **0.7192** |
+ | 5.0 | 19535 | 0.5214 | 0.7128 |
+ | 6.0 | 23442 | 0.4996 | 0.7129 |

+ * The bold row denotes the saved checkpoint.
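  The `dot_ndcg@10` column reports NDCG@10 under dot-product scoring. For reference, a minimal NDCG@10 computation over binary relevance labels (illustrative only, not the evaluator's exact implementation):

  ```python
  import math

  def ndcg_at_k(ranked_relevances, k=10):
      """NDCG@k: DCG of the given ranking divided by the DCG of the
      ideal (relevance-sorted) ranking; 1.0 means a perfect ranking."""
      def dcg(rels):
          # Log-discounted gain: position i (0-based) is discounted by log2(i + 2)
          return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))
      ideal = sorted(ranked_relevances, reverse=True)
      return dcg(ranked_relevances) / dcg(ideal) if dcg(ideal) > 0 else 0.0

  print(ndcg_at_k([1, 0, 1, 0]))  # relevant docs at ranks 1 and 3
  ```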

  ### Framework Versions
+ - Python: 3.11.11
  - Sentence Transformers: 5.0.0
  - Transformers: 4.53.1
  - PyTorch: 2.6.0+cu124
+ - Accelerate: 1.5.2
  - Datasets: 3.6.0
+ - Tokenizers: 0.21.1

  ## Citation


  ## Model Card Contact

  *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:02ac0b450c721891da6146541abdcbb030bc4969b0f9817c3a9d4073a720241a
+ oid sha256:8d9467c548cbc95a14f457d9779068a9ba85f10daefeac58aec84e7b17d95d42
  size 44814856
special_tokens_map.json CHANGED
@@ -1,7 +1,37 @@
  {
- "cls_token": "[CLS]",
- "mask_token": "[MASK]",
- "pad_token": "[PAD]",
- "sep_token": "[SEP]",
- "unk_token": "[UNK]"
+ "cls_token": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
  }
tokenizer_config.json CHANGED
@@ -47,12 +47,19 @@
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
- "model_max_length": 1000000000000000019884624838656,
+ "max_length": 512,
+ "model_max_length": 512,
  "never_split": null,
+ "pad_to_multiple_of": null,
  "pad_token": "[PAD]",
+ "pad_token_type_id": 0,
+ "padding_side": "right",
  "sep_token": "[SEP]",
+ "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
+ "truncation_side": "right",
+ "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
  }