tomaarsen (HF staff) committed
Commit 0dd8013 · verified · 1 parent: 0823e5a

Add new CrossEncoder model
README.md ADDED
---
language:
- en
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:78704
- loss:RankNetLoss
base_model: microsoft/MiniLM-L12-H384-uncased
datasets:
- microsoft/ms_marco
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
co2_eq_emissions:
  emissions: 88.25456122369188
  energy_consumed: 0.22704941375061585
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
  ram_total_size: 31.777088165283203
  hours_used: 0.737
  hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
  results:
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoMSMARCO R100
      type: NanoMSMARCO_R100
    metrics:
    - type: map
      value: 0.5117
      name: Map
    - type: mrr@10
      value: 0.5006
      name: Mrr@10
    - type: ndcg@10
      value: 0.5666
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNFCorpus R100
      type: NanoNFCorpus_R100
    metrics:
    - type: map
      value: 0.3404
      name: Map
    - type: mrr@10
      value: 0.583
      name: Mrr@10
    - type: ndcg@10
      value: 0.3866
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNQ R100
      type: NanoNQ_R100
    metrics:
    - type: map
      value: 0.5205
      name: Map
    - type: mrr@10
      value: 0.5252
      name: Mrr@10
    - type: ndcg@10
      value: 0.5972
      name: Ndcg@10
  - task:
      type: cross-encoder-nano-beir
      name: Cross Encoder Nano BEIR
    dataset:
      name: NanoBEIR R100 mean
      type: NanoBEIR_R100_mean
    metrics:
    - type: map
      value: 0.4575
      name: Map
    - type: mrr@10
      value: 0.5362
      name: Mrr@10
    - type: ndcg@10
      value: 0.5168
      name: Ndcg@10
---

# CrossEncoder based on microsoft/MiniLM-L12-H384-uncased

This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

## Model Details

### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) <!-- at revision 44acabbec0ef496f6dbc93adadea57f376b7c0ec -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
  - [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco)
- **Language:** en
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:

```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-ranknetloss")
# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
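In a full retrieval pipeline, the scores returned by `predict` are used to reorder the candidates coming out of a first-stage retriever. Since they are plain floats, the reranking step itself is just a sort; a minimal sketch with placeholder scores (not real model output):

```python
# Rerank first-stage candidates by cross-encoder score.
# The scores below are illustrative placeholders, not real model output;
# in practice they would come from model.predict([[query, c] for c in candidates]).
candidates = [
    "Egg whites are very low in calories.",
    "There are on average between 55 and 80 calories in an egg.",
    "Most of the calories in an egg come from the yolk.",
]
scores = [0.31, 0.92, 0.57]

# Sort candidates by descending score, keeping (score, text) pairs.
reranked = sorted(zip(scores, candidates), key=lambda pair: pair[0], reverse=True)
best_score, best_passage = reranked[0]
```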

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Cross Encoder Reranking

* Datasets: `NanoMSMARCO_R100`, `NanoNFCorpus_R100` and `NanoNQ_R100`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
  ```json
  {
      "at_k": 10,
      "always_rerank_positives": true
  }
  ```
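The headline metric below, ndcg@10, rewards placing relevant documents early by discounting each relevance label with the log of its rank. A minimal sketch of the computation for binary labels (the helper name is ours, not part of the evaluator API):

```python
import math

def ndcg_at_k(relevances, k=10):
    """NDCG@k for a ranked list of relevance labels, best-ranked first."""
    dcg = sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / math.log2(rank + 2) for rank, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

# A single relevant document at rank 1 is ideal:
print(ndcg_at_k([1, 0, 0]))  # 1.0
# Pushed down to rank 2, it is discounted by 1/log2(3):
print(round(ndcg_at_k([0, 1, 0]), 4))  # 0.6309
```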

| Metric      | NanoMSMARCO_R100     | NanoNFCorpus_R100    | NanoNQ_R100          |
|:------------|:---------------------|:---------------------|:---------------------|
| map         | 0.5117 (+0.0221)     | 0.3404 (+0.0794)     | 0.5205 (+0.1009)     |
| mrr@10      | 0.5006 (+0.0231)     | 0.5830 (+0.0832)     | 0.5252 (+0.0985)     |
| **ndcg@10** | **0.5666 (+0.0262)** | **0.3866 (+0.0615)** | **0.5972 (+0.0965)** |

#### Cross Encoder Nano BEIR

* Dataset: `NanoBEIR_R100_mean`
* Evaluated with [<code>CrossEncoderNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderNanoBEIREvaluator) with these parameters:
  ```json
  {
      "dataset_names": [
          "msmarco",
          "nfcorpus",
          "nq"
      ],
      "rerank_k": 100,
      "at_k": 10,
      "always_rerank_positives": true
  }
  ```

| Metric      | Value                |
|:------------|:---------------------|
| map         | 0.4575 (+0.0675)     |
| mrr@10      | 0.5362 (+0.0682)     |
| **ndcg@10** | **0.5168 (+0.0614)** |
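The NanoBEIR figures are simply the unweighted means of the three per-dataset results above; for instance, for ndcg@10:

```python
# Mean of the per-dataset ndcg@10 values reported in the reranking table above.
per_dataset_ndcg = {
    "NanoMSMARCO_R100": 0.5666,
    "NanoNFCorpus_R100": 0.3866,
    "NanoNQ_R100": 0.5972,
}
mean_ndcg = sum(per_dataset_ndcg.values()) / len(per_dataset_ndcg)
print(round(mean_ndcg, 4))  # 0.5168
```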

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### ms_marco

* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 78,704 training samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
  |         | query | docs | labels |
  |:--------|:------|:-----|:-------|
  | type    | string | list | list |
  | details | <ul><li>min: 11 characters</li><li>mean: 32.93 characters</li><li>max: 95 characters</li></ul> | <ul><li>min: 3 elements</li><li>mean: 6.50 elements</li><li>max: 10 elements</li></ul> | <ul><li>min: 3 elements</li><li>mean: 6.50 elements</li><li>max: 10 elements</li></ul> |
* Samples:
  | query | docs | labels |
  |:------|:-----|:-------|
  | <code>what does vegan mean</code> | <code>['A vegan, a person who practices veganism, is an individual who actively avoids the use of animal products for food, clothing or any other purpose. As with many diets and lifestyles, not all vegans approach animal product avoidance in the same ways. For example, some vegans completely avoid all animal by-products, while others consider it acceptable to use honey, silk, and other by-products produced from insects.', 'Fruitarian: Eats only raw fruit, including raw nuts and seeds. Vegan. Does not eat dairy products, eggs, or any other animal product. So in a nutshell, a vegetarian diet excludes flesh, but includes other animal products: A vegan diet is one that excludes all animal products. And I have to say that I have met very few vegans who stop with what they put in their mouths. ', 'Animal Ingredients and Their Alternatives. Adopting a vegan diet means saying “no” to cruelty to animals and environmental destruction and “yes” to compassion and good health. It also means paying attent...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>difference between viral and bacterial conjunctivitis symptoms</code> | <code>["Viral and bacterial conjunctivitis. Viral conjunctivitis and bacterial conjunctivitis may affect one or both eyes. Viral conjunctivitis usually produces a watery discharge. Bacterial conjunctivitis often produces a thicker, yellow-green discharge. Both types can be associated with colds or symptoms of a respiratory infection, such as a sore throat. Both viral and bacterial types are very contagious. They are spread through direct or indirect contact with the eye secretions of someone who's infected", 'A Honor Society of Nursing (STTI) answered. Viral and bacterial conjunctivitis are similar, but differ in several key ways. First, bacterial conjunctivitis can be cured with antibiotics, while the viral form cannot. Second, there is a slight variation in symptoms. With viral conjunctivitis, the discharge from the eye is clearer and less thick than with the bacterial infection. Viral conjunctivitis can also cause painful swelling in the lymph node nearest the ear, a symptom not experienc...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>can single member llc be taxed as s corp</code> | <code>['A single-member limited liability company, as a solely owned LLC is called, gives the owner a choice of how to be taxed -- as a sole proprietorship, an S corporation or a C corporation. The legal structure of the business itself doesn’t change with any of the choices. Under an S corporation classification, a single-member LLC needs to have a large enough profit in excess of the owner’s salary to realize any tax savings on passive income.', 'An S corp may own up to 100 percent of an LLC, or limited liability company. While all but single-member LLCs cannot be shareholders in S corporations, the reverse -- an S corporation owning an LLC -- is legal. The similarity of tax treatment for S corps and LLCs eliminates most of the common concerns about IRS issues. There is, however, one way for an LLC to own stock in an S corp. A single member LLC, taxed as a sole proprietorship, is called a disregarded entity by the IRS. Treated like an unincorporated individual, this LLC could own stock in ...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>RankNetLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#ranknetloss) with these parameters:
  ```json
  {
      "weighting_scheme": "sentence_transformers.cross_encoder.losses.LambdaLoss.NoWeightingScheme",
      "k": null,
      "sigma": 1.0,
      "eps": 1e-10,
      "reduction_log": "binary",
      "activation_fct": "torch.nn.modules.linear.Identity",
      "mini_batch_size": 16
  }
  ```
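RankNetLoss turns the score difference of a (more relevant, less relevant) document pair into a preference probability via a sigmoid, then applies binary cross-entropy to that probability. A minimal sketch of one pairwise term with `sigma: 1.0` as in the parameters above (the function is an illustration, not the library implementation):

```python
import math

def ranknet_pair_loss(s_pos: float, s_neg: float, sigma: float = 1.0) -> float:
    """Pairwise RankNet term: -log P(pos ranked above neg),
    where P is a sigmoid of the scaled score difference."""
    p = 1.0 / (1.0 + math.exp(-sigma * (s_pos - s_neg)))
    return -math.log(p)

# Equal scores: the model is maximally uncertain, loss = log(2).
print(round(ranknet_pair_loss(0.0, 0.0), 4))  # 0.6931
# A confident, correct ordering drives the loss toward zero.
print(round(ranknet_pair_loss(5.0, 0.0), 4))  # 0.0067
```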

### Evaluation Dataset

#### ms_marco

* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 1,000 evaluation samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
  |         | query | docs | labels |
  |:--------|:------|:-----|:-------|
  | type    | string | list | list |
  | details | <ul><li>min: 11 characters</li><li>mean: 33.63 characters</li><li>max: 99 characters</li></ul> | <ul><li>min: 3 elements</li><li>mean: 6.50 elements</li><li>max: 10 elements</li></ul> | <ul><li>min: 3 elements</li><li>mean: 6.50 elements</li><li>max: 10 elements</li></ul> |
* Samples:
  | query | docs | labels |
  |:------|:-----|:-------|
  | <code>define monogenic trait</code> | <code>['An allele is a version of a gene. For example, in fruitflies there is a gene which determines eye colour: one allele gives red eyes, and another gives white eyes; it is the same *gene*, just different versions of that gene. A monogenic trait is one which is encoded by a single gene. e.g. - cystic fibrosis in humans. There is a single gene which determines this trait: the wild-type allele is healthy, while the disease allele gives you cystic fibrosis', 'Abstract. Monogenic inheritance refers to genetic control of a phenotype or trait by a single gene. For a monogenic trait, mutations in one (dominant) or both (recessive) copies of the gene are sufficient for the trait to be expressed. Digenic inheritance refers to mutation on two genes interacting to cause a genetic phenotype or disease. Triallelic inheritance is a special case of digenic inheritance that requires homozygous mutations at one locus and heterozygous mutations at a second locus to express a phenotype.', 'A trait that is ...</code> | <code>[1, 1, 0, 0, 0, ...]</code> |
  | <code>behavioral theory definition</code> | <code>["Not to be confused with Behavioralism. Behaviorism (or behaviourism) is an approach to psychology that focuses on an individual's behavior. It combines elements of philosophy, methodology, and psychological theory", 'The initial assumption is that behavior can be explained and further described using behavioral theories. For instance, John Watson and B.F. Skinner advocate the theory that behavior can be acquired through conditioning. Also known as general behavior theory. BEHAVIOR THEORY: Each behavioral theory is an advantage to learning, because it provides teachers with a new and different approach.. No related posts. ', 'behaviorism. noun be·hav·ior·ism. : a school of psychology that takes the objective evidence of behavior (as measured responses to stimuli) as the only concern of its research and the only basis of its theory without reference to conscious experience—compare cognitive psychology. : a school of psychology that takes the objective evidence of behavior (as measured ...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>What is a disease that is pleiotropic?</code> | <code>['Unsourced material may be challenged and removed. (September 2013). Pleiotropy occurs when one gene influences two or more seemingly unrelated phenotypic traits, an example being phenylketonuria, which is a human disease that affects multiple systems but is caused by one gene defect. Consequently, a mutation in a pleiotropic gene may have an effect on some or all traits simultaneously. The underlying mechanism is that the gene codes for a product that is, for example, used by various cells, or has a signaling function on various targets. A classic example of pleiotropy is the human disease phenylketonuria (PKU).', 'Pleiotropic, autosomal dominant disorder affecting connective tissue: Related Diseases. Pleiotropic, autosomal dominant disorder affecting connective tissue: Pleiotropic, autosomal dominant disorder affecting connective tissue is listed as a type of (or associated with) the following medical conditions in our database: 1 Heart conditions. Office of Rare Diseases (ORD) of ...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>RankNetLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#ranknetloss) with these parameters:
  ```json
  {
      "weighting_scheme": "sentence_transformers.cross_encoder.losses.LambdaLoss.NoWeightingScheme",
      "k": null,
      "sigma": 1.0,
      "eps": 1e-10,
      "reduction_log": "binary",
      "activation_fct": "torch.nn.modules.linear.Identity",
      "mini_batch_size": 16
  }
  ```

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `load_best_model_at_end`: True

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs
| Epoch      | Step     | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10  | NanoBEIR_R100_mean_ndcg@10 |
|:----------:|:--------:|:-------------:|:---------------:|:------------------------:|:-------------------------:|:--------------------:|:--------------------------:|
| -1         | -1       | -             | -               | 0.0300 (-0.5104)         | 0.2528 (-0.0723)          | 0.0168 (-0.4839)     | 0.0999 (-0.3555)           |
| 0.0002     | 1        | 0.9988        | -               | -                        | -                         | -                    | -                          |
| 0.0508     | 250      | 0.9926        | -               | -                        | -                         | -                    | -                          |
| 0.1016     | 500      | 0.8625        | 0.7519          | 0.4085 (-0.1319)         | 0.3086 (-0.0164)          | 0.4801 (-0.0205)     | 0.3991 (-0.0563)           |
| 0.1525     | 750      | 0.7698        | -               | -                        | -                         | -                    | -                          |
| 0.2033     | 1000     | 0.736         | 0.6956          | 0.4861 (-0.0543)         | 0.2964 (-0.0286)          | 0.6195 (+0.1188)     | 0.4674 (+0.0120)           |
| 0.2541     | 1250     | 0.7116        | -               | -                        | -                         | -                    | -                          |
| 0.3049     | 1500     | 0.7044        | 0.6713          | 0.5369 (-0.0036)         | 0.3516 (+0.0265)          | 0.5804 (+0.0798)     | 0.4896 (+0.0343)           |
| 0.3558     | 1750     | 0.6877        | -               | -                        | -                         | -                    | -                          |
| 0.4066     | 2000     | 0.6727        | 0.6600          | 0.5582 (+0.0178)         | 0.3725 (+0.0474)          | 0.5699 (+0.0693)     | 0.5002 (+0.0448)           |
| 0.4574     | 2250     | 0.6781        | -               | -                        | -                         | -                    | -                          |
| 0.5082     | 2500     | 0.6697        | 0.6538          | 0.5344 (-0.0061)         | 0.3889 (+0.0639)          | 0.5605 (+0.0599)     | 0.4946 (+0.0392)           |
| 0.5591     | 2750     | 0.6523        | -               | -                        | -                         | -                    | -                          |
| **0.6099** | **3000** | **0.6649**    | **0.6471**      | **0.5666 (+0.0262)**     | **0.3866 (+0.0615)**      | **0.5972 (+0.0965)** | **0.5168 (+0.0614)**       |
| 0.6607     | 3250     | 0.659         | -               | -                        | -                         | -                    | -                          |
| 0.7115     | 3500     | 0.6566        | 0.6449          | 0.5744 (+0.0340)         | 0.3637 (+0.0387)          | 0.5469 (+0.0463)     | 0.4950 (+0.0397)           |
| 0.7624     | 3750     | 0.6472        | -               | -                        | -                         | -                    | -                          |
| 0.8132     | 4000     | 0.6553        | 0.6420          | 0.5734 (+0.0329)         | 0.3878 (+0.0628)          | 0.5717 (+0.0710)     | 0.5110 (+0.0556)           |
| 0.8640     | 4250     | 0.6386        | -               | -                        | -                         | -                    | -                          |
| 0.9148     | 4500     | 0.6477        | 0.6347          | 0.5664 (+0.0260)         | 0.3854 (+0.0604)          | 0.5824 (+0.0818)     | 0.5114 (+0.0560)           |
| 0.9656     | 4750     | 0.6493        | -               | -                        | -                         | -                    | -                          |
| -1         | -1       | -             | -               | 0.5666 (+0.0262)         | 0.3866 (+0.0615)          | 0.5972 (+0.0965)     | 0.5168 (+0.0614)           |

* The bold row denotes the saved checkpoint.

### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.227 kWh
- **Carbon Emitted**: 0.088 kg of CO2
- **Hours Used**: 0.737 hours

### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB

### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### RankNetLoss
```bibtex
@inproceedings{burges2005learning,
    title={Learning to rank using gradient descent},
    author={Burges, Chris and Shaked, Tal and Renshaw, Erin and Lazier, Ari and Deeds, Matt and Hamilton, Nicole and Hullender, Greg},
    booktitle={Proceedings of the 22nd international conference on Machine learning},
    pages={89--96},
    year={2005}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
{
  "_name_or_path": "microsoft/MiniLM-L12-H384-uncased",
  "architectures": [
    "BertForSequenceClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "sentence_transformers": {
    "activation_fn": "torch.nn.modules.activation.Sigmoid"
  },
  "torch_dtype": "float32",
  "transformers_version": "4.49.0",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:aac440a1cc4bc14fbf8cf397bdc337b776fcc4c0dc414996921d03b3cf477f55
size 133464836
special_tokens_map.json ADDED
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff