yjoonjang committed
Commit 07b76a5 · verified · 1 Parent(s): 04822f4

Add new CrossEncoder model
README.md ADDED
@@ -0,0 +1,503 @@
---
language:
- en
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:78704
- loss:PListMLELoss
base_model: microsoft/MiniLM-L12-H384-uncased
datasets:
- microsoft/ms_marco
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
  results:
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoMSMARCO R100
      type: NanoMSMARCO_R100
    metrics:
    - type: map
      value: 0.5257
      name: Map
    - type: mrr@10
      value: 0.5139
      name: Mrr@10
    - type: ndcg@10
      value: 0.5778
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNFCorpus R100
      type: NanoNFCorpus_R100
    metrics:
    - type: map
      value: 0.3387
      name: Map
    - type: mrr@10
      value: 0.5921
      name: Mrr@10
    - type: ndcg@10
      value: 0.366
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNQ R100
      type: NanoNQ_R100
    metrics:
    - type: map
      value: 0.5581
      name: Map
    - type: mrr@10
      value: 0.5648
      name: Mrr@10
    - type: ndcg@10
      value: 0.6325
      name: Ndcg@10
  - task:
      type: cross-encoder-nano-beir
      name: Cross Encoder Nano BEIR
    dataset:
      name: NanoBEIR R100 mean
      type: NanoBEIR_R100_mean
    metrics:
    - type: map
      value: 0.4742
      name: Map
    - type: mrr@10
      value: 0.5569
      name: Mrr@10
    - type: ndcg@10
      value: 0.5254
      name: Ndcg@10
---

# CrossEncoder based on microsoft/MiniLM-L12-H384-uncased

This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

## Model Details

### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) <!-- at revision 44acabbec0ef496f6dbc93adadea57f376b7c0ec -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
    - [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco)
- **Language:** en
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:

```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("yjoonjang/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-normalize-temperature-2")
# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
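
The same scores can also be obtained with plain 🤗 Transformers. Below is a minimal sketch, assuming the `Sigmoid` inference activation recorded in this repository's `config.json`; the `CrossEncoder` API above remains the supported path:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "yjoonjang/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-normalize-temperature-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

queries = ["How many calories in an egg"] * 3
passages = [
    "There are on average between 55 and 80 calories in an egg depending on its size.",
    "Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.",
    "Most of the calories in an egg come from the yellow yolk in the center.",
]

# Cross encoders score (query, passage) pairs jointly in one forward pass
features = tokenizer(queries, passages, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**features).logits.squeeze(-1)
# config.json records a Sigmoid activation for CrossEncoder inference,
# so apply it here to match the scores from model.predict() above.
scores = torch.sigmoid(logits)
print(scores)
```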

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Cross Encoder Reranking

* Datasets: `NanoMSMARCO_R100`, `NanoNFCorpus_R100` and `NanoNQ_R100`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
  ```json
  {
      "at_k": 10,
      "always_rerank_positives": true
  }
  ```

| Metric      | NanoMSMARCO_R100     | NanoNFCorpus_R100    | NanoNQ_R100          |
|:------------|:---------------------|:---------------------|:---------------------|
| map         | 0.5257 (+0.0362)     | 0.3387 (+0.0777)     | 0.5581 (+0.1385)     |
| mrr@10      | 0.5139 (+0.0364)     | 0.5921 (+0.0923)     | 0.5648 (+0.1381)     |
| **ndcg@10** | **0.5778 (+0.0374)** | **0.3660 (+0.0410)** | **0.6325 (+0.1319)** |

#### Cross Encoder Nano BEIR

* Dataset: `NanoBEIR_R100_mean`
* Evaluated with [<code>CrossEncoderNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderNanoBEIREvaluator) with these parameters:
  ```json
  {
      "dataset_names": [
          "msmarco",
          "nfcorpus",
          "nq"
      ],
      "rerank_k": 100,
      "at_k": 10,
      "always_rerank_positives": true
  }
  ```

| Metric      | Value                |
|:------------|:---------------------|
| map         | 0.4742 (+0.0841)     |
| mrr@10      | 0.5569 (+0.0889)     |
| **ndcg@10** | **0.5254 (+0.0701)** |
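
These numbers can be re-run with the evaluator named above. A minimal sketch, assuming the JSON parameters map directly onto the evaluator's constructor arguments:

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

model = CrossEncoder("yjoonjang/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-normalize-temperature-2")

# Rerank the top 100 retrieved candidates and report metrics at rank 10,
# matching the parameter blocks shown above.
evaluator = CrossEncoderNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    rerank_k=100,
    at_k=10,
    always_rerank_positives=True,
)
results = evaluator(model)
# A dict of metrics: per-dataset and mean map / mrr@10 / ndcg@10,
# as reported in the tables above.
print(results)
```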

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### ms_marco

* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 78,704 training samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
  |         | query | docs | labels |
  |:--------|:------|:-----|:-------|
  | type    | string | list | list |
  | details | <ul><li>min: 12 characters</li><li>mean: 33.99 characters</li><li>max: 98 characters</li></ul> | <ul><li>min: 3 elements</li><li>mean: 6.50 elements</li><li>max: 10 elements</li></ul> | <ul><li>min: 3 elements</li><li>mean: 6.50 elements</li><li>max: 10 elements</li></ul> |
* Samples:
  | query | docs | labels |
  |:------|:-----|:-------|
  | <code>what does a business development consultant do</code> | <code>['Duties and Responsibilities. An organizational development consultant is a person called in to a company, be it a large corporation or a small business, to evaluate how it operates and make recommendations for improvement.', 'Many sales businesses use business development consultants to help generate leads and show them how to do so. In a business such as sales, lead generation can make or break a company. Having someone show a business owner how to successfully acquire this key piece of information is very important.', 'Development of a marketing strategy is another area covered by a business development consultant. Many businesses struggle with devising ways to effectively market their business to prospective clients.', 'A Good Business Consultant Has Extensive Experience. A good Business Consultant has experience working in and working with a broad range of businesses. It is the accumulated business history of a Business Consultant which makes the consultant valuable.', "A busines...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>did soren kjeldsen ever play in the masters</code> | <code>["Recent News. Soeren Søren kjeldsen stalled on the back nine and finished joint-runner up in The British masters At woburn with a-2-under par-32=37. 69 The, dane who Captured'may S Irish, open looked good for his second win of the season when shooting four birdies against a single bogey on The' marquess'course s front. nine", "Latest News. Soeren Søren kjeldsen stalled on the back nine and finished joint-runner up in The British masters At woburn with a-2-under par-32=37. 69 The, dane who Captured'may S Irish, open looked good for his second win of the season when shooting four birdies against a single bogey on The' marquess'course s front. nine", "Soren Kjeldsen of Denmark (L) celebrates winning the Irish Open with Austria's Bernd Wiesberger …. It is amazing to be holding the trophy but then I felt good coming into the tournament, he said. I played well in my last two tournaments and while I was not in contention I had the chance today to change all that.", "Denmark. Soeren Søren kje...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>what is a sell limit order</code> | <code>["Sell Limit Order. A Sell Limit Order is an order to sell a specified number of shares of a stock that you own at a designated price or higher, at a price that is above the current market price. This is your limit price, in other words, the minimum price you are willing to accept to sell your shares. The main benefit of a Sell Limit Order is that you may be able to sell the shares that you own at a minimum price that you specify IF the stock's price raises to that price. Sell Limit Orders are great for maximizing profit-taking.", 'Stop-Limit Order. A stop-limit order is an order to buy or sell a stock that combines the features of a stop order and a limit order. Once the stop price is reached, a stop-limit order becomes a limit order that will be executed at a specified price (or better)', "You place a Sell Limit Order @ $50 on 100 shares of TGT. Now suppose the price trades up to $50. As long as the price remains above $50 per share, your shares would then be sold at the next best av...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>PListMLELoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#plistmleloss) with these parameters:
  ```json
  {
      "lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight",
      "activation_fct": "torch.nn.modules.linear.Identity",
      "mini_batch_size": null,
      "respect_input_order": true
  }
  ```
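
A minimal sketch of constructing this loss, assuming the JSON block above mirrors the constructor's defaults:

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses import PListMLELoss, PListMLELambdaWeight

model = CrossEncoder("microsoft/MiniLM-L12-H384-uncased", num_labels=1)

# Position-aware ListMLE: documents earlier in the ranked input list receive
# larger lambda weights. The Identity activation in the JSON means the loss
# consumes raw logits; respect_input_order treats the docs list as the
# ground-truth ranking.
loss = PListMLELoss(
    model,
    lambda_weight=PListMLELambdaWeight(),
    respect_input_order=True,
)
```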

### Evaluation Dataset

#### ms_marco

* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 1,000 evaluation samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
  |         | query | docs | labels |
  |:--------|:------|:-----|:-------|
  | type    | string | list | list |
  | details | <ul><li>min: 11 characters</li><li>mean: 33.46 characters</li><li>max: 108 characters</li></ul> | <ul><li>min: 2 elements</li><li>mean: 6.00 elements</li><li>max: 10 elements</li></ul> | <ul><li>min: 2 elements</li><li>mean: 6.00 elements</li><li>max: 10 elements</li></ul> |
* Samples:
  | query | docs | labels |
  |:------|:-----|:-------|
  | <code>how much to spend on wordpress hosting</code> | <code>['Shared server. This will cost as little as $3 per month to around $10 per month depending on how you want to pay for it (by the month or by the year). The performance of your site will suffer from shared hosting. It’s a good choice for a personal blog or for getting you started. 1 Courses – You can take courses online that are free for extremely basic information or spend up to $200 or more for mid to advanced topics. 2 Some courses cost from $20 – $50 for a monthly subscription. 3 This allows you to pay for as much training as you want. 4 This can still cost hundreds of dollars', 'You may also choose to invest in customization, SEO or other factors along the way. If your interest is simply to start a blog on WordPress, you can start with a minimal cost of $60 for unlimited hosting and free domain with Bluehost. You can learn all about which hosting service is best for WordPress here. Domain: Cost – $10. The first element you need to shop for is a domain. Having a domain name is g...</code> | <code>[1, 1, 0, 0, 0, ...]</code> |
  | <code>what type blood is the universal donor</code> | <code>["At one time, type O negative blood was considered the universal blood donor type. This implied that anyone — regardless of blood type — could receive type O negative blood without risking a transfusion reaction. Even then, small samples of the recipient's and donor's blood are mixed to check compatibility in a process known as crossmatching. In an emergency, however, type O negative red blood cells may be given to anyone — especially if the situation is life-threatening or the matching blood type is in short supply.", 'People with type O Rh D negative blood are often called universal donors. O Rh D negative is the universal donor because it does not contain any antigens (markers). When you … get donated blood that has antigens that are not the same as those of the recipient the blood will clot in the body. AB is a universal acceptor because RBC (red blood cells) contain the A and B antigen (simply put, it is a marker on the cell) so the body a … ccepts any blood type because it recog...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>dental crown costs average</code> | <code>['The prices for dental crowns range from $500 to $2,500 per crown and are dependent upon the materials used, location of tooth and geographic location. The average cost of a crown is $825, with or without dental insurance coverage. The cheapest cost of a dental crown is $500 for a simple metal crown. Dental crowns are specifically shaped shells that fit over damaged or broken teeth for either cosmetic or structural purposes. 1 People with insurance typically paid $520 – $1,140 out of pocket with an average of $882 per crown. 2 Those without insurance generally paid between $830 and $2,465 per crown with an average cost of $1,350.', '1 All-porcelain crowns require a higher level of skill and take more time to install than metal or porcelain-fused-to-metal crowns, and can cost $800-$3,000 or more per tooth. 2 CostHelper readers without insurance report paying $860-$3,000, at an average cost of $1,430. 1 CostHelper readers without insurance report paying $860-$3,000, at an average cost...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>PListMLELoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#plistmleloss) with these parameters:
  ```json
  {
      "lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight",
      "activation_fct": "torch.nn.modules.linear.Identity",
      "mini_batch_size": null,
      "respect_input_order": true
  }
  ```

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `load_best_model_at_end`: True

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>
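
Putting the pieces together, here is a condensed sketch of the training run configured above. The listwise preprocessing of MS MARCO (the v1.1 passage schema mapped to `query`/`docs`/`labels` columns) is an assumption for illustration; the exact filtering down to 78,704 training samples is not recorded here:

```python
from datasets import load_dataset
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder import CrossEncoderTrainer, CrossEncoderTrainingArguments
from sentence_transformers.cross_encoder.losses import PListMLELoss

model = CrossEncoder("microsoft/MiniLM-L12-H384-uncased", num_labels=1)
loss = PListMLELoss(model)

# Hypothetical conversion of MS MARCO v1.1 rows into listwise samples:
# one query, its candidate passages, and binary is_selected labels.
def to_listwise(batch):
    return {
        "query": batch["query"],
        "docs": [p["passage_text"] for p in batch["passages"]],
        "labels": [p["is_selected"] for p in batch["passages"]],
    }

dataset = load_dataset("microsoft/ms_marco", "v1.1")
train_dataset = dataset["train"].map(
    to_listwise, batched=True, remove_columns=dataset["train"].column_names
)
eval_dataset = dataset["validation"].select(range(1000)).map(
    to_listwise, batched=True, remove_columns=dataset["validation"].column_names
)

# The non-default hyperparameters listed above.
args = CrossEncoderTrainingArguments(
    output_dir="reranker-MiniLM-L12-plistmle",
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,
    eval_strategy="steps",
    load_best_model_at_end=True,
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```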

### Training Logs

| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10 | NanoBEIR_R100_mean_ndcg@10 |
|:----------:|:--------:|:-------------:|:---------------:|:------------------------:|:-------------------------:|:--------------------:|:--------------------------:|
| -1         | -1       | -             | -               | 0.0375 (-0.5029)         | 0.2604 (-0.0646)          | 0.0219 (-0.4788)     | 0.1066 (-0.3488)           |
| 0.0002     | 1        | 1713.1071     | -               | -                        | -                         | -                    | -                          |
| 0.0508     | 250      | 1833.4537     | -               | -                        | -                         | -                    | -                          |
| 0.1016     | 500      | 1790.301      | 1707.9830       | 0.1182 (-0.4222)         | 0.2072 (-0.1178)          | 0.3276 (-0.1730)     | 0.2177 (-0.2377)           |
| 0.1525     | 750      | 1775.4549     | -               | -                        | -                         | -                    | -                          |
| 0.2033     | 1000     | 1716.7897     | 1638.4917       | 0.5203 (-0.0201)         | 0.3349 (+0.0099)          | 0.6145 (+0.1138)     | 0.4899 (+0.0345)           |
| 0.2541     | 1250     | 1734.1811     | -               | -                        | -                         | -                    | -                          |
| 0.3049     | 1500     | 1707.1166     | 1619.5133       | 0.5134 (-0.0270)         | 0.3245 (-0.0005)          | 0.6225 (+0.1218)     | 0.4868 (+0.0314)           |
| 0.3558     | 1750     | 1715.8994     | -               | -                        | -                         | -                    | -                          |
| 0.4066     | 2000     | 1682.5393     | 1630.9360       | 0.5278 (-0.0127)         | 0.3434 (+0.0184)          | 0.5907 (+0.0900)     | 0.4873 (+0.0319)           |
| 0.4574     | 2250     | 1705.7818     | -               | -                        | -                         | -                    | -                          |
| **0.5082** | **2500** | **1650.1962** | **1599.1906**   | **0.5778 (+0.0374)**     | **0.3660 (+0.0410)**      | **0.6325 (+0.1319)** | **0.5254 (+0.0701)**       |
| 0.5591     | 2750     | 1651.8559     | -               | -                        | -                         | -                    | -                          |
| 0.6099     | 3000     | 1677.6405     | 1594.7935       | 0.5657 (+0.0253)         | 0.3514 (+0.0263)          | 0.6304 (+0.1298)     | 0.5158 (+0.0605)           |
| 0.6607     | 3250     | 1690.9901     | -               | -                        | -                         | -                    | -                          |
| 0.7115     | 3500     | 1647.8661     | 1597.9960       | 0.5553 (+0.0149)         | 0.3582 (+0.0331)          | 0.6342 (+0.1335)     | 0.5159 (+0.0605)           |
| 0.7624     | 3750     | 1657.8038     | -               | -                        | -                         | -                    | -                          |
| 0.8132     | 4000     | 1670.0114     | 1591.1512       | 0.5429 (+0.0025)         | 0.3617 (+0.0367)          | 0.6377 (+0.1370)     | 0.5141 (+0.0587)           |
| 0.8640     | 4250     | 1678.4298     | -               | -                        | -                         | -                    | -                          |
| 0.9148     | 4500     | 1687.3654     | 1587.0916       | 0.5427 (+0.0023)         | 0.3549 (+0.0299)          | 0.6317 (+0.1310)     | 0.5098 (+0.0544)           |
| 0.9656     | 4750     | 1645.7461     | -               | -                        | -                         | -                    | -                          |
| -1         | -1       | -             | -               | 0.5778 (+0.0374)         | 0.3660 (+0.0410)          | 0.6325 (+0.1319)     | 0.5254 (+0.0701)           |

* The bold row denotes the saved checkpoint.

### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.4.0
- Tokenizers: 0.21.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### PListMLELoss
```bibtex
@inproceedings{lan2014position,
    title={Position-Aware ListMLE: A Sequential Learning Process for Ranking.},
    author={Lan, Yanyan and Zhu, Yadong and Guo, Jiafeng and Niu, Shuzi and Cheng, Xueqi},
    booktitle={UAI},
    volume={14},
    pages={449--458},
    year={2014}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
@@ -0,0 +1,34 @@
{
  "_name_or_path": "microsoft/MiniLM-L12-H384-uncased",
  "architectures": [
    "BertForSequenceClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "sentence_transformers": {
    "activation_fn": "torch.nn.modules.activation.Sigmoid"
  },
  "torch_dtype": "float32",
  "transformers_version": "4.49.0",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5471f2a74a673a0200c71249f8469ca8347a5292a36523077fe89e10f476eb7f
size 133464836
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,58 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff