tomaarsen (HF Staff) committed
Commit b751c7c · verified · 1 Parent(s): a5bc4a1

Add new CrossEncoder model

README.md ADDED
@@ -0,0 +1,525 @@
---
language:
- en
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:78704
- loss:PListMLELoss
base_model: microsoft/MiniLM-L12-H384-uncased
datasets:
- microsoft/ms_marco
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
co2_eq_emissions:
  emissions: 92.04622402434568
  energy_consumed: 0.23680409162892313
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
  ram_total_size: 31.777088165283203
  hours_used: 0.766
  hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
  results:
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoMSMARCO R100
      type: NanoMSMARCO_R100
    metrics:
    - type: map
      value: 0.4782
      name: Map
    - type: mrr@10
      value: 0.4685
      name: Mrr@10
    - type: ndcg@10
      value: 0.5464
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNFCorpus R100
      type: NanoNFCorpus_R100
    metrics:
    - type: map
      value: 0.3347
      name: Map
    - type: mrr@10
      value: 0.5293
      name: Mrr@10
    - type: ndcg@10
      value: 0.358
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNQ R100
      type: NanoNQ_R100
    metrics:
    - type: map
      value: 0.6353
      name: Map
    - type: mrr@10
      value: 0.6425
      name: Mrr@10
    - type: ndcg@10
      value: 0.6876
      name: Ndcg@10
  - task:
      type: cross-encoder-nano-beir
      name: Cross Encoder Nano BEIR
    dataset:
      name: NanoBEIR R100 mean
      type: NanoBEIR_R100_mean
    metrics:
    - type: map
      value: 0.4827
      name: Map
    - type: mrr@10
      value: 0.5468
      name: Mrr@10
    - type: ndcg@10
      value: 0.5307
      name: Ndcg@10
---

# CrossEncoder based on microsoft/MiniLM-L12-H384-uncased

This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

## Model Details

### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) <!-- at revision 44acabbec0ef496f6dbc93adadea57f376b7c0ec -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
    - [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco)
- **Language:** en
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-sum-to-1-weight-plus-1")
# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
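Conceptually, `rank` scores each (query, document) pair as `predict` does, then sorts the documents by descending score. A tiny pure-Python illustration of that post-processing step (made-up scores, not real model outputs):

```python
def rank_by_score(scores):
    # Sort document indices by descending score, mimicking the shape of the
    # CrossEncoder.rank output: [{'corpus_id': ..., 'score': ...}, ...]
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [{'corpus_id': i, 'score': scores[i]} for i in order]

# Made-up scores for three candidate passages
print(rank_by_score([0.2, 1.3, 0.7]))
# [{'corpus_id': 1, 'score': 1.3}, {'corpus_id': 2, 'score': 0.7}, {'corpus_id': 0, 'score': 0.2}]
```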

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Cross Encoder Reranking

* Datasets: `NanoMSMARCO_R100`, `NanoNFCorpus_R100` and `NanoNQ_R100`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
  ```json
  {
      "at_k": 10,
      "always_rerank_positives": true
  }
  ```

| Metric      | NanoMSMARCO_R100     | NanoNFCorpus_R100    | NanoNQ_R100          |
|:------------|:---------------------|:---------------------|:---------------------|
| map         | 0.4782 (-0.0114)     | 0.3347 (+0.0737)     | 0.6353 (+0.2157)     |
| mrr@10      | 0.4685 (-0.0090)     | 0.5293 (+0.0294)     | 0.6425 (+0.2158)     |
| **ndcg@10** | **0.5464 (+0.0060)** | **0.3580 (+0.0330)** | **0.6876 (+0.1870)** |

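The reported mrr@10 and ndcg@10 follow their standard definitions over the reranked candidate list. For reference, both can be computed from the relevance labels in predicted order; a minimal pure-Python sketch (an illustration, not the evaluator's own implementation):

```python
import math

def mrr_at_k(relevance, k=10):
    # relevance: 0/1 labels in the order the reranker returned the documents.
    # MRR@k is the reciprocal rank of the first relevant document in the top k.
    for i, rel in enumerate(relevance[:k]):
        if rel > 0:
            return 1.0 / (i + 1)
    return 0.0

def ndcg_at_k(relevance, k=10):
    # DCG of the predicted ordering, normalized by the DCG of the ideal ordering.
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(relevance[:k]))
    ideal = sorted(relevance, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

print(mrr_at_k([0, 1, 0, 1]))  # 0.5
```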
#### Cross Encoder Nano BEIR

* Dataset: `NanoBEIR_R100_mean`
* Evaluated with [<code>CrossEncoderNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderNanoBEIREvaluator) with these parameters:
  ```json
  {
      "dataset_names": [
          "msmarco",
          "nfcorpus",
          "nq"
      ],
      "rerank_k": 100,
      "at_k": 10,
      "always_rerank_positives": true
  }
  ```

| Metric      | Value                |
|:------------|:---------------------|
| map         | 0.4827 (+0.0927)     |
| mrr@10      | 0.5468 (+0.0788)     |
| **ndcg@10** | **0.5307 (+0.0753)** |

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### ms_marco

* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 78,704 training samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
  |         | query                                                                                           | docs                                                                                    | labels                                                                                  |
  |:--------|:------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------|
  | type    | string                                                                                          | list                                                                                    | list                                                                                    |
  | details | <ul><li>min: 9 characters</li><li>mean: 33.83 characters</li><li>max: 100 characters</li></ul>  | <ul><li>min: 3 elements</li><li>mean: 6.50 elements</li><li>max: 10 elements</li></ul>  | <ul><li>min: 3 elements</li><li>mean: 6.50 elements</li><li>max: 10 elements</li></ul>  |
* Samples:
  | query                                                        | docs                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 | labels                            |
  |:-------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------|
  | <code>cost for taking care of elderly parents at home</code> | <code>['Google+. Learn about dealing with and caring for your elderly parents as they age. When it comes to caring for elderly parents, there are a number of options out there. Your first decision is where your parent will live – in his or her own home, with you, or in an elder care facility. The cost of elder care can be daunting, and for some families out-of-home care is simply an impossibility. According to the 2011 MetLife Market Survey of Long-Term Care Costs, the cost of caring for aging parents is the following: 1 Nursing home (Semi-private room): $214/day or $78,110 per year', "The cost to care for a parent in your home can vary depending on their needs. You can expect to pay between $15 and $25 per hour for home care personnel and $300+ per day for round the clock care (live-in) Some people utilize housekeepers and/or family members to bring the cost down. Some caregivers have to decrease work hours or even quit their job in order to provide care for an aging parent, and when you a...</code> | <code>[1, 1, 0, 0, 0, ...]</code> |
  | <code>what is a pharmacist</code>                            | <code>['pharmacist, n a person prepared to formulate and dispense drugs or medications through completion of an accredited university program in pharmacy. Licensure is required upon completion of the program and prior to serving the public as a pharma-cist.', 'Pharmacists, also known as chemists (Commonwealth English) or druggists (North American and, archaically, Commonwealth English), are healthcare professionals who practice in pharmacy, the field of health sciences focusing on safe and effective medication use.', 'The most common pharmacist positions are that of a community pharmacist (also referred to as a retail pharmacist, first-line pharmacist or dispensing chemist), or a hospital pharmacist, where they instruct and counsel on the proper use and adverse effects of medically prescribed drugs and medicines.', 'A pharmacist is a member of the health care team directly involved with patient care. Pharmacists undergo university-level education to understand the biochemical mechanisms and ...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>carissa name meaning</code>                            | <code>['Greek Meaning: The name Carissa is a Greek baby name. In Greek the meaning of the name Carissa is: Very dear. American Meaning: The name Carissa is an American baby name. In American the meaning of the name Carissa is: Very dear. Latin Meaning: The name Carissa is a Latin baby name. In Latin the meaning of the name Carissa is: Artistic or giving; Very dear.', 'The meaning of Carissa has more than one different etymologies. It has same or different meanings in other countries and languages. The different meanings of the name Carissa are: 1 French Meaning: Caress. Form of: Caressa. Keep in mind that many names may have different meanings in other countries and languages, so be careful that the name that you choose doesn’t mean something bad or unpleasant. Search comprehensively and find the name meaning of Carissa and its name origin or of any other name in our database. Also note the spelling and the pronunciation of the name Carissa and check the initials of the name with your last ...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>PListMLELoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#plistmleloss) with these parameters:
  ```json
  {
      "lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight",
      "activation_fct": "torch.nn.modules.linear.Identity",
      "mini_batch_size": 16,
      "respect_input_order": true
  }
  ```
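PListMLELoss is a position-aware variant of ListMLE: it minimizes the negative Plackett-Luce log-likelihood of the ideal (label-sorted) ordering of each query's documents, with a weight per rank position so that errors near the top of the ranking cost more. A rough, illustrative sketch of that objective in plain Python (not the `sentence_transformers` implementation; the `position_weight` below is a placeholder, not the `PListMLELambdaWeight` used for this model):

```python
import math

def plistmle_loss(scores, labels, position_weight=lambda k: 1.0 / (k + 1)):
    """Position-aware ListMLE over one query's candidate list.

    scores: model scores, one per document
    labels: graded relevance labels (higher = more relevant)
    position_weight: weight applied at rank position k (placeholder choice)
    """
    # Sort documents into the ideal order (most relevant first)
    order = sorted(range(len(scores)), key=lambda i: labels[i], reverse=True)
    loss = 0.0
    for k in range(len(order)):
        # Log-softmax of the k-th document's score over the not-yet-placed documents
        remaining = order[k:]
        log_z = math.log(sum(math.exp(scores[i]) for i in remaining))
        loss -= position_weight(k) * (scores[order[k]] - log_z)
    return loss

# Scores that agree with the labels give a lower loss than scores that invert them
assert plistmle_loss([3.0, 1.0, -1.0], [1, 0, 0]) < plistmle_loss([-1.0, 1.0, 3.0], [1, 0, 0])
```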

### Evaluation Dataset

#### ms_marco

* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 1,000 evaluation samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
  |         | query                                                                                          | docs                                                                                    | labels                                                                                  |
  |:--------|:------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------|
  | type    | string                                                                                         | list                                                                                    | list                                                                                    |
  | details | <ul><li>min: 11 characters</li><li>mean: 33.1 characters</li><li>max: 88 characters</li></ul>  | <ul><li>min: 2 elements</li><li>mean: 6.00 elements</li><li>max: 10 elements</li></ul>  | <ul><li>min: 2 elements</li><li>mean: 6.00 elements</li><li>max: 10 elements</li></ul>  |
* Samples:
  | query                                              | docs                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 | labels                            |
  |:---------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------|
  | <code>what county is nagold in</code>              | <code>["detailed map of Nagold and near places. Welcome to the Nagold google satellite map! This place is situated in Calw, Karlsruhe, Baden-Wurttemberg, Germany, its geographical coordinates are 48° 33' 0 North, 8° 43' 0 East and its original name (with diacritics) is Nagold. See Nagold photos and images from satellite below, explore the aerial photographs of Nagold in Germany. 3D map of Nagold in Germany. You can also dive right into Nagold on unique 3D satellite map provided by Google Earth. With new GoogLe Earth plugin you can enjoy the interactive Nagold 3D map within your web browser.", 'The location of each Nagold hotel listed is shown on the detailed zoomable map. Moreover, Nagold hotel map is available where all hotels in Nagold are marked. You can easily choose your hotel by location. 3D map of Nagold in Germany. You can also dive right into Nagold on unique 3D satellite map provided by Google Earth. With new GoogLe Earth plugin you can enjoy the interactive Nagold 3D map within your web browser.', "Jonagold is high quality American apple, developed in the 1940s. As its name suggests, this is a cross between a Jonathan and a Golden Delicious. It is quite widely grown, and unusually for a Golden Delicious cross, is not limited to the warm apple regions, although it is not often found in the UK. The colouring is yellow of Golden Delicious, with large flushes of red. This is a crisp apple to bite into, with gleaming white flesh. The flavour is sweet but with a lot of balancing acidity-a very pleasant apple. Jonagold's other parent, Jonathan, is an old American variety which was discovered in the 1820s.", "Jonagold is widely-grown by commercial growers, and there are a number of more highly-coloured sports. Jonagored is probably the most widely known of these. Others include: Decosta, Primo, Rubinstar, Red Jonaprince. The colouring is yellow of Golden Delicious, with large flushes of red. This is a crisp apple to bite into, with gleaming white flesh. The flavour is sweet but with a lot of balancing acidity-a very pleasant apple. Jonagold's other parent, Jonathan, is an old American variety which was discovered in the 1820s.", "As a result it is a poor pollinator of other apple varieties, and needs two different nearby compatible pollinating apple varieties. Golden Delicious is well-known as a good pollinator of other apple varieties, but cannot pollinate Jonagold. The colouring is yellow of Golden Delicious, with large flushes of red. This is a crisp apple to bite into, with gleaming white flesh. The flavour is sweet but with a lot of balancing acidity-a very pleasant apple. Jonagold's other parent, Jonathan, is an old American variety which was discovered in the 1820s. In the UK Jonagold sometimes appears in supermarkets in the spring packaged as value apples, often from Holland, and at a very low price"]</code> | <code>[1, 0, 0, 0, 0]</code>      |
  | <code>what is the pay scale for ups drivers</code> | <code>["According to American job and career site Glassdoor, the average salary of a UPS truck driver is about $56,000 a year. What is the average salary for a ups driver? currently in southern California its 29.71/hr plus full medical, dental and vision. also paid are a legal plan, pension, major holiday's off paid, and up to 6 weeks paid vacat … ion and 4 floating holidays. also yearly guaranteed prenegotiated raises.", 'Average UPS Driver salaries for job postings nationwide are 57% lower than average salaries for all job postings nationwide. ', "I was told that the starting salary for a full time package delivery driver is $72,000/year and are in the midst of negotiations for a salary increase to $75,000/ year. Is this true? I appologize I am new but am curious as I would like to, some day, obtain a full time driver position. 70K is gross pay for a senior Package Driver in my area that averages working 9.5 hours a day. That is $28.19 an hour straight pay and about $42.38 an hour for over...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>what is coliform</code>                      | <code>['Coliform is a rod-shaped bacteria which are always present in the digestive tract of warm-blooded animals, including humans. Coliforms are found in human and animal waste, and are also found in water, plants and soil. Fecal coliforms are the coliform that is found in the digestive tract of animals and humans, and is an indicator of fecal contamination. A high count of fecal coliforms is considered an accurate indicator of animal or human waste. Escherichia coli (E. coli) is a major species of the fecal coliforms', 'Coliforms are a broad class of bacteria found in our environment, including the feces of man and other warm-blooded animals. The presence of coliform bacteria in drinking water may indicate a possible presence of harmful, disease-causing organisms. Why use coliforms to indicate water quality? ', 'Coliform bacteria include a large group of many types of bacteria that occur throughout the environment. They are common in soil and surface water and may even occur on your skin....</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>PListMLELoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#plistmleloss) with these parameters:
  ```json
  {
      "lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight",
      "activation_fct": "torch.nn.modules.linear.Identity",
      "mini_batch_size": 16,
      "respect_input_order": true
  }
  ```

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `load_best_model_at_end`: True
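With `warmup_ratio: 0.1` and the default linear scheduler, the learning rate climbs from 0 to the peak of 2e-05 over the first 10% of optimizer steps and then decays linearly back to 0. A small sketch of that schedule (assuming roughly 78,704 samples / batch size 16 ≈ 4,919 steps; the exact rounding in `transformers` may differ slightly):

```python
def linear_warmup_lr(step, total_steps=4919, peak_lr=2e-05, warmup_ratio=0.1):
    # Linear warmup to peak_lr over the first warmup_ratio of steps,
    # then linear decay to 0 (the shape of the "linear" scheduler).
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

assert linear_warmup_lr(0) == 0.0          # start of training
assert linear_warmup_lr(4919) == 0.0       # end of training
```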
309
+
310
+ #### All Hyperparameters
311
+ <details><summary>Click to expand</summary>
312
+
313
+ - `overwrite_output_dir`: False
314
+ - `do_predict`: False
315
+ - `eval_strategy`: steps
316
+ - `prediction_loss_only`: True
317
+ - `per_device_train_batch_size`: 16
318
+ - `per_device_eval_batch_size`: 16
319
+ - `per_gpu_train_batch_size`: None
320
+ - `per_gpu_eval_batch_size`: None
321
+ - `gradient_accumulation_steps`: 1
322
+ - `eval_accumulation_steps`: None
323
+ - `torch_empty_cache_steps`: None
324
+ - `learning_rate`: 2e-05
325
+ - `weight_decay`: 0.0
326
+ - `adam_beta1`: 0.9
327
+ - `adam_beta2`: 0.999
328
+ - `adam_epsilon`: 1e-08
329
+ - `max_grad_norm`: 1.0
330
+ - `num_train_epochs`: 1
331
+ - `max_steps`: -1
332
+ - `lr_scheduler_type`: linear
333
+ - `lr_scheduler_kwargs`: {}
334
+ - `warmup_ratio`: 0.1
335
+ - `warmup_steps`: 0
336
+ - `log_level`: passive
337
+ - `log_level_replica`: warning
338
+ - `log_on_each_node`: True
339
+ - `logging_nan_inf_filter`: True
340
+ - `save_safetensors`: True
341
+ - `save_on_each_node`: False
342
+ - `save_only_model`: False
343
+ - `restore_callback_states_from_checkpoint`: False
344
+ - `no_cuda`: False
345
+ - `use_cpu`: False
346
+ - `use_mps_device`: False
347
+ - `seed`: 12
348
+ - `data_seed`: None
349
+ - `jit_mode_eval`: False
350
+ - `use_ipex`: False
351
+ - `bf16`: True
352
+ - `fp16`: False
353
+ - `fp16_opt_level`: O1
354
+ - `half_precision_backend`: auto
355
+ - `bf16_full_eval`: False
356
+ - `fp16_full_eval`: False
357
+ - `tf32`: None
358
+ - `local_rank`: 0
359
+ - `ddp_backend`: None
360
+ - `tpu_num_cores`: None
361
+ - `tpu_metrics_debug`: False
362
+ - `debug`: []
363
+ - `dataloader_drop_last`: False
364
+ - `dataloader_num_workers`: 0
365
+ - `dataloader_prefetch_factor`: None
366
+ - `past_index`: -1
367
+ - `disable_tqdm`: False
368
+ - `remove_unused_columns`: True
369
+ - `label_names`: None
370
+ - `load_best_model_at_end`: True
371
+ - `ignore_data_skip`: False
372
+ - `fsdp`: []
373
+ - `fsdp_min_num_params`: 0
374
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
375
+ - `fsdp_transformer_layer_cls_to_wrap`: None
376
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
377
+ - `deepspeed`: None
378
+ - `label_smoothing_factor`: 0.0
379
+ - `optim`: adamw_torch
380
+ - `optim_args`: None
381
+ - `adafactor`: False
382
+ - `group_by_length`: False
383
+ - `length_column_name`: length
384
+ - `ddp_find_unused_parameters`: None
385
+ - `ddp_bucket_cap_mb`: None
386
+ - `ddp_broadcast_buffers`: False
387
+ - `dataloader_pin_memory`: True
388
+ - `dataloader_persistent_workers`: False
389
+ - `skip_memory_metrics`: True
390
+ - `use_legacy_prediction_loop`: False
391
+ - `push_to_hub`: False
392
+ - `resume_from_checkpoint`: None
393
+ - `hub_model_id`: None
394
+ - `hub_strategy`: every_save
395
+ - `hub_private_repo`: None
396
+ - `hub_always_push`: False
397
+ - `gradient_checkpointing`: False
398
+ - `gradient_checkpointing_kwargs`: None
399
+ - `include_inputs_for_metrics`: False
400
+ - `include_for_metrics`: []
401
+ - `eval_do_concat_batches`: True
402
+ - `fp16_backend`: auto
403
+ - `push_to_hub_model_id`: None
404
+ - `push_to_hub_organization`: None
405
+ - `mp_parameters`:
406
+ - `auto_find_batch_size`: False
407
+ - `full_determinism`: False
408
+ - `torchdynamo`: None
409
+ - `ray_scope`: last
410
+ - `ddp_timeout`: 1800
411
+ - `torch_compile`: False
412
+ - `torch_compile_backend`: None
413
+ - `torch_compile_mode`: None
414
+ - `dispatch_batches`: None
415
+ - `split_batches`: None
416
+ - `include_tokens_per_second`: False
417
+ - `include_num_input_tokens_seen`: False
418
+ - `neftune_noise_alpha`: None
419
+ - `optim_target_modules`: None
420
+ - `batch_eval_metrics`: False
421
+ - `eval_on_start`: False
422
+ - `use_liger_kernel`: False
423
+ - `eval_use_gather_object`: False
424
+ - `average_tokens_across_devices`: False
425
+ - `prompts`: None
426
+ - `batch_sampler`: batch_sampler
427
+ - `multi_dataset_batch_sampler`: proportional
428
+
429
+ </details>
430
+
431
+ ### Training Logs
432
+ | Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10 | NanoBEIR_R100_mean_ndcg@10 |
433
+ |:----------:|:--------:|:-------------:|:---------------:|:------------------------:|:-------------------------:|:--------------------:|:--------------------------:|
434
+ | -1 | -1 | - | - | 0.0293 (-0.5112) | 0.2960 (-0.0290) | 0.0263 (-0.4743) | 0.1172 (-0.3382) |
435
+ | 0.0002 | 1 | 2.2004 | - | - | - | - | - |
436
+ | 0.0508 | 250 | 2.1006 | - | - | - | - | - |
437
+ | 0.1016 | 500 | 1.9618 | 1.9431 | 0.0989 (-0.4415) | 0.3091 (-0.0159) | 0.1591 (-0.3416) | 0.1890 (-0.2663) |
438
+ | 0.1525 | 750 | 1.9038 | - | - | - | - | - |
439
+ | 0.2033 | 1000 | 1.8656 | 1.8699 | 0.4825 (-0.0579) | 0.3387 (+0.0136) | 0.5589 (+0.0582) | 0.4600 (+0.0046) |
440
+ | 0.2541 | 1250 | 1.8568 | - | - | - | - | - |
441
+ | 0.3049 | 1500 | 1.8491 | 1.8546 | 0.5522 (+0.0118) | 0.3395 (+0.0144) | 0.6046 (+0.1040) | 0.4988 (+0.0434) |
442
+ | 0.3558 | 1750 | 1.8455 | - | - | - | - | - |
443
+ | 0.4066 | 2000 | 1.8337 | 1.8389 | 0.5163 (-0.0242) | 0.3749 (+0.0499) | 0.6346 (+0.1340) | 0.5086 (+0.0532) |
444
+ | 0.4574 | 2250 | 1.8433 | - | - | - | - | - |
445
+ | **0.5082** | **2500** | **1.8311** | **1.8265** | **0.5464 (+0.0060)** | **0.3580 (+0.0330)** | **0.6876 (+0.1870)** | **0.5307 (+0.0753)** |
446
+ | 0.5591 | 2750 | 1.8157 | - | - | - | - | - |
447
+ | 0.6099 | 3000 | 1.8111 | 1.8187 | 0.5313 (-0.0091) | 0.3582 (+0.0332) | 0.6415 (+0.1409) | 0.5103 (+0.0550) |
448
+ | 0.6607 | 3250 | 1.8183 | - | - | - | - | - |
449
+ | 0.7115 | 3500 | 1.8155 | 1.8160 | 0.5417 (+0.0013) | 0.3751 (+0.0501) | 0.6412 (+0.1405) | 0.5193 (+0.0640) |
450
+ | 0.7624 | 3750 | 1.8101 | - | - | - | - | - |
451
+ | 0.8132 | 4000 | 1.8124 | 1.8105 | 0.5468 (+0.0063) | 0.3616 (+0.0366) | 0.6357 (+0.1351) | 0.5147 (+0.0593) |
452
+ | 0.8640 | 4250 | 1.8158 | - | - | - | - | - |
453
+ | 0.9148 | 4500 | 1.8082 | 1.8129 | 0.5380 (-0.0024) | 0.3564 (+0.0313) | 0.6629 (+0.1623) | 0.5191 (+0.0637) |
454
+ | 0.9656 | 4750 | 1.8042 | - | - | - | - | - |
455
+ | -1 | -1 | - | - | 0.5464 (+0.0060) | 0.3580 (+0.0330) | 0.6876 (+0.1870) | 0.5307 (+0.0753) |
456
+
457
+ * The bold row denotes the saved checkpoint.
458
+
459
+ ### Environmental Impact
+ Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
+ - **Energy Consumed**: 0.237 kWh
+ - **Carbon Emitted**: 0.092 kg of CO2
+ - **Hours Used**: 0.766 hours
+
+ ### Training Hardware
+ - **On Cloud**: No
+ - **GPU Model**: 1 x NVIDIA GeForce RTX 3090
+ - **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
+ - **RAM Size**: 31.78 GB
+
+ ### Framework Versions
+ - Python: 3.11.6
+ - Sentence Transformers: 3.5.0.dev0
+ - Transformers: 4.49.0
+ - PyTorch: 2.6.0+cu124
+ - Accelerate: 1.5.1
+ - Datasets: 3.3.2
+ - Tokenizers: 0.21.0
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+ author = "Reimers, Nils and Gurevych, Iryna",
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+ month = "11",
+ year = "2019",
+ publisher = "Association for Computational Linguistics",
+ url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### PListMLELoss
+ ```bibtex
+ @inproceedings{lan2014position,
+ title={Position-Aware ListMLE: A Sequential Learning Process for Ranking.},
+ author={Lan, Yanyan and Zhu, Yadong and Guo, Jiafeng and Niu, Shuzi and Cheng, Xueqi},
+ booktitle={UAI},
+ volume={14},
+ pages={449--458},
+ year={2014}
+ }
+ ```
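The cited loss trains the model to maximize the likelihood of the ground-truth ordering under a Plackett-Luce model, with larger weights at top positions so mistakes near the head of the ranking cost more. A minimal pure-Python sketch of that idea (the `1/log2(k+2)` position weights here are an illustrative decreasing choice, not necessarily the exact weighting used by `PListMLELoss`):

```python
import math

def plistmle_loss(scores, true_order, weights=None):
    """Position-aware ListMLE: negative log-likelihood of the ground-truth
    permutation, weighted per position.

    scores:     model scores, indexed by document id
    true_order: document ids from most to least relevant
    weights:    per-position weights; defaults to a decreasing 1/log2(k+2)
    """
    n = len(true_order)
    if weights is None:
        weights = [1.0 / math.log2(k + 2) for k in range(n)]
    loss = 0.0
    for k in range(n):
        remaining = true_order[k:]  # documents not yet placed
        log_norm = math.log(sum(math.exp(scores[d]) for d in remaining))
        # -log P(true_order[k] ranked next among the remaining documents)
        loss += weights[k] * (log_norm - scores[true_order[k]])
    return loss

# A ranking that agrees with the scores incurs a lower loss than one that doesn't:
good = plistmle_loss([2.0, 1.0, 0.0], true_order=[0, 1, 2])
bad = plistmle_loss([2.0, 1.0, 0.0], true_order=[2, 1, 0])
```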
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,34 @@
+ {
+ "_name_or_path": "microsoft/MiniLM-L12-H384-uncased",
+ "architectures": [
+ "BertForSequenceClassification"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "classifier_dropout": null,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 384,
+ "id2label": {
+ "0": "LABEL_0"
+ },
+ "initializer_range": 0.02,
+ "intermediate_size": 1536,
+ "label2id": {
+ "LABEL_0": 0
+ },
+ "layer_norm_eps": 1e-12,
+ "max_position_embeddings": 512,
+ "model_type": "bert",
+ "num_attention_heads": 12,
+ "num_hidden_layers": 12,
+ "pad_token_id": 0,
+ "position_embedding_type": "absolute",
+ "sentence_transformers": {
+ "activation_fn": "torch.nn.modules.activation.Sigmoid"
+ },
+ "torch_dtype": "float32",
+ "transformers_version": "4.49.0",
+ "type_vocab_size": 2,
+ "use_cache": true,
+ "vocab_size": 30522
+ }
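The `sentence_transformers.activation_fn` entry above means the single `LABEL_0` logit from `BertForSequenceClassification` is passed through a sigmoid at inference time, so the cross-encoder emits relevance scores in (0, 1). A minimal sketch of that mapping:

```python
import math

def sigmoid(logit: float) -> float:
    """Map a raw classification logit to a (0, 1) relevance score."""
    return 1.0 / (1.0 + math.exp(-logit))

# A logit of 0 gives the maximally uncertain score 0.5; large positive
# logits saturate toward 1 (relevant), large negative toward 0 (irrelevant).
print(sigmoid(0.0))   # → 0.5
print(sigmoid(3.2))   # ≈ 0.96
```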
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7751d35df846099515839a01e91693fdf0f522efeb474e692012c3301d7bcd04
+ size 133464836
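The file size above is consistent with the architecture in `config.json` stored as float32 (4 bytes per parameter). A back-of-the-envelope check, assuming the standard `BertForSequenceClassification` parameter layout (the safetensors JSON header accounts for the small remainder):

```python
# Estimated parameter count from the config: vocab, positions, token types,
# hidden size, intermediate size, layer count.
V, P, T, H, I, L = 30522, 512, 2, 384, 1536, 12

embeddings = (V + P + T) * H + 2 * H        # word/pos/type tables + LayerNorm
per_layer = (
    4 * (H * H + H)      # Q, K, V and attention output projections
    + (H * I + I)        # feed-forward up-projection
    + (I * H + H)        # feed-forward down-projection
    + 2 * (2 * H)        # two LayerNorms
)
head = (H * H + H) + (H * 1 + 1)            # pooler + single-logit classifier

params = embeddings + L * per_layer + head
approx_bytes = 4 * params                   # float32 storage
print(params, approx_bytes)  # ~33.4M params, ~133 MB before the small
                             # safetensors JSON header
```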
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "cls_token": "[CLS]",
+ "mask_token": "[MASK]",
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1,58 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "101": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "102": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "103": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "clean_up_tokenization_spaces": true,
+ "cls_token": "[CLS]",
+ "do_basic_tokenize": true,
+ "do_lower_case": true,
+ "extra_special_tokens": {},
+ "mask_token": "[MASK]",
+ "model_max_length": 512,
+ "never_split": null,
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "strip_accents": null,
+ "tokenize_chinese_chars": true,
+ "tokenizer_class": "BertTokenizer",
+ "unk_token": "[UNK]"
+ }
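The tokenizer config above pins the standard uncased BERT conventions: lowercasing, `[CLS]`/`[SEP]` framing, and a 512-token limit. For a cross-encoder, query and passage are packed into one sequence. A schematic sketch of that packing, where whitespace splitting stands in for the real WordPiece tokenization over `vocab.txt` (illustration only):

```python
def format_pair(query: str, passage: str, max_length: int = 512):
    """Schematic cross-encoder input: [CLS] query [SEP] passage [SEP].
    Whitespace splitting is a stand-in for WordPiece, for illustration."""
    query_tokens = query.lower().split()      # do_lower_case: true
    passage_tokens = passage.lower().split()
    tokens = ["[CLS]", *query_tokens, "[SEP]", *passage_tokens, "[SEP]"]
    return tokens[:max_length]                # model_max_length: 512

tokens = format_pair("How many people live in Berlin?",
                     "Berlin has about 3.7 million inhabitants.")
```

The model then scores the whole pair jointly, which is what distinguishes a cross-encoder from a bi-encoder that embeds query and passage separately.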
vocab.txt ADDED
The diff for this file is too large to render. See raw diff