llm-wizard committed (verified)
Commit d2a97fc · Parent: bec88ff

Add new SentenceTransformer model
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
{
  "word_embedding_dimension": 1024,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
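This pooling configuration selects CLS-token pooling: the sentence embedding is the final hidden state of the `[CLS]` token, with all other pooling modes disabled. For reference, a minimal sketch of the equivalent module built with the standard `sentence_transformers.models` API (illustrative only, not part of this commit):

```python
from sentence_transformers import models

# Rebuild the pooling stage this config describes: with
# pooling_mode_cls_token=true, only the [CLS] token's hidden state is used
# as the sentence vector; mean/max/weighted-mean/last-token pooling are off.
pooling = models.Pooling(
    word_embedding_dimension=1024,
    pooling_mode="cls",   # equivalent to pooling_mode_cls_token: true
    include_prompt=True,  # prompt tokens are not excluded before pooling
)
print(pooling.get_sentence_embedding_dimension())  # 1024
```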
README.md ADDED
@@ -0,0 +1,594 @@
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:400
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: Why is the use of AI systems particularly important for individuals
    applying for or receiving public assistance benefits?
  sentences:
  - (48)
  - Another area in which the use of AI systems deserves special consideration is
    the access to and enjoyment of certain essential private and public services and
    benefits necessary for people to fully participate in society or to improve one’s
    standard of living. In particular, natural persons applying for or receiving essential
    public assistance benefits and services from public authorities namely healthcare
    services, social security benefits, social services providing protection in cases
    such as maternity, illness, industrial accidents, dependency or old age and loss
    of employment and social and housing assistance, are typically dependent on those
    benefits and services and in a vulnerable position in relation to the responsible
    authorities.
  - used for biometric verification, which includes authentication, the sole purpose
    of which is to confirm that a specific natural person is the person he or she
    claims to be and to confirm the identity of a natural person for the sole purpose
    of having access to a service, unlocking a device or having security access to
    premises. That exclusion is justified by the fact that such systems are likely
    to have a minor impact on fundamental rights of natural persons compared to the
    remote biometric identification systems which may be used for the processing of
    the biometric data of a large number of persons without their active involvement.
    In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison
    and the
- source_sentence: How does the context ensure that existing Union law on personal
    data processing remains unaffected?
  sentences:
  - does not seek to affect the application of existing Union law governing the processing
    of personal data, including the tasks and powers of the independent supervisory
    authorities competent to monitor compliance with those instruments. It also does
    not affect the obligations of providers and deployers of AI systems in their role
    as data controllers or processors stemming from Union or national law on the protection
    of personal data in so far as the design, the development or the use of AI systems
    involves the processing of personal data. It is also appropriate to clarify that
    data subjects continue to enjoy all the rights and guarantees awarded to them
    by such Union law, including the rights related to solely automated individual
  - to operate without human intervention. The adaptiveness that an AI system could
    exhibit after deployment, refers to self-learning capabilities, allowing the system
    to change while in use. AI systems can be used on a stand-alone basis or as a component
    of a product, irrespective of whether the system is physically integrated into
    the product (embedded) or serves the functionality of the product without being
    integrated therein (non-embedded).
  - requested by the European Parliament (6).
- source_sentence: How does the context surrounding the number 33 influence its interpretation?
  sentences:
  - race, sex or disabilities. In addition, the immediacy of the impact and the limited
    opportunities for further checks or corrections in relation to the use of such
    systems operating in real-time carry heightened risks for the rights and freedoms
    of the persons concerned in the context of, or impacted by, law enforcement activities.
  - (33)
  - (61)
- source_sentence: What are the potential consequences of a serious disruption of
    critical infrastructure as defined in Directive (EU) 2022/2557?
  sentences:
  - to highly varying degrees for the practical pursuit of the localisation or identification
    of a perpetrator or suspect of the different criminal offences listed and having
    regard to the likely differences in the seriousness, probability and scale of
    the harm or possible negative consequences. An imminent threat to life or the
    physical safety of natural persons could also result from a serious disruption
    of critical infrastructure, as defined in Article 2, point (4) of Directive (EU)
    2022/2557 of the European Parliament and of the Council (19), where the disruption
    or destruction of such critical infrastructure would result in an imminent threat
    to life or the physical safety of a person, including through serious harm to
    the provision of
  - (53)
  - '(66)


    Requirements should apply to high-risk AI systems as regards risk management,
    the quality and relevance of data sets used, technical documentation and record-keeping,
    transparency and the provision of information to deployers, human oversight, and
    robustness, accuracy and cybersecurity. Those requirements are necessary to effectively
    mitigate the risks for health, safety and fundamental rights. As no other less
    trade restrictive measures are reasonably available those requirements are not
    unjustified restrictions to trade.


    (67)'
- source_sentence: What criteria determine whether an AI system used in the administration
    of justice is classified as high-risk?
  sentences:
  - which one or more of the following conditions are fulfilled. The first such condition
    should be that the AI system is intended to perform a narrow procedural task,
    such as an AI system that transforms unstructured data into structured data, an
    AI system that classifies incoming documents into categories or an AI system that
    is used to detect duplicates among a large number of applications. Those tasks
    are of such narrow and limited nature that they pose only limited risks which
    are not increased through the use of an AI system in a context that is listed
    as a high-risk use in an annex to this Regulation. The second condition should
    be that the task performed by the AI system is intended to improve the result
    of a previously completed human
  - Certain AI systems intended for the administration of justice and democratic processes
    should be classified as high-risk, considering their potentially significant impact
    on democracy, the rule of law, individual freedoms as well as the right to an
    effective remedy and to a fair trial. In particular, to address the risks of potential
    biases, errors and opacity, it is appropriate to qualify as high-risk AI systems
    intended to be used by a judicial authority or on its behalf to assist judicial
    authorities in researching and interpreting facts and the law and in applying
    the law to a concrete set of facts. AI systems intended to be used by alternative
    dispute resolution bodies for those purposes should also be considered to be high-risk
    when
  - As regards AI systems that are safety components of products, or which are themselves
    products, falling within the scope of certain Union harmonisation legislation
    listed in an annex to this Regulation, it is appropriate to classify them as high-risk
    under this Regulation if the product concerned undergoes the conformity assessment
    procedure with a third-party conformity assessment body pursuant to that relevant
    Union harmonisation legislation. In particular, such products are machinery, toys,
    lifts, equipment and protective systems intended for use in potentially explosive
    atmospheres, radio equipment, pressure equipment, recreational craft equipment,
    cableway installations, appliances burning gaseous fuels, medical devices, in
    vitro
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: cosine_accuracy@1
      value: 0.9583333333333334
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 1.0
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 1.0
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 1.0
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.9583333333333334
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.3333333333333333
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.19999999999999998
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09999999999999999
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.9583333333333334
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 1.0
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 1.0
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 1.0
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.9791666666666666
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.9722222222222223
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.9722222222222222
      name: Cosine Map@100
---

# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("llm-wizard/legal-ft-1")
# Run inference
sentences = [
    'What criteria determine whether an AI system used in the administration of justice is classified as high-risk?',
    'Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, the rule of law, individual freedoms as well as the right to an effective remedy and to a\xa0fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to be used by a\xa0judicial authority or on its behalf to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a\xa0concrete set of facts. AI systems intended to be used by alternative dispute resolution bodies for those purposes should also be considered to be high-risk when',
    'which one or more of the following conditions are fulfilled. The first such condition should be that the AI system is intended to perform a\xa0narrow procedural task, such as an AI system that transforms unstructured data into structured data, an AI system that classifies incoming documents into categories or an AI system that is used to detect duplicates among a\xa0large number of applications. Those tasks are of such narrow and limited nature that they pose only limited risks which are not increased through the use of an AI system in a\xa0context that is listed as a\xa0high-risk use in an annex to this Regulation. The second condition should be that the task performed by the AI system is intended to improve the result of a\xa0previously completed human',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
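Because this checkpoint ships a query prompt (see `config_sentence_transformers.json` in this commit), retrieval works best when queries are encoded with `prompt_name="query"`. A minimal retrieval sketch; the query and passages below are invented for illustration:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("llm-wizard/legal-ft-1")

# Illustrative query and passages.
query = ["Which AI systems are classified as high-risk?"]
passages = [
    "Certain AI systems intended for the administration of justice and "
    "democratic processes should be classified as high-risk.",
    "The adaptiveness that an AI system could exhibit after deployment "
    "refers to self-learning capabilities.",
]

# prompt_name="query" prepends the query prompt stored in
# config_sentence_transformers.json; passages are encoded without a prompt.
query_emb = model.encode(query, prompt_name="query")
passage_embs = model.encode(passages)

# Embeddings are L2-normalized by the Normalize module, so the configured
# cosine similarity ranks the passages directly.
print(model.similarity(query_emb, passage_embs))  # shape: (1, 2)
```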

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval

* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.9583     |
| cosine_accuracy@3   | 1.0        |
| cosine_accuracy@5   | 1.0        |
| cosine_accuracy@10  | 1.0        |
| cosine_precision@1  | 0.9583     |
| cosine_precision@3  | 0.3333     |
| cosine_precision@5  | 0.2        |
| cosine_precision@10 | 0.1        |
| cosine_recall@1     | 0.9583     |
| cosine_recall@3     | 1.0        |
| cosine_recall@5     | 1.0        |
| cosine_recall@10    | 1.0        |
| **cosine_ndcg@10**  | **0.9792** |
| cosine_mrr@10       | 0.9722     |
| cosine_map@100      | 0.9722     |
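These numbers were produced by sentence-transformers' `InformationRetrievalEvaluator`. A minimal sketch of how such an evaluation is wired up; the tiny `queries`/`corpus`/`relevant_docs` dicts are placeholders, since the actual held-out split is not part of this commit:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("llm-wizard/legal-ft-1")

# Placeholders: the real run used a held-out split of question/chunk pairs.
queries = {"q1": "What criteria determine whether an AI system is high-risk?"}
corpus = {
    "d1": "Certain AI systems intended for the administration of justice "
          "should be classified as high-risk.",
    "d2": "The adaptiveness that an AI system could exhibit after deployment "
          "refers to self-learning capabilities.",
}
relevant_docs = {"q1": {"d1"}}  # query id -> set of relevant corpus ids

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
)
results = evaluator(model)
print(results)  # keys such as "cosine_ndcg@10", "cosine_mrr@10", ...
```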

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 400 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 400 samples:
  |         | sentence_0 | sentence_1 |
  |:--------|:-----------|:-----------|
  | type    | string     | string     |
  | details | <ul><li>min: 10 tokens</li><li>mean: 20.52 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 93.01 tokens</li><li>max: 186 tokens</li></ul> |
* Samples:
  | sentence_0 | sentence_1 |
  |:-----------|:-----------|
  | <code>What are the intended uses of AI systems by tax and customs authorities according to the context?</code> | <code>natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities as well as by financial intelligence units carrying out administrative tasks analysing information pursuant to Union anti-money laundering law should not be classified as high-risk AI systems used by law enforcement authorities for the purpose of prevention, detection, investigation and prosecution of criminal offences. The use of AI tools by law enforcement and other relevant authorities should not become a factor of inequality, or exclusion. The impact of the use of AI tools on the defence rights of suspects should</code> |
  | <code>How should the use of AI tools by law enforcement authorities be managed to prevent inequality or exclusion?</code> | <code>natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities as well as by financial intelligence units carrying out administrative tasks analysing information pursuant to Union anti-money laundering law should not be classified as high-risk AI systems used by law enforcement authorities for the purpose of prevention, detection, investigation and prosecution of criminal offences. The use of AI tools by law enforcement and other relevant authorities should not become a factor of inequality, or exclusion. The impact of the use of AI tools on the defence rights of suspects should</code> |
  | <code>What was requested by the European Parliament?</code> | <code>requested by the European Parliament (6).</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
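MatryoshkaLoss trains the leading dimensions of each embedding to be useful on their own, so vectors can later be truncated to any of the listed sizes with modest quality loss. A sketch of the inference-side payoff, using the standard `truncate_dim` mechanism (the choice of 256 here is an arbitrary pick among the trained sizes):

```python
from sentence_transformers import SentenceTransformer

# Load the model so encode() returns only the first 256 dimensions,
# one of the sizes trained by MatryoshkaLoss above.
small = SentenceTransformer("llm-wizard/legal-ft-1", truncate_dim=256)

emb = small.encode(["Which AI systems are classified as high-risk?"])
print(emb.shape)  # (1, 256)

# Cosine similarity still works on the truncated vectors.
print(small.similarity(emb, emb))  # tensor([[1.0000]])
```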

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>
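For context, a sketch of how these non-default hyperparameters plug into the standard `SentenceTransformerTrainer` loop; the one-row `train_dataset` is a placeholder for the 400-pair dataset described above:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Placeholder for the real 400 (question, passage) pairs.
train_dataset = Dataset.from_dict({
    "sentence_0": ["What was requested by the European Parliament?"],
    "sentence_1": ["requested by the European Parliament (6)."],
})

# The loss configuration from the Training Dataset section above.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

# Mirrors the non-default hyperparameters above; eval_strategy="steps"
# would additionally require an eval_dataset or evaluator, omitted here.
args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft-1",
    num_train_epochs=10,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()
```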

### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0   | 40   | 0.9715         |
| 1.25  | 50   | 0.9792         |
| 2.0   | 80   | 0.9792         |
| 2.5   | 100  | 0.9715         |
| 3.0   | 120  | 0.9638         |
| 3.75  | 150  | 0.9715         |
| 4.0   | 160  | 0.9792         |
| 5.0   | 200  | 0.9623         |
| 6.0   | 240  | 0.9777         |
| 6.25  | 250  | 0.9777         |
| 7.0   | 280  | 0.9792         |
| 7.5   | 300  | 0.9715         |
| 8.0   | 320  | 0.9715         |
| 8.75  | 350  | 0.9792         |
| 9.0   | 360  | 0.9792         |
| 10.0  | 400  | 0.9792         |


### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
@@ -0,0 +1,25 @@
{
  "_name_or_path": "Snowflake/snowflake-arctic-embed-l",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.48.2",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
{
  "__version__": {
    "sentence_transformers": "3.4.1",
    "transformers": "4.48.2",
    "pytorch": "2.5.1+cu124"
  },
  "prompts": {
    "query": "Represent this sentence for searching relevant passages: "
  },
  "default_prompt_name": null,
  "similarity_fn_name": "cosine"
}
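The `prompts` entry is what `prompt_name="query"` resolves to at encode time; since the pooling config sets `include_prompt` to true, encoding with the prompt name is equivalent to prepending the string manually. A small sketch (the query text is illustrative):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("llm-wizard/legal-ft-1")

q = "Which services count as essential public assistance?"  # illustrative
a = model.encode(q, prompt_name="query")
b = model.encode("Represent this sentence for searching relevant passages: " + q)

# Same tokens reach the model either way, and 1_Pooling has
# include_prompt=true, so the two embeddings match.
print(np.allclose(a, b, atol=1e-6))  # True
```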
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:97d7567ff8c43b8b99f236ce8a6863b54a28b9665f5dc5f3da1219262ef06be9
size 1336413848
modules.json ADDED
@@ -0,0 +1,20 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
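modules.json is the recipe used to reassemble the pipeline at load time: the repository root holds the Transformer, `1_Pooling` the pooling config, and `2_Normalize` the stateless normalization stage. A sketch of the same three-stage model assembled by hand (loading the base model's weights for illustration, not this fine-tune):

```python
from sentence_transformers import SentenceTransformer, models

# The same Transformer -> Pooling -> Normalize pipeline, built manually.
word = models.Transformer("Snowflake/snowflake-arctic-embed-l", max_seq_length=512)
pool = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="cls")
norm = models.Normalize()
model = SentenceTransformer(modules=[word, pool, norm])
print(model)  # mirrors the "Full Model Architecture" block in the README
```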
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 512,
  "do_lower_case": false
}
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,63 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "max_length": 512,
  "model_max_length": 512,
  "pad_to_multiple_of": null,
  "pad_token": "[PAD]",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "[SEP]",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff