---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: What are some potential negative uses of Large Language Models
    as described in the context?
  sentences:
  - 'I think this means that, as individual users, we don’t need to feel any guilt
    at all for the energy consumed by the vast majority of our prompts. The impact
    is likely negligible compared to driving a car down the street or maybe even watching
    a video on YouTube.

    Likewise, training. DeepSeek v3 training for less than $6m is a fantastic sign
    that training costs can and should continue to drop.

    For less efficient models I find it useful to compare their energy usage to commercial
    flights. The largest Llama 3 model cost about the same as a single digit number
    of fully loaded passenger flights from New York to London. That’s certainly not
    nothing, but once trained that model can be used by millions of people at no extra
    training cost.'
  - 'Here’s the sequel to this post: Things we learned about LLMs in 2024.

    Large Language Models

    In the past 24-36 months, our species has discovered that you can take a GIANT
    corpus of text, run it through a pile of GPUs, and use it to create a fascinating
    new kind of software.

    LLMs can do a lot of things. They can answer questions, summarize documents, translate
    from one language to another, extract information and even write surprisingly
    competent code.

    They can also help you cheat at your homework, generate unlimited streams of fake
    content and be used for all manner of nefarious purposes.'
  - 'There’s now a fascinating ecosystem of people training their own models on top
    of these foundations, publishing those models, building fine-tuning datasets and
    sharing those too.

    The Hugging Face Open LLM Leaderboard is one place that tracks these. I can’t
    even attempt to count them, and any count would be out-of-date within a few hours.

    The best overall openly licensed LLM at any time is rarely a foundation model:
    instead, it’s whichever fine-tuned community model has most recently discovered
    the best combination of fine-tuning data.

    This is a huge advantage for open over closed models: the closed, hosted models
    don’t have thousands of researchers and hobbyists around the world collaborating
    and competing to improve them.'
- source_sentence: Why might some question the necessity of the extensive infrastructure
    investments for future AI models?
  sentences:
  - 'These abilities are just a few weeks old at this point, and I don’t think their
    impact has been fully felt yet. If you haven’t tried them out yet you really should.

    Both Gemini and OpenAI offer API access to these features as well. OpenAI started
    with a WebSocket API that was quite challenging to use, but in December they announced
    a new WebRTC API which is much easier to get started with. Building a web app
    that a user can talk to via voice is easy now!

    Prompt driven app generation is a commodity already

    This was possible with GPT-4 in 2023, but the value it provides became evident
    in 2024.'
  - 'The environmental impact got much, much worse

    The much bigger problem here is the enormous competitive buildout of the infrastructure
    that is imagined to be necessary for these models in the future.

    Companies like Google, Meta, Microsoft and Amazon are all spending billions of
    dollars rolling out new datacenters, with a very material impact on the electricity
    grid and the environment. There’s even talk of spinning up new nuclear power stations,
    but those can take decades.

    Is this infrastructure necessary? DeepSeek v3’s $6m training cost and the continued
    crash in LLM prices might hint that it’s not. But would you want to be the big
    tech executive that argued NOT to build out this infrastructure only to be proven
    wrong in a few years’ time?'
  - 'OpenAI are not the only game in town here. Google released their first entrant
    in the category, gemini-2.0-flash-thinking-exp, on December 19th.

    Alibaba’s Qwen team released their QwQ model on November 28th—under an Apache
    2.0 license, and that one I could run on my own machine. They followed that up
    with a vision reasoning model called QvQ on December 24th, which I also ran locally.

    DeepSeek made their DeepSeek-R1-Lite-Preview model available to try out through
    their chat interface on November 20th.

    To understand more about inference scaling I recommend Is AI progress slowing
    down? by Arvind Narayanan and Sayash Kapoor.'
- source_sentence: How have US export regulations on GPUs to China influenced training
    optimizations?
  sentences:
  - 'Qwen2.5-Coder-32B is an LLM that can code well that runs on my Mac talks about
    Qwen2.5-Coder-32B in November—an Apache 2.0 licensed model!


    I can now run a GPT-4 class model on my laptop talks about running Meta’s Llama
    3.3 70B (released in December)'
  - 'Those US export regulations on GPUs to China seem to have inspired some very
    effective training optimizations!

    The environmental impact got better

    A welcome result of the increased efficiency of the models—both the hosted ones
    and the ones I can run locally—is that the energy usage and environmental impact
    of running a prompt has dropped enormously over the past couple of years.

    OpenAI themselves are charging 100x less for a prompt compared to the GPT-3 days.
    I have it on good authority that neither Google Gemini nor Amazon Nova (two of
    the least expensive model providers) are running prompts at a loss.'
  - 'The GPT-4 barrier was comprehensively broken

    In my December 2023 review I wrote about how We don’t yet know how to build GPT-4—OpenAI’s
    best model was almost a year old at that point, yet no other AI lab had produced
    anything better. What did OpenAI know that the rest of us didn’t?

    I’m relieved that this has changed completely in the past twelve months. 18 organizations
    now have models on the Chatbot Arena Leaderboard that rank higher than the original
    GPT-4 from March 2023 (GPT-4-0314 on the board)—70 models in total.'
- source_sentence: When was GPT-4 officially released by OpenAI?
  sentences:
  - The most recent twist, again from December (December was a lot) is live video.
    ChatGPT voice mode now provides the option to share your camera feed with the
    model and talk about what you can see in real time. Google Gemini have a preview
    of the same feature, which they managed to ship the day before ChatGPT did.
  - 'On the other hand, as software engineers we are better placed to take advantage
    of this than anyone else. We’ve all been given weird coding interns—we can use
    our deep knowledge to prompt them to solve coding problems more effectively than
    anyone else can.

    The ethics of this space remain diabolically complex

    In September last year Andy Baio and I produced the first major story on the unlicensed
    training data behind Stable Diffusion.

    Since then, almost every major LLM (and most of the image generation models) have
    also been trained on unlicensed data.'
  - 'We don’t yet know how to build GPT-4

    Frustratingly, despite the enormous leaps ahead we’ve had this year, we are yet
    to see an alternative model that’s better than GPT-4.

    OpenAI released GPT-4 in March, though it later turned out we had a sneak peek
    of it in February when Microsoft used it as part of the new Bing.

    This may well change in the next few weeks: Google’s Gemini Ultra has big claims,
    but isn’t yet available for us to try out.

    The team behind Mistral are working to beat GPT-4 as well, and their track record
    is already extremely strong considering their first public model only came out
    in September, and they’ve released two significant improvements since then.'
- source_sentence: What is the challenge in building AI personal assistants based
    on the gullibility of language models?
  sentences:
  - 'Language Models are gullible. They “believe” what we tell them—what’s in their
    training data, then what’s in the fine-tuning data, then what’s in the prompt.

    In order to be useful tools for us, we need them to believe what we feed them!

    But it turns out a lot of the things we want to build need them not to be gullible.

    Everyone wants an AI personal assistant. If you hired a real-world personal assistant
    who believed everything that anyone told them, you would quickly find that their
    ability to positively impact your life was severely limited.'
  - 'There’s now a fascinating ecosystem of people training their own models on top
    of these foundations, publishing those models, building fine-tuning datasets and
    sharing those too.

    The Hugging Face Open LLM Leaderboard is one place that tracks these. I can’t
    even attempt to count them, and any count would be out-of-date within a few hours.

    The best overall openly licensed LLM at any time is rarely a foundation model:
    instead, it’s whichever fine-tuned community model has most recently discovered
    the best combination of fine-tuning data.

    This is a huge advantage for open over closed models: the closed, hosted models
    don’t have thousands of researchers and hobbyists around the world collaborating
    and competing to improve them.'
  - 'Longer inputs dramatically increase the scope of problems that can be solved
    with an LLM: you can now throw in an entire book and ask questions about its contents,
    but more importantly you can feed in a lot of example code to help the model correctly
    solve a coding problem. LLM use-cases that involve long inputs are far more interesting
    to me than short prompts that rely purely on the information already baked into
    the model weights. Many of my tools were built using this pattern.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: cosine_accuracy@1
      value: 0.875
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 1.0
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 1.0
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 1.0
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.875
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.3333333333333333
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.20000000000000004
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.10000000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.875
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 1.0
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 1.0
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 1.0
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.9538662191964322
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.9375
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.9375
      name: Cosine Map@100
---

# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Mdean77/snow_ft_2025")
# Run inference
sentences = [
    'What is the challenge in building AI personal assistants based on the gullibility of language models?',
    'Language Models are gullible. They “believe” what we tell them—what’s in their training data, then what’s in the fine-tuning data, then what’s in the prompt.\nIn order to be useful tools for us, we need them to believe what we feed them!\nBut it turns out a lot of the things we want to build need them not to be gullible.\nEveryone wants an AI personal assistant. If you hired a real-world personal assistant who believed everything that anyone told them, you would quickly find that their ability to positively impact your life was severely limited.',
    'Longer inputs dramatically increase the scope of problems that can be solved with an LLM: you can now throw in an entire book and ask questions about its contents, but more importantly you can feed in a lot of example code to help the model correctly solve a coding problem. LLM use-cases that involve long inputs are far more interesting to me than short prompts that rely purely on the information already baked into the model weights. Many of my tools were built using this pattern.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

### Direct Usage (Transformers)
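
The architecture above is a plain BERT encoder followed by CLS pooling and L2 normalization, so the embeddings can be reproduced with 🤗 Transformers directly. A minimal sketch, assuming the underlying transformer weights load via `AutoModel` as sentence-transformers checkpoints normally allow:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Mdean77/snow_ft_2025")
model = AutoModel.from_pretrained("Mdean77/snow_ft_2025")

sentences = ["When was GPT-4 officially released by OpenAI?"]
inputs = tokenizer(sentences, padding=True, truncation=True,
                   max_length=512, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# CLS pooling: the Pooling module above is configured with
# pooling_mode_cls_token=True, so the sentence embedding is the
# hidden state of the first ([CLS]) token.
embeddings = outputs.last_hidden_state[:, 0]

# Normalize(): unit-length vectors, so dot product equals cosine similarity.
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 1024])
```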

### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.
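
A minimal sketch of continued fine-tuning with the same loss setup used for this model (MultipleNegativesRankingLoss wrapped in MatryoshkaLoss; see Training Details below). The two-row dataset and the output path are placeholders:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Mdean77/snow_ft_2025")

# (question, passage) pairs; in-batch negatives come from the other rows
train_dataset = Dataset.from_dict({
    "anchor": [
        "What is CLS pooling?",
        "What does MatryoshkaLoss optimize for?",
    ],
    "positive": [
        "CLS pooling uses the first token's hidden state as the sentence embedding.",
        "MatryoshkaLoss trains embeddings that remain useful when truncated to smaller dimensions.",
    ],
})

inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
model.save_pretrained("models/snow_ft_2025-continued")  # placeholder path
```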


## Evaluation

### Metrics

#### Information Retrieval

* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.875      |
| cosine_accuracy@3   | 1.0        |
| cosine_accuracy@5   | 1.0        |
| cosine_accuracy@10  | 1.0        |
| cosine_precision@1  | 0.875      |
| cosine_precision@3  | 0.3333     |
| cosine_precision@5  | 0.2        |
| cosine_precision@10 | 0.1        |
| cosine_recall@1     | 0.875      |
| cosine_recall@3     | 1.0        |
| cosine_recall@5     | 1.0        |
| cosine_recall@10    | 1.0        |
| **cosine_ndcg@10**  | **0.9539** |
| cosine_mrr@10       | 0.9375     |
| cosine_map@100      | 0.9375     |
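
The table above can be recomputed with the same evaluator. A minimal sketch with placeholder queries, corpus, and relevance judgments (the actual held-out split is not published with this card):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Mdean77/snow_ft_2025")

# Placeholder data: query ids -> queries, doc ids -> passages,
# and each query id -> the set of relevant doc ids.
queries = {"q1": "When was GPT-4 officially released by OpenAI?"}
corpus = {
    "d1": "OpenAI released GPT-4 in March 2023.",
    "d2": "Apple's MLX library is an array framework for Apple Silicon.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)
print(results)  # includes cosine_ndcg@10 and the other metrics listed above
```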



## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 156 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 156 samples:
  |         | sentence_0                                                                         | sentence_1                                                                           |
  |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
  | type    | string                                                                             | string                                                                               |
  | details | <ul><li>min: 12 tokens</li><li>mean: 20.94 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 135.22 tokens</li><li>max: 214 tokens</li></ul> |
* Samples:
  | sentence_0                                                                                                                                                  | sentence_1                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
  |:------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
  | <code>What advantage does a 64GB Mac have for running models in terms of CPU and GPU memory sharing?</code>                                                 | <code>On paper, a 64GB Mac should be a great machine for running models due to the way the CPU and GPU can share the same memory. In practice, many models are released as model weights and libraries that reward NVIDIA’s CUDA over other platforms.<br>The llama.cpp ecosystem helped a lot here, but the real breakthrough has been Apple’s MLX library, “an array framework for Apple Silicon”. It’s fantastic.<br>Apple’s mlx-lm Python library supports running a wide range of MLX-compatible models on my Mac, with excellent performance. mlx-community on Hugging Face offers more than 1,000 models that have been converted to the necessary format.</code> |
  | <code>How has Apple’s MLX library impacted the performance of running machine learning models on Mac?</code>                                                | <code>On paper, a 64GB Mac should be a great machine for running models due to the way the CPU and GPU can share the same memory. In practice, many models are released as model weights and libraries that reward NVIDIA’s CUDA over other platforms.<br>The llama.cpp ecosystem helped a lot here, but the real breakthrough has been Apple’s MLX library, “an array framework for Apple Silicon”. It’s fantastic.<br>Apple’s mlx-lm Python library supports running a wide range of MLX-compatible models on my Mac, with excellent performance. mlx-community on Hugging Face offers more than 1,000 models that have been converted to the necessary format.</code> |
  | <code>How does the ability of models like ChatGPT Code Interpreter to execute and debug code impact the problem of hallucination in code generation?</code> | <code>Except... you can run generated code to see if it’s correct. And with patterns like ChatGPT Code Interpreter the LLM can execute the code itself, process the error message, then rewrite it and keep trying until it works!<br>So hallucination is a much lesser problem for code generation than for anything else. If only we had the equivalent of Code Interpreter for fact-checking natural language!<br>How should we feel about this as software engineers?<br>On the one hand, this feels like a threat: who needs a programmer if ChatGPT can write code for you?</code>                                                                                 |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
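
Because the model was trained with these Matryoshka dimensions, its embeddings are designed to remain useful when truncated to 768, 512, 256, 128, or 64 dimensions, which saves storage and speeds up retrieval. A minimal sketch using the `truncate_dim` loading option:

```python
from sentence_transformers import SentenceTransformer

# Pick one of the matryoshka_dims listed above
model = SentenceTransformer("Mdean77/snow_ft_2025", truncate_dim=256)

embeddings = model.encode([
    "What is the challenge in building AI personal assistants?",
    "Language models are gullible.",
])
print(embeddings.shape)  # (2, 256)
```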

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
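
These non-default values map directly onto `SentenceTransformerTrainingArguments`. A minimal sketch (the `output_dir` is a placeholder) that could be passed as `args` to the trainer shown under Downstream Usage:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="models/snow_ft_2025",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=10,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
```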

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>

### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0   | 16   | 0.9692         |
| 2.0   | 32   | 0.9539         |
| 3.0   | 48   | 0.9692         |
| 3.125 | 50   | 0.9692         |
| 4.0   | 64   | 0.9692         |
| 5.0   | 80   | 0.9692         |
| 6.0   | 96   | 0.9692         |
| 6.25  | 100  | 0.9692         |
| 7.0   | 112  | 0.9539         |
| 8.0   | 128  | 0.9539         |
| 9.0   | 144  | 0.9539         |
| 9.375 | 150  | 0.9539         |
| 10.0  | 160  | 0.9539         |


### Framework Versions
- Python: 3.13.0
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.6.0
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
