---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: What new type of LLM was introduced in the final quarter of 2024
according to the context?
sentences:
- The most recent twist, again from December (December was a lot) is live video.
ChatGPT voice mode now provides the option to share your camera feed with the
model and talk about what you can see in real time. Google Gemini have a preview
of the same feature, which they managed to ship the day before ChatGPT did.
- 'Now that those features are rolling out they’re pretty weak. As an LLM power-user
I know what these models are capable of, and Apple’s LLM features offer a pale
imitation of what a frontier LLM can do. Instead we’re getting notification summaries
that misrepresent news headlines and writing assistant tools that I’ve not found
useful at all. Genmoji are kind of fun though.
The rise of inference-scaling “reasoning” models
The most interesting development in the final quarter of 2024 was the introduction
of a new shape of LLM, exemplified by OpenAI’s o1 models—initially released as
o1-preview and o1-mini on September 12th.'
- 'I’m still trying to figure out the best patterns for doing this for my own work.
Everyone knows that evals are important, but there remains a lack of great guidance
for how to best implement them—I’m tracking this under my evals tag. My SVG pelican
riding a bicycle benchmark is a pale imitation of what a real eval suite should
look like.
Apple Intelligence is bad, Apple’s MLX library is excellent
As a Mac user I’ve been feeling a lot better about my choice of platform this
year.
Last year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU
was a huge disadvantage in terms of trying out new models.'
- source_sentence: Which three best available models were freely accessible for a
few months this year?
sentences:
- 'I think people who complain that LLM improvement has slowed are often missing
the enormous advances in these multi-modal models. Being able to run prompts against
images (and audio and video) is a fascinating new way to apply these models.
Voice and live camera mode are science fiction come to life
The audio and live video modes that have started to emerge deserve a special mention.
The ability to talk to ChatGPT first arrived in September 2023, but it was mostly
an illusion: OpenAI used their excellent Whisper speech-to-text model and a new
text-to-speech model (creatively named tts-1) to enable conversations with the
ChatGPT mobile apps, but the actual model just saw text.'
- 'This prompt-driven custom interface feature is so powerful and easy to build
(once you’ve figured out the gnarly details of browser sandboxing) that I expect
it to show up as a feature in a wide range of products in 2025.
Universal access to the best models lasted for just a few short months
For a few short months this year all three of the best available models—GPT-4o,
Claude 3.5 Sonnet and Gemini 1.5 Pro—were freely available to most of the world.'
- '17th: AI for Data Journalism: demonstrating what we can do with this stuff right
now
22nd: Options for accessing Llama 3 from the terminal using LLM
May
8th: Slop is the new name for unwanted AI-generated content
15th: ChatGPT in “4o” mode is not running the new features yet
29th: Training is not the same as chatting: ChatGPT and other LLMs don’t remember
everything you say
June
6th: Accidental prompt injection against RAG applications
10th: Thoughts on the WWDC 2024 keynote on Apple Intelligence
17th: Language models on the command-line
21st: Building search-based RAG using Claude, Datasette and Val Town
27th: Open challenges for AI engineering
July
14th: Imitation Intelligence, my keynote for PyCon US 2024'
- source_sentence: Which company released the QwQ model under an Apache 2.0 license?
sentences:
- 'Stuff we figured out about AI in 2023
Simon Willison’s Weblog
Subscribe
Stuff we figured out about AI in 2023
31st December 2023
2023 was the breakthrough year for Large Language Models (LLMs). I think it’s
OK to call these AI—they’re the latest and (currently) most interesting development
in the academic field of Artificial Intelligence that dates back to the 1950s.
Here’s my attempt to round up the highlights in one place!'
- 'OpenAI are not the only game in town here. Google released their first entrant
in the category, gemini-2.0-flash-thinking-exp, on December 19th.
Alibaba’s Qwen team released their QwQ model on November 28th—under an Apache
2.0 license, and that one I could run on my own machine. They followed that up
with a vision reasoning model called QvQ on December 24th, which I also ran locally.
DeepSeek made their DeepSeek-R1-Lite-Preview model available to try out through
their chat interface on November 20th.
To understand more about inference scaling I recommend Is AI progress slowing
down? by Arvind Narayanan and Sayash Kapoor.'
- 'I like people who are skeptical of this stuff. The hype has been deafening for
more than two years now, and there are enormous quantities of snake oil and misinformation
out there. A lot of very bad decisions are being made based on that hype. Being
critical is a virtue.
If we want people with decision-making authority to make good decisions about
how to apply these tools we first need to acknowledge that there ARE good applications,
and then help explain how to put those into practice while avoiding the many unintiutive
traps.
(If you still don’t think there are any good applications at all I’m not sure
why you made it to this point in the article!)'
- source_sentence: What is the approximate cost of processing 260 input tokens and
92 output tokens according to the context?
sentences:
- 'The GPT-4 barrier was comprehensively broken
Some of those GPT-4 models run on my laptop
LLM prices crashed, thanks to competition and increased efficiency
Multimodal vision is common, audio and video are starting to emerge
Voice and live camera mode are science fiction come to life
Prompt driven app generation is a commodity already
Universal access to the best models lasted for just a few short months
“Agents” still haven’t really happened yet
Evals really matter
Apple Intelligence is bad, Apple’s MLX library is excellent
The rise of inference-scaling “reasoning” models
Was the best currently available LLM trained in China for less than $6m?
The environmental impact got better
The environmental impact got much, much worse'
- Structured and Gradual Learning. In organic datasets, the relationship between
tokens is often complex and indirect. Many reasoning steps may be required to
connect the current token to the next, making it challenging for the model to
learn effectively from next-token prediction. By contrast, each token generated
by a language model is by definition predicted by the preceding tokens, making
it easier for a model to follow the resulting reasoning patterns.
- '260 input tokens, 92 output tokens. Cost approximately 0.0024 cents (that’s less
than a 400th of a cent).
This increase in efficiency and reduction in price is my single favourite trend
from 2024. I want the utility of LLMs at a fraction of the energy cost and it
looks like that’s what we’re getting.
Multimodal vision is common, audio and video are starting to emerge
My butterfly example above illustrates another key trend from 2024: the rise of
multi-modal LLMs.
A year ago the single most notable example of these was GPT-4 Vision, released
at OpenAI’s DevDay in November 2023. Google’s multi-modal Gemini 1.0 was announced
on December 7th 2023 so it also (just) makes it into the 2023 window.'
- source_sentence: Why does the author remain skeptical about the utility of LLMs?
sentences:
- 'Terminology aside, I remain skeptical as to their utility based, once again,
on the challenge of gullibility. LLMs believe anything you tell them. Any systems
that attempts to make meaningful decisions on your behalf will run into the same
roadblock: how good is a travel agent, or a digital assistant, or even a research
tool if it can’t distinguish truth from fiction?
Just the other day Google Search was caught serving up an entirely fake description
of the non-existant movie “Encanto 2”. It turned out to be summarizing an imagined
movie listing from a fan fiction wiki.'
- 'The biggest innovation here is that it opens up a new way to scale a model: instead
of improving model performance purely through additional compute at training time,
models can now take on harder problems by spending more compute on inference.
The sequel to o1, o3 (they skipped “o2” for European trademark reasons) was announced
on 20th December with an impressive result against the ARC-AGI benchmark, albeit
one that likely involved more than $1,000,000 of compute time expense!
o3 is expected to ship in January. I doubt many people have real-world problems
that would benefit from that level of compute expenditure—I certainly don’t!—but
it appears to be a genuine next step in LLM architecture for taking on much harder
problems.'
- 'The May 13th announcement of GPT-4o included a demo of a brand new voice mode,
where the true multi-modal GPT-4o (the o is for “omni”) model could accept audio
input and output incredibly realistic sounding speech without needing separate
TTS or STT models.
The demo also sounded conspicuously similar to Scarlett Johansson... and after
she complained the voice from the demo, Skye, never made it to a production product.
The delay in releasing the new voice mode after the initial demo caused quite
a lot of confusion. I wrote about that in ChatGPT in “4o” mode is not running
the new features yet.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.875
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.875
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.875
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9484108127976215
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9305555555555555
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9305555555555555
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("mbudisic/snoflake-simon-20250506182012")
# Run inference
sentences = [
'Why does the author remain skeptical about the utility of LLMs?',
'Terminology aside, I remain skeptical as to their utility based, once again, on the challenge of gullibility. LLMs believe anything you tell them. Any systems that attempts to make meaningful decisions on your behalf will run into the same roadblock: how good is a travel agent, or a digital assistant, or even a research tool if it can’t distinguish truth from fiction?\nJust the other day Google Search was caught serving up an entirely fake description of the non-existant movie “Encanto 2”. It turned out to be summarizing an imagined movie listing from a fan fiction wiki.',
'The May 13th announcement of GPT-4o included a demo of a brand new voice mode, where the true multi-modal GPT-4o (the o is for “omni”) model could accept audio input and output incredibly realistic sounding speech without needing separate TTS or STT models.\nThe demo also sounded conspicuously similar to Scarlett Johansson... and after she complained the voice from the demo, Skye, never made it to a production product.\nThe delay in releasing the new voice mode after the initial demo caused quite a lot of confusion. I wrote about that in ChatGPT in “4o” mode is not running the new features yet.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
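For retrieval-style use, encoded passages can be ranked against a query with the library's built-in utilities. The sketch below is illustrative only: the query and corpus strings are made-up placeholders, and `util.semantic_search` is just one of several ways to score them. If this fine-tune preserves the base model's query prompt, encoding queries with `prompt_name="query"` may further help retrieval quality.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("mbudisic/snoflake-simon-20250506182012")

# Placeholder query and corpus, only to illustrate the retrieval flow
query = "Which company released the QwQ model under an Apache 2.0 license?"
corpus = [
    "Alibaba’s Qwen team released their QwQ model on November 28th under an Apache 2.0 license.",
    "The May 13th announcement of GPT-4o included a demo of a brand new voice mode.",
]

# Embed the query and the corpus, then rank passages by cosine similarity
query_embedding = model.encode(query, convert_to_tensor=True)
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")
```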
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [InformationRetrievalEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.875 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.875 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.875 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.9484** |
| cosine_mrr@10 | 0.9306 |
| cosine_map@100 | 0.9306 |
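An evaluation of this kind can be reproduced on your own data by instantiating the evaluator directly. A minimal sketch, assuming placeholder query/corpus ids and relevance judgements (the evaluation set used for the numbers above is not published with this card):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("mbudisic/snoflake-simon-20250506182012")

# Placeholder data: ids mapped to text, plus query id -> set of relevant doc ids
queries = {"q1": "Which company released the QwQ model under an Apache 2.0 license?"}
corpus = {
    "d1": "Alibaba’s Qwen team released their QwQ model under an Apache 2.0 license.",
    "d2": "The May 13th announcement of GPT-4o included a demo of a new voice mode.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="example")
results = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100
print(results)
```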
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 156 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 156 samples:
  |         | sentence_0 | sentence_1 |
  |:--------|:-----------|:-----------|
  | type    | string     | string     |
* Samples:
  | sentence_0 | sentence_1 |
  |:-----------|:-----------|
  | <code>What are some key themes identified in the development of Large Language Models in 2024?</code> | <code>Things we learned about LLMs in 2024<br>Simon Willison’s Weblog<br>Subscribe<br>Things we learned about LLMs in 2024<br>31st December 2024<br>A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.<br>This is a sequel to my review of 2023.<br>In this article:</code> |
  | <code>How does the 2024 review of LLMs compare to the review from 2023?</code> | <code>Things we learned about LLMs in 2024<br>Simon Willison’s Weblog<br>Subscribe<br>Things we learned about LLMs in 2024<br>31st December 2024<br>A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.<br>This is a sequel to my review of 2023.<br>In this article:</code> |
  | <code>What factors contributed to the crash in LLM prices?</code> | <code>The GPT-4 barrier was comprehensively broken<br>Some of those GPT-4 models run on my laptop<br>LLM prices crashed, thanks to competition and increased efficiency<br>Multimodal vision is common, audio and video are starting to emerge<br>Voice and live camera mode are science fiction come to life<br>Prompt driven app generation is a commodity already<br>Universal access to the best models lasted for just a few short months<br>“Agents” still haven’t really happened yet<br>Evals really matter<br>Apple Intelligence is bad, Apple’s MLX library is excellent<br>The rise of inference-scaling “reasoning” models<br>Was the best currently available LLM trained in China for less than $6m?<br>The environmental impact got better<br>The environmental impact got much, much worse</code> |
* Loss: [MatryoshkaLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
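For reference, this configuration corresponds roughly to the following loss setup in sentence-transformers. This is a sketch of the training-side wiring, not a verbatim copy of the training script:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# MultipleNegativesRankingLoss uses in-batch pairs as positives/negatives;
# MatryoshkaLoss re-applies it on embeddings truncated to each listed dimension.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)
```
Because of this objective, embeddings truncated to the smaller Matryoshka dimensions should remain usable, for example by loading the model with `truncate_dim=256`; only the full-dimensional output is reported in the evaluation above.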
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
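The non-default values above map onto `SentenceTransformerTrainingArguments` roughly as follows; this is a sketch, and `output_dir` (plus anything not listed) is an assumption:
```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="snoflake-simon-20250506182012",  # placeholder output directory
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
)
```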
#### All Hyperparameters