---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: What are some potential negative uses of Large Language Models
as described in the context?
sentences:
- 'I think this means that, as individual users, we don’t need to feel any guilt
at all for the energy consumed by the vast majority of our prompts. The impact
is likely neglible compared to driving a car down the street or maybe even watching
a video on YouTube.
Likewise, training. DeepSeek v3 training for less than $6m is a fantastic sign
that training costs can and should continue to drop.
For less efficient models I find it useful to compare their energy usage to commercial
flights. The largest Llama 3 model cost about the same as a single digit number
of fully loaded passenger flights from New York to London. That’s certainly not
nothing, but once trained that model can be used by millions of people at no extra
training cost.'
- 'Here’s the sequel to this post: Things we learned about LLMs in 2024.
Large Language Models
In the past 24-36 months, our species has discovered that you can take a GIANT
corpus of text, run it through a pile of GPUs, and use it to create a fascinating
new kind of software.
LLMs can do a lot of things. They can answer questions, summarize documents, translate
from one language to another, extract information and even write surprisingly
competent code.
They can also help you cheat at your homework, generate unlimited streams of fake
content and be used for all manner of nefarious purposes.'
- 'There’s now a fascinating ecosystem of people training their own models on top
of these foundations, publishing those models, building fine-tuning datasets and
sharing those too.
The Hugging Face Open LLM Leaderboard is one place that tracks these. I can’t
even attempt to count them, and any count would be out-of-date within a few hours.
The best overall openly licensed LLM at any time is rarely a foundation model:
instead, it’s whichever fine-tuned community model has most recently discovered
the best combination of fine-tuning data.
This is a huge advantage for open over closed models: the closed, hosted models
don’t have thousands of researchers and hobbyists around the world collaborating
and competing to improve them.'
- source_sentence: Why might some question the necessity of the extensive infrastructure
investments for future AI models?
sentences:
- 'These abilities are just a few weeks old at this point, and I don’t think their
impact has been fully felt yet. If you haven’t tried them out yet you really should.
Both Gemini and OpenAI offer API access to these features as well. OpenAI started
with a WebSocket API that was quite challenging to use, but in December they announced
a new WebRTC API which is much easier to get started with. Building a web app
that a user can talk to via voice is easy now!
Prompt driven app generation is a commodity already
This was possible with GPT-4 in 2023, but the value it provides became evident
in 2024.'
- 'The environmental impact got much, much worse
The much bigger problem here is the enormous competitive buildout of the infrastructure
that is imagined to be necessary for these models in the future.
Companies like Google, Meta, Microsoft and Amazon are all spending billions of
dollars rolling out new datacenters, with a very material impact on the electricity
grid and the environment. There’s even talk of spinning up new nuclear power stations,
but those can take decades.
Is this infrastructure necessary? DeepSeek v3’s $6m training cost and the continued
crash in LLM prices might hint that it’s not. But would you want to be the big
tech executive that argued NOT to build out this infrastructure only to be proven
wrong in a few years’ time?'
- 'OpenAI are not the only game in town here. Google released their first entrant
in the category, gemini-2.0-flash-thinking-exp, on December 19th.
Alibaba’s Qwen team released their QwQ model on November 28th—under an Apache
2.0 license, and that one I could run on my own machine. They followed that up
with a vision reasoning model called QvQ on December 24th, which I also ran locally.
DeepSeek made their DeepSeek-R1-Lite-Preview model available to try out through
their chat interface on November 20th.
To understand more about inference scaling I recommend Is AI progress slowing
down? by Arvind Narayanan and Sayash Kapoor.'
- source_sentence: How have US export regulations on GPUs to China influenced training
optimizations?
sentences:
- 'Qwen2.5-Coder-32B is an LLM that can code well that runs on my Mac talks about
Qwen2.5-Coder-32B in November—an Apache 2.0 licensed model!
I can now run a GPT-4 class model on my laptop talks about running Meta’s Llama
3.3 70B (released in December)'
- 'Those US export regulations on GPUs to China seem to have inspired some very
effective training optimizations!
The environmental impact got better
A welcome result of the increased efficiency of the models—both the hosted ones
and the ones I can run locally—is that the energy usage and environmental impact
of running a prompt has dropped enormously over the past couple of years.
OpenAI themselves are charging 100x less for a prompt compared to the GPT-3 days.
I have it on good authority that neither Google Gemini nor Amazon Nova (two of
the least expensive model providers) are running prompts at a loss.'
- 'The GPT-4 barrier was comprehensively broken
In my December 2023 review I wrote about how We don’t yet know how to build GPT-4—OpenAI’s
best model was almost a year old at that point, yet no other AI lab had produced
anything better. What did OpenAI know that the rest of us didn’t?
I’m relieved that this has changed completely in the past twelve months. 18 organizations
now have models on the Chatbot Arena Leaderboard that rank higher than the original
GPT-4 from March 2023 (GPT-4-0314 on the board)—70 models in total.'
- source_sentence: When was GPT-4 officially released by OpenAI?
sentences:
- The most recent twist, again from December (December was a lot) is live video.
ChatGPT voice mode now provides the option to share your camera feed with the
model and talk about what you can see in real time. Google Gemini have a preview
of the same feature, which they managed to ship the day before ChatGPT did.
- 'On the other hand, as software engineers we are better placed to take advantage
of this than anyone else. We’ve all been given weird coding interns—we can use
our deep knowledge to prompt them to solve coding problems more effectively than
anyone else can.
The ethics of this space remain diabolically complex
In September last year Andy Baio and I produced the first major story on the unlicensed
training data behind Stable Diffusion.
Since then, almost every major LLM (and most of the image generation models) have
also been trained on unlicensed data.'
- 'We don’t yet know how to build GPT-4
Frustratingly, despite the enormous leaps ahead we’ve had this year, we are yet
to see an alternative model that’s better than GPT-4.
OpenAI released GPT-4 in March, though it later turned out we had a sneak peak
of it in February when Microsoft used it as part of the new Bing.
This may well change in the next few weeks: Google’s Gemini Ultra has big claims,
but isn’t yet available for us to try out.
The team behind Mistral are working to beat GPT-4 as well, and their track record
is already extremely strong considering their first public model only came out
in September, and they’ve released two significant improvements since then.'
- source_sentence: What is the challenge in building AI personal assistants based
on the gullibility of language models?
sentences:
- 'Language Models are gullible. They “believe” what we tell them—what’s in their
training data, then what’s in the fine-tuning data, then what’s in the prompt.
In order to be useful tools for us, we need them to believe what we feed them!
But it turns out a lot of the things we want to build need them not to be gullible.
Everyone wants an AI personal assistant. If you hired a real-world personal assistant
who believed everything that anyone told them, you would quickly find that their
ability to positively impact your life was severely limited.'
- 'There’s now a fascinating ecosystem of people training their own models on top
of these foundations, publishing those models, building fine-tuning datasets and
sharing those too.
The Hugging Face Open LLM Leaderboard is one place that tracks these. I can’t
even attempt to count them, and any count would be out-of-date within a few hours.
The best overall openly licensed LLM at any time is rarely a foundation model:
instead, it’s whichever fine-tuned community model has most recently discovered
the best combination of fine-tuning data.
This is a huge advantage for open over closed models: the closed, hosted models
don’t have thousands of researchers and hobbyists around the world collaborating
and competing to improve them.'
- 'Longer inputs dramatically increase the scope of problems that can be solved
with an LLM: you can now throw in an entire book and ask questions about its contents,
but more importantly you can feed in a lot of example code to help the model correctly
solve a coding problem. LLM use-cases that involve long inputs are far more interesting
to me than short prompts that rely purely on the information already baked into
the model weights. Many of my tools were built using this pattern.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.875
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.875
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.875
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9538662191964322
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9375
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9375
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
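The pipeline above can be illustrated with a small stand-in: with `pooling_mode_cls_token=True`, the Pooling module takes the first ([CLS]) token's vector as the sentence embedding, and Normalize scales it to unit length. A minimal pure-Python sketch using toy 8-dimensional vectors (the real model works on 1024-dimensional token embeddings):

```python
import math

def cls_pool_and_normalize(token_embeddings):
    """Mimic the Pooling + Normalize stages: with pooling_mode_cls_token=True
    the sentence embedding is the first ([CLS]) token's vector, which the
    Normalize module then scales to unit L2 norm."""
    cls = token_embeddings[0]                      # CLS-token pooling
    norm = math.sqrt(sum(x * x for x in cls))      # L2 norm
    return [x / norm for x in cls]                 # unit-length embedding

# Toy stand-in for per-token transformer outputs: 4 tokens x 8 dims.
tokens = [[(i + j) % 5 + 1.0 for j in range(8)] for i in range(4)]
emb = cls_pool_and_normalize(tokens)
print(len(emb))                                      # 8
print(round(math.sqrt(sum(x * x for x in emb)), 6))  # 1.0
```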
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Mdean77/snow_ft_2025")
# Run inference
sentences = [
'What is the challenge in building AI personal assistants based on the gullibility of language models?',
'Language Models are gullible. They “believe” what we tell them—what’s in their training data, then what’s in the fine-tuning data, then what’s in the prompt.\nIn order to be useful tools for us, we need them to believe what we feed them!\nBut it turns out a lot of the things we want to build need them not to be gullible.\nEveryone wants an AI personal assistant. If you hired a real-world personal assistant who believed everything that anyone told them, you would quickly find that their ability to positively impact your life was severely limited.',
'Longer inputs dramatically increase the scope of problems that can be solved with an LLM: you can now throw in an entire book and ask questions about its contents, but more importantly you can feed in a lot of example code to help the model correctly solve a coding problem. LLM use-cases that involve long inputs are far more interesting to me than short prompts that rely purely on the information already baked into the model weights. Many of my tools were built using this pattern.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
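Because the final Normalize module makes every embedding unit-length, the cosine similarity that `model.similarity` computes by default reduces to a plain dot product. A toy illustration with made-up unit vectors (not real model outputs):

```python
# Two toy unit vectors standing in for sentence embeddings (the real
# ones are 1024-dimensional and already unit-length thanks to the
# model's Normalize layer).
a = [0.5, 0.5, 0.5, 0.5]  # L2 norm = sqrt(4 * 0.25) = 1
b = [1.0, 0.0, 0.0, 0.0]

# For unit vectors, cosine similarity is just the dot product.
cosine = sum(x * y for x, y in zip(a, b))
print(cosine)  # 0.5
```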
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [InformationRetrievalEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.875 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.875 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.875 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.9539** |
| cosine_mrr@10 | 0.9375 |
| cosine_map@100 | 0.9375 |
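These metrics follow the standard retrieval definitions; the pattern precision@k = 1/k together with recall@k = accuracy@k indicates each evaluation query has exactly one relevant passage. A small illustrative sketch of how they fall out in that case (not the evaluator's actual code):

```python
def metrics_at_k(ranked_ids, relevant_id, k):
    """accuracy@k / precision@k / recall@k for a query with exactly one
    relevant document, as the precision@k = 1/k pattern above implies."""
    hit = relevant_id in ranked_ids[:k]
    accuracy = 1.0 if hit else 0.0
    precision = accuracy / k      # at most one relevant doc in the top k
    recall = accuracy             # only one relevant doc exists in total
    return accuracy, precision, recall

# Toy ranking where the relevant document appears at rank 2:
ranked = ["d9", "d2", "d5", "d1", "d7"]
acc3, prec3, rec3 = metrics_at_k(ranked, "d2", 3)
rr = 1.0 / (ranked.index("d2") + 1)   # reciprocal rank, averaged into MRR
print(acc3, rec3, rr)  # 1.0 1.0 0.5
```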
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 156 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 156 samples:
  |         | sentence_0 | sentence_1 |
  |:--------|:-----------|:-----------|
  | type    | string     | string     |
* Samples:
  | sentence_0 | sentence_1 |
  |:-----------|:-----------|
  | <code>What advantage does a 64GB Mac have for running models in terms of CPU and GPU memory sharing?</code> | <code>On paper, a 64GB Mac should be a great machine for running models due to the way the CPU and GPU can share the same memory. In practice, many models are released as model weights and libraries that reward NVIDIA’s CUDA over other platforms.<br>The llama.cpp ecosystem helped a lot here, but the real breakthrough has been Apple’s MLX library, “an array framework for Apple Silicon”. It’s fantastic.<br>Apple’s mlx-lm Python library supports running a wide range of MLX-compatible models on my Mac, with excellent performance. mlx-community on Hugging Face offers more than 1,000 models that have been converted to the necessary format.</code> |
  | <code>How has Apple’s MLX library impacted the performance of running machine learning models on Mac?</code> | <code>On paper, a 64GB Mac should be a great machine for running models due to the way the CPU and GPU can share the same memory. In practice, many models are released as model weights and libraries that reward NVIDIA’s CUDA over other platforms.<br>The llama.cpp ecosystem helped a lot here, but the real breakthrough has been Apple’s MLX library, “an array framework for Apple Silicon”. It’s fantastic.<br>Apple’s mlx-lm Python library supports running a wide range of MLX-compatible models on my Mac, with excellent performance. mlx-community on Hugging Face offers more than 1,000 models that have been converted to the necessary format.</code> |
  | <code>How does the ability of models like ChatGPT Code Interpreter to execute and debug code impact the problem of hallucination in code generation?</code> | <code>Except... you can run generated code to see if it’s correct. And with patterns like ChatGPT Code Interpreter the LLM can execute the code itself, process the error message, then rewrite it and keep trying until it works!<br>So hallucination is a much lesser problem for code generation than for anything else. If only we had the equivalent of Code Interpreter for fact-checking natural language!<br>How should we feel about this as software engineers?<br>On the one hand, this feels like a threat: who needs a programmer if ChatGPT can write code for you?</code> |
* Loss: [MatryoshkaLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
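MatryoshkaLoss trains the leading dimensions of each embedding to work as standalone smaller embeddings, so at inference time you can truncate a vector to any of the `matryoshka_dims` listed above and re-normalize. A minimal sketch of that truncation step (illustrative, not code from this repository):

```python
import math

def truncate_matryoshka(embedding, dim):
    """Keep the first `dim` dimensions and re-normalize — the usual way a
    Matryoshka-trained embedding is shortened at inference time."""
    shortened = embedding[:dim]
    norm = math.sqrt(sum(x * x for x in shortened))
    return [x / norm for x in shortened]

full = [1.0] * 1024                    # stand-in for a full embedding
for dim in (768, 512, 256, 128, 64):   # matryoshka_dims from the config above
    small = truncate_matryoshka(full, dim)
    assert len(small) == dim
    assert abs(sum(x * x for x in small) - 1.0) < 1e-9
```

Smaller prefixes trade a little retrieval quality for much cheaper storage and faster similarity search, which is the point of training with all five dimensionalities weighted equally.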
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters