---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:156
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
  - source_sentence: >-
      What are some potential negative uses of Large Language Models as
      described in the context?
    sentences:
      - >-
        I think this means that, as individual users, we don’t need to feel any
        guilt at all for the energy consumed by the vast majority of our
        prompts. The impact is likely negligible compared to driving a car down
        the street or maybe even watching a video on YouTube.

        Likewise, training. DeepSeek v3 training for less than $6m is a
        fantastic sign that training costs can and should continue to drop.

        For less efficient models I find it useful to compare their energy usage
        to commercial flights. The largest Llama 3 model cost about the same as
        a single digit number of fully loaded passenger flights from New York to
        London. That’s certainly not nothing, but once trained that model can be
        used by millions of people at no extra training cost.
      - >-
        Here’s the sequel to this post: Things we learned about LLMs in 2024.

        Large Language Models

        In the past 24-36 months, our species has discovered that you can take a
        GIANT corpus of text, run it through a pile of GPUs, and use it to
        create a fascinating new kind of software.

        LLMs can do a lot of things. They can answer questions, summarize
        documents, translate from one language to another, extract information
        and even write surprisingly competent code.

        They can also help you cheat at your homework, generate unlimited
        streams of fake content and be used for all manner of nefarious
        purposes.
      - >-
        There’s now a fascinating ecosystem of people training their own models
        on top of these foundations, publishing those models, building
        fine-tuning datasets and sharing those too.

        The Hugging Face Open LLM Leaderboard is one place that tracks these. I
        can’t even attempt to count them, and any count would be out-of-date
        within a few hours.

        The best overall openly licensed LLM at any time is rarely a foundation
        model: instead, it’s whichever fine-tuned community model has most
        recently discovered the best combination of fine-tuning data.

        This is a huge advantage for open over closed models: the closed, hosted
        models don’t have thousands of researchers and hobbyists around the
        world collaborating and competing to improve them.
  - source_sentence: >-
      Why might some question the necessity of the extensive infrastructure
      investments for future AI models?
    sentences:
      - >-
        These abilities are just a few weeks old at this point, and I don’t
        think their impact has been fully felt yet. If you haven’t tried them
        out yet you really should.

        Both Gemini and OpenAI offer API access to these features as well.
        OpenAI started with a WebSocket API that was quite challenging to use,
        but in December they announced a new WebRTC API which is much easier to
        get started with. Building a web app that a user can talk to via voice
        is easy now!

        Prompt driven app generation is a commodity already

        This was possible with GPT-4 in 2023, but the value it provides became
        evident in 2024.
      - >-
        The environmental impact got much, much worse

        The much bigger problem here is the enormous competitive buildout of the
        infrastructure that is imagined to be necessary for these models in the
        future.

        Companies like Google, Meta, Microsoft and Amazon are all spending
        billions of dollars rolling out new datacenters, with a very material
        impact on the electricity grid and the environment. There’s even talk of
        spinning up new nuclear power stations, but those can take decades.

        Is this infrastructure necessary? DeepSeek v3’s $6m training cost and
        the continued crash in LLM prices might hint that it’s not. But would
        you want to be the big tech executive that argued NOT to build out this
        infrastructure only to be proven wrong in a few years’ time?
      - >-
        OpenAI are not the only game in town here. Google released their first
        entrant in the category, gemini-2.0-flash-thinking-exp, on December
        19th.

        Alibaba’s Qwen team released their QwQ model on November 28th—under an
        Apache 2.0 license, and that one I could run on my own machine. They
        followed that up with a vision reasoning model called QvQ on December
        24th, which I also ran locally.

        DeepSeek made their DeepSeek-R1-Lite-Preview model available to try out
        through their chat interface on November 20th.

        To understand more about inference scaling I recommend Is AI progress
        slowing down? by Arvind Narayanan and Sayash Kapoor.
  - source_sentence: >-
      How have US export regulations on GPUs to China influenced training
      optimizations?
    sentences:
      - >-
        Qwen2.5-Coder-32B is an LLM that can code well that runs on my Mac talks
        about Qwen2.5-Coder-32B in November—an Apache 2.0 licensed model!


        I can now run a GPT-4 class model on my laptop talks about running
        Meta’s Llama 3.3 70B (released in December)
      - >-
        Those US export regulations on GPUs to China seem to have inspired some
        very effective training optimizations!

        The environmental impact got better

        A welcome result of the increased efficiency of the models—both the
        hosted ones and the ones I can run locally—is that the energy usage and
        environmental impact of running a prompt has dropped enormously over the
        past couple of years.

        OpenAI themselves are charging 100x less for a prompt compared to the
        GPT-3 days. I have it on good authority that neither Google Gemini nor
        Amazon Nova (two of the least expensive model providers) are running
        prompts at a loss.
      - >-
        The GPT-4 barrier was comprehensively broken

        In my December 2023 review I wrote about how We don’t yet know how to
        build GPT-4—OpenAI’s best model was almost a year old at that point, yet
        no other AI lab had produced anything better. What did OpenAI know that
        the rest of us didn’t?

        I’m relieved that this has changed completely in the past twelve months.
        18 organizations now have models on the Chatbot Arena Leaderboard that
        rank higher than the original GPT-4 from March 2023 (GPT-4-0314 on the
        board)—70 models in total.
  - source_sentence: When was GPT-4 officially released by OpenAI?
    sentences:
      - >-
        The most recent twist, again from December (December was a lot) is live
        video. ChatGPT voice mode now provides the option to share your camera
        feed with the model and talk about what you can see in real time. Google
        Gemini have a preview of the same feature, which they managed to ship
        the day before ChatGPT did.
      - >-
        On the other hand, as software engineers we are better placed to take
        advantage of this than anyone else. We’ve all been given weird coding
        interns—we can use our deep knowledge to prompt them to solve coding
        problems more effectively than anyone else can.

        The ethics of this space remain diabolically complex

        In September last year Andy Baio and I produced the first major story on
        the unlicensed training data behind Stable Diffusion.

        Since then, almost every major LLM (and most of the image generation
        models) have also been trained on unlicensed data.
      - >-
        We don’t yet know how to build GPT-4

        Frustratingly, despite the enormous leaps ahead we’ve had this year, we
        are yet to see an alternative model that’s better than GPT-4.

        OpenAI released GPT-4 in March, though it later turned out we had a
        sneak peek of it in February when Microsoft used it as part of the new
        Bing.

        This may well change in the next few weeks: Google’s Gemini Ultra has
        big claims, but isn’t yet available for us to try out.

        The team behind Mistral are working to beat GPT-4 as well, and their
        track record is already extremely strong considering their first public
        model only came out in September, and they’ve released two significant
        improvements since then.
  - source_sentence: >-
      What is the challenge in building AI personal assistants based on the
      gullibility of language models?
    sentences:
      - >-
        Language Models are gullible. They “believe” what we tell them—what’s in
        their training data, then what’s in the fine-tuning data, then what’s in
        the prompt.

        In order to be useful tools for us, we need them to believe what we feed
        them!

        But it turns out a lot of the things we want to build need them not to
        be gullible.

        Everyone wants an AI personal assistant. If you hired a real-world
        personal assistant who believed everything that anyone told them, you
        would quickly find that their ability to positively impact your life was
        severely limited.
      - >-
        There’s now a fascinating ecosystem of people training their own models
        on top of these foundations, publishing those models, building
        fine-tuning datasets and sharing those too.

        The Hugging Face Open LLM Leaderboard is one place that tracks these. I
        can’t even attempt to count them, and any count would be out-of-date
        within a few hours.

        The best overall openly licensed LLM at any time is rarely a foundation
        model: instead, it’s whichever fine-tuned community model has most
        recently discovered the best combination of fine-tuning data.

        This is a huge advantage for open over closed models: the closed, hosted
        models don’t have thousands of researchers and hobbyists around the
        world collaborating and competing to improve them.
      - >-
        Longer inputs dramatically increase the scope of problems that can be
        solved with an LLM: you can now throw in an entire book and ask
        questions about its contents, but more importantly you can feed in a lot
        of example code to help the model correctly solve a coding problem. LLM
        use-cases that involve long inputs are far more interesting to me than
        short prompts that rely purely on the information already baked into the
        model weights. Many of my tools were built using this pattern.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: cosine_accuracy@1
            value: 0.875
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 1
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 1
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.875
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3333333333333333
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.20000000000000004
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.10000000000000002
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.875
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 1
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 1
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9538662191964322
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.9375
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.9375
            name: Cosine Map@100
---

SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-l
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
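As a rough illustration of the last two stages, here is a minimal sketch in plain Python of CLS pooling followed by L2 normalization. The toy 2-dimensional vectors are stand-ins; the real model works on 1024-dimensional token embeddings produced by the BertModel.

```python
import math

def cls_pool(token_embeddings):
    # pooling_mode_cls_token: keep only the first ([CLS]) token's vector
    return token_embeddings[0]

def l2_normalize(vec):
    # Normalize() module: scale the vector to unit length
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

# Toy "token embeddings" for a 3-token input (stand-in for 1024-dim vectors)
tokens = [[3.0, 4.0], [1.0, 0.0], [0.0, 2.0]]
embedding = l2_normalize(cls_pool(tokens))
print(embedding)  # [0.6, 0.8], unit length
```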

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Mdean77/snow_ft_2025")
# Run inference
sentences = [
    'What is the challenge in building AI personal assistants based on the gullibility of language models?',
    'Language Models are gullible. They “believe” what we tell them—what’s in their training data, then what’s in the fine-tuning data, then what’s in the prompt.\nIn order to be useful tools for us, we need them to believe what we feed them!\nBut it turns out a lot of the things we want to build need them not to be gullible.\nEveryone wants an AI personal assistant. If you hired a real-world personal assistant who believed everything that anyone told them, you would quickly find that their ability to positively impact your life was severely limited.',
    'Longer inputs dramatically increase the scope of problems that can be solved with an LLM: you can now throw in an entire book and ask questions about its contents, but more importantly you can feed in a lot of example code to help the model correctly solve a coding problem. LLM use-cases that involve long inputs are far more interesting to me than short prompts that rely purely on the information already baked into the model weights. Many of my tools were built using this pattern.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
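Because the model ends with a Normalize() module, the cosine similarity that model.similarity computes reduces to a plain dot product of unit-length vectors. A small stdlib sketch of that relationship, using toy 3-dimensional vectors rather than real model outputs:

```python
import math

def l2_normalize(vec):
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Toy stand-ins for two embeddings (the real model emits 1024-dim unit vectors)
a = l2_normalize([1.0, 2.0, 2.0])
b = l2_normalize([2.0, 1.0, 2.0])

# For unit vectors, cosine similarity is just the dot product
print(round(dot(a, b), 4))  # 0.8889
print(round(dot(a, a), 4))  # 1.0 (every vector is maximally similar to itself)
```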

Evaluation

Metrics

Information Retrieval

| Metric               | Value  |
|:---------------------|:-------|
| cosine_accuracy@1    | 0.875  |
| cosine_accuracy@3    | 1.0    |
| cosine_accuracy@5    | 1.0    |
| cosine_accuracy@10   | 1.0    |
| cosine_precision@1   | 0.875  |
| cosine_precision@3   | 0.3333 |
| cosine_precision@5   | 0.2    |
| cosine_precision@10  | 0.1    |
| cosine_recall@1      | 0.875  |
| cosine_recall@3      | 1.0    |
| cosine_recall@5      | 1.0    |
| cosine_recall@10     | 1.0    |
| cosine_ndcg@10       | 0.9539 |
| cosine_mrr@10        | 0.9375 |
| cosine_map@100       | 0.9375 |
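The card does not describe the evaluation split, but these figures are internally consistent with a small query set in which each query has exactly one relevant passage: for example, 8 queries with the relevant passage retrieved at rank 1 for 7 of them and at rank 2 for the last reproduces the accuracy, precision, and MRR values above. A sketch of how such metrics fall out of the ranks (the `ranks` list is hypothetical, chosen only to illustrate the arithmetic, not the actual evaluation data):

```python
# Hypothetical 1-based rank of the single relevant passage for each query
ranks = [1, 1, 1, 1, 1, 1, 1, 2]

def accuracy_at(k, ranks):
    # Fraction of queries whose relevant passage appears in the top k
    return sum(r <= k for r in ranks) / len(ranks)

def precision_at(k, ranks):
    # With one relevant passage per query, at most 1 of the k slots is a hit
    return sum(r <= k for r in ranks) / (k * len(ranks))

def mrr_at(k, ranks):
    # Mean reciprocal rank of the first relevant hit within the top k
    return sum(1.0 / r if r <= k else 0.0 for r in ranks) / len(ranks)

print(accuracy_at(1, ranks))             # 0.875
print(round(precision_at(3, ranks), 4))  # 0.3333
print(mrr_at(10, ranks))                 # 0.9375
```

With a single relevant passage per query, recall@k equals accuracy@k, which is why those pairs of rows in the table match.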

Training Details

Training Dataset

Unnamed Dataset

  • Size: 156 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 156 samples:
    |         | sentence_0 | sentence_1 |
    |:--------|:-----------|:-----------|
    | type    | string     | string     |
    | details | min: 12 tokens, mean: 20.94 tokens, max: 32 tokens | min: 43 tokens, mean: 135.22 tokens, max: 214 tokens |
  • Samples:
    | sentence_0 | sentence_1 |
    |:-----------|:-----------|
    | What advantage does a 64GB Mac have for running models in terms of CPU and GPU memory sharing? | On paper, a 64GB Mac should be a great machine for running models due to the way the CPU and GPU can share the same memory. In practice, many models are released as model weights and libraries that reward NVIDIA’s CUDA over other platforms.<br>The llama.cpp ecosystem helped a lot here, but the real breakthrough has been Apple’s MLX library, “an array framework for Apple Silicon”. It’s fantastic.<br>Apple’s mlx-lm Python library supports running a wide range of MLX-compatible models on my Mac, with excellent performance. mlx-community on Hugging Face offers more than 1,000 models that have been converted to the necessary format. |
    | How has Apple’s MLX library impacted the performance of running machine learning models on Mac? | On paper, a 64GB Mac should be a great machine for running models due to the way the CPU and GPU can share the same memory. In practice, many models are released as model weights and libraries that reward NVIDIA’s CUDA over other platforms.<br>The llama.cpp ecosystem helped a lot here, but the real breakthrough has been Apple’s MLX library, “an array framework for Apple Silicon”. It’s fantastic.<br>Apple’s mlx-lm Python library supports running a wide range of MLX-compatible models on my Mac, with excellent performance. mlx-community on Hugging Face offers more than 1,000 models that have been converted to the necessary format. |
    | How does the ability of models like ChatGPT Code Interpreter to execute and debug code impact the problem of hallucination in code generation? | Except... you can run generated code to see if it’s correct. And with patterns like ChatGPT Code Interpreter the LLM can execute the code itself, process the error message, then rewrite it and keep trying until it works!<br>So hallucination is a much lesser problem for code generation than for anything else. If only we had the equivalent of Code Interpreter for fact-checking natural language!<br>How should we feel about this as software engineers?<br>On the one hand, this feels like a threat: who needs a programmer if ChatGPT can write code for you? |
  • Loss: MatryoshkaLoss with these parameters:
    ```json
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
    ```
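MatryoshkaLoss trains the leading prefixes of the 1024-dimensional embedding (here at sizes 768, 512, 256, 128, and 64, each weighted equally) to remain useful retrieval vectors on their own, so at inference time an embedding can be truncated and renormalized to save memory. A stdlib sketch of that truncate-and-renormalize step on a toy vector (with the real model, recent sentence-transformers versions expose the same idea via the `truncate_dim` argument to `SentenceTransformer`):

```python
import math

def l2_normalize(vec):
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

def truncate_embedding(vec, dim):
    # Keep the leading `dim` coordinates of a Matryoshka embedding, then
    # renormalize so cosine similarity is still a dot product of unit vectors
    return l2_normalize(vec[:dim])

# Toy unit-length "embedding" (real embeddings are 1024-dimensional)
full = l2_normalize([3.0, 4.0, 12.0, 0.0])
small = truncate_embedding(full, 2)
print([round(x, 4) for x in small])  # [0.6, 0.8]
```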
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0   | 16   | 0.9692         |
| 2.0   | 32   | 0.9539         |
| 3.0   | 48   | 0.9692         |
| 3.125 | 50   | 0.9692         |
| 4.0   | 64   | 0.9692         |
| 5.0   | 80   | 0.9692         |
| 6.0   | 96   | 0.9692         |
| 6.25  | 100  | 0.9692         |
| 7.0   | 112  | 0.9539         |
| 8.0   | 128  | 0.9539         |
| 9.0   | 144  | 0.9539         |
| 9.375 | 150  | 0.9539         |
| 10.0  | 160  | 0.9539         |

Framework Versions

  • Python: 3.13.0
  • Sentence Transformers: 3.4.1
  • Transformers: 4.48.3
  • PyTorch: 2.6.0
  • Accelerate: 1.3.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

MatryoshkaLoss

```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```