---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
datasets:
  - mlfoundations-dev/oh-dcft-v3.1-claude-3-5-sonnet-20241022
  - Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
  - nothingiisreal/Claude-3-Opus-Instruct-15K
language:
  - en
license: apache-2.0
pipeline_tag: text-generation
tags:
  - mistral
  - claude
quantized_by: ayan4m1
inference: true
fine-tuning: true
library_name: transformers
---

# Claudette-7B - A Mistral fine-tune with Claude data

*(Model image: anime girl with cyan hair)*

Using unsloth for fine-tuning:

```
==((====))==  Unsloth 2025.2.4: Fast Llama patching. Transformers: 4.48.2.
   \\   /|    GPU: NVIDIA A100-SXM4-40GB. Max memory: 39.557 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.5.1+cu124. CUDA: 8.0. CUDA Toolkit: 12.4. Triton: 3.1.0
\        /    Bfloat16 = TRUE. FA [Xformers = 0.0.29. FA2 = False]
 "-____-"     Free Apache license: http://github.com/unslothai/unsloth
```

Original model: https://huggingface.co/unsloth/mistral-7b-instruct-v0.3-bnb-4bit

This model was fine-tuned on three Claude-sourced datasets containing roughly 200k question/answer pairs in total.
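
A minimal sketch of pulling these datasets from the Hub with the `datasets` library; the `train` split name is an assumption, and the actual mixing and formatting used for training is not documented here:

```python
from datasets import load_dataset

# The three Claude-sourced datasets listed in the model metadata.
repos = [
    "mlfoundations-dev/oh-dcft-v3.1-claude-3-5-sonnet-20241022",
    "Gryphe/Sonnet3.5-SlimOrcaDedupCleaned",
    "nothingiisreal/Claude-3-Opus-Instruct-15K",
]

for repo in repos:
    ds = load_dataset(repo, split="train")  # split name assumed
    print(f"{repo}: {len(ds)} rows")
```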

## Training loss

*(Training loss graph)*

## Prompt format

```
<s>[INST]{prompt}[/INST]
```
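
A minimal inference sketch with `transformers`, applying the prompt format above. The repository id and generation parameters are assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ayan4m1/Claudette-7B"  # assumed from the repo shown above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The prompt already contains <s>, so skip the tokenizer's own special tokens.
prompt = "<s>[INST]Explain LoRA fine-tuning in two sentences.[/INST]"
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```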

## Comparison

In my non-exhaustive testing, this model performs as well as or better than Llama3.1-8B-Sonnet in half the execution time.

## Release History

- v0.1 - [2025-02-12] Initial release, trained to 1024 steps

## Credits

Thanks to Mistral AI for the base model, and to mlfoundations-dev, Gryphe, and nothingiisreal for providing the data used to create this fine-tune.