Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!

Training follows the recipe described in https://github.com/huggingface/alignment-handbook/issues/45#issuecomment-1845598205

Derived from HuggingFaceH4/mistral-7b-sft-beta

✨ Finetune for Free

All notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model that can be exported to GGUF or vLLM, or uploaded to Hugging Face.
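Most of the "70% less memory" headline comes from storing weights in 4 bits instead of 16. A minimal back-of-envelope sketch (weights only; it ignores LoRA adapters, optimizer state, and activations, and is our illustration rather than Unsloth's own accounting):

```python
# Rough weight-memory arithmetic for a 7B-parameter model.
def weight_memory_gb(n_params: float, bits_per_param: float) -> float:
    """Memory needed to store the weights alone, in GB."""
    return n_params * bits_per_param / 8 / 1e9

n = 7e9                            # ~7B parameters (Mistral 7b, Llama-2 7b)
fp16 = weight_memory_gb(n, 16)     # 14.0 GB in 16-bit
four_bit = weight_memory_gb(n, 4)  # 3.5 GB in 4-bit
saving = 1 - four_bit / fp16       # 0.75, i.e. 75% less for the weights
print(f"fp16: {fp16:.1f} GB, 4-bit: {four_bit:.1f} GB, saving: {saving:.0%}")
```

The measured savings in the table below are smaller than this 75% ceiling because activations and optimizer state are not quantized.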

| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Gemma 7b | ▶️ Start on Colab | 2.4x faster | 58% less |
| Mistral 7b | ▶️ Start on Colab | 2.2x faster | 62% less |
| Llama-2 7b | ▶️ Start on Colab | 2.2x faster | 43% less |
| TinyLlama | ▶️ Start on Colab | 3.9x faster | 74% less |
| CodeLlama 34b A100 | ▶️ Start on Colab | 1.9x faster | 27% less |
| Mistral 7b 1xT4 | ▶️ Start on Kaggle | 5x faster* | 62% less |
| DPO - Zephyr | ▶️ Start on Colab | 1.9x faster | 19% less |
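The "bnb-4bit" in this model's name refers to bitsandbytes 4-bit quantization. As a rough illustration of the underlying idea — quantizing weights blockwise against each block's absolute maximum — here is a simplified uniform-absmax sketch (the real NF4 format uses a fixed non-uniform codebook and packed storage; all names here are ours):

```python
# Simplified blockwise absmax quantization to signed 4-bit integers.
# Each block is scaled by its largest magnitude, then rounded to [-7, 7].
def quantize_block(block):
    scale = max(abs(x) for x in block) or 1.0
    q = [round(x / scale * 7) for x in block]
    return q, scale

def dequantize_block(q, scale):
    # Recover approximate float weights from the 4-bit codes.
    return [v / 7 * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.07]
q, s = quantize_block(weights)      # q = [2, -7, 5, 1], s = 0.5
approx = dequantize_block(q, s)     # close to the originals
```

Each 16-bit weight shrinks to a 4-bit code plus a per-block scale, which is where the memory savings in the table come from; rounding error is bounded by half a quantization step per weight.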
Model size: 3.86B params (Safetensors; tensor types F32 · BF16 · U8)

Model tree for unsloth/zephyr-sft-bnb-4bit: 6 adapters, 68 finetunes, 4 quantizations