Finetune Mistral, Gemma, and Llama models 2-5x faster with 70% less memory via Unsloth!

We have a Google Colab Tesla T4 notebook for Gemma 2b here: https://colab.research.google.com/drive/15gGm7x_jTm017_Ic8e317tdIpDG53Mtu?usp=sharing

✨ Finetune for Free

All notebooks are beginner-friendly! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF or vLLM, or uploaded to Hugging Face. A minimal sketch of what the notebooks do follows the table below.

| Unsloth supports   | Free Notebooks     | Performance | Memory use |
|--------------------|--------------------|-------------|------------|
| Gemma 7b           | ▶️ Start on Colab  | 2.4x faster | 58% less   |
| Mistral 7b         | ▶️ Start on Colab  | 2.2x faster | 62% less   |
| Llama-2 7b         | ▶️ Start on Colab  | 2.2x faster | 43% less   |
| TinyLlama          | ▶️ Start on Colab  | 3.9x faster | 74% less   |
| CodeLlama 34b A100 | ▶️ Start on Colab  | 1.9x faster | 27% less   |
| Mistral 7b 1xT4    | ▶️ Start on Kaggle | 5x faster*  | 62% less   |
| DPO - Zephyr       | ▶️ Start on Colab  | 1.9x faster | 19% less   |
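
As a rough, illustrative sketch of the workflow the notebooks automate, this repo's pre-quantized 4-bit Gemma 2b checkpoint (unsloth/gemma-2b-bnb-4bit) can be loaded with Unsloth's `FastLanguageModel` and wrapped with LoRA adapters before training. The hyperparameters below are placeholders, and argument names may differ between Unsloth versions, so treat this as an outline rather than the notebook code:

```python
# Illustrative outline only; hyperparameters are placeholders, and the
# maintained, up-to-date code lives in the notebooks linked above.
from unsloth import FastLanguageModel

# Load the pre-quantized 4-bit Gemma 2b checkpoint from this repo.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2b-bnb-4bit",
    max_seq_length=2048,
    dtype=None,          # auto-detect: float16 on T4, bfloat16 on newer GPUs
    load_in_4bit=True,   # 4-bit loading is what cuts memory use
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
)
```

From here the notebooks pass `model` and `tokenizer` to a trainer (for example TRL's `SFTTrainer`) together with your dataset, then export the finetuned model to GGUF or vLLM, or push it to the Hugging Face Hub.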