Significant extra memory usage compared to the other 27B 4-bit quant

#4
by FlorinAndrei - opened

Following the Gemma 3 fine-tuning notebook, but with my own dataset. Using an RTX 3090 with 24 GB VRAM.

If I use unsloth/gemma-3-27b-it-bnb-4bit, then VRAM usage is around 71% and I can complete fine-tuning.

If I use unsloth/gemma-3-27b-it-unsloth-bnb-4bit (this model), then VRAM usage is close to 99% and training eventually crashes.
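For anyone who wants to reproduce the comparison, here is a minimal sketch of what I'm measuring. The model names are the two checkpoints above; max_seq_length=2048 is an assumption, and torch.cuda.memory_allocated() is only a rough proxy at load time, since actual training peaks are higher:

import torch
from unsloth import FastModel

# Load one checkpoint per fresh process so allocations don't accumulate.
MODEL = "unsloth/gemma-3-27b-it-unsloth-bnb-4bit"  # or "unsloth/gemma-3-27b-it-bnb-4bit"

model, tokenizer = FastModel.from_pretrained(
    model_name=MODEL,
    max_seq_length=2048,  # assumed value; adjust to your run
    load_in_4bit=True,
)

print(f"{MODEL}: {torch.cuda.memory_allocated() / 2**30:.2f} GiB allocated after load")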

Unsloth AI org

Oh weird, our dynamic quants are slightly bigger than the standard ones.

Did you turn on use_gradient_checkpointing = "unsloth"?

@shimmyshimmer

No, I started with the "official" Gemma 3 notebook and added a few bits of my own, but the basic logic is the same. The original notebook does not have that option enabled, and I wasn't sure whether it's a good idea to use it with Gemma 3.

from unsloth import FastModel

# "model" comes from FastModel.from_pretrained(...) earlier in the notebook.
model = FastModel.get_peft_model(
    model,
    finetune_vision_layers=False,  # text-only fine-tune
    finetune_language_layers=True,
    finetune_attention_modules=True,
    finetune_mlp_modules=True,
    r=8,  # LoRA rank
    lora_alpha=8,
    lora_dropout=0,
    bias="none",
    random_state=42,
    use_rslora=True,  # rank-stabilized LoRA
)
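For reference, the version with that flag enabled would just add one argument (a sketch based on Unsloth's other notebooks, where the string "unsloth" selects their offloaded gradient checkpointing):

model = FastModel.get_peft_model(
    model,
    finetune_vision_layers=False,
    finetune_language_layers=True,
    finetune_attention_modules=True,
    finetune_mlp_modules=True,
    r=8,
    lora_alpha=8,
    lora_dropout=0,
    bias="none",
    random_state=42,
    use_rslora=True,
    # Offloads activations to system RAM to cut VRAM usage, at some speed cost.
    use_gradient_checkpointing="unsloth",
)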

Do you think it's worth trying that option with Gemma 3?

I use it with Llama 3.1 lol, but that's a much smaller model to begin with, and my 3090 is nowhere near full capacity there.
