To load the model and tokenizer:

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

add_special_tokens = False

# Load model & tokenizer
model_path = "d4nieldev/gemma-3-4b-it-qpl-decomposer"
model = AutoPeftModelForCausalLM.from_pretrained(model_path).cuda()
model = model.eval()
tokenizer = AutoTokenizer.from_pretrained(model_path)
```
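Once the model and tokenizer are loaded, inference can be run through the tokenizer's chat template. The helper below is a minimal sketch: the function name `decompose`, the example question, and the generation parameters (`max_new_tokens`, greedy decoding) are illustrative assumptions, not taken from the model card.

```python
# Sketch of a possible inference call; prompt format and generation
# settings here are assumptions, not documented by the model card.
def decompose(model, tokenizer, question):
    """Format the question with the chat template and decode the model's reply."""
    messages = [{"role": "user", "content": question}]
    inputs = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Strip the prompt tokens so only the generated continuation remains.
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Usage would then be `decompose(model, tokenizer, "Which airports are in London?")`, returning the model's decomposition of the question as a string.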
