gemma-3-270m-it-web-agent - Fine-tuned

This repository contains three variants of the fine-tuned model.

Training

  • Base model: unsloth/gemma-3-270m-it
  • Dataset: ArunKr/gui_grounding_dataset-100
  • Method: LoRA fine-tuning with Unsloth
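LoRA keeps the pretrained weight matrix W frozen and learns only a low-rank update BA on top of it, which is what makes fine-tuning a small model like this cheap. A minimal NumPy sketch of the idea (toy sizes, not the actual training code):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                          # hidden size and LoRA rank (toy values)
W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized

x = rng.normal(size=(d,))

# Forward pass: base output plus the low-rank update B @ (A @ x).
y = W @ x + B @ (A @ x)

# Because B starts at zero, the adapted layer initially matches the base layer.
assert np.allclose(y, W @ x)
```

During training only A and B receive gradients; at export time the update BA can be merged back into W, which is why the merged 16-bit and GGUF variants need no extra adapter files.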

Quantizations

We provide f16, bf16, f32, and q8_0 GGUF files for llama.cpp / Ollama.

Usage Example

from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("ArunKr/gemma-3-270m-it-web-agent-16bit")
model = AutoModelForCausalLM.from_pretrained("ArunKr/gemma-3-270m-it-web-agent-16bit")

inputs = tok("Hello", return_tensors="pt")
# generate() returns token IDs; decode them back to text before printing.
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(output_ids[0], skip_special_tokens=True))
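Since this is an instruction-tuned Gemma model, prompts work best in the Gemma chat format rather than as raw strings; in practice `tok.apply_chat_template` builds this for you. A small sketch of the format it produces (the helper function is illustrative, not part of any library):

```python
def build_gemma_prompt(user_message: str) -> str:
    # Gemma-style chat turns: <start_of_turn>{role}\n{text}<end_of_turn>\n
    # followed by an open model turn for the assistant to complete.
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("Click the 'Sign in' button")
print(prompt)
```

Passing this prompt (or the output of `tok.apply_chat_template`) to `generate` instead of a bare string keeps the model in the chat distribution it was fine-tuned on.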

Ollama Example

ollama run ArunKr/gemma-3-270m-it-web-agent-gguf:<file_name>.gguf


GGUF

  • Model size: 268M params
  • Architecture: gemma3

Model tree

  • GGUF repository: ArunKr/gemma-3-270m-it-web-agent-gguf
  • Dataset used to train: ArunKr/gui_grounding_dataset-100