gemma-3-270m-it-web-agent - Fine-tuned
This repository contains three variants of the model:
- LoRA adapters → ArunKr/gemma-3-270m-it-web-agent-lora
- Merged FP16 weights → ArunKr/gemma-3-270m-it-web-agent-16bit
- GGUF quantizations → ArunKr/gemma-3-270m-it-web-agent-gguf
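To use the adapter repository directly instead of the merged weights, a minimal loading sketch with peft (the peft dependency and this loading pattern are assumptions, not something this card prescribes):

# Hypothetical sketch: attach the LoRA adapters to the base model with peft.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-3-270m-it")
model = PeftModel.from_pretrained(base, "ArunKr/gemma-3-270m-it-web-agent-lora")
tok = AutoTokenizer.from_pretrained("unsloth/gemma-3-270m-it")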
Training
- Base model: unsloth/gemma-3-270m-it
- Dataset: ArunKr/gui_grounding_dataset-100
- Method: LoRA fine-tuning with Unsloth
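The exact training script is not published; the following is a minimal Unsloth sketch of the setup described above, where the sequence length, LoRA rank, alpha, and target modules are illustrative assumptions rather than the actual configuration:

from unsloth import FastLanguageModel

# Load the base model (hyperparameters below are assumed, not from this card).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-270m-it",
    max_seq_length=2048,  # assumed value
)

# Wrap it with LoRA adapters; rank/alpha/modules are illustrative defaults.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# Training itself would then run via a standard SFT loop (e.g. trl's SFTTrainer)
# over ArunKr/gui_grounding_dataset-100.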
Quantizations
We provide f16, bf16, f32, and q8_0 GGUF files for llama.cpp / Ollama.
Usage Example
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("ArunKr/gemma-3-270m-it-web-agent-16bit")
model = AutoModelForCausalLM.from_pretrained("ArunKr/gemma-3-270m-it-web-agent-16bit")

# generate() returns token IDs, so decode them back to text before printing.
inputs = tok("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(outputs[0], skip_special_tokens=True))
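Since the base model is instruction-tuned, the chat template is usually the better entry point than a raw prompt; a variation on the example above, where the message content is purely illustrative:

# Build the prompt through the model's chat template (message text assumed).
messages = [{"role": "user", "content": "Click the login button on the page"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
print(tok.decode(outputs[0], skip_special_tokens=True))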
Ollama Example
ollama run hf.co/ArunKr/gemma-3-270m-it-web-agent-gguf:<file_name>.gguf