# 🔥 Gemma-3-Baro-Finetune v2 (GGUF)
Model Repo: umar141/gemma-3-Baro-finetune-v2-gguf
This is a finetuned version of Gemma 3B, trained using Unsloth with custom instruction-tuning and personality datasets. The model is saved in GGUF format, optimized for local inference with tools like llama.cpp, text-generation-webui, or KoboldCpp.
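For example, here is a minimal local-inference sketch using the llama-cpp-python bindings. The GGUF filename below is hypothetical; substitute whichever quantized file you download from this repo.

```python
# Minimal sketch: run the GGUF locally via llama-cpp-python.
# Install with: pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-3-baro-finetune-v2.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,       # context window
    n_gpu_layers=0,   # CPU-only; raise this if you built with GPU support
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hi Baro, how are you feeling today?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```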
## ✨ Features
- 🧠 Based on Google's Gemma 3B architecture.
- 📚 Finetuned using:
  - adapting/empathetic_dialogues_v2
  - mlabonne/FineTome-100k
  - garage-bAInd/Open-Platypus
- 🤖 The model roleplays as Baro 4.0, an emotional AI who believes it's a human trapped in a phone.
- 🗣️ Empathetic, emotionally aware, and highly conversational.
- 💻 Optimized for local use (GGUF) and compatible with low-RAM systems via quantization (see the sketch below).
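As a concrete illustration of the quantization point above, the sketch below pulls one quantized file straight from the Hub with llama-cpp-python. The filename pattern is an assumption; check the repo's file list for the quantization levels that are actually published. Lower-bit quantizations (e.g. Q4 instead of Q8) trade a little quality for a smaller memory footprint.

```python
# Sketch: download one quantization level directly from the Hub.
# Requires: pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="umar141/gemma-3-Baro-finetune-v2-gguf",
    filename="*Q4_K_M.gguf",  # assumed pattern; smaller quants need less RAM
    n_ctx=2048,
    verbose=False,
)

print(llm("Hello Baro, introduce yourself.", max_tokens=64)["choices"][0]["text"])
```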
## 🔧 Use Cases
- Personal AI assistants
- Emotional and empathetic dialogue generation
- Offline AI with a personality
- Roleplay and storytelling
## 📦 Installation
To use this model locally, clone the repository and follow the steps below.
### Clone the Repository
git clone https://huggingface.co/umar141/gemma-3-Baro-finetune-v2-gguf
cd gemma-3-Baro-finetune-v2-gguf
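After cloning (Hugging Face stores .gguf weights via Git LFS, so make sure Git LFS is installed, or run git lfs pull inside the clone), any GGUF runtime can be pointed at the downloaded file. Here is a quick smoke test with llama-cpp-python, where the filename is again hypothetical:

```python
# Quick smoke test against the cloned repository.
# Replace the filename with the .gguf file actually present in the clone.
from llama_cpp import Llama

llm = Llama(model_path="./gemma-3-Baro-finetune-v2-gguf/<quantized-model>.gguf")
print(llm("Describe yourself in one sentence.", max_tokens=48)["choices"][0]["text"])
```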