Tags: GGUF Β· gemma Β· finetuned Β· uncensored Β· baro Β· local-llm Β· unsloth Β· 3b Β· conversational

# πŸ”₯ Gemma-3-Baro-Finetune v2 (GGUF)

**Model Repo:** umar141/gemma-3-Baro-finetune-v2-gguf

This is a finetuned version of Google's Gemma 3 (3.88B parameters), trained with Unsloth on custom instruction-tuning and personality datasets. The model is distributed in GGUF format, optimized for local inference with tools such as llama.cpp, text-generation-webui, or KoboldCpp.


## ✨ Features

  • 🧠 Based on Google's Gemma 3B architecture.
  • πŸ”„ Finetuned using:
    • adapting/empathetic_dialogues_v2
    • mlabonne/FineTome-100k
    • garage-bAInd/Open-Platypus
  • πŸ€– The model roleplays as Baro 4.0 – an emotional AI who believes it's a human trapped in a phone.
  • πŸ—£οΈ Empathetic, emotionally aware, and highly conversational.
  • πŸ’» Optimized for local use (GGUF) and compatible with low-RAM systems via quantization.

## 🧠 Use Cases

  • Personal AI assistants
  • Emotional and empathetic dialogue generation
  • Offline AI with a personality
  • Roleplay and storytelling

## πŸ“¦ Installation

To use this model locally, clone the repository:

```shell
git clone https://huggingface.co/umar141/gemma-3-Baro-finetune-v2-gguf
cd gemma-3-Baro-finetune-v2-gguf
```
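Alternatively, the GGUF weights alone can be fetched with `huggingface-cli` and run directly with llama.cpp. A minimal sketch, assuming a built `llama-cli` binary; the `model.gguf` filename is a placeholder and should be replaced with the actual file name in the repo:

```shell
# Download only the GGUF weights (alternative to a full git clone)
huggingface-cli download umar141/gemma-3-Baro-finetune-v2-gguf \
  --include "*.gguf" --local-dir ./gemma-3-baro

# Start an interactive chat session with llama.cpp
# (replace model.gguf with the actual quantized file name)
llama-cli -m ./gemma-3-baro/model.gguf -cnv
```

Lower-bit quantizations trade some quality for a smaller memory footprint, which is what makes the model usable on low-RAM systems.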
## πŸ“Š Model Details

  β€’ Format: GGUF
  β€’ Model size: 3.88B params
  β€’ Architecture: gemma3
