---
base_model: google/gemma-3-4b-it
library_name: peft
pipeline_tag: text-generation
language: en
license: apache-2.0
tags:
- lora
- sft
- transformers
- trl
- unsloth
- fine-tuned
datasets:
- theprint/Zeth
---

# Zeth-Gemma3-4B

A fine-tuned Gemma3 4B model, specialized in pragmatic empathy, or perhaps it is empathic pragmatism?

## Model Details

This model is a fine-tuned version of google/gemma-3-4b-it, trained with the Unsloth framework using LoRA (Low-Rank Adaptation) for efficient training.

- **Developed by:** theprint
- **Model type:** Causal Language Model (fine-tuned with LoRA)
- **Language:** en
- **License:** apache-2.0
- **Base model:** google/gemma-3-4b-it
- **Fine-tuning method:** LoRA with rank 128

## Intended Use

Conversation, brainstorming, and general instruction following.

## GGUF Quantized Versions

Quantized GGUF versions are available at [theprint/Zeth-Gemma3-4B-GGUF](https://huggingface.co/theprint/Zeth-Gemma3-4B-GGUF):

- `Zeth-Gemma3-4B-f16.gguf` (8688.3 MB) - 16-bit float (original precision, largest file)
- `Zeth-Gemma3-4B-q3_k_m.gguf` (2276.3 MB) - 3-bit quantization (medium quality)
- `Zeth-Gemma3-4B-q4_k_m.gguf` (2734.6 MB) - 4-bit quantization (medium quality, recommended for most use cases)
- `Zeth-Gemma3-4B-q5_k_m.gguf` (3138.7 MB) - 5-bit quantization (good quality)
- `Zeth-Gemma3-4B-q6_k.gguf` (3568.1 MB) - 6-bit quantization (high quality)
- `Zeth-Gemma3-4B-q8_0.gguf` (4619.2 MB) - 8-bit quantization (very high quality)

### Using with llama.cpp

```bash
# Download a quantized version (q4_k_m recommended for most use cases)
wget https://huggingface.co/theprint/Zeth-Gemma3-4B/resolve/main/gguf/Zeth-Gemma3-4B-q4_k_m.gguf

# Run with llama.cpp (the binary is named llama-cli in current builds; older builds call it main)
./llama.cpp/llama-cli -m Zeth-Gemma3-4B-q4_k_m.gguf -p "Your prompt here" -n 256
```

## Training Details

### Training Data

The Zeth dataset was created specifically for fine-tuning models on empathic explanation. It was built by taking existing datasets and rewording the replies to match the Zeth style.

- **Dataset:** theprint/Zeth
- **Format:** alpaca (see the example record below)

### Training Procedure

- **Training epochs:** 3
- **LoRA rank:** 128
- **Learning rate:** 0.0002
- **Batch size:** 4
- **Framework:** Unsloth + transformers + PEFT
- **Hardware:** NVIDIA RTX 5090
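The `alpaca` format noted above consists of instruction/input/output records. The record below is purely illustrative (it is not an actual entry from theprint/Zeth); only the field names follow the standard Alpaca schema:

```python
# Illustrative Alpaca-format record; the text is made up, not from the dataset
record = {
    "instruction": "Explain why the sky is blue.",
    "input": "",  # optional context; empty for instruction-only records
    "output": "Great question, and one many people wonder about. ...",
}
```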
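The card does not include training code, so the following is only a minimal sketch of how a comparable Unsloth + TRL SFT run could be wired up with the hyperparameters listed above. The LoRA alpha, the target modules, and the assumption that each record is pre-rendered to a single `text` column are guesses, not details from the card, and the `SFTTrainer` arguments shown follow the older TRL signature (newer TRL versions move them into `SFTConfig`):

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit for memory-efficient training
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="google/gemma-3-4b-it",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank 128 matches the card, while alpha and
# target_modules are assumptions (the card does not state them)
model = FastLanguageModel.get_peft_model(
    model,
    r=128,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("theprint/Zeth", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes records are rendered to one text column
    args=TrainingArguments(
        per_device_train_batch_size=4,  # batch size 4, per the card
        num_train_epochs=3,             # 3 epochs, per the card
        learning_rate=2e-4,             # 0.0002, per the card
        output_dir="outputs",
    ),
)
trainer.train()
```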
## Usage

```python
from unsloth import FastLanguageModel
import torch

# Load model and tokenizer
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="theprint/Zeth-Gemma3-4B",
    max_seq_length=4096,
    dtype=None,
    load_in_4bit=True,
)

# Enable inference mode
FastLanguageModel.for_inference(model)

# Example usage (inputs must be on the same device as the model)
inputs = tokenizer(["Your prompt here"], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

### Alternative Usage (Standard Transformers)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "theprint/Zeth-Gemma3-4B",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("theprint/Zeth-Gemma3-4B")

# Example usage
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Your question here"},
]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```

## Limitations

Like any language model, it may hallucinate or provide incorrect information; verify important outputs independently.

## Citation

If you use this model, please cite:

```bibtex
@misc{zeth_gemma3_4b,
  title={Zeth-Gemma3-4B: Fine-tuned google/gemma-3-4b-it},
  author={theprint},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/theprint/Zeth-Gemma3-4B}
}
```

## Acknowledgments

- Base model: [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it)
- Training dataset: [theprint/Zeth](https://huggingface.co/datasets/theprint/Zeth)
- Fine-tuning framework: [Unsloth](https://github.com/unslothai/unsloth)
- Quantization: [llama.cpp](https://github.com/ggerganov/llama.cpp)