TiTan-Qwen2.5-0.5B

A Qwen 2.5 model fine-tuned to generate conversation titles and tags.

Model Details

This model is a fine-tuned version of Qwen/Qwen2.5-0.5B-Instruct using the Unsloth framework with LoRA (Low-Rank Adaptation) for efficient training.

  • Developed by: theprint
  • Model type: Causal Language Model (Fine-tuned with LoRA)
  • Language: en
  • License: apache-2.0
  • Base model: Qwen/Qwen2.5-0.5B-Instruct
  • Fine-tuning method: LoRA with rank 128

Intended Use

Title and tag generation.
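
The card does not specify a fixed prompt template for this task, so the snippet below is only an illustrative assumption of how you might ask for a title and tags; the resulting prompt can be passed to either loading path shown under Usage.

# Hypothetical prompt; the exact instruction wording expected by the model is not documented.
prompt = (
    "Generate a short title and a list of tags for the following conversation:\n\n"
    "User: How do I center a div in CSS?\n"
    "Assistant: You can use flexbox with justify-content and align-items set to center."
)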

GGUF Quantized Versions

Quantized GGUF versions are available in the theprint/TiTan-Qwen2.5-0.5B-GGUF repo.

  • TiTan-Qwen2.5-0.5B-f16.gguf (948.1 MB) - 16-bit float (original precision, largest file)
  • TiTan-Qwen2.5-0.5B-q3_k_m.gguf (339.0 MB) - 3-bit quantization (medium quality)
  • TiTan-Qwen2.5-0.5B-q4_k_m.gguf (379.4 MB) - 4-bit quantization (medium, recommended for most use cases)
  • TiTan-Qwen2.5-0.5B-q5_k_m.gguf (400.6 MB) - 5-bit quantization (medium, good quality)
  • TiTan-Qwen2.5-0.5B-q6_k.gguf (482.3 MB) - 6-bit quantization (high quality)
  • TiTan-Qwen2.5-0.5B-q8_0.gguf (506.5 MB) - 8-bit quantization (very high quality)
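
To fetch one of these files programmatically, the huggingface_hub client can download it directly from the GGUF repo. The repo id comes from this card; the exact filename path inside that repo is an assumption.

from huggingface_hub import hf_hub_download

# Download the recommended q4_k_m quant from the GGUF repo (filename path is assumed).
gguf_path = hf_hub_download(
    repo_id="theprint/TiTan-Qwen2.5-0.5B-GGUF",
    filename="TiTan-Qwen2.5-0.5B-q4_k_m.gguf",
)
print(gguf_path)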

Training Details

Training Data

The titles-n-tags dataset was created specifically for fine-tuning models on titling and tagging.

  • Dataset: theprint/titles-n-tags-alpaca
  • Format: alpaca (instruction/input/output records; see the sketch below)
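
For reference, Alpaca-formatted records carry an instruction, an optional input, and a target output. The record below is hypothetical and only illustrates the shape of the data, not an actual row from theprint/titles-n-tags-alpaca.

# Hypothetical Alpaca-style record (field contents are invented for illustration).
example_record = {
    "instruction": "Generate a title and tags for the following conversation.",
    "input": "User: How do I center a div in CSS? Assistant: Use flexbox ...",
    "output": "Title: Centering a div with CSS | Tags: css, flexbox, layout",
}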

Training Procedure

  • Training epochs: 2
  • LoRA rank: 128
  • Learning rate: 0.0001
  • Batch size: 6
  • Framework: Unsloth + transformers + PEFT
  • Hardware: NVIDIA RTX 5090
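
The hyperparameters above map fairly directly onto an Unsloth + TRL training loop. The sketch below shows how a comparable run could be configured; the target modules, lora_alpha, dataset formatting, and other unlisted settings are assumptions, and the exact SFTTrainer arguments vary by trl version.

from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base model for memory-efficient LoRA training.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-0.5B-Instruct",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters with rank 128 (as listed above); target modules and alpha are assumptions.
model = FastLanguageModel.get_peft_model(
    model,
    r=128,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Alpaca records must be rendered into a single text field before supervised fine-tuning.
dataset = load_dataset("theprint/titles-n-tags-alpaca", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes the dataset has (or has been mapped to) a "text" column
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=6,
        num_train_epochs=2,
        learning_rate=1e-4,
        output_dir="outputs",
    ),
)
trainer.train()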

Usage

from unsloth import FastLanguageModel
import torch

# Load model and tokenizer
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="theprint/TiTan-Qwen2.5-0.5B",
    max_seq_length=4096,
    dtype=None,
    load_in_4bit=True,
)

# Enable inference mode
FastLanguageModel.for_inference(model)

# Example usage
inputs = tokenizer(["Your prompt here"], return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)

Alternative Usage (Standard Transformers)

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "theprint/TiTan-Qwen2.5-0.5B",
    torch_dtype=torch.float16,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("theprint/TiTan-Qwen2.5-0.5B")

# Example usage
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Your question here"}
]

inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)

Using with llama.cpp

# Download a quantized version (q4_k_m recommended for most use cases)
wget https://huggingface.co/theprint/TiTan-Qwen2.5-0.5B/resolve/main/gguf/TiTan-Qwen2.5-0.5B-q4_k_m.gguf

# Run with llama.cpp
./llama.cpp/main -m TiTan-Qwen2.5-0.5B-q4_k_m.gguf -p "Your prompt here" -n 256
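
If you would rather drive the GGUF file from Python, the llama-cpp-python bindings can load the same quant. This is a minimal sketch; the chat prompt below is an assumption about how a title/tag request might be phrased.

from llama_cpp import Llama

# Load the quantized model (path assumes the file downloaded above is in the working directory).
llm = Llama(model_path="TiTan-Qwen2.5-0.5B-q4_k_m.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Generate a title and tags for this conversation: ..."}
    ],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])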

Limitations

The model may generate incorrect or misleading information; generated titles and tags should be reviewed before use.

Citation

If you use this model, please cite:

@misc{titan_qwen2.5_0.5b,
  title={TiTan-Qwen2.5-0.5B: Fine-tuned Qwen/Qwen2.5-0.5B-Instruct},
  author={theprint},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/theprint/TiTan-Qwen2.5-0.5B}
}
