# Model Card for Qwen2.5-MentalChat-16K

## Model Details
This model is a fine-tuned version of Qwen2.5-0.5B-Instruct, optimized for empathetic and supportive conversations in the mental health domain. It was trained on the ShenLab/MentalChat16K dataset, which contains over 16,000 counseling-style Q&A examples combining paraphrased real clinical interviews with synthetic mental health dialogues. The model is designed to understand and respond to emotionally nuanced prompts related to stress, anxiety, relationships, and personal well-being.
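The model is trained on an Instruction/Question/Response prompt template (the same one shown in the getting-started snippet below). A minimal sketch of how one dataset record might be rendered into that template follows; the field names (`instruction`, `input`, `output`) are assumptions about the MentalChat16K schema, not confirmed from the dataset card.

```python
# Sketch: format one MentalChat16K-style record into the prompt template this
# model was tuned on. Field names ("instruction", "input", "output") are
# assumptions about the dataset schema, used here for illustration only.
def format_example(record: dict) -> str:
    """Assemble an Instruction/Question/Response training prompt."""
    return (
        f"### Instruction:\n{record['instruction']}\n\n"
        f"### Question:\n{record['input']}\n\n"
        f"### Response:\n{record['output']}"
    )

example = {
    "instruction": "You are a helpful mental health counselling assistant...",
    "input": "I have been feeling overwhelmed at work lately.",
    "output": "It sounds like you are carrying a heavy load right now...",
}
print(format_example(example))
```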
### Model Description
- Language(s) (NLP): English
- License: MIT
- Finetuned from model: unsloth/Qwen2.5-0.5B-Instruct
- Dataset: ShenLab/MentalChat16K
## Uses
This model is intended for research and experimentation in AI-driven mental health support. Key use cases include:
- Mental health chatbot prototypes
- Empathy-focused dialogue agents
- Benchmarking LLMs on emotional intelligence and counseling-style prompts
- Educational or training tools in psychology or mental health communication
This model is NOT intended for clinical diagnosis, therapy, or real-time intervention. It must not replace licensed mental health professionals.
## Bias, Risks, and Limitations
**Biases:**
- The real interview data is biased toward caregivers (mostly White, female, U.S.-based), which may affect the model’s cultural and demographic generalizability.
- The synthetic dialogues are generated by GPT-3.5, which may introduce linguistic and cultural biases from its pretraining.
**Limitations:**
- The base model, Qwen2.5-0.5B-Instruct, is a small model (0.5B parameters), limiting depth of reasoning and nuanced understanding.
- Not suitable for handling acute mental health crises or emergency counseling.
- Responses may lack therapeutic rigor or miss subtle psychological cues.
- May produce hallucinated or inaccurate mental health advice.
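Because the model is unsuitable for acute crises, any application built on it should route crisis language to human help rather than to the model. A minimal, deliberately naive pre-generation screen is sketched below; the phrase list is illustrative only, not a vetted clinical resource.

```python
# Naive pre-generation screen: flag user messages containing crisis language so
# the application can surface human resources instead of a model reply.
# This phrase list is illustrative only, NOT a vetted clinical resource.
CRISIS_PHRASES = (
    "suicide", "kill myself", "end my life", "self-harm", "hurt myself",
)

def needs_escalation(user_message: str) -> bool:
    """Return True if the message should be routed to human support."""
    text = user_message.lower()
    return any(phrase in text for phrase in CRISIS_PHRASES)

print(needs_escalation("I want to end my life"))     # True
print(needs_escalation("I'm stressed about exams"))  # False
```

Real deployments would need far more robust detection (classifiers, multilingual coverage, context handling); this only illustrates the shape of the guardrail.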
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from huggingface_hub import login
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

login(token="")  # your Hugging Face access token

tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-0.5B-Instruct")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-0.5B-Instruct",
    device_map={"": 0},
)
model = PeftModel.from_pretrained(base_model, "Rustamshry/Qwen2.5-MentalChat-16K")

instruction = """
You are a helpful mental health counselling assistant, please answer the mental health questions based on the patient's description.
The assistant gives helpful, comprehensive, and appropriate answers to the user's questions.
"""

question = """
I've tried setting boundaries, but it feels like I'm constantly being pulled in different directions.
I feel guilty for not being able to help my siblings, but I also know that I can't continue to neglect my mom's needs.
I'm worried that if I don't find a way to manage these demands, I'll burn out and won't be able to care for her effectively.
"""

prompt = (
    f"### Instruction:\n{instruction}\n\n"
    f"### Question:\n{question}\n\n"
    f"### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=4048,
    # temperature=0.6,
    # top_p=0.95,
    # do_sample=True,
    # eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
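Decoding `outputs[0]` returns the full prompt followed by the completion. A small helper can strip the echoed prompt; it assumes the `### Response:` marker from the template above appears in the decoded text.

```python
# Sketch: keep only the model's reply from the decoded output, assuming the
# "### Response:" marker from the prompt template is present in the text.
def extract_response(decoded: str, marker: str = "### Response:") -> str:
    """Return the text after the marker, or the whole text if absent."""
    _, sep, tail = decoded.partition(marker)
    return tail.strip() if sep else decoded.strip()

decoded = "### Instruction:\n...\n\n### Response:\nIt sounds like you are stretched thin."
print(extract_response(decoded))  # It sounds like you are stretched thin.
```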
## Framework versions
- PEFT 0.15.2