Mistral-7B-Instruct-v0.3-Mental-Health-chatbot
This is a fine-tuned version of Mistral-7B-Instruct-v0.3 for empathetic and mental-health-oriented dialogue.
It has been adapted to provide supportive, safe, and context-aware responses in conversations related to mental health.
Disclaimer:
This model is not a replacement for professional mental health care. It should only be used for research and educational purposes.
If you are experiencing a crisis, please seek immediate help from a qualified professional.
Model Details
- Base model: Mistral-7B-Instruct-v0.3
- Fine-tuned for: Empathetic dialogue, mental health supportive responses
- Framework: Transformers
Datasets Used
This model was fine-tuned on multiple open-source mental health conversation datasets:
Amod/mental_health_counseling_conversations
- Real counseling Q&A with licensed professionals
- JSON format, ~1K–10K entries
mpingale/mental-health-chat-dataset
- Tabular chat dataset with question/answer pairs and therapist info
- Parquet format, ~2.78K entries
heliosbrahma/mental_health_chatbot_dataset
- Small conversational dataset (~172–1K entries)
- Q&A pairs for chatbot training, MIT license
Additional mental health dialogue corpus
- Large dataset of ~13.4K entries
- JSON format, mental health dialogues
By combining these datasets, the model learns empathetic, contextually relevant, and supportive responses.
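For reference, corpora like these can be normalized to a shared schema and merged with the datasets library. The sketch below is illustrative only: the Context/Response column names are taken from Amod/mental_health_counseling_conversations, while the mappings for the other corpora (and the final merge) are assumptions that must be adapted to each dataset's actual schema.

from datasets import load_dataset, concatenate_datasets

# Load one corpus; this dataset exposes "Context" and "Response" columns
counseling = load_dataset("Amod/mental_health_counseling_conversations", split="train")

# Normalize to a shared prompt/response schema
def to_pair(example):
    return {"prompt": example["Context"], "response": example["Response"]}

counseling = counseling.map(to_pair, remove_columns=counseling.column_names)

# The other corpora would be normalized the same way (with their own column
# names), then merged and shuffled before fine-tuning:
# combined = concatenate_datasets([counseling, chats, chatbot_qa, dialogues]).shuffle(seed=42)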
Intended Use
- Research in empathetic AI
- Exploring AI for mental health support
- Developing chatbots that respond with empathy
Not intended for:
- Real-time crisis management
- Replacing human therapists or counselors
Evaluation Results
The model was evaluated using multiple NLP metrics to measure fluency, relevance, and semantic similarity:
Automatic Metrics
- BERTScore (F1 mean): 0.876
- SBERT Cosine Similarity (mean): 0.695
- ROUGE-1: 0.341
- ROUGE-2: 0.135
- ROUGE-L: 0.221
- ROUGE-Lsum: 0.225
- BLEU: 0.081
- chrF++: 38.05
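These metrics can be reproduced with the evaluate and sentence-transformers libraries. A minimal sketch, assuming predictions and references are parallel lists of generated and gold responses (the SBERT encoder named here is an assumption, not necessarily the one used for the reported score):

import evaluate
from sentence_transformers import SentenceTransformer, util

predictions = ["..."]  # model-generated responses (placeholder)
references  = ["..."]  # gold responses from the test split (placeholder)

# BERTScore, ROUGE, BLEU, and chrF++ via the evaluate library
bertscore = evaluate.load("bertscore")
bs = bertscore.compute(predictions=predictions, references=references, lang="en")
print("BERTScore F1 mean:", sum(bs["f1"]) / len(bs["f1"]))

print(evaluate.load("rouge").compute(predictions=predictions, references=references))
print(evaluate.load("bleu").compute(predictions=predictions, references=references))
print(evaluate.load("chrf").compute(predictions=predictions, references=references,
                                    word_order=2))  # word_order=2 gives chrF++

# SBERT cosine similarity between each prediction and its reference
sbert = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder
emb_p = sbert.encode(predictions, convert_to_tensor=True)
emb_r = sbert.encode(references, convert_to_tensor=True)
print("SBERT cosine mean:", util.cos_sim(emb_p, emb_r).diagonal().mean().item())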
Perplexity
- Base model perplexity: 11.32
- Fine-tuned model perplexity: 2.52
Lower perplexity indicates the fine-tuned model produces more coherent and predictable text compared to the base model.
These results show that fine-tuning substantially improved the model's language generation quality, making it better aligned for empathetic and supportive mental health conversations.
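Perplexity is the exponential of the average per-token cross-entropy on held-out text. A minimal sketch of that computation, assuming model and tokenizer are loaded as in the example below and eval_texts is a list of held-out strings:

import math
import torch

def compute_perplexity(model, tokenizer, eval_texts):
    total_nll, total_tokens = 0.0, 0
    for text in eval_texts:
        enc = tokenizer(text, return_tensors="pt").to(model.device)
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        # out.loss is the mean NLL over the predicted tokens (all but the first)
        n_pred = enc["input_ids"].shape[-1] - 1
        total_nll += out.loss.item() * n_pred
        total_tokens += n_pred
    return math.exp(total_nll / total_tokens)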
Example Usage
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load your model and tokenizer
model_name = "Tanneru/Mistral-7B-Instruct-v0.3-Mental-Health-chatbot"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Define the chat
chat = [
    {"role": "system", "content": "You are EmpathAI, a supportive and empathetic AI trained for mental health conversations. Always respond with kindness and empathy. Do not use hashtags or excessive emojis. Do not repeat phrases or sentences."},
    {"role": "user", "content": "I feel very anxious about my exams. Can you help me calm down?"}
]
# Tokenize input
inputs = tokenizer.apply_chat_template(
    chat,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)
# Generate response
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    no_repeat_ngram_size=6,   # hard-block any repeated 6-gram
    repetition_penalty=1.15,  # softly discourage token reuse
    pad_token_id=tokenizer.eos_token_id
)
# Keep only the newly generated tokens (the assistant's reply)
response = outputs[0][inputs.shape[-1]:]
print("EmpathAI:", tokenizer.decode(response, skip_special_tokens=True))
Limitations
- The model may sometimes generate inaccurate or harmful advice.
- Responses may vary depending on phrasing and context.
- Should not be solely relied upon for medical or therapeutic guidance.
Citation
If you use this model in your research or project, please cite it:
@misc{tanneru2025mistralmentalhealth,
title = {Mistral-7B-Instruct-v0.3-Mental-Health-chatbot},
author = {Tanneru},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/Tanneru/Mistral-7B-Instruct-v0.3-Mental-Health-chatbot}}
}