# Mistral-7B-Instruct Fine-Tuned for Mental Health Counseling

## Model Overview

This is a fine-tuned version of `unsloth/mistral-7b-instruct-v0.3-bnb-4bit`, designed for mental health counseling. It enhances response quality in mental health discussions, providing empathetic and well-structured replies.
## Dataset

- Amod/mental_health_counseling_conversations (cleaned version: arafatanam/Mental-Health-Counseling): 2,752 rows
- chillies/student-mental-health-counseling-vn (translated version: arafatanam/Student-Mental-Health-Counseling-10K): 7,500 rows
- Total dataset size: 10,252 rows
## Training Details

- Hardware: Kaggle Notebooks (2x T4 GPUs)
- Fine-tuning framework: Unsloth with LoRA
- Training settings:
  - `max_seq_length = 512`
  - `batch_size = 8`
  - `gradient_accumulation_steps = 4`
  - `num_train_epochs = 2`
  - `learning_rate = 5e-5`
  - `optimizer = adamw_8bit`
  - `lr_scheduler = cosine`
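The settings above can be sketched as an Unsloth training script. This is a minimal sketch, not the exact script used for this model: the LoRA rank/alpha, target modules, and the `dataset` variable are assumptions (the card only reports that 0.60% of parameters were trainable), while the hyperparameters come from the list above.

```python
# Hedged sketch of the fine-tuning setup; LoRA rank/alpha and target_modules
# are assumed, and `dataset` stands in for the preprocessed counseling data.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
    max_seq_length=512,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,            # assumed rank
    lora_alpha=16,   # assumed scaling
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,  # preprocessed counseling conversations (assumed)
    max_seq_length=512,
    args=TrainingArguments(
        per_device_train_batch_size=8,
        gradient_accumulation_steps=4,
        num_train_epochs=2,
        learning_rate=5e-5,
        optim="adamw_8bit",
        lr_scheduler_type="cosine",
        output_dir="outputs",
    ),
)
trainer.train()
```

This requires a GPU and the base model weights, so it is a configuration sketch rather than a directly runnable snippet.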
## Training Results

- Final training loss: 1.0042
- Total steps: 640
- Trainable parameters: 0.60% of the model
- Validation loss: 0.978
- Evaluation metric (perplexity): 2.85
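For context, perplexity is conventionally the exponential of the mean cross-entropy loss; the snippet below shows that relationship. The reported 2.85 was presumably computed on its own evaluation split, so it need not equal `exp` of the validation loss above.

```python
import math

def perplexity(cross_entropy_loss: float) -> float:
    """Perplexity is the exponential of the mean cross-entropy loss."""
    return math.exp(cross_entropy_loss)

# For example, a loss of 0.978 maps to exp(0.978) ≈ 2.66.
print(round(perplexity(0.978), 2))
```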
## Usage

This model is best suited for:
- Mental health chatbots
- Virtual therapy applications
- Mental health support and response generation
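A minimal inference sketch for these use cases, assuming the model is loaded through Unsloth's 4-bit path; the example prompt and generation settings are illustrative, not prescribed by the card.

```python
# Hedged inference sketch; prompt and sampling parameters are illustrative.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="arafatanam/Student-Focus-mistral-7b-instruct-v0.3",
    max_seq_length=512,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast generation mode

messages = [{"role": "user",
             "content": "I've been feeling overwhelmed by my coursework lately."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256,
                         temperature=0.7, do_sample=True)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

As with training, this needs a GPU and the model weights, so treat it as a usage sketch.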
## Model Tree

Model tree for arafatanam/Student-Focus-mistral-7b-instruct-v0.3:

- Base model: mistralai/Mistral-7B-v0.3
- Fine-tuned from: mistralai/Mistral-7B-Instruct-v0.3