---
license: apache-2.0
datasets:
- arafatanam/Mental-Health-Counseling
- arafatanam/Student-Mental-Health-Counseling-10K
base_model:
- unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- mental-health
- student-focused
- chatbot
---

# Mistral-7B-Instruct Fine-Tuned for Mental Health Counseling

## Model Overview

This is a fine-tuned version of [`unsloth/mistral-7b-instruct-v0.3-bnb-4bit`](https://huggingface.co/unsloth/mistral-7b-instruct-v0.3-bnb-4bit), specialized for mental health counseling. It is trained to produce empathetic, well-structured replies in mental health discussions.

## Dataset

- **Amod/mental_health_counseling_conversations** (cleaned version: [`arafatanam/Mental-Health-Counseling`](https://huggingface.co/datasets/arafatanam/Mental-Health-Counseling))
  - 2,752 rows
- **chillies/student-mental-health-counseling-vn** (translated version: [`arafatanam/Student-Mental-Health-Counseling-10K`](https://huggingface.co/datasets/arafatanam/Student-Mental-Health-Counseling-10K))
  - 7,500 rows
- **Total dataset size**: 10,252 rows

## Training Details

- **Hardware**: Kaggle Notebooks (GPU T4 x2)
- **Fine-tuning framework**: `Unsloth` with `LoRA`
- **Training settings**:
  - `max_seq_length = 512`
  - `batch_size = 8`
  - `gradient_accumulation_steps = 4`
  - `num_train_epochs = 2`
  - `learning_rate = 5e-5`
  - `optimizer = adamw_8bit`
  - `lr_scheduler = cosine`

A sketch mapping these settings onto Unsloth and TRL is included at the end of this card.

## Training Results

- **Final training loss**: `1.0042`
- **Total steps**: `640`
- **Trainable parameters**: `0.60%` of the model
- **Validation loss**: `0.978`
- **Perplexity** (evaluation metric): `2.85`

## Usage

This model is best suited for:

- Mental health chatbots
- Virtual therapy applications
- Mental health support and response generation
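
To get started, below is a minimal inference sketch using the Hugging Face `transformers` chat-template API. The repository id, prompt, and generation parameters are illustrative assumptions rather than values prescribed by this card, and loading the 4-bit base weights requires `bitsandbytes` and a CUDA GPU.

```python
# Minimal inference sketch (assumptions: CUDA GPU, bitsandbytes installed,
# and a placeholder repo id -- substitute this model's actual Hub id).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arafatanam/<this-model-repo>"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
)

messages = [
    {"role": "user",
     "content": "I've been feeling overwhelmed by my coursework lately. What can I do?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,  # illustrative sampling settings
)
reply = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```

If the repository contains only LoRA adapter weights rather than merged weights, load the base model first and attach the adapter with `peft` instead.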
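
For readers who want to reproduce the fine-tune, the sketch below maps the hyperparameters from the Training Details section onto Unsloth and TRL's `SFTTrainer`. The LoRA rank and target modules, the dataset text column, and the per-device reading of `batch_size = 8` are assumptions not documented in this card; newer TRL releases also move some of these arguments into `SFTConfig`.

```python
# Fine-tuning sketch with Unsloth + TRL, mirroring the "Training Details" list.
# LoRA rank/targets and the dataset text column are assumed, not documented here.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
    max_seq_length=512,
    load_in_4bit=True,
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,                                                     # assumed LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
)

dataset = load_dataset("arafatanam/Student-Mental-Health-Counseling-10K", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # assumed name of the formatted-chat column
    max_seq_length=512,
    args=TrainingArguments(
        per_device_train_batch_size=8,   # "batch_size = 8" read as per device
        gradient_accumulation_steps=4,
        num_train_epochs=2,
        learning_rate=5e-5,
        optim="adamw_8bit",
        lr_scheduler_type="cosine",
        output_dir="outputs",
    ),
)
trainer.train()
```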