Qwen3-0.6B-PsychSupport-Expert

This project performs full fine-tuning of the Qwen3-0.6B language model to improve its psychological support reasoning and empathetic response generation. Training was performed in bfloat16 (bf16) precision.

Training Procedure

  1. Dataset Preparation

    • Dataset: pairs of patient emotional-context descriptions and corresponding step-by-step empathetic support responses.
  2. Model Loading and Configuration

    • Base model: Qwen3-0.6B, loaded with the unsloth library in bf16 precision.
    • Full fine-tuning (full_finetuning=True) applied to all layers to adapt the model for psychological support tasks (see the setup sketch after this procedure).
  3. Supervised Fine-Tuning (SFT)

    • Used the Hugging Face TRL library for supervised fine-tuning.

    • The model was trained to generate both intermediate empathetic reasoning steps and final supportive messages.

    • Training hyperparameters:

      • Epochs: 2
      • Learning rate: 2e-5
      • Batch size: 8
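
The sketch below ties the steps above together. The dataset file, its "text" column, and the output directory are placeholder assumptions; the model name, full_finetuning flag, precision, and hyperparameters come from the description above.

```python
# unsloth should be imported before transformers/TRL so its patches apply.
from unsloth import FastLanguageModel
import torch
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig

# Load the base model in bf16; full_finetuning=True updates all layers
# rather than attaching LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-0.6B",
    dtype=torch.bfloat16,
    full_finetuning=True,
)

# Hypothetical dataset file: each record holds a formatted conversation
# (emotional context + empathetic response) in a "text" column.
dataset = load_dataset("json", data_files="psych_support.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        num_train_epochs=2,             # as listed above
        learning_rate=2e-5,             # as listed above
        per_device_train_batch_size=8,  # as listed above
        bf16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```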

Purpose and Outcome

  • Enhanced the model’s ability to provide empathetic, context-aware psychological support to users.
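
A minimal inference sketch using the standard transformers chat API; the prompt and generation settings are illustrative, not part of the training setup:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "suayptalha/Qwen3-0.6B-Psychological-Support"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Illustrative prompt; the model is trained to produce empathetic
# reasoning steps followed by a supportive message.
messages = [
    {"role": "user", "content": "I've been feeling overwhelmed at work and can't sleep."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```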

Evaluation

  • Performance was measured on a held-out validation set with the following metric:

    • Support Coherence: 74.32% similarity to expert-generated responses.
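
The card does not state how Support Coherence is computed. A minimal sketch, assuming it is the mean cosine similarity between sentence embeddings of model outputs and expert references (the embedding model and function name are illustrative assumptions):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical scorer: embed outputs and references, then average
# pairwise cosine similarity as a "Support Coherence" percentage.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def support_coherence(model_outputs, expert_references):
    a = embedder.encode(model_outputs, normalize_embeddings=True)
    b = embedder.encode(expert_references, normalize_embeddings=True)
    # Cosine similarity of unit-normalized vectors is their dot product.
    return 100.0 * float(np.mean(np.sum(a * b, axis=1)))
```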

License

This project is licensed under the Apache License 2.0. See the LICENSE file for details.
