Qwen3-0.6B-Diagnosis-Expert

This project performs full fine-tuning of the Qwen3-0.6B language model to enhance its clinical diagnosis interpretation and reasoning capabilities. The model was trained in bfloat16 (bf16) precision.
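A minimal inference sketch with the Hugging Face transformers library is shown below. The checkpoint id comes from this repository; the clinical prompt is purely illustrative.

```python
# Minimal inference sketch; the patient vignette is an illustrative example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "suayptalha/Qwen3-0.6B-Diagnose"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [
    {"role": "user", "content": (
        "Patient history: 54-year-old with chest pain radiating to the left arm, "
        "diaphoresis, and ST elevation on ECG. What is the most likely diagnosis?"
    )}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Since the base model is Qwen3, the chat template's thinking mode may affect whether intermediate reasoning steps appear in the generated output.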

Training Procedure

  1. Dataset Preparation

    • Dataset: paired clinical patient histories with step-by-step diagnostic conclusions.
  2. Model Loading and Configuration

    • Base model: Qwen3-0.6B, loaded with the unsloth library in bf16 precision.
    • Full fine-tuning (full_finetuning=True) applied to all layers to adapt the model for medical diagnostic tasks.
  3. Supervised Fine-Tuning (SFT)

    • Supervised fine-tuning was performed with the Hugging Face TRL library.

    • The model was trained to generate both intermediate reasoning steps and final diagnostic statements.

    • Training hyperparameters (a configuration sketch follows this list):

      • Epochs: 2
      • Learning rate: 2e-5
      • Batch size: 8
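
The exact training script is not published; the sketch below reconstructs the configuration described above, using unsloth for loading (with the stated `full_finetuning=True` flag, bf16) and TRL's SFTTrainer with the stated hyperparameters. The dataset records, column names, and `max_seq_length` are illustrative assumptions, and newer TRL versions accept `processing_class` in place of `tokenizer`.

```python
# Training sketch; dataset records and column names are hypothetical placeholders.
import torch
from unsloth import FastLanguageModel
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Load the base model in bf16 with full fine-tuning enabled (all layers trainable).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-0.6B",
    max_seq_length=2048,          # assumption; not stated in the card
    dtype=torch.bfloat16,
    full_finetuning=True,
)

# Hypothetical paired records: patient history -> step-by-step diagnostic conclusion.
records = [
    {"history": "54-year-old with chest pain ...",
     "diagnosis": "Step 1: ... Final diagnosis: ..."},
]

def to_text(example):
    # Render each pair through the chat template so the model learns to emit
    # intermediate reasoning steps followed by a final diagnostic statement.
    messages = [
        {"role": "user", "content": example["history"]},
        {"role": "assistant", "content": example["diagnosis"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = Dataset.from_list(records).map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        num_train_epochs=2,             # as stated above
        learning_rate=2e-5,             # as stated above
        per_device_train_batch_size=8,  # as stated above
        bf16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```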

Purpose and Outcome

  • Fine-tuning significantly improved the model’s ability to interpret clinical information and propose accurate, structured diagnoses.

Evaluation

  • Performance was measured on a held-out validation set with the following metric:

    • Diagnostic Similarity: 71.68% similarity compared to a DeepSeek V3-0324 baseline.
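
The card does not define how Diagnostic Similarity is computed. One plausible implementation, shown purely as an assumption, is the mean cosine similarity between sentence embeddings of this model's diagnoses and the baseline's diagnoses over the validation set; the embedding model below is likewise an assumption.

```python
# Hypothetical sketch of a similarity metric; the actual metric definition
# is not published in this card.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def diagnostic_similarity(model_outputs, baseline_outputs):
    """Mean cosine similarity between paired diagnosis texts."""
    a = encoder.encode(model_outputs, normalize_embeddings=True)
    b = encoder.encode(baseline_outputs, normalize_embeddings=True)
    return float(np.mean(np.sum(a * b, axis=1)))

score = diagnostic_similarity(
    ["Acute myocardial infarction."],
    ["Likely acute MI (STEMI)."],
)
print(f"Diagnostic Similarity: {score:.2%}")
```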

License

This project is licensed under the Apache License 2.0. See the LICENSE file for details.
