Whisper Psychology Chatbot

Model Description

Whisper is a mental health chatbot fine-tuned from google/gemma-3-1b-it on psychology-focused conversational data. The model is designed to provide supportive, empathetic responses in mental health conversations.

Developed by: DeepFinders - SLTC Research University

Training Details

  • Base Model: google/gemma-3-1b-it
  • Fine-tuning Method: LoRA (Low-Rank Adaptation)
  • Dataset: jkhedri/psychology-dataset
  • Training Samples: ~2000 psychology conversations
  • LoRA Configuration (a configuration sketch follows this list):
    • r=8, lora_alpha=16
    • Target modules: q_proj, k_proj, v_proj, o_proj
    • Dropout: 0.1
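
For reference, a minimal sketch of this setup using the peft and datasets libraries might look like the following. The split name, variable names, and bias/task-type settings are assumptions for illustration; they are not taken from the team's training code.

from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load the psychology conversations (split name is an assumption)
dataset = load_dataset("jkhedri/psychology-dataset", split="train")

# Load the base model (quantized loading is sketched under Training Infrastructure)
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-it")

# LoRA configuration matching the values listed above
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
)

# Wrap the base model with trainable LoRA adapters
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()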

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load model and tokenizer
model_name = "your-username/whisper-psychology-gemma-3-1b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16
)

# Format conversation using Gemma's turn markers
def chat_with_whisper(user_message):
    prompt = f"<start_of_turn>user\n{user_message}<end_of_turn>\n<start_of_turn>model\n"

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=150,
            temperature=0.7,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id
        )

    # Decode only the newly generated tokens, not the prompt
    new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Example usage
response = chat_with_whisper("I'm feeling anxious about my upcoming exam. Can you help me?")
print(response)
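
If the fine-tuned tokenizer retains Gemma's chat template (an assumption; check tokenizer.chat_template), the prompt can also be built with apply_chat_template instead of hand-writing the turn markers:

# Alternative: let the tokenizer construct the Gemma turn format
messages = [{"role": "user", "content": "I'm feeling anxious about my upcoming exam."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

with torch.no_grad():
    outputs = model.generate(
        input_ids,
        max_new_tokens=150,
        temperature=0.7,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id
    )

print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))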

Model Identity

The model introduces itself as: "I'm Whisper, your mental health chatbot, developed by DeepFinders — an innovative student team at SLTC Research University."

Limitations

  • Intended for educational and research purposes only
  • Not a replacement for professional mental health care
  • May generate incorrect or inappropriate responses
  • Should be used with appropriate safeguards and human oversight

Training Infrastructure

  • Hardware: Google Colab (GPU)
  • Quantization: 4-bit during training (a loading sketch follows this list)
  • Memory Optimization: Gradient checkpointing, mixed precision (FP16)
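
A minimal sketch of how the base model may have been loaded for 4-bit training with bitsandbytes follows. The NF4 quantization type and FP16 compute dtype are assumptions; the exact settings used by the team are not published here.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit quantization with FP16 compute (assumed settings)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-1b-it",
    quantization_config=bnb_config,
    device_map="auto",
)

# Memory optimization noted above
base_model.gradient_checkpointing_enable()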

Citation

@misc{whisper-psychology-2024,
  title={Whisper Psychology Chatbot},
  author={{DeepFinders Team, SLTC Research University}},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/KNipun/whisper-psychology-gemma-3-1b}
}

Ethical Considerations

This model should be used responsibly with appropriate disclaimers about its limitations in providing mental health support.
