Model Card for Psychology-RLHF

Model Description

This model is a fine-tuned version of Qwen2.5-0.5B-Instruct on the samhog/psychology-RLHF dataset using ORPO. The primary objective was to experiment with Reinforcement Learning from Human Feedback (RLHF)-style preference alignment via ORPO. The dataset comes from the psychology domain, but the main purpose of this fine-tuning was to study and demonstrate the effectiveness of ORPO for aligning small-scale instruction-tuned models.

  • Language(s) (NLP): English
  • License: MIT
  • Finetuned from model: unsloth/Qwen2.5-0.5B-Instruct
  • Fine-tuning Method: ORPO (Odds Ratio Preference Optimization); see the training sketch below
  • Dataset: samhog/psychology-RLHF
  • Domain: Psychology, mental health reasoning, and conversational alignment
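
The exact training script is not included in this card, but the sketch below shows how an ORPO run with a LoRA adapter could be set up using TRL's ORPOTrainer. All hyperparameters (beta, learning rate, batch size, LoRA settings) and the column-name assumption are illustrative, not the values used for this adapter.

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig
from trl import ORPOConfig, ORPOTrainer

base = "unsloth/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Preference data; assumes prompt / chosen / rejected columns (rename if the dataset differs)
dataset = load_dataset("samhog/psychology-RLHF", split="train")

# Illustrative LoRA and ORPO settings, not the ones used for this adapter
peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")
orpo_args = ORPOConfig(
    output_dir="psychology-orpo",
    beta=0.1,                      # weight of the odds-ratio term
    learning_rate=8e-6,
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=orpo_args,
    train_dataset=dataset,
    processing_class=tokenizer,    # tokenizer= on older TRL versions
    peft_config=peft_config,
)
trainer.train()

ORPO adds the odds-ratio penalty on top of the standard SFT loss, so no separate reference model is required during training.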

Uses

Direct Use

  • Educational and research purposes in psychology-related question-answering.
  • Conversational agents for safe psychology discussions.
  • Research on RLHF and ORPO fine-tuning in domain-specific contexts.

Bias, Risks, and Limitations

  • This model is not a substitute for professional mental health advice.
  • Trained on synthetic/human preference data, so it may still generate biased or hallucinated content.
  • As a small-scale model (0.5B parameters), it has limited reasoning ability compared to larger LLMs.

How to Get Started with the Model

from huggingface_hub import login
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Log in if the base model or adapter requires authentication
login(token="YOUR_HF_TOKEN")

# Load the tokenizer and base model, then attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-0.5B-Instruct")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-0.5B-Instruct",
    device_map={"": 0},
)

model = PeftModel.from_pretrained(base_model, "Rustamshry/Psychology-RLHF")


prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""


# Fill the prompt template; leave the response field empty for generation
inputs = tokenizer(
    [
        prompt.format(
            "You are an AI assistant that helps people find information",
            "I'm having trouble with my teenage child. They're acting out and I don't know what to do.",
            "",
        )
    ],
    return_tensors="pt",
).to("cuda")


from transformers import TextStreamer

# Stream generated tokens to stdout as they are produced
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=512)
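
Continuing from the snippet above, the LoRA adapter can optionally be merged into the base model for standalone deployment; this is a minimal sketch using PEFT's merge_and_unload, with a placeholder output directory:

# Merge the LoRA weights into the base model and save a standalone copy
merged_model = model.merge_and_unload()
merged_model.save_pretrained("psychology-rlhf-merged")  # placeholder path
tokenizer.save_pretrained("psychology-rlhf-merged")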

Training Details

Training Metrics:

  • Training Loss: ↓ from 1.86 → 0.2978
  • NLL Loss: ↓ from 1.77 → 0.34
  • Reward (Chosen): -0.19 → -0.037
  • Reward (Rejected): -0.20 → -0.150
  • Reward Gap: ≈ +0.11

Interpretation:

  • Losses decreased steadily, indicating stable convergence.
  • Rewards for chosen responses moved toward 0, while rewards for rejected responses stayed lower, showing preference alignment.
  • The final model distinguishes more clearly between preferred and rejected responses.
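
For reference, these quantities track the two terms of the ORPO objective: a standard NLL (SFT) loss on the chosen responses plus an odds-ratio term that pushes the model to prefer chosen over rejected completions. A sketch of the objective, following the ORPO paper:

$$
\mathcal{L}_{\mathrm{ORPO}} = \mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\mathcal{L}_{\mathrm{SFT}} + \lambda \cdot \mathcal{L}_{\mathrm{OR}}\right],
\qquad
\mathcal{L}_{\mathrm{OR}} = -\log \sigma\!\left(\log \frac{\mathrm{odds}_\theta(y_w \mid x)}{\mathrm{odds}_\theta(y_l \mid x)}\right),
\qquad
\mathrm{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)}
$$

where y_w and y_l are the chosen and rejected responses. The NLL loss above corresponds to the SFT term, and the reported chosen/rejected rewards are derived from the per-response log-probabilities logged by the trainer.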

Framework versions

  • PEFT 0.17.1