Model Card for RyZhangHason/Teacher-Persona

This model implements the Teacher Persona AI approach, designed to emulate the human qualities of an instructor rather than merely providing personalized content. It embodies the narrative elements, affective cues, and communication style of an educator to create a more humanized learning experience.

Model Details

Model Description

This model was developed using a fine-tuning approach on the Qwen2-1.5B-Instruct base model. Unlike traditional personalized AI systems that focus exclusively on content adaptation, this Teacher Persona AI embodies the relational and affective qualities of human teaching: elements such as empathy, warmth, and the ability to foster trust that are central to effective education.

  • Developed by: Ruiyu Zhang, Lin Nie, Ce Zhao
  • Model type: Qwen2-1.5B-Instruct with LoRA fine-tuning
  • License: Same as base model (Qwen/Qwen2-1.5B-Instruct)
  • Finetuned from model: Qwen/Qwen2-1.5B-Instruct

Use

This model is designed for educational contexts where humanized, engaging instruction is needed:

  • Educational platforms seeking to reduce psychological distance between learners and AI
  • Tools for personalized tutoring that emphasize both cognitive and affective dimensions
  • Self-study resources where maintaining learner motivation and engagement is crucial
  • Research into the effects of AI humanization on learning outcomes

The model operates without system prompts, as it has been trained to consistently maintain its instructor persona.
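Because the persona is internalized, a query can be sent as a bare user turn in Qwen's ChatML format, with no system block at all. A minimal sketch of such a prompt builder (the function name is illustrative, not part of the released code):

```python
def build_prompt(question: str) -> str:
    """Format a question as a ChatML prompt with no system block.

    The fine-tuned model maintains its teacher persona without one.
    """
    return (
        f"<|im_start|>user\n{question}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_prompt("What is opportunity cost?")
```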

Special Features

  • Persona Integration: Maintains an authentic teacher presence without explicit system prompts
  • Humanized Interaction: Naturally provides explanations with empathy, warmth, and a conversational tone
  • Social Presence: Reduces psychological distance by conveying personal warmth and authenticity

Training Details

Training Data

The model was fine-tuned on several types of examples:

  1. Core content examples: Detailed explanations of academic concepts
  2. Identity examples: Responses that reinforce the teacher persona and teaching philosophy
  3. Implicit persona examples: Domain-specific content with a humanized, educational framing
  4. Pedagogical examples: Topics explained with pedagogical elements like examples, comparisons, and relatable contexts
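For supervised fine-tuning, each of these example types ultimately reduces to a (question, response) pair serialized into the model's chat format, with the persona system prompt optionally prepended. A hypothetical serializer (names and structure are assumptions, not from the released training code):

```python
def to_chatml_example(question: str, answer: str, system: str = "") -> str:
    """Serialize one training pair into ChatML; the system block is optional."""
    parts = []
    if system:
        parts.append(f"<|im_start|>system\n{system}<|im_end|>\n")
    parts.append(f"<|im_start|>user\n{question}<|im_end|>\n")
    parts.append(f"<|im_start|>assistant\n{answer}<|im_end|>")
    return "".join(parts)

example = to_chatml_example(
    "Why does a supply curve slope upward?",
    "Great question! Think of a bakery deciding how many loaves to bake.",
)
```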

Training Procedure

The model underwent a two-phase training process:

  • Phase A: Unsupervised fine-tuning on a text corpus of authentic conversations from the instructor's classroom
  • Phase B (Stage 1): 40 steps with system prompts defining the teacher persona
  • Phase B (Stage 2): 40 steps with mixed prompt approach (50% with prompts, 50% without)
  • Phase B (Stage 3): 60 steps with no system prompts to internalize the persona
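The staged schedule above can be sketched as a simple per-example sampler that decides whether to prepend the persona system prompt (the probabilities follow the three stages described; the function and constant names are illustrative):

```python
import random

# Probability that a training example includes the persona system prompt,
# per the three Phase B stages described above.
STAGE_SYSTEM_PROMPT_PROB = {
    "stage1": 1.0,   # 40 steps, always prompted
    "stage2": 0.5,   # 40 steps, mixed 50/50
    "stage3": 0.0,   # 60 steps, never prompted
}

def include_system_prompt(stage: str, rng: random.Random) -> bool:
    """Decide whether this training example carries the persona system prompt."""
    return rng.random() < STAGE_SYSTEM_PROMPT_PROB[stage]

rng = random.Random(0)
stage2_draws = [include_system_prompt("stage2", rng) for _ in range(1000)]
```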

How to Use the Model

The model can be used with or without a system prompt. For best results:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load model and tokenizer
model_name = "RyZhangHason/Teacher-Persona"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    trust_remote_code=True,
    device_map="auto"
)

def get_response(question):
    input_text = f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant\n"
    inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
    
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=256,
            do_sample=True,
            temperature=0.7,
            top_p=0.9,
        )
    
    # Decode only the newly generated tokens. Decoding the full output with
    # skip_special_tokens=True would strip the <|im_start|> markers, making it
    # impossible to reliably split the reply out of the prompt.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()

Limitations and Biases

  • Domain knowledge: While the model has pedagogical abilities across topics, its depth of knowledge varies by subject
  • Source material biases: Any biases present in the training examples may be reflected in the model's outputs
  • Limited factual knowledge: The model's knowledge is limited to what was present in its pre-training and fine-tuning data
  • Educational context specificity: The model is optimized for educational interactions rather than general-purpose use

Model Card Authors

RyZhangHason
