Physics MCQ Generator
A fine-tuned language model that generates high-quality physics multiple-choice questions for university entrance exam preparation with customizable cognitive skill levels based on Bloom's Taxonomy.
Model Details
Model Description
This model is designed to generate physics multiple-choice questions for competitive entrance exams, with accurate content, plausible distractors, and appropriate difficulty levels. It supports four cognitive skill levels (Recall, Application, Analysis, Evaluation) and covers the major physics domains, including mechanics, electromagnetism, thermodynamics, optics, and modern physics.
- Developed by: flanara
- Model type: Fine-tuned Causal Language Model
- Language(s) (NLP): English
- License: MIT
- Finetuned from model: microsoft/phi-2
Uses
Direct Use
This model is intended for direct use in generating physics multiple-choice questions for:
- University entrance exam preparation with varying cognitive levels
- Differentiated instruction materials
- Bloom's Taxonomy-aligned assessment creation
- Educational content creation across cognitive domains
- Tutoring and teaching assistance with skill-based questioning
Downstream Use
The model can be integrated into:
- Educational platforms with adaptive learning paths
- Automated question bank generators with cognitive level filtering (see the sketch after this list)
- Physics tutoring applications with skill-based progression
- Exam preparation software with customized difficulty curves
- Teacher tools for creating balanced assessments
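For example, the question-bank use case above can be served by a thin wrapper that pre-generates questions per topic and cognitive level and filters on those tags. The sketch below is illustrative only: `build_question_bank` and its field names are hypothetical, and it relies on the `generate_physics_mcq` helper defined in the usage example later in this card.

```python
# Illustrative sketch of a question bank with cognitive-level filtering.
# `generate_physics_mcq` is the helper from the usage example below;
# the bank structure and field names here are assumptions, not part of the model.
from collections import defaultdict

COGNITIVE_LEVELS = ["Recall", "Application", "Analysis", "Evaluation"]

def build_question_bank(topics, per_level=2):
    """Generate `per_level` questions for each (chapter, topic) and cognitive level."""
    bank = defaultdict(list)
    for chapter, topic in topics:
        for level in COGNITIVE_LEVELS:
            for _ in range(per_level):
                mcq_text = generate_physics_mcq(chapter, topic, "Medium", level)
                bank[level].append({"chapter": chapter, "topic": topic, "mcq": mcq_text})
    return bank

# Example: build a small bank and pull only the Analysis-level questions
bank = build_question_bank([("Mechanics", "Newton's Laws"), ("Optics", "Refraction")])
analysis_questions = bank["Analysis"]
```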
Out-of-Scope Use
- Generating questions for high-stakes exams without expert validation
- Creating medical or safety-critical content
- Replacing human physics educators entirely
- Generating content outside the physics domain
- Using for psychological or cognitive assessment
Bias, Risks, and Limitations
Limitations
- Performance is best on classical physics topics; may struggle with advanced quantum mechanics
- Generated questions should always be reviewed by subject matter experts
- Limited context length (~512 tokens) may affect complex question generation (see the length-check sketch after this list)
- Training data primarily from international curriculum standards
- Cognitive skill differentiation may not be perfect for all topics
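Given the ~512-token context noted above, it can help to verify that a prompt leaves room for generation before calling the model. This is a minimal sketch, assuming the `tokenizer` loaded in the usage example below and treating 512 tokens as an approximate budget rather than an exact limit.

```python
# Minimal sketch: check that the prompt plus the generation budget fits the context.
# The 512-token figure is the approximate limit noted above; adjust as needed.
MAX_CONTEXT_TOKENS = 512

def fits_context(prompt, max_new_tokens=250):
    """Return True if the prompt leaves room for `max_new_tokens` of output."""
    prompt_len = len(tokenizer(prompt)["input_ids"])
    return prompt_len + max_new_tokens <= MAX_CONTEXT_TOKENS
```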
Risks
- May generate incorrect physics content when given unusual or out-of-distribution prompts
- May reflect biases present in the training data
- Should not be used for high-stakes assessment without human oversight
- Cognitive level assignments may not always match intended complexity
Recommendations
Users should:
- Always verify generated questions with physics experts
- Use as a tool to assist educators, not replace them
- Disclose AI-generated content when used in educational materials
- Monitor and review outputs for accuracy and appropriateness (a basic structural check is sketched below)
- Validate cognitive skill level assignments for important assessments
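For the monitoring step, a lightweight structural check can flag malformed generations before human review. The sketch below assumes options are labeled "A)" through "D)" and that an answer line is present; those are formatting assumptions, not guarantees about the model's output.

```python
import re

def looks_like_complete_mcq(text, expected_options=4):
    """Heuristic check: at least four A)-D) style options and an answer line.
    The labeling convention is an assumption; adapt the patterns to your outputs."""
    options = re.findall(r"^\s*[A-D]\)", text, flags=re.MULTILINE)
    has_answer = bool(re.search(r"(?i)\banswer\b", text))
    return len(options) >= expected_options and has_answer
```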
How to Get Started with the Model
Basic Usage with Cognitive Skills
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load the base model, then attach the fine-tuned adapter and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    device_map="auto",
    torch_dtype=torch.float16,
    trust_remote_code=True
)
model = PeftModel.from_pretrained(model, "flanara/physics-mcq-generator")
tokenizer = AutoTokenizer.from_pretrained("flanara/physics-mcq-generator")
tokenizer.pad_token = tokenizer.eos_token

def generate_physics_mcq(chapter, topic, difficulty="Medium", cognitive_skill="Application"):
    """
    Generate a physics MCQ with a customizable cognitive skill level.

    Cognitive skill levels:
    - 'Recall': basic fact recall and definition questions
    - 'Application': applying concepts to solve problems
    - 'Analysis': analyzing situations and relationships
    - 'Evaluation': complex reasoning and critical evaluation
    """
    prompt = f"""### Instruction:
Generate a multiple-choice question (MCQ) for a university entrance exam in Physics.
### Input:
Subject: Physics | Chapter: {chapter} | Topic: {topic} | Difficulty: {difficulty} | Cognitive_Skill: {cognitive_skill}
### Response:
Question:"""

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=250,
            temperature=0.7,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id
        )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Examples with different cognitive skills
print("Recall question (basic knowledge):")
mcq = generate_physics_mcq("Mechanics", "Newton's Laws", "Easy", "Recall")
print(mcq)

print("\nApplication question (problem solving):")
mcq = generate_physics_mcq("Electromagnetism", "Ohm's Law", "Medium", "Application")
print(mcq)

print("\nAnalysis question (complex reasoning):")
mcq = generate_physics_mcq("Thermodynamics", "First Law", "Hard", "Analysis")
print(mcq)

print("\nEvaluation question (critical thinking):")
mcq = generate_physics_mcq("Modern Physics", "Quantum Mechanics", "Hard", "Evaluation")
print(mcq)
```
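Note that `tokenizer.decode(outputs[0], ...)` returns the prompt together with the completion, so the printed examples above include the full instruction block. A small post-processing helper (a sketch; `strip_prompt` is not part of the model's API) can trim it:

```python
def strip_prompt(full_output: str) -> str:
    """Keep only the text after the '### Response:' marker in the decoded output.
    Assumes the prompt template used by generate_physics_mcq above."""
    marker = "### Response:"
    return full_output.split(marker, 1)[-1].strip()

mcq = generate_physics_mcq("Optics", "Lenses", "Medium", "Application")
print(strip_prompt(mcq))
```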