# Qwen3-8B Fine-tuned for Physics
This model is a fine-tuned version of Qwen/Qwen3-8B on physics question-answering tasks.
## Model Details
- Base Model: Qwen3-8B
- Fine-tuning Method: LoRA (see the configuration sketch below)
- Dataset: Physics Q&A (20,000 samples)
- Training: [Your training details]
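
The exact LoRA configuration is not published on this card. The snippet below is a minimal sketch of a typical PEFT LoRA setup for Qwen3-8B; the rank, alpha, dropout, and `target_modules` values are illustrative assumptions, not the settings actually used for this checkpoint.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load the base model; LoRA adds small trainable adapter matrices on top of frozen weights.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B")

# Hypothetical LoRA hyperparameters, chosen for illustration only.
lora_config = LoraConfig(
    r=16,                # adapter rank (assumption)
    lora_alpha=32,       # scaling factor (assumption)
    lora_dropout=0.05,   # adapter dropout (assumption)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections (assumption)
    task_type="CAUSAL_LM",
)

# Wrap the base model so only the adapter weights are trainable.
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # reports the small fraction of trainable parameters
```

Training would then proceed with a standard supervised fine-tuning loop over the physics Q&A pairs, updating only the adapter weights.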
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("your-username/qwen3-8b-physics")
model = AutoModelForCausalLM.from_pretrained("your-username/qwen3-8b-physics")

# Example usage
prompt = "Solve this quantum mechanics problem:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)  # cap new tokens rather than total length
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
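
Qwen3 checkpoints ship with a chat template, so for instruction-style physics questions you may get more reliable behavior by formatting the prompt as a messages list. The sketch below assumes the fine-tuned model keeps the base tokenizer's chat template; the repository name is the same placeholder as above, and the example question is illustrative.

```python
# Chat-template variant: format the question as a conversation before generating.
messages = [
    {"role": "user", "content": "A particle is confined to a 1D infinite square well of width L. "
                                "What are its allowed energy levels?"}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # append the assistant turn marker so the model answers
)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```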