Qwen3-8B Fine-tuned for Physics

This model is a fine-tuned version of Qwen/Qwen3-8B on physics question-answering tasks.

Model Details

  • Base Model: Qwen3-8B
  • Fine-tuning Method: LoRA
  • Dataset: Physics Q&A (20,000 samples)
  • Training: [Your training details]
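The card states that LoRA was used but does not publish the adapter hyperparameters. For orientation, a typical PEFT LoRA configuration for a model of this size might look like the following sketch; every value here is an illustrative assumption, not the configuration actually used for this model.

```python
from peft import LoraConfig

# Illustrative LoRA settings -- NOT the actual training configuration,
# which is not documented in this model card.
lora_config = LoraConfig(
    r=16,                        # low-rank dimension of the adapter matrices
    lora_alpha=32,               # scaling factor applied to the adapter update
    lora_dropout=0.05,           # dropout on adapter inputs during training
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
```

Such a config would be passed to `peft.get_peft_model(base_model, lora_config)` before training; only the small adapter matrices are updated, which is what makes fine-tuning an 8B-parameter model tractable on a single GPU.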

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("your-username/qwen3-8b-physics")
model = AutoModelForCausalLM.from_pretrained(
    "your-username/qwen3-8b-physics",
    torch_dtype="auto",   # load the BF16 weights as stored
    device_map="auto",    # place layers on available devices; requires accelerate
)

# Example usage
prompt = "Solve this quantum mechanics problem:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
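Qwen chat models are trained on a ChatML-style turn format, so for conversational use the prompt should normally be built with `tokenizer.apply_chat_template` rather than passed as raw text. As a rough sketch of what that template produces (assuming the standard Qwen `<|im_start|>`/`<|im_end|>` markers; verify against the tokenizer's own chat template):

```python
def format_chatml(messages):
    """Approximate the ChatML-style prompt Qwen tokenizers build.

    Illustration only -- in practice use tokenizer.apply_chat_template,
    which is the authoritative template shipped with the model.
    """
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "".join(parts)

prompt = format_chatml([
    {"role": "user", "content": "State Newton's second law."},
])
```

Passing a correctly templated prompt matters: a fine-tuned chat model generally answers much more reliably when the input matches the turn format it was trained on.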
Model size: 8.19B params · Tensor type: BF16 · Format: Safetensors

Model tree for Fxde42/qwen3-8b-physics

  • Base model: Qwen/Qwen3-8B-Base, fine-tuned as Qwen/Qwen3-8B, from which this model was fine-tuned
  • Quantizations: 2 models