
OpenScienceReasoning-Qwen-e10

OpenScienceReasoning-Qwen-e10 is a high-efficiency, science-focused reasoning model fine-tuned from Qwen3-1.7B on the nvidia/OpenScienceReasoning-2 dataset. The fine-tuning set comprises 10,000 distinct entries covering scientific reasoning, chain-of-thought exploration, and analytical problem solving. The model blends symbolic precision, scientific logic, and structured output fluency, making it a practical tool for researchers, educators, and developers who need strong reasoning under constrained compute.

GGUF: https://huggingface.co/prithivMLmods/OpenScienceReasoning-Qwen-e10-GGUF
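
The GGUF weights can be run locally without a GPU using a lightweight runner such as llama-cpp-python. The sketch below is illustrative only: the file name OpenScienceReasoning-Qwen-e10.Q4_K_M.gguf is a hypothetical quantization name; substitute whichever file you download from the repository above.

from llama_cpp import Llama

# Hypothetical file name; replace with the actual quantization you download.
llm = Llama(model_path="OpenScienceReasoning-Qwen-e10.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a scientific tutor."},
        {"role": "user", "content": "State Newton's second law and give one worked example."}
    ],
    max_tokens=512
)
print(out["choices"][0]["message"]["content"])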


Key Features

  1. Scientific Reasoning & Chain-of-Thought: Fine-tuned on 10,000 curated entries from the OpenScienceReasoning-2 dataset, designed to enhance step-by-step analytical reasoning in science and mathematics.

  2. Advanced Code Reasoning & Generation: Supports multi-language coding with explanations, optimization hints, and error detection—ideal for algorithm synthesis, debugging, and prototyping.

  3. Mathematical & Scientific Problem Solving: Performs analytical reasoning in physics, biology, chemistry, and mathematics—explaining concepts, solving equations, and handling symbolic derivations.

  4. Hybrid Symbolic-AI Thinking: Combines structured logic, chain-of-thought reasoning, and open-ended inference, delivering robust performance on STEM-related tasks.

  5. Structured Output Mastery: Seamlessly generates output in LaTeX, Markdown, JSON, CSV, and YAML, suited for technical documentation, research papers, and structured data (see the example after this list).

  6. Optimized Lightweight Footprint for Versatile Deployment: Balances performance and efficiency, making it deployable on mid-range GPUs, offline clusters, and edge AI systems.
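
As a quick illustration of feature 5, the sketch below asks the model to answer with a small JSON object. It uses the Transformers text-generation pipeline rather than the manual loop in the Quickstart; the prompt and the expected schema are illustrative choices, not part of any fixed API.

from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="prithivMLmods/OpenScienceReasoning-Qwen-e10",
    torch_dtype="auto",
    device_map="auto"
)

# Ask for machine-readable output; the schema here is just an example.
messages = [
    {"role": "user", "content": 'Solve x^2 - 5x + 6 = 0 and answer only with JSON of the form {"roots": [a, b]}.'}
]

result = generator(messages, max_new_tokens=256)
# The pipeline returns the full chat; the last message is the model's reply.
print(result[0]["generated_text"][-1]["content"])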


Quickstart with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/OpenScienceReasoning-Qwen-e10"

# Load the weights and tokenizer; device_map="auto" places the model on the
# available GPU(s) and torch_dtype="auto" keeps the checkpoint's native precision.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the difference between Newtonian mechanics and quantum mechanics with examples."

messages = [
    {"role": "system", "content": "You are a scientific tutor skilled in reasoning, math, and coding."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template and append the
# generation prompt so the model responds as the assistant.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated answer is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
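
Because the base model is Qwen3-1.7B, the chat template should also accept Qwen3's enable_thinking switch, which toggles the <think> reasoning block in the output. Treat this as an assumption and verify it against the template shipped with the checkpoint.

# Assumed Qwen3 chat-template option: disable the <think> reasoning block.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False
)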

Intended Use

  • Scientific tutoring, computational reasoning, and mathematical education
  • Research assistant for physics, chemistry, biology, and interdisciplinary domains
  • Structured technical data generation in multiple formats
  • STEM-focused chatbot or API for research and education tools
  • Deployment in mid-resource environments requiring high reasoning fidelity

Limitations

  • Not tuned for general-purpose or long-form creative writing
  • Context limitations may hinder multi-document or full codebase analysis
  • Specialized for scientific and technical reasoning—general chat may underperform
  • Prioritizes structured logic over casual or emotional tone generation