
Explora-0.6B

Explora-0.6B is a lightweight, efficient general-purpose reasoning model, fine-tuned from Qwen3-0.6B on the first 100,000 entries of the Open-Omega-Explora-2.5M dataset. It is tailored for science- and code-focused reasoning tasks, combining symbolic clarity with fluent instruction following, which makes it well suited to exploratory workflows in STEM domains.
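
The 100k-entry slice can be selected with the datasets library. A minimal sketch, assuming the dataset is published as prithivMLmods/Open-Omega-Explora-2.5M with a train split (both the repo id and the split name are assumptions, not stated on this card):

from datasets import load_dataset

# Hedged sketch: repo id and split name are assumptions; check the dataset card.
explora = load_dataset(
    "prithivMLmods/Open-Omega-Explora-2.5M",
    split="train[:100000]"  # the first 100k entries used for fine-tuning
)
print(explora)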

GGUF: https://huggingface.co/prithivMLmods/Explora-0.6B-GGUF
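For CPU or edge inference with the GGUF build, here is a minimal sketch using llama-cpp-python; the quantization filename pattern below is an assumption, so match it to a file actually present in the GGUF repo:

from llama_cpp import Llama

# Hedged sketch: the filename pattern is an assumption; pick the quantization
# you actually want from the GGUF repo (e.g. Q4_K_M, Q8_0).
llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Explora-0.6B-GGUF",
    filename="*Q8_0.gguf",
    n_ctx=4096
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful science and code reasoning assistant."},
        {"role": "user", "content": "Explain Newton's second law of motion."}
    ],
    max_tokens=256
)
print(out["choices"][0]["message"]["content"])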

Key Features

  1. General-Purpose STEM Reasoning: Fine-tuned for code and science problems, the model handles symbolic reasoning, basic computations, and structured logic with clarity and fluency.

  2. Built on Qwen3-0.6B: Leverages the multilingual and instruction-tuned capabilities of Qwen3-0.6B, making it well suited to lightweight deployments with strong core reasoning ability.

  3. Open-Omega-Explora Dataset: Trained on the first 100k entries of the Open-Omega-Explora-2.5M dataset, which includes a diverse mix of problems from math, code, and science domains.

  4. Balanced Thinking Mode: Supports moderate reasoning depth while avoiding excessive hallucination, which makes it a good fit for step-by-step problem solving, function generation, and explanatory output (a sketch for toggling the underlying thinking mode follows the Quickstart below).

  5. Compact & Deployable: At just 0.6B parameters, it’s ideal for offline environments, low-resource inference setups, and educational tools requiring fast, reliable logic.

  6. Output Flexibility: Capable of producing answers in Markdown, Python, JSON, or plain text depending on the task, suitable both for human readability and for integration into pipelines (a structured-output sketch follows the Intended Use list below).

Quickstart with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Explora-0.6B"

# Load the model with automatic dtype selection and device placement
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain Newton's second law of motion with a Python code example."

messages = [
    {"role": "system", "content": "You are a helpful science and code reasoning assistant."},
    {"role": "user", "content": prompt}
]

# Render the chat as a single prompt string using the model's chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate up to 256 new tokens
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=256
)
# Strip the prompt tokens, keeping only the newly generated ones
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
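
Qwen3-based checkpoints expose a thinking mode through the chat template's enable_thinking flag and wrap their reasoning in <think>...</think> tags when it is enabled. The following is a minimal sketch, assuming Explora-0.6B inherits this template from its Qwen3-0.6B base; the </think> token id (151668) follows the Qwen3 model card, and model, tokenizer, and messages are reused from the Quickstart above.

# Hedged sketch: assumes the Qwen3 chat template (enable_thinking flag and
# the </think> special token, id 151668) carries over to this fine-tune.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # set False to suppress the reasoning trace
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=512)
output_ids = generated_ids[0][model_inputs.input_ids.shape[1]:].tolist()

# Split the reasoning trace from the final answer at the last </think> token
try:
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0  # no trace emitted
thinking = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip()
answer = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip()
print(answer)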

Intended Use

  • Educational and lightweight research tools
  • General science and programming help
  • Low-resource STEM assistant for code labs or classrooms
  • Fast-response agent for structured reasoning tasks
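
For pipeline use, the model can be prompted to answer in JSON and the result parsed directly. Here is a minimal sketch reusing the model and tokenizer from the Quickstart above; the schema in the prompt is illustrative, enable_thinking support is assumed from the Qwen3 base, and a small model may need validation and a retry on malformed output.

import json

# Hedged sketch: prompt-level JSON formatting, not constrained decoding.
messages = [
    {"role": "system", "content": "Answer only with a JSON object matching the requested schema."},
    {"role": "user", "content": "Give the boiling point of water at sea level as "
                                '{"value_celsius": <number>, "explanation": <string>}.'}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False  # assumed Qwen3 flag; keeps the output parseable
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=128)
raw = tokenizer.decode(
    generated_ids[0][model_inputs.input_ids.shape[1]:],
    skip_special_tokens=True
).strip()

try:
    result = json.loads(raw)
    print(result["value_celsius"])
except (json.JSONDecodeError, KeyError):
    print("Malformed JSON; retry or tighten the prompt.")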

Limitations

  • Not optimized for deep multi-hop reasoning or creative tasks
  • May require prompt engineering for highly specific technical queries
  • Smaller context window and lower fluency compared to larger models
  • Best used with specific and scoped questions for accurate outputs