
Crux-Qwen3_OpenThinking-4B

Crux-Qwen3_OpenThinking-4B is fine-tuned from the Qwen3-4B base model and optimized for advanced open thinking, mathematical reasoning, and logical problem solving. The model is trained on sk1.1 traces, comprising 1,000 entries of Gemini thinking trajectories, and further fine-tuned on 100k tokens of open math reasoning data. This makes it well suited to nuanced reasoning, educational tasks, and complex problem solving that requires a clear, explicit thought process.

GGUF : https://huggingface.co/prithivMLmods/Crux-Qwen3_OpenThinking-4B-GGUF
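
The GGUF build can be run locally without Transformers, for example through llama-cpp-python. Below is a minimal sketch, assuming llama-cpp-python is installed and that the repo ships a Q4_K_M quantization; the filename pattern is an assumption, so check the repo's file list for the actual quant names.

# pip install llama-cpp-python
from llama_cpp import Llama

# Download a quantized GGUF from the Hub and load it.
# The filename glob is an assumption -- verify against the repo's files.
llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Crux-Qwen3_OpenThinking-4B-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,  # context length for this session
)

# llama-cpp-python applies the model's chat template automatically.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "If 5x - 3 = 2x + 12, find x."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])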

Key Features

  1. Open and Structured Thinking: Fine-tuned on Gemini trajectory data and sk1.1 traces, enabling it to model complex thought processes, open reasoning, and multi-step problem solving.

  2. Mathematical and Logical Reasoning: Trained with a focus on symbolic logic, arithmetic, and multi-step math problems, making it well suited to STEM education and technical domains.

  3. Code Understanding and Generation: Writes, interprets, and explains code snippets in Python, JavaScript, and other languages with clarity.

  4. Factual Precision and Reliability: Curated datasets and reasoning benchmarks reduce hallucinations, supporting trustworthy outputs for technical content.

  5. Instruction-Tuned for Clarity: Strong compliance with structured prompts, delivering step-by-step reasoning, formatted outputs (Markdown, JSON, tables), and clear explanations (see the structured-output sketch after this list).

  6. Multilingual Capabilities: Supports over 20 languages for educational and technical translations across diverse linguistic contexts.

  7. Optimized Efficiency: Built on the 4B-parameter Qwen3 base for resource-friendly deployment while maintaining strong reasoning performance.
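
The structured-output behavior in feature 5 can be exercised directly with the Transformers pipeline API. The following is a minimal sketch of prompting for JSON output; the system prompt and JSON shape are illustrative assumptions, not part of the model card.

from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="prithivMLmods/Crux-Qwen3_OpenThinking-4B",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "Answer only with a JSON object."},
    {"role": "user", "content": 'Solve 5x - 3 = 2x + 12 and return {"steps": [...], "x": <number>}.'},
]

# Chat-style pipelines accept a message list directly and return the full conversation.
result = pipe(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])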

Quickstart with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Crux-Qwen3_OpenThinking-4B"

# Load the weights and tokenizer; device_map="auto" places layers on GPU when available.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the thought process behind solving: If 5x - 3 = 2x + 12, find x."

messages = [
    {"role": "system", "content": "You are an open thinking tutor who explains reasoning clearly."},
    {"role": "user", "content": prompt}
]

# Render the chat with the model's template and append the assistant turn marker.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated tokens remain.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
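
Qwen3 checkpoints usually emit their reasoning inside a <think>...</think> span before the final answer. The snippet below is a sketch of splitting the two, assuming this fine-tune keeps the base tokenizer's </think> token (id 151668, as documented for upstream Qwen3); it reuses generated_ids from the quickstart above.

output_ids = generated_ids[0].tolist()

# Find the position just past the last </think> token; if the model
# produced no thinking span, treat the whole output as the answer.
try:
    index = len(output_ids) - output_ids[::-1].index(151668)  # 151668 = </think> (assumed)
except ValueError:
    index = 0

thinking = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip()
answer = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip()
print("Thinking:", thinking)
print("Answer:", answer)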

Intended Use

  • Advanced open and logical reasoning
  • Educational STEM tutoring and math problem solving
  • Code assistance, explanation, and debugging
  • Structured content generation (JSON, Markdown, tables)
  • Multilingual reasoning and translation
  • Lightweight, efficient deployment for reasoning tasks

Limitations

  • Less suited to highly creative or fictional content generation
  • Works best with clear, unambiguous prompts
  • Smaller context window than larger (14B+) models
  • Occasional factual inaccuracies in edge cases
