Crux-Qwen3_OpenThinking-4B
Crux-Qwen3_OpenThinking-4B is a fine-tune of Qwen3-4B optimized for advanced open thinking, mathematical reasoning, and logical problem solving. It is trained on sk1.1 traces (1,000 entries of Gemini thinking trajectories) combined with roughly 100k tokens of open math reasoning data, which makes it well suited to nuanced reasoning, educational tasks, and complex problem solving that calls for a clear, step-by-step thought process.
GGUF: https://huggingface.co/prithivMLmods/Crux-Qwen3_OpenThinking-4B-GGUF
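For lightweight local inference, the GGUF quantizations can be loaded with any llama.cpp-compatible runtime. The snippet below is a minimal sketch using the llama-cpp-python bindings; the quantization filename is a placeholder for whichever file you download from the GGUF repository.

# Minimal sketch: running a GGUF quant with llama-cpp-python (pip install llama-cpp-python).
# The model_path filename is a placeholder; use the quant you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Crux-Qwen3_OpenThinking-4B.Q4_K_M.gguf",  # hypothetical quant filename
    n_ctx=4096  # context length for this session
)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an open thinking tutor who explains reasoning clearly."},
        {"role": "user", "content": "If 5x - 3 = 2x + 12, find x."}
    ],
    max_tokens=512
)
print(out["choices"][0]["message"]["content"])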
Key Features
- Open and Structured Thinking: fine-tuned on Gemini trajectory data and sk1.1 traces, enabling it to model complex thought processes, open-ended reasoning, and multi-step problem solving.
- Mathematical and Logical Reasoning: trained with a focus on symbolic logic, arithmetic, and multi-step math problems, making it a good fit for STEM education and technical domains (a short worked example follows this list).
- Code Understanding and Generation: writes, interprets, and explains code snippets in Python, JavaScript, and other languages with clarity.
- Factual Precision and Reliability: curated datasets and reasoning benchmarks help minimize hallucinations, supporting trustworthy outputs for technical content.
- Instruction-Tuned for Clarity: follows structured prompts closely, delivering step-by-step reasoning, formatted outputs (Markdown, JSON, tables), and clear explanations.
- Multilingual Capabilities: supports over 20 languages for educational and technical translation across diverse linguistic contexts.
- Optimized Efficiency: builds on the 4B-parameter Qwen3 base for resource-friendly deployment while maintaining strong reasoning performance.
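As a small illustration of the multi-step style the math-reasoning feature targets, here is a worked solution (standard algebra) for the sample equation used in the Quickstart below:

$$
\begin{aligned}
5x - 3 &= 2x + 12 \\
5x - 2x &= 12 + 3 \\
3x &= 15 \\
x &= 5
\end{aligned}
$$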
Quickstart with Transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Crux-Qwen3_OpenThinking-4B"

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build a chat-style prompt with the model's chat template
prompt = "Explain the thought process behind solving: If 5x - 3 = 2x + 12, find x."
messages = [
    {"role": "system", "content": "You are an open thinking tutor who explains reasoning clearly."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens from the output before decoding
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
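If this checkpoint keeps Qwen3's stock chat template, you can also control the thinking behaviour and sampling explicitly. The sketch below assumes the template accepts the enable_thinking flag and uses the thinking-mode sampling values commonly recommended for the Qwen3 family; verify both against the tokenizer configuration shipped with the model.

# Sketch: explicit thinking toggle and sampling, assuming the Qwen3 chat template is unchanged.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # set False to suppress the <think> block, if the template supports it
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.6,  # assumed thinking-mode sampling values from the Qwen3 family
    top_p=0.95,
    top_k=20
)
response = tokenizer.decode(
    generated_ids[0][model_inputs.input_ids.shape[-1]:],
    skip_special_tokens=True
)
print(response)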
Intended Use
- Advanced open and logical reasoning
- Educational STEM tutoring and math problem solving
- Code assistance, explanation, and debugging
- Structured content generation (JSON, Markdown, tables); a JSON-output sketch follows this list
- Multilingual reasoning and translation
- Lightweight, efficient deployment for reasoning tasks
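For structured content generation, the same Transformers setup from the Quickstart applies; the sketch below asks for a small, purely illustrative JSON schema and parses the reply, falling back to raw text if the model does not comply.

# Sketch: structured JSON output, reusing `model` and `tokenizer` from the Quickstart.
# The schema (keys 'steps' and 'answer') is illustrative, not prescribed by the model.
import json

messages = [
    {"role": "system", "content": "Answer strictly as JSON with keys 'steps' (list of strings) and 'answer' (string)."},
    {"role": "user", "content": "If 5x - 3 = 2x + 12, find x."}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=256)
raw = tokenizer.decode(
    generated_ids[0][model_inputs.input_ids.shape[-1]:],
    skip_special_tokens=True
)
raw = raw.split("</think>")[-1].strip()  # drop a thinking block if the model emits one

try:
    parsed = json.loads(raw)
    print(parsed["answer"])
except (json.JSONDecodeError, KeyError):
    print(raw)  # fall back to the raw reply if it is not valid JSON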
Limitations
- Less suited for highly creative or fictional content generation
- May require clear, unambiguous prompts for best results
- Smaller capacity than larger (14B+) models, which can limit very long or highly intricate reasoning chains
- Occasional factual inaccuracies in edge cases
Base model: Qwen/Qwen3-4B-Base