
Canum-Qwen3_R1-4B-iCoT

Canum-Qwen3_R1-4B-iCoT is a precision-tuned variant of the Qwen3-4B architecture, explicitly aligned with internal chain-of-thought (iCoT) methodologies. Trained on the TAUR-dev/STEPS__r1_4d_eval__mini_all dataset, this model excels in long-form mathematical reasoning, progressive symbolic logic, and multi-stage problem decomposition, all within a compact 4B parameter footprint.

GGUF: https://huggingface.co/prithivMLmods/Canum-Qwen3_R1-4B-iCoT-Q4_K_M-GGUF

Key Features

  1. Internal Chain-of-Thought Reasoning (iCoT): Enables deeper logical progression through internally coherent reasoning steps, ideal for complex mathematical derivations and multivariable algebraic thinking.

  2. Dataset: TAUR-dev/STEPS__r1_4d_eval__mini_all: Fine-tuned on structured evaluation sequences to build resilience in multi-step problem solving and improve interpretability in math-focused tasks.

  3. Long Reasoning Paths in STEM Domains: Suited for long-chain logical flows in geometry, number theory, calculus, and symbolic manipulation, including proofs and multi-stage equation solving.

  4. Lightweight Yet Capable (4B): Maintains strong reasoning and instruction-following abilities at a lower computational cost than larger models, making it suitable for single-GPU deployments.

  5. Instruction-Following and Step-by-Step Alignment: Follows complex instructions with multi-turn dependencies and produces granular output that mirrors the internal steps used during reasoning.

  6. Technical Format Adaptability: Outputs answers in clean Markdown, LaTeX, JSON, or table formats for academic, development, and notebook-based use cases.
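The JSON output mentioned in feature 6 can be consumed programmatically. Below is a minimal sketch of parsing a fenced JSON block out of a completion; the `extract_json` helper and the fenced-`json` convention are illustrative assumptions, not part of the model's documented behavior, and a canned string stands in for real model output.

```python
import json
import re

FENCE = "`" * 3  # triple backtick, built programmatically

def extract_json(completion: str) -> dict:
    """Parse the first fenced JSON block in a model completion.

    Hypothetical helper: assumes the model was prompted to answer inside
    a fenced json block; it is not guaranteed to do so unprompted.
    """
    pattern = re.escape(FENCE + "json") + r"\s*(.*?)" + re.escape(FENCE)
    match = re.search(pattern, completion, flags=re.DOTALL)
    if match is None:
        raise ValueError("no fenced JSON block found in completion")
    return json.loads(match.group(1))

# Canned completion standing in for real model output
completion = f'The dimensions are:\n{FENCE}json\n{{"width": 6, "length": 18}}\n{FENCE}'
result = extract_json(completion)
print(result)  # {'width': 6, 'length': 18}
```

Prompting the model to "answer only inside a ```json block" makes this kind of extraction far more reliable than free-form parsing.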

Quickstart with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Canum-Qwen3_R1-4B-iCoT"

# Load the model with automatic dtype selection and device placement
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Use internal CoT to solve: A rectangle has a length that is 3 times its width. If the perimeter is 48 units, what are the dimensions?"

messages = [
    {"role": "system", "content": "You are a reasoning assistant trained to use internal chain-of-thought (iCoT) for multi-step mathematical problems."},
    {"role": "user", "content": prompt}
]

# Render the chat messages into a single prompt string using the model's chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated completion remains
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
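Qwen3-family models typically wrap their internal reasoning in `<think>...</think>` tags before the final answer. A minimal sketch for separating the two (the `split_reasoning` helper is an assumption for illustration, and the completion string is canned rather than real model output):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split a Qwen3-style completion into (reasoning, final_answer).

    Assumes the internal chain-of-thought is wrapped in <think>...</think>,
    as Qwen3-family models typically emit; returns empty reasoning otherwise.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match:
        reasoning = match.group(1).strip()
        answer = (text[:match.start()] + text[match.end():]).strip()
        return reasoning, answer
    return "", text.strip()

# Canned completion for the rectangle problem above:
# perimeter 2(w + 3w) = 8w = 48, so w = 6 and the length is 18.
completion = "<think>Perimeter 2(w + 3w) = 48, so w = 6.</think>The width is 6 and the length is 18."
reasoning, answer = split_reasoning(completion)
print(answer)  # The width is 6 and the length is 18.
```

Note that `skip_special_tokens=True` in the quickstart does not remove `<think>` tags if the tokenizer treats them as plain text, so explicit splitting like this can be useful when you want only the final answer.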

Intended Use

  • Internal chain-of-thought (iCoT) problem solving
  • Long-form symbolic math and algebraic derivations
  • Curriculum-based step-by-step math tutoring
  • Structured multi-turn reasoning in STEM domains
  • Output generation in technical formats (LaTeX, Markdown)
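For the multi-turn tutoring use cases above, conversation history is just a growing list in the same role/content format that `apply_chat_template` consumes. A minimal sketch of the bookkeeping (the `add_turn` helper and example strings are illustrative, not part of any official API):

```python
# Message format matching what tokenizer.apply_chat_template expects
messages = [
    {"role": "system", "content": "You are a step-by-step math tutor."},
    {"role": "user", "content": "Solve 2x + 5 = 17."},
]

def add_turn(history, assistant_reply, next_user_prompt):
    """Append the model's reply and the user's follow-up to the history."""
    return history + [
        {"role": "assistant", "content": assistant_reply},
        {"role": "user", "content": next_user_prompt},
    ]

messages = add_turn(messages, "x = 6, since 2x = 12.", "Now solve 3x - 4 = 11.")
print(len(messages))  # 4
```

Re-rendering the full list through the chat template on every turn is what lets the model condition its next reasoning step on the whole tutoring session.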

Limitations

  • May require well-structured prompts for optimal reasoning output
  • Smaller context length may limit extremely long multi-part problems
  • Focused on precision reasoning, not creative or subjective writing
  • Best used with prompt patterns that guide internal logical steps
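Since the model responds best to prompts that guide its internal steps, one possible prompt pattern is to enumerate explicit reasoning stages. The template wording and the `build_stepwise_prompt` helper below are illustrations under that assumption, not an official recommendation:

```python
def build_stepwise_prompt(problem: str,
                          steps=("Restate the problem",
                                 "List knowns and unknowns",
                                 "Derive step by step",
                                 "State the final answer")) -> str:
    """Build a prompt that walks the model through explicit reasoning stages.

    Illustrative pattern only; adjust the stage names to your task.
    """
    lines = [f"Solve the following problem using internal chain-of-thought:\n{problem}\n",
             "Structure your work as:"]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    return "\n".join(lines)

prompt = build_stepwise_prompt(
    "A rectangle's length is 3 times its width; the perimeter is 48. Find the dimensions."
)
print(prompt.splitlines()[-1])  # 4. State the final answer
```

Patterns like this trade a few prompt tokens for noticeably more structured, checkable output on multi-part problems.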

References

  1. TAUR-dev/STEPS__r1_4d_eval__mini_all – Dataset for structured math reasoning
  2. Internal CoT (iCoT) – Progressive logical strategy for complex problems
  3. AIMO-2 Math Benchmark – OpenMathReasoning
  4. YaRN: Efficient Context Extension of LLMs
Model size: 4.02B params (Safetensors, FP16)

Model tree for prithivMLmods/Canum-Qwen3_R1-4B-iCoT

  • Base model: Qwen/Qwen3-4B-Base
  • Finetuned from: Qwen/Qwen3-4B
  • Merges: 1 model
  • Quantizations: 3 models
