
Vulpecula-4B

Vulpecula-4B is fine-tuned on the SK1.1 reasoning traces, the same 1,000 DeepSeek thinking-trajectory entries, with additional fine-tuning on the Fine-Tome 100k and Open Math Reasoning datasets. This specialized 4B-parameter model is designed for enhanced mathematical reasoning, logical problem solving, and structured content generation, and is optimized for precision and step-by-step explanation.

GGUF: https://huggingface.co/prithivMLmods/Vulpecula-4B-GGUF
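
To run the GGUF build locally, the sketch below uses llama-cpp-python. It is an illustrative example rather than an official recipe; the quantization filename is a placeholder for whichever .gguf file you download from the GGUF repository.

from llama_cpp import Llama

# Hypothetical quantization filename; replace it with an actual file from the GGUF repository.
llm = Llama(
    model_path="Vulpecula-4B.Q4_K_M.gguf",
    n_ctx=4096,        # context length to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if available; set to 0 for CPU-only
)

output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a step-by-step math tutor."},
        {"role": "user", "content": "Solve the equation: 3x + 7 = 22. Show all steps."},
    ],
    max_tokens=512,
)
print(output["choices"][0]["message"]["content"])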

Key Features

  1. Advanced Mathematical and Logical Reasoning: Fine-tuned on DeepSeek trajectories and Open Math Reasoning to excel at symbolic logic, arithmetic, and complex multi-step math problems, making it well suited to STEM education and competitions.

  2. Trace-Based Fine-Tuning: Leverages SK1.1 trace dataset entries to model deep, interpretable reasoning paths, improving transparency and consistency in problem solving.

  3. Compact Code Understanding: Understands and generates efficient code snippets in Python, JavaScript, and other languages, supporting algorithmic explanations and lightweight coding tasks.

  4. Factual and Instructional Precision: Trained on curated, high-quality data with reasoning benchmarks to minimize hallucinations and strictly follow instructions for structured outputs such as Markdown, JSON, and tables (see the JSON example after the quickstart).

  5. Multilingual Capabilities: Supports over 20 languages for technical reasoning and translation, enhancing multilingual educational applications.

  6. Optimized for Resource-Constrained Environments: Balances reasoning capability with efficient resource use, making it suitable for deployment with limited compute.

Quickstart with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Vulpecula-4B"

# Load the model and tokenizer; torch_dtype="auto" and device_map="auto" pick a suitable precision and device
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve the equation: 3x + 7 = 22. Show all steps."

messages = [
    {"role": "system", "content": "You are a step-by-step math tutor."},
    {"role": "user", "content": prompt}
]

# Apply the model's chat template and tokenize the rendered prompt
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens so only the new completion is decoded
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
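
The structured-output capability highlighted in the key features can be exercised with a follow-up request. The sketch below reuses the model and tokenizer loaded in the quickstart; the prompt and expected schema are illustrative examples, not part of the official card.

import json

json_messages = [
    {"role": "system", "content": "Respond only with valid JSON."},
    {"role": "user", "content": "List the prime factors of 84 as JSON with keys 'number' and 'factors'."},
]

json_text = tokenizer.apply_chat_template(
    json_messages,
    tokenize=False,
    add_generation_prompt=True
)
json_inputs = tokenizer([json_text], return_tensors="pt").to(model.device)

json_ids = model.generate(**json_inputs, max_new_tokens=256)
json_ids = [out[len(inp):] for inp, out in zip(json_inputs.input_ids, json_ids)]
raw = tokenizer.batch_decode(json_ids, skip_special_tokens=True)[0]

# Reasoning models may prepend explanation, so fall back to the raw text if parsing fails.
try:
    print(json.loads(raw))
except json.JSONDecodeError:
    print(raw)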

Intended Use

  • Advanced mathematical and logical problem solving
  • Education-centric STEM tutoring and explanations
  • Code assistance and debugging for lightweight coding tasks
  • Structured content generation including JSON, Markdown, and tables
  • Multilingual reasoning and technical translation
  • Efficient deployment in low-resource settings with a focus on accuracy and stepwise reasoning
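
For the use cases listed above, a small wrapper keeps the chat-template and decoding boilerplate in one place. This is an illustrative sketch that assumes the model and tokenizer from the quickstart are already loaded; the example prompts are hypothetical.

def ask(prompt: str, system: str = "You are a precise, step-by-step assistant.", max_new_tokens: int = 512) -> str:
    """Apply the chat template, generate, and return only the newly generated text."""
    messages = [
        {"role": "system", "content": system},
        {"role": "user", "content": prompt},
    ]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    new_ids = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
    return tokenizer.batch_decode(new_ids, skip_special_tokens=True)[0]

# Example calls covering the intended-use bullets above (illustrative prompts):
print(ask("Prove that the sum of two even integers is even."))
print(ask("Explain the bug in this Python loop: for i in range(3): print(i); i += 2"))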

Limitations

  • Limited creativity in purely open-ended or fictional prompts
  • May face challenges with ambiguous or multi-intent queries
  • Smaller context window compared to larger 14B+ models
  • Possible factual errors in complex edge cases or adversarial inputs
