
Horologium-QwenC-1.5B

Horologium-QwenC-1.5B is a reasoning-focused language model trained extensively on both coding and mathematics problems using reinforcement learning (RL). It is designed to provide intelligent, step-by-step solutions to structured tasks that require logical precision, algorithmic thought, and symbolic computation.

Key Features

  1. Unified Reasoning for Code & Math
    Tailored to perform both code understanding/generation and mathematical problem-solving, with a consistent focus on clarity and logic.

  2. Reinforcement Learning Fine-Tuning
    Trained with reinforcement learning to improve reward-aligned behaviors in complex problem-solving scenarios—especially in debugging, proof validation, and computational tasks.

  3. Symbolic and Numerical Proficiency
    Capable of handling symbolic math, algebra, calculus, and discrete mathematics, while also excelling at code logic, syntax validation, and API usage.

  4. Compact yet Powerful
    At 1.5B parameters, this model provides strong reasoning capabilities while remaining efficient for edge devices and local deployment (see the quantized-loading sketch after this list).

  5. Structured Output
    Produces high-quality, structured results in Markdown, JSON, and annotated code blocks with contextual explanations.
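
For local or memory-constrained deployment, the model can also be loaded in 4-bit precision. This is a minimal sketch, assuming the optional bitsandbytes package and a CUDA-capable GPU are available; the quantization settings below are illustrative and not part of the official model card.

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Horologium-QwenC-1.5B"

# Illustrative 4-bit quantization config (requires bitsandbytes and a CUDA GPU)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype="float16"
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)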

Quickstart with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Horologium-QwenC-1.5B"

# Load the model with automatic dtype selection and device placement
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve this: A function f is defined as f(x) = x^2 + 2x + 1. Find f(5) and explain the steps. Then write equivalent Python code."

messages = [
    {"role": "system", "content": "You are an expert in math and coding. Solve problems step-by-step and explain clearly."},
    {"role": "user", "content": prompt}
]
# Apply the chat template and tokenize the formatted prompt
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate a response and strip the prompt tokens from the output
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
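
For interactive use, the same inputs can also be generated with token-by-token streaming. This is a minimal sketch using the Transformers TextStreamer utility; the generation settings are illustrative.

from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    **model_inputs,
    max_new_tokens=512,
    streamer=streamer
)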

Intended Use

  • Educational Tutoring Systems
    For students and learners exploring both programming and mathematics.

  • Coding & Algorithmic Interview Prep
    Useful for solving DSA questions, algorithmic challenges, and LeetCode-style problems.

  • Math & Code Co-Pilots
    Can be integrated into coding environments to explain both the logic and the formulas used in implementations (a minimal helper sketch follows this list).

  • Data Analysis & Scientific Computing
    Aids in writing and verifying data-centric scripts and computational logic.
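
As one way to wire the model into a co-pilot-style workflow, the hypothetical helper below wraps the Quickstart generation loop behind a single explain_code function. It assumes the model and tokenizer from the Quickstart are already loaded; the function name and prompts are illustrative, not part of the model's API.

def explain_code(snippet: str, max_new_tokens: int = 512) -> str:
    """Hypothetical helper: ask the model to explain a code snippet step by step."""
    messages = [
        {"role": "system", "content": "You are an expert in math and coding. Explain code clearly, step by step."},
        {"role": "user", "content": f"Explain what this code does and the math behind it:\n\n{snippet}"},
    ]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens before decoding, as in the Quickstart
    new_tokens = output_ids[0][inputs.input_ids.shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

print(explain_code("def f(x): return x**2 + 2*x + 1"))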

Limitations

  1. Verbose Output
    May produce solutions that are mathematically sound but over-explained or needlessly verbose.

  2. Complex Multistep Problems
    Performance may degrade slightly on very long multi-turn symbolic derivations or nested algorithms.

  3. Limited Real-Time Adaptation
    No awareness of real-time data or updates beyond training scope.

  4. Security & Logic Bugs
    Generated code may contain security flaws or logic errors; always audit it before real-world use.

Model Details

  • Base model: Qwen/Qwen2.5-1.5B
  • Model size: 1.78B parameters (Safetensors)
  • Tensor type: F32