# Horologium-QwenC-1.5B
Horologium-QwenC-1.5B is a reasoning-focused language model trained extensively on both coding and mathematics problems using reinforcement learning (RL). It is designed to provide intelligent, step-by-step solutions to structured tasks that require logical precision, algorithmic thought, and symbolic computation.
## Key Features

- **Unified Reasoning for Code & Math**
  Tailored to perform both code understanding/generation and mathematical problem-solving, with a consistent focus on clarity and logic.
- **Reinforcement Learning Fine-Tuning**
  Trained with reinforcement learning to improve reward-aligned behaviors in complex problem-solving scenarios, especially in debugging, proof validation, and computational tasks.
- **Symbolic and Numerical Proficiency**
  Capable of handling symbolic math, algebra, calculus, and discrete mathematics, while also excelling at code logic, syntax validation, and API usage.
- **Compact yet Powerful**
  At 1.5B parameters, this model provides strong reasoning capabilities while remaining efficient for edge devices and local deployment.
- **Structured Output**
  Produces high-quality, structured results in Markdown, JSON, and annotated code blocks with contextual explanations (see the sketch after this list).
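Structured output is typically elicited through the prompt rather than through a dedicated API. A minimal, illustrative sketch of a JSON-oriented chat message (the `steps`/`answer` key names are assumptions, not a schema the model enforces; generation itself follows the quickstart below):

```python
# Hypothetical prompt sketch: the 'steps'/'answer' keys are illustrative only.
messages = [
    {"role": "system", "content": "Answer in JSON with keys 'steps' (list of strings) and 'answer' (number)."},
    {"role": "user", "content": "Compute the derivative of f(x) = 3x^2 at x = 2."}
]
```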
## Quickstart with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Horologium-QwenC-1.5B"

# Load the model and tokenizer; device_map="auto" places weights on the available GPU/CPU
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve this: A function f is defined as f(x) = x^2 + 2x + 1. Find f(5) and explain the steps. Then write equivalent Python code."
messages = [
    {"role": "system", "content": "You are an expert in math and coding. Solve problems step-by-step and explain clearly."},
    {"role": "user", "content": prompt}
]

# Render the chat template and append the generation prompt
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated completion is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
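For interactive use, token-by-token streaming can be added with `transformers`' built-in `TextStreamer`. A minimal sketch, reusing `model`, `tokenizer`, and `model_inputs` from the quickstart above:

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated, skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    **model_inputs,
    max_new_tokens=512,
    streamer=streamer
)
```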
## Intended Use

- **Educational Tutoring Systems**
  For students and learners exploring both programming and mathematics.
- **Coding & Algorithmic Interview Prep**
  Useful for solving DSA questions, algorithmic challenges, and LeetCode-style problems.
- **Math & Code Co-Pilots**
  Integrated into coding environments to explain both the logic and the formulas used in implementations (see the sketch after this list).
- **Data Analysis & Scientific Computing**
  Aids in writing and verifying data-centric scripts and computational logic.
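As a concrete sketch of the co-pilot pattern, the quickstart pipeline can be wrapped behind a single call. The `explain` helper and its system prompt are hypothetical conveniences, not part of the model's API; it reuses `model` and `tokenizer` from the quickstart above:

```python
def explain(code_snippet: str, max_new_tokens: int = 512) -> str:
    """Hypothetical helper: ask the model to walk through the logic and math in a snippet."""
    messages = [
        {"role": "system", "content": "Explain the logic and any formulas in the given code, step by step."},
        {"role": "user", "content": code_snippet},
    ]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the completion, dropping the echoed prompt tokens
    completion = output_ids[0][inputs.input_ids.shape[1]:]
    return tokenizer.decode(completion, skip_special_tokens=True)

print(explain("def f(x):\n    return x**2 + 2*x + 1"))
```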
## Limitations

- **Scope of Accuracy**
  May occasionally produce mathematically sound but over-explained or verbose solutions.
- **Complex Multistep Problems**
  Performance may degrade slightly on very long multi-turn symbolic derivations or nested algorithms.
- **Limited Real-Time Adaptation**
  No awareness of real-time data or updates beyond the training scope.
- **Security & Logic Bugs**
  Always audit generated code or logic before real-world use.