---
library_name: transformers
tags:
- text-generation-inference
- code
- math
- R1
- distill
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
pipeline_tag: text-generation
---
![PPP.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/7xnCRDtZ0T3-oWnCrw-Rf.png)
# **Castula-U2-QwenRe-1.5B**
> **Castula-U2-QwenRe-1.5B** is a **compact, multilingual reasoning model** fine-tuned from **Qwen2.5-1.5B-Instruct**, excelling in **mathematical problem solving**, **logical reasoning**, **code generation**, and **general-purpose tasks**. Its step-by-step reasoning and bilingual fluency make it well suited to educational systems, coding assistants, and lightweight reasoning applications.
## **Key Features**
1. **Advanced Step-by-Step Reasoning**
Fine-tuned to produce intermediate steps for math, logic, and code problems, offering transparency and interpretability crucial for education, coding help, and diagnostics.
2. **Multilingual Proficiency (English + Chinese)**
Understands and solves problems in **both English and Simplified Chinese**, making it accessible in diverse learning and working environments (see the bilingual example after this list).
3. **Compact Yet Versatile (1.5B Parameters)**
Small enough for **low-resource environments**, yet capable of **math**, **logical puzzles**, **basic coding tasks**, and general comprehension, balancing performance and efficiency.
4. **Structured Computation & Problem Solving**
Mirrors human-like multi-step problem-solving, making solutions easy to follow, debug, or verify.
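
To illustrate the bilingual claim in point 2, the sketch below sends a Simplified Chinese math prompt through the same chat interface used in the Quickstart that follows. The Chinese prompt and system message are illustrative examples, not taken from the model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Castula-U2-QwenRe-1.5B"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Illustrative prompt: "A rectangle is 8 cm long and 5 cm wide. What is its area?"
messages = [
    {"role": "system", "content": "你是一位耐心的数学老师，请逐步讲解解题过程。"},
    {"role": "user", "content": "一个长方形长 8 厘米，宽 5 厘米，它的面积是多少？"}
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```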
## **Quickstart with Transformers**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Castula-U2-QwenRe-1.5B"

# Load the model with automatic dtype selection and device placement
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve: A train travels 180 km in 3 hours. What is its average speed?"
messages = [
    {"role": "system", "content": "You are a helpful tutor skilled in solving math, logic, and code problems with step-by-step explanations."},
    {"role": "user", "content": prompt}
]

# Format the conversation with the model's chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Strip the prompt tokens so only the newly generated answer remains
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
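
For interactive use such as tutoring, you may want tokens to appear as they are generated rather than all at once. The sketch below uses `TextStreamer`, a standard Transformers utility (not something the model card prescribes), and reuses the `model`, `tokenizer`, and `model_inputs` defined in the Quickstart above.

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are produced;
# skip_prompt=True hides the echoed input prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

_ = model.generate(
    **model_inputs,
    max_new_tokens=512,
    streamer=streamer
)
```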
## **Intended Use**
- **Math & Logic Tutoring**: Solves problems with explanations ideal for students and educators.
- **Code Assistant**: Helps with beginner-to-intermediate code generation and understanding.
- **Bilingual Apps**: Educational tools in **English** and **Chinese** for a global audience.
- **Lightweight Reasoning Systems**: Deployable in **mobile apps**, **browser extensions**, and **edge devices** (a quantized-loading sketch follows this list).
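
For the edge and low-resource scenarios above, memory is usually the binding constraint. The sketch below loads the model in 4-bit NF4 precision via `BitsAndBytesConfig`; this assumes the `bitsandbytes` package and a CUDA GPU are available, neither of which the model card specifies.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Castula-U2-QwenRe-1.5B"

# 4-bit NF4 quantization roughly quarters the memory footprint,
# making a 1.5B-parameter model practical on small GPUs.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```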
## **Limitations**
1. **Domain Specialization**:
Best in math, logic, and code. Performance may degrade in highly creative or abstract language tasks.
2. **Compact Scale**:
While efficient, it may underperform larger models on deeply complex reasoning or long-context tasks.
3. **Inherited Bias**:
May reflect biases from the base model (Qwen2.5-1.5B-Instruct); outputs should be verified for sensitive or critical uses.
4. **Prompt Sensitivity**:
Structured, clearly stated inputs produce significantly better outputs.
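
As a concrete illustration of point 4, the hypothetical helper below rewrites a bare question into a structured, step-by-step request; the template wording is an assumption for illustration, not part of the model card.

```python
def structure_prompt(question: str) -> list[dict]:
    """Wrap a bare question in an explicit, step-by-step request."""
    return [
        {"role": "system", "content": "You are a careful tutor. Reason step by step and state the final answer clearly."},
        {"role": "user", "content": f"Solve the following problem. Show each step.\n\nProblem: {question}"}
    ]

# A terse input like "180 km, 3 h, speed?" tends to underperform;
# the structured version states the task and expected format explicitly.
messages = structure_prompt("A train travels 180 km in 3 hours. What is its average speed?")
```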