---
license: apache-2.0
datasets:
- qingy2024/QwQ-Distill-Data
- AI-MO/NuminaMath-TIR
language:
- en
base_model:
- Qwen/Qwen2-1.5B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- general-purpose
- math
- code
---

# **OpenRHO-2B-Thinker**

> **OpenRHO-2B-Thinker** is a **general-purpose reasoning model** designed to bring stronger reasoning to **edge-deployed large language models (LLMs)** through **reinforcement learning (RL)**. Fine-tuned from **Qwen2-1.5B-Instruct** on the **QwQ distill** and **NuminaMath-TIR** datasets, it improves logical reasoning, structured problem-solving, and lightweight coding, making it well suited to **resource-constrained environments**.

## **Key Improvements**

1. **Advanced Reasoning via RL**:
   Supports symbolic reasoning, logical deduction, and structured problem-solving with high efficiency, optimized for real-time use on edge systems.

2. **Compact Coding Assistant**:
   Improved understanding of multiple programming paradigms and syntax across Python, JavaScript, C++, and more. Supports on-device code generation and debugging for embedded coding scenarios.

3. **Error Detection & Correction**:
   Identifies logic errors and malformed data structures (e.g., JSON, XML) and proposes corrections with lightweight inference and minimal latency; a JSON-repair sketch follows the Quickstart below.

4. **Instruction Following & Precision**:
   Tuned to follow multi-step instructions with improved contextual memory, producing consistent, precise responses across a variety of prompt types.

5. **Extended Context Compatibility**:
   Maintains support for **128K token inputs** and **8K token outputs** while remaining lean enough for real-time edge use with low power consumption.

## **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/OpenRHO-2B-Thinker"

# Load the checkpoint in its stored precision and place it on the
# available device(s) automatically.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "What is a generator function in Python? Explain with an example."
messages = [
    {"role": "system", "content": "You are a helpful and concise AI assistant skilled in programming and reasoning."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated answer remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
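
Note that `device_map="auto"` requires the `accelerate` package, and `torch_dtype="auto"` loads the weights in the precision stored in the checkpoint; for longer chains of reasoning, raise `max_new_tokens`.

The same pattern covers the error-correction capability described under **Key Improvements**. The minimal sketch below wraps generation in a small helper and asks the model to repair malformed JSON; the `chat` helper and the broken JSON input are illustrative examples, not part of the released model, and the snippet assumes the `model` and `tokenizer` objects loaded above.

```python
# Hypothetical convenience helper around the Quickstart objects
# (assumes `model` and `tokenizer` are already loaded as shown above).
def chat(messages, max_new_tokens=256):
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Keep only the newly generated tokens.
    output_ids = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
    return tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]

# Made-up malformed JSON for illustration: trailing comma and a typo in `true`.
broken_json = '{"name": "sensor-01", "readings": [1.2, 3.4,], "active": tru}'
print(chat([
    {"role": "system", "content": "You repair malformed JSON. Reply with valid JSON only."},
    {"role": "user", "content": f"Fix this JSON:\n{broken_json}"},
]))
```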

## **Intended Use**

1. **Edge LLM Applications**:
   Built for embedded AI agents, mobile inference, and low-latency chatbots on constrained hardware.

2. **General-Purpose Reasoning**:
   Effective for real-time logical reasoning, structured deduction, and lightweight problem-solving in everyday applications.

3. **Educational & Programming Tools**:
   Helpful for teaching programming and debugging in interactive, constrained environments (e.g., IoT, robotics kits).

4. **Lightweight Conversational Agents**:
   Enables responsive, intelligent interactions in edge-deployed customer service bots, support kiosks, and automation systems.

5. **Multilingual Mini-NLP Tasks**:
   Supports basic multilingual tasks such as translation, summarization, and information retrieval across multiple languages.

6. **Structured Format Generation**:
   Can generate JSON, Markdown, and tabular outputs for embedded data workflows; see the sketch after this list.
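
As a concrete illustration of structured-format generation, the sketch below requests JSON and validates it with Python's standard `json` module before use. It reuses the hypothetical `chat` helper from the Quickstart section; the report text and key names are made-up examples.

```python
import json

# Ask for machine-readable JSON and validate it before trusting it.
raw = chat([
    {"role": "system", "content": "Respond with valid JSON only, no prose."},
    {"role": "user", "content": 'Report: "Battery at 41%, temperature 63 C, fan off." '
                                "Return keys battery_pct, temp_c, fan_on."},
])

try:
    payload = json.loads(raw)   # fails if the model drifted away from pure JSON
except json.JSONDecodeError:
    payload = None              # a caller could retry with a stricter prompt
print(payload)
```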

## **Limitations**

1. **Hardware Requirements (Minimal but Non-Zero)**:
   While designed for edge use, optimal performance still benefits from mid-range NPUs, GPUs, or specialized accelerators.

2. **Knowledge Cutoff & Real-Time Awareness**:
   Cannot fetch live data or account for events after its training snapshot.

3. **Limited Creative Output**:
   Less effective for creative writing, abstract thinking, or tasks requiring deep imagination.

4. **Prompt Sensitivity**:
   Outputs vary with prompt clarity; structured prompts yield more predictable results, as in the example after this list.

5. **Inherited Biases**:
   May reflect biases from pretraining data. Use caution in sensitive or high-stakes domains.
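
To illustrate the prompt-sensitivity point, here is a small, entirely hypothetical example of the kind of structured prompt that tends to behave more predictably than a vague request.

```python
# Hypothetical prompts illustrating prompt sensitivity: the structured
# version pins down the task, constraints, and output format.
vague_prompt = "fix my code"

structured_prompt = (
    "Task: fix the bug in the Python function below.\n"
    "Constraints: keep the function signature unchanged.\n"
    "Output format: the corrected function in one code block, "
    "then a one-sentence explanation.\n\n"
    "def mean(xs):\n"
    "    return sum(xs) / len(xs) - 1\n"
)
```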