Prince-1 committed on
Commit b8618ef · verified · 1 Parent(s): 5352fa6

Build the rkllm format of the Osmosis-Structure-0.6B model

Files changed (3)
  1. .gitattributes +1 -0
  2. Osmosis-Structure-0.6B.rkllm +3 -0
  3. README.md +199 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Osmosis-Structure-0.6B.rkllm filter=lfs diff=lfs merge=lfs -text
Osmosis-Structure-0.6B.rkllm ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:97a523bd2bed74156d5e784b1e8ae50956b40b11d06c567dde0dcaa9d8525cfc
+size 1525874142
README.md ADDED
@@ -0,0 +1,199 @@
---
license: apache-2.0
library_name: rkllm
tags:
- rkllm
- rockchip
- rk3588
- slm
- Structured Outputs
base_model: osmosis-ai/Osmosis-Structure-0.6B
base_model_relation: quantized
---

# `Osmosis-Structure-0.6B`: Small Language Model for Structured Outputs

![huggingface badge](hfbadge.svg)

`Osmosis-Structure-0.6B` is a specialized small language model (SLM) designed to excel at structured output generation. Despite its compact 0.6B-parameter size, the model delivers remarkable performance at extracting structured information when paired with supported frameworks.

Our approach applies structured output constraints during training, forcing the model to focus only on the value for each key declared by the inference engine. This significantly improves the model's ability to produce well-formatted, structured responses across a variety of domains, particularly mathematical reasoning and problem-solving tasks.

<div align="center">

![Osmosis Structure Demo](output.gif)

</div>

## Results

We evaluate the effectiveness of osmosis-enhanced structured generation on challenging mathematical reasoning benchmarks. The results below show the dramatic performance improvements that structured outputs with osmosis enhancement achieve across different model families, using the same technique that powers `Osmosis-Structure-0.6B`.

### Math DAPO 17K Dataset

<div align="center">

| Model | Structured Output | Structured w/ Osmosis | Performance Gain |
|-------|:-----------------:|:---------------------:|:----------------:|
| Claude 4 Sonnet | 15.52% | **69.40%** | +347% |
| Claude 4 Opus | 15.28% | **69.91%** | +357% |
| GPT-4.1 | 10.53% | **70.03%** | +565% |
| OpenAI o3 | 91.14% | **94.05%** | +2.9% |

<em>Table 1: Performance on Math DAPO 17K.</em>

</div>

### AIME 1983-2024 Dataset

<div align="center">

| Model | Structured Output | Structured w/ Osmosis | Performance Gain |
|-------|:-----------------:|:---------------------:|:----------------:|
| Claude 4 Sonnet | 16.29% | **62.59%** | +284% |
| Claude 4 Opus | 22.94% | **65.06%** | +184% |
| GPT-4.1 | 2.79% | **39.66%** | +1322% |
| OpenAI o3 | 92.05% | **93.24%** | +1.3% |

<em>Table 2: Performance on AIME 1983-2024.</em>

</div>

> **Key Insight**: These results demonstrate that by allowing models to think freely and leverage test-time compute, we can increase performance while still maintaining the structured guarantee after the fact with an SLM. `Osmosis-Structure-0.6B` is specifically designed and optimized to maximize these benefits in a compact 0.6B-parameter model.

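The decoupling described above can be sketched as a two-stage pipeline. This is an illustrative workflow only, not an official API: both helper functions below are hypothetical stand-ins for an unconstrained call to a large reasoning model and a schema-constrained call to the SLM.

```python
# Two-stage pattern: reason freely first, impose structure afterwards.
# Both functions are hypothetical stand-ins, faked so the sketch runs
# without a model server.

def reason_freely(problem: str) -> str:
    # Stand-in for an unconstrained call to a large reasoning model.
    return f"Working through {problem} step by step... the result is 4."

def structure_after_the_fact(trace: str) -> dict:
    # Stand-in for a schema-constrained call to Osmosis-Structure-0.6B;
    # here we naively pull the last token of the trace as the answer.
    answer = trace.rstrip(".").split()[-1]
    return {"answer": answer}

trace = reason_freely("2x + 5 = 13")
print(structure_after_the_fact(trace))
```

The point of the pattern is that the large model's decoding is never constrained, so its reasoning quality is preserved; only the final extraction step is schema-bound.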
## Model Training

`Osmosis-Structure-0.6B` is built on top of `Qwen3-0.6B`. We first established a baseline format using 10 samples of randomly generated text and their JSON interpretations. We then applied reinforcement learning to approximately 500,000 examples of JSON-to-natural-language pairs, consisting of either reasoning traces with their final outputs or natural language reports with their expected structured formats.

We used [verl](https://github.com/volcengine/verl) as the training framework and [SGLang](https://github.com/sgl-project/sglang) as the rollout backend. To enable structured training, we modified parts of the verl codebase to allow a *per-sample schema* to be passed in with the training data.

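As an illustration of what a per-sample schema means in practice, a training example might carry the JSON schema its structured rollout must satisfy. The field names below are hypothetical, not verl's actual data format:

```python
import json

# Hypothetical shape of one training example: each sample bundles its own
# JSON schema, so the constrained rollout can differ per sample.
sample = {
    "prompt": "Let me work through this step by step: ... x = 4",
    "schema": {
        "type": "object",
        "properties": {"answer": {"type": "string"}},
        "required": ["answer"],
    },
    "expected": {"answer": "4"},
}

# During a constrained rollout the engine would enforce sample["schema"]
# on the completion; here we just check the target output conforms.
assert set(sample["schema"]["required"]) <= set(sample["expected"])
print(json.dumps(sample["expected"]))
```

Passing the schema alongside each sample is what lets a single training run mix heterogeneous extraction tasks.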
## Usage

### SGLang

We recommend serving the model with an inference engine such as SGLang. To launch a server:

`python3 -m sglang.launch_server --model-path osmosis-ai/Osmosis-Structure-0.6B --host 0.0.0.0 --api-key osmosis`

And to use the endpoint:

```python
import json
from openai import OpenAI

api_key = "osmosis"
api_base_url = "http://0.0.0.0:30000/v1"
client = OpenAI(
    api_key=api_key,
    base_url=api_base_url,
)

# Schema for extracting structured output from reasoning traces
json_schema = json.dumps(
    {
        "type": "object",
        "properties": {
            "answer": {"type": "string"}
        },
        "required": ["answer"]
    }
)

# You can also dump pydantic models to JSON schema as well

# Example reasoning trace input
reasoning_trace = """
Problem: Solve for x in the equation 2x + 5 = 13

Let me work through this step by step:

First, I need to isolate the term with x. I'll subtract 5 from both sides:
2x + 5 - 5 = 13 - 5
2x = 8

Next, I'll divide both sides by 2 to solve for x:
2x ÷ 2 = 8 ÷ 2
x = 4

Let me verify this answer by substituting back into the original equation:
2(4) + 5 = 8 + 5 = 13 ✓

Ok, which means I got the correct answer, and I'm confident about my answer.
"""

response = client.chat.completions.create(
    model="osmosis-ai/Osmosis-Structure-0.6B",
    messages=[
        {
            "role": "system",
            "content": f"You are a helpful assistant that understands and translates text to JSON format according to the following schema. {json_schema}",
        },
        {
            "role": "user",
            "content": reasoning_trace,
        },
    ],
    temperature=0,
    max_tokens=512,
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "reasoning_extraction", "schema": json.loads(json_schema)},
    },
)

print(json.dumps(json.loads(response.choices[0].message.content), indent=2))
```

### Ollama

You can also use Ollama as a local inference provider; here is a sample setup:

```python
from ollama import chat
from pydantic import BaseModel

class Answer(BaseModel):
    answer: int

reasoning_trace = """
Problem: Solve for x in the equation 2x + 5 = 13

Let me work through this step by step:

First, I need to isolate the term with x. I'll subtract 5 from both sides:
2x + 5 - 5 = 13 - 5
2x = 8

Next, I'll divide both sides by 2 to solve for x:
2x ÷ 2 = 8 ÷ 2
x = 4

Let me verify this answer by substituting back into the original equation:
2(4) + 5 = 8 + 5 = 13 ✓

Ok, which means I got the correct answer, and I'm confident about my answer.
"""

response = chat(
    messages=[
        {
            "role": "system",
            "content": f"You are a helpful assistant that understands and translates text to JSON format according to the following schema. {Answer.model_json_schema()}",
        },
        {
            "role": "user",
            "content": reasoning_trace,
        },
    ],
    model="Osmosis/Osmosis-Structure-0.6B",
    format=Answer.model_json_schema(),
)

answer = Answer.model_validate_json(response.message.content)
print(answer)
```