---
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<div class="container">
<h1>GRMR-V3-G1B</h1>
<p>GRMR-V3-G1B is a fine-tuned version of <a href="https://huggingface.co/unsloth/gemma-3-1b-pt">unsloth/gemma-3-1b-pt</a> specifically optimized for grammar correction tasks.</p>
<div class="important-note">
<p><strong>IMPORTANT:</strong> Please use the following sampler settings for optimal results:</p>
<pre><code>temperature = 0.7
frequency_penalty = 0.0
presence_penalty = 0.0
min_p = 0.01
top_p = 0.95
top_k = 40</code></pre>
</div>
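<p>If you run the model through <code>transformers</code>, these settings translate into generation arguments roughly as sketched below. This is a minimal illustration, not part of the model itself: <code>min_p</code> requires a reasonably recent <code>transformers</code> release, and <code>frequency_penalty</code>/<code>presence_penalty</code> have no direct <code>transformers</code> counterpart, though at 0.0 they are neutral anyway.</p>
<pre><code class="language-python">from transformers import GenerationConfig

# Sketch: the recommended sampler settings as a GenerationConfig.
generation_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    min_p=0.01,  # needs a recent transformers release
)</code></pre>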
<h2>Model description</h2>
<p>GRMR-V3-G1B is a grammar correction model built on Gemma 3 1B (the text-only base model, no vision). It has been fine-tuned on a large dataset of grammar correction examples to help improve text quality by fixing grammatical errors, punctuation, spelling, and other language issues.</p>
<p>The model uses a specialized chat template that structures inputs as "text" and outputs as "corrected" to maintain a clear distinction between original and corrected content.</p>
<h3>Model Details</h3>
<div class="model-meta">
<p><strong>Developed by:</strong> qingy2024</p>
<p><strong>Finetuned from model:</strong> <a href="https://huggingface.co/unsloth/gemma-3-1b-pt">unsloth/gemma-3-1b-pt</a></p>
<p><strong>Model type:</strong> Gemma 3</p>
<p><strong>Language(s):</strong> English</p>
<p><strong>License:</strong> apache-2.0</p>
<p><strong>Training Dataset:</strong> <a href="https://huggingface.co/datasets/qingy2024/grmr-v4-60k">qingy2024/grmr-v4-60k</a></p>
</div>
<hr>
<h2>Manual Testing Examples</h2>
<p>Here are a few examples of grammar corrections this model can handle:</p>
<table>
<thead>
<tr>
<th>Original Text</th>
<th>Corrected Text</th>
</tr>
</thead>
<tbody>
<tr>
<td>i dont know weather to bring a umbrella today</td>
<td>I don't know whether to bring an umbrella today.</td>
</tr>
<tr>
<td>she go to the store yesterday</td>
<td>She went to the store yesterday.</td>
</tr>
<tr>
<td>they is going to be late for the meeting</td>
<td>They are going to be late for the meeting.</td>
</tr>
<tr>
<td>the cat laying on the floor all day</td>
<td>The cat is laying on the floor all day.</td>
</tr>
</tbody>
</table>
<hr>
<h2>Training procedure</h2>
<p>The model was trained with full-parameter fine-tuning (not LoRA) on the GRMR-V4-60K dataset, using the Unsloth framework for efficient LLM training.</p>
<h3>Training hyperparameters</h3>
<ul>
<li><strong>Batch size:</strong> 8</li>
<li><strong>Gradient accumulation steps:</strong> 2</li>
<li><strong>Learning rate:</strong> 5e-5</li>
<li><strong>Epochs:</strong> 1</li>
<li><strong>Optimizer:</strong> AdamW (8-bit)</li>
<li><strong>Weight decay:</strong> 0.01</li>
<li><strong>LR scheduler:</strong> Cosine</li>
<li><strong>Warmup steps:</strong> 180</li>
<li><strong>Max sequence length:</strong> 16,384</li>
<li><strong>Training precision:</strong> Mixed precision (BF16 where available, FP16 otherwise)</li>
</ul>
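<p>For reference, these hyperparameters map onto a TRL <code>SFTConfig</code> roughly as follows. This is an illustrative sketch assuming a recent TRL release, not the exact training script that was used, and the <code>output_dir</code> is a placeholder:</p>
<pre><code class="language-python">from trl import SFTConfig

# Sketch: the listed hyperparameters as an SFTConfig (illustrative only).
training_args = SFTConfig(
    output_dir="outputs",           # placeholder
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,  # effective batch size of 16
    learning_rate=5e-5,
    num_train_epochs=1,
    optim="adamw_8bit",             # 8-bit AdamW via bitsandbytes
    weight_decay=0.01,
    lr_scheduler_type="cosine",
    warmup_steps=180,
    max_seq_length=16384,
    bf16=True,                      # or fp16=True where BF16 is unavailable
)</code></pre>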
<h2>Intended uses & limitations</h2>
<p>This model is designed for grammar correction tasks. It can be used to:</p>
<ul>
<li>Fix grammatical errors in written text</li>
<li>Correct punctuation</li>
<li>Address spelling mistakes</li>
<li>Improve sentence structure and clarity</li>
</ul>
<h3>Limitations</h3>
<ul>
<li>The model may struggle with highly technical or domain-specific content</li>
<li>It may not fully understand context-dependent grammar rules in all cases</li>
<li>Performance may vary for non-standard English or text with multiple errors</li>
</ul>
<h2>How to use</h2>
<p><code>llama.cpp</code> and projects built on it should be able to run this model like any other; a sketch with the Python bindings follows.</p>
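<p>As one concrete example, with the <code>llama-cpp-python</code> bindings and a GGUF conversion of this model, generation might look like the sketch below. The GGUF filename is hypothetical, and the prompt is built by hand to match the chat template documented later in this card:</p>
<pre><code class="language-python">from llama_cpp import Llama

# Hypothetical GGUF filename; substitute whatever quantization you use.
llm = Llama(model_path="GRMR-V3-G1B.Q8_0.gguf")

# Hand-built prompt following the model's chat template
# (llama.cpp prepends the BOS token itself).
prompt = (
    "<start_of_turn>text\n"
    "she go to the store yesterday<end_of_turn>\n"
    "<start_of_turn>corrected\n"
)

out = llm(prompt, max_tokens=128, temperature=0.7,
          top_p=0.95, top_k=40, min_p=0.01)
print(out["choices"][0]["text"])</code></pre>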
<p>For pure <code>transformers</code> code, you can use the following example:</p>
<pre><code class="language-python">from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model_name = "qingy2024/GRMR-V3-G1B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

text_to_correct = "i am going to the store tommorow and buy some thing for dinner"
messages = [
    {"role": "user", "content": text_to_correct}
]

# The chat template already prepends the BOS token, so disable the
# tokenizer's special tokens here to avoid a doubled BOS.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)

outputs = model.generate(
    **inputs,  # input_ids plus attention_mask
    max_new_tokens=512,
    temperature=0.1,  # NOTE: for best results, use the recommended temperature of 0.7
    do_sample=True
)
corrected_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(corrected_text)</code></pre>
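<p>The decode above includes the prompt as well as the correction. To print only the newly generated text, slice off the input tokens (a small follow-up reusing the variables from the example above):</p>
<pre><code class="language-python"># Keep only the newly generated tokens, i.e. the correction itself.
new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))</code></pre>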
<h3>Using with the Hugging Face pipeline</h3>
<pre><code class="language-python">from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="qingy2024/GRMR-V3-G1B",
    torch_dtype="auto",
    device_map="auto"
)

messages = [
    {"role": "user", "content": "i dont know weather to bring a umbrella today"}
]

result = pipe(
    messages,
    max_new_tokens=100,
    temperature=0.1,  # NOTE: for best results, use the recommended temperature of 0.7
    do_sample=True,
    return_full_text=False
)[0]["generated_text"]
print(result)</code></pre>
<p><em>Note: The Python examples above use <code>temperature=0.1</code> for reproducibility in quick tests. For optimal grammar-correction quality, please use the recommended sampler settings, especially <code>temperature=0.7</code>.</em></p>
<h2>Custom Chat Template</h2>
<p class="chat-template-info">The model uses a custom chat template with special formatting for grammar correction:</p>
<ul>
<li>User inputs are wrapped in a <code>text</code> turn, delimited by <code><start_of_turn>text</code> and <code><end_of_turn></code></li>
<li>Model outputs are wrapped in a <code>corrected</code> turn, delimited by <code><start_of_turn>corrected</code> and <code><end_of_turn></code></li>
</ul>
<p>The complete chat template is:</p>
<pre><code class="language-jinja">{{- bos_token }}
{#- Process messages with role mapping #}
{%- for message in messages %}
{%- if message['role'] == 'user' %}
{{- '<start_of_turn>text
'+ message['content'] | trim + '<end_of_turn>
' }}
{%- elif message['role'] == 'assistant' %}
{{- '<start_of_turn>corrected
'+ message['content'] | trim + '<end_of_turn>
' }}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<start_of_turn>corrected
' }}
{%- endif %}</code></pre>
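<p>Concretely, for a single user message with <code>add_generation_prompt=True</code>, the template renders a prompt like the following (shown as a Python string, with <code>"&lt;bos&gt;"</code> standing in for the tokenizer's actual BOS token):</p>
<pre><code class="language-python"># Rendered prompt for one user message ("<bos>" stands in for the BOS token).
prompt = (
    "<bos>"
    "<start_of_turn>text\n"
    "i dont know weather to bring a umbrella today<end_of_turn>\n"
    "<start_of_turn>corrected\n"
)</code></pre>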
<h2>Training Dataset</h2>
<p>The model was fine-tuned on the <a href="https://huggingface.co/datasets/qingy2024/grmr-v4-60k">qingy2024/grmr-v4-60k</a> dataset, which contains 60,000 examples of original text and their grammatically corrected versions.</p>
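<p>If you want to inspect the data yourself, it can be loaded with the <code>datasets</code> library; check <code>ds.features</code> for the exact column layout:</p>
<pre><code class="language-python">from datasets import load_dataset

# Load the grammar-correction pairs and peek at one example.
ds = load_dataset("qingy2024/grmr-v4-60k", split="train")
print(ds[0])</code></pre>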
<h2>Bias, Risks, and Limitations</h2>
<ul>
<li>The model may reflect biases present in the training data</li>
<li>It may not perform equally well across different writing styles or domains</li>
<li>The model might occasionally introduce errors or change the meaning of text</li>
<li>It focuses on grammatical correctness rather than stylistic improvements</li>
</ul>
<h2>Citations</h2>
<pre><code>@article{gemma_2025,
    title={Gemma 3},
    url={https://goo.gle/Gemma3Report},
    publisher={Kaggle},
    author={Gemma Team},
    year={2025}
}</code></pre>
<h2>Contact</h2>
<p>For questions or issues related to the model, please reach out via Hugging Face or by opening an issue in the repository.</p>
</div>
<style>
body {
  font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol";
  line-height: 1.6;
  margin: 0;
  padding: 0;
  background-color: #f8f9fa;
  color: #333;
}
.container {
  max-width: 1200px;
  margin: 10px auto;
  padding: 25px;
  background-color: #ffffff;
  border-radius: 8px;
  box-shadow: 0 4px 12px rgba(0, 0, 0, 0.08);
}
h1, h2, h3 {
  color: #0056b3; /* Primary Blue */
  margin-top: 1.5em;
  margin-bottom: 0.7em;
}
h1 {
  text-align: center;
  font-size: 2.2em;
  border-bottom: 2px solid #e0e0e0;
  padding-bottom: 0.5em;
  margin-top: 0;
}
h2 {
  font-size: 1.8em;
  border-bottom: 1px solid #e9ecef;
  padding-bottom: 0.3em;
}
h3 {
  font-size: 1.4em;
  color: #007bff; /* Lighter Blue for sub-headings */
}
p, li {
  font-size: 1em;
  color: #555;
}
a {
  color: #007bff;
  text-decoration: none;
}
a:hover {
  text-decoration: underline;
  color: #0056b3;
}
.important-note {
  background-color: #d0e8ff;
  border-left: 5px solid #007bff; /* Blue accent border */
  margin: 20px 0;
  padding: 0.05em 1.0em;
  border-radius: 3px;
  font-size: 0.9em;
}
.important-note strong {
  color: #0056b3;
  font-weight: 600;
}
table {
  width: 100%;
  border-collapse: collapse;
  margin: 20px 0;
  box-shadow: 0 2px 4px rgba(0, 0, 0, 0.05);
}
th, td {
  border: 1px solid #dee2e6;
  padding: 10px 12px;
  text-align: left;
  vertical-align: top;
}
th {
  background-color: #e9ecef; /* Light gray for headers */
  font-weight: 600;
  color: #212529;
}
td:first-child {
  /* font-style: italic; */
  color: #444;
}
pre {
  background-color: #f1f3f5;
  padding: 15px;
  border-radius: 5px;
  overflow-x: auto;
  border: 1px solid #ced4da;
  font-size: 0.9em;
}
code {
  font-family: "SFMono-Regular", Consolas, "Liberation Mono", Menlo, Courier, monospace;
  background-color: #e9ecef;
  padding: 0.2em 0.4em;
  border-radius: 3px;
  font-size: 0.9em;
}
pre code {
  background-color: transparent;
  padding: 0;
  border-radius: 0;
  font-size: 1em;
}
ul {
  padding-left: 20px;
}
li {
  margin-bottom: 0.5em;
}
hr {
  border: none;
  border-top: 1px solid #e0e0e0;
  margin: 30px 0;
}
.model-meta {
  background-color: #f8f9fa;
  padding: 15px;
  border-radius: 5px;
  margin-bottom: 20px;
  border: 1px solid #e9ecef;
}
.model-meta p { margin-bottom: 0.5em; }
.model-meta strong { color: #333; }
/* Specific styling for chat template explanation */
.chat-template-info span {
  font-weight: bold;
  color: #0056b3;
}
</style>