---
base_model: unsloth/Llama-3.2-3B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
GRMR-V3-L3B is a fine-tuned version of unsloth/Llama-3.2-3B specifically optimized for grammar correction tasks.
**IMPORTANT:** Please ensure you are using the following sampler settings for optimal results:

```
temperature = 0.7
frequency_penalty = 0.0
presence_penalty = 0.0
min_p = 0.01
top_p = 0.95
top_k = 40
```
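For example, with `transformers` these settings can be expressed as a `GenerationConfig` (a sketch; `min_p` requires a recent `transformers` release, and `frequency_penalty`/`presence_penalty` are OpenAI-style parameters with no direct `transformers` equivalent, so at 0.0 they are simply omitted):

```python
from transformers import GenerationConfig

# Recommended sampler settings expressed as a GenerationConfig.
# frequency_penalty / presence_penalty have no direct transformers
# equivalent; at 0.0 they are no-ops, so they are omitted here.
generation_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    min_p=0.01,  # requires a recent transformers version
    top_p=0.95,
    top_k=40,
)

# Later: outputs = model.generate(**inputs, generation_config=generation_config)
```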
GRMR-V3-L3B is a grammar correction model built on Meta's Llama 3.2 3B base model. It has been fine-tuned on a large dataset of grammar correction examples to help improve text quality by fixing grammatical errors, punctuation, spelling, and other language issues.
The model uses a specialized chat template that structures inputs as "text" and outputs as "corrected" to maintain a clear distinction between original and corrected content.
Here are a few examples of grammar corrections this model can handle:
| Original Text | Corrected Text |
|---|---|
| i dont know weather to bring a umbrella today | I don't know whether to bring an umbrella today. |
| she go to the store yesterday | She went to the store yesterday. |
| they is going to be late for the meeting | They are going to be late for the meeting. |
| the cat laying on the floor all day | The cat is lying on the floor all day. |
The model was trained with full-parameter fine-tuning (not LoRA) on the GRMR-V4-60K dataset, using the Unsloth framework for efficient LLM training.
This model is designed for grammar correction tasks: fixing grammatical errors, correcting punctuation and spelling, and cleaning up other common language issues.
llama.cpp and projects based on it should be able to run this model like any other Llama-based model.
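As one possibility, here is a minimal sketch using the `llama-cpp-python` bindings with the recommended sampler settings, assuming you have a GGUF conversion of the model (the filename below is hypothetical; the model card only states llama.cpp compatibility):

```python
from llama_cpp import Llama

# Hypothetical GGUF filename; use any GGUF conversion of qingy2024/GRMR-V3-L3B.
llm = Llama(model_path="GRMR-V3-L3B.Q8_0.gguf")

# Apply the recommended sampler settings.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "she go to the store yesterday"}],
    temperature=0.7,
    min_p=0.01,
    top_p=0.95,
    top_k=40,
    frequency_penalty=0.0,
    presence_penalty=0.0,
)
print(out["choices"][0]["message"]["content"])
```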
For pure `transformers` code, you can refer to the example below:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model_name = "qingy2024/GRMR-V3-L3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Text with grammar errors to correct
text_to_correct = "i am going to the store tommorow and buy some thing for dinner"

# Format as messages
messages = [
    {"role": "user", "content": text_to_correct}
]

# Apply the custom chat template
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Tokenize and generate
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    inputs["input_ids"],
    max_new_tokens=512,
    temperature=0.1,  # NOTE: For best results, use the recommended temperature of 0.7
    do_sample=True,
)

# Decode and print the corrected text
corrected_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(corrected_text)
```
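Note that `outputs[0]` contains the prompt tokens as well as the generated ones, so the decoded string includes the input. To print only the correction, you can slice off the prompt:

```python
# Keep only the tokens generated after the prompt.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```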
Alternatively, you can use the high-level `pipeline` API:

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="qingy2024/GRMR-V3-L3B",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "i dont know weather to bring a umbrella today"}
]

result = pipe(
    messages,
    max_new_tokens=100,
    temperature=0.1,  # NOTE: For best results, use the recommended temperature of 0.7
    do_sample=True,
    return_full_text=False,
)[0]["generated_text"]
print(result)
```
Note: The Python examples above use `temperature=0.1` for reproducibility in quick tests. For optimal grammar correction quality, please use the recommended sampler settings, especially `temperature=0.7`.
The model uses a custom chat template with special formatting for grammar correction:

- the input text is wrapped in `<|start_header_id|>text<|end_header_id|>` headers
- the corrected output is wrapped in `<|start_header_id|>corrected<|end_header_id|>` headers
- each turn is terminated with `<|eot_id|>` tokens
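To see exactly how a conversation renders under this template, you can print it from the tokenizer (this relies only on the standard `apply_chat_template` API; the template itself ships with the tokenizer):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("qingy2024/GRMR-V3-L3B")

# Render a one-turn conversation as a raw string to inspect the
# text/corrected headers and <|eot_id|> terminators.
messages = [{"role": "user", "content": "they is going to be late for the meeting"}]
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```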
The model was fine-tuned on the qingy2024/grmr-v4-60k dataset, which contains 60,000 examples of original text and their grammatically corrected versions.
For questions or issues related to the model, please reach out via Hugging Face or by creating an issue in the repository.