Uploaded model

  • Developed by: maanaar
  • License: apache-2.0
  • Finetuned from model: ALLaM-AI/ALLaM-7B-Instruct-preview

This ALLaM model was trained 2x faster with Unsloth and Hugging Face's TRL library.

maanaar/allam7b-sft-arabic-grammar-eerab

A fine-tuned ALLaM-7B model for Arabic question answering and grammar (i'rab) tasks.

Usage

from unsloth import FastLanguageModel
import torch

model_name = "maanaar/allam7b-sft-arabic-grammar-eerab"

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=model_name,
    max_seq_length=2048,
    dtype=None,         # auto-detect
    load_in_4bit=True,  # saves GPU memory
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode

messages = [
    # "What is the i'rab (grammatical analysis) of the sentence
    # 'There is no god but Allah'?"
    {"role": "user", "content": "ما اعراب جملة لا اله الا الله"}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

outputs = model.generate(
    **tokenizer(text, return_tensors="pt").to(model.device),
    max_new_tokens=256,
    do_sample=True,   # needed for temperature/top_p/top_k to take effect
    temperature=0.7,
    top_p=0.8,
    top_k=20,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
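For intuition, the sampling arguments passed to model.generate above interact as follows: temperature rescales the logits before softmax, top_k keeps only the 20 most likely tokens, and top_p (nucleus sampling) further trims to the smallest set whose cumulative probability reaches 0.8. The toy sketch below illustrates that filtering in plain Python (no model required); it is an illustration of the idea, not the exact implementation inside transformers:

```python
import math

def sample_filter(logits, temperature=0.7, top_k=20, top_p=0.8):
    """Toy illustration of the knobs passed to model.generate():
    temperature rescales logits, top_k keeps the k largest, and top_p
    keeps the smallest set whose probabilities sum to >= p."""
    # Temperature: divide logits before softmax; values < 1 sharpen the distribution.
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Rank token indices by probability, highest first.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    # top_k: keep at most k candidates.
    order = order[:top_k]
    # top_p: keep the smallest prefix whose cumulative mass reaches p.
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    return kept

# With a peaked distribution, only the top two tokens survive the filters.
print(sample_filter([2.0, 1.0, 0.5, -1.0], temperature=0.7, top_k=2, top_p=0.8))
```

Lower temperature and tighter top_p/top_k make outputs more deterministic, which tends to suit grammar-analysis answers where there is usually one correct parse.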
