Model Card for mohammed-orabi2/qwen-poetry-lora2

Model ID: mohammed-orabi2/qwen-poetry-lora2


Model Details

Model Description: This is a LoRA fine-tuned version of the Qwen/Qwen3-1.7B model, specifically trained to generate Arabic poetic responses in a conversational format. It was trained on a dataset of 1,000 synthetic Arabic poetry dialogues, each containing a user query and a poetic response.

Developed by: Mohammed Orabi

Shared by: mohammed-orabi2

Model type: Causal Language Model with LoRA adaptation

Language(s) (NLP): Arabic

License: Apache 2.0 (inherits from Qwen3-1.7B)

Finetuned from model: Qwen/Qwen3-1.7B

Model Sources

Repository: https://huggingface.co/mohammed-orabi2/qwen-poetry-lora2


Uses

Direct Use: This model can be used for generating Arabic poetry in response to user queries, particularly in cultural, educational, or creative chatbot applications.

Downstream Use:

  • Poetry recommendation systems
  • Arabic literature generation tools
  • Creative writing assistants

Out-of-Scope Use:

  • Non-Arabic generation tasks
  • Factual or knowledge-based QA tasks
  • Sensitive or safety-critical environments

Bias, Risks, and Limitations

The model was fine-tuned on synthetic poetic data and may:

  • Favor specific poetic structures
  • Fail on factual, political, or philosophical prompts
  • Generate romantic or metaphorical content that could be misinterpreted in serious contexts

Users should avoid relying on this model for objective or critical outputs.


Recommendations

Users (both direct and downstream) should be aware of this model's creative, poetic intent. For factual content, use a general-purpose LLM instead, and review outputs manually before publishing or broadcasting them.


How to Get Started with the Model

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the base model and attach the LoRA adapter from this repository.
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B", device_map="auto", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base_model, "mohammed-orabi2/qwen-poetry-lora2")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")

# Format the request with the Qwen chat template and generate a poetic reply.
prompt = "اكتب لي بيت شعر عن النجاح."  # "Write me a verse of poetry about success."
chat = [{"role": "user", "content": prompt}]
formatted_prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
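
Optionally, the adapter can be merged into the base weights so the model can be served without a PEFT dependency. A minimal sketch, assuming the snippet above has already been run (the output directory name is illustrative):

# Merge the LoRA weights into the base model and save a standalone checkpoint.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("qwen-poetry-merged")  # illustrative path
tokenizer.save_pretrained("qwen-poetry-merged")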

Training Details

Training Data: 1,000 synthetic Arabic poetic dialogues (prompt + poetic response) generated programmatically.
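
For illustration, one training dialogue might be stored in a chat-message layout along the following lines (a hypothetical sketch: the field names and the placeholder verse are assumptions, not the published dataset schema):

# Hypothetical shape of a single training record (illustrative only).
example = {
    "messages": [
        {"role": "user", "content": "اكتب لي بيت شعر عن الصداقة."},  # user query: "Write me a verse of poetry about friendship."
        {"role": "assistant", "content": "<بيت شعري عن الصداقة>"},  # poetic response (placeholder)
    ]
}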

Preprocessing:

  • Applied the Qwen chat template
  • Tokenized using the Qwen3-1.7B tokenizer with padding/truncation (see the sketch below)
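
A minimal sketch of this preprocessing, assuming records use the chat-message layout illustrated above (the function and variable names are illustrative, not the original training script):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")

def preprocess(record):
    # Render the dialogue with the Qwen chat template, then tokenize with
    # padding/truncation to the 1024-token maximum length used in training.
    text = tokenizer.apply_chat_template(record["messages"], tokenize=False)
    return tokenizer(text, max_length=1024, padding="max_length", truncation=True)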

Training Hyperparameters:

  • Epochs: 5
  • Batch size: 2
  • Max length: 1024
  • Learning rate: 2e-4
  • LoRA config: r=8, alpha=16, dropout=0.05, target modules: ["q_proj", "v_proj"] (see the configuration sketch below)
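
A configuration sketch matching the hyperparameters above (the original training script is not published; the argument names follow the PEFT and Transformers APIs, the output directory is illustrative, and the mixed-precision flag is an assumption):

from transformers import TrainingArguments
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(base_model, lora_config)  # base_model: Qwen/Qwen3-1.7B loaded as above
peft_model.print_trainable_parameters()               # only the LoRA weights are trainable

training_args = TrainingArguments(
    output_dir="qwen-poetry-lora2",      # illustrative
    num_train_epochs=5,
    per_device_train_batch_size=2,
    learning_rate=2e-4,
    fp16=True,                           # assumption: mixed precision on the L4 GPU
)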

Speeds, Sizes, Times:

  • Training time: ~24 minutes on L4 GPU
  • Model size: LoRA adapter, ~100 MB

Evaluation

Testing Data: 50 reserved samples from the poetic dataset

Factors:

  • Response fluency
  • Arabic poetic structure
  • Topical relevance

Metrics:

  • Manual review (subjective)
  • BLEU/ROUGE: not applicable to open-ended poetic generation

Results:

  • 90% of generated responses respected rhyme/meter and matched the prompt topic

Summary

Output behavior is consistent with the training intent; the model performs well within its poetic use-case boundaries.


Environmental Impact

  • Hardware Type: NVIDIA L4
  • Hours used: ~0.4 hours (24 minutes)
  • Cloud Provider: Google Colab
  • Compute Region: US (GCP default)
  • Carbon Emitted: estimated ~0.2 kg CO2e


Technical Specifications

Model Architecture and Objective: Transformer decoder (CausalLM) + LoRA injection

Compute Infrastructure: Google Colab

Hardware: NVIDIA L4 (~24 minutes of training)

Software:

  • Transformers 4.x
  • PEFT 0.15.2
  • Accelerate 0.25+

Citation

BibTeX:

@misc{qwenpoetry2025,
  author = {Mohammed Orabi},
  title = {Qwen Arabic Poetry LoRA},
  year = {2025},
  howpublished = {\url{https://huggingface.co/mohammed-orabi2/qwen-poetry-lora2}}
}

APA: Mohammed Orabi. (2025). Qwen Arabic Poetry LoRA [Model]. Hugging Face. https://huggingface.co/mohammed-orabi2/qwen-poetry-lora2


Glossary

  • LoRA: Low-Rank Adaptation, a method for efficient model fine-tuning
  • CausalLM: Causal Language Modeling, predicts the next token in a sequence

More Information

For support or feedback, please open an issue on the Hugging Face repo or contact the author via their Hugging Face profile.

Model Card Authors

Mohammed Orabi

Model Card Contact

https://huggingface.co/mohammed-orabi2


Framework versions

  • Transformers: 4.x
  • PEFT: 0.15.2
  • Datasets: latest
  • Accelerate: 0.25+