🤖 إدراكي (Edraky) - Multilingual Educational AI Model 🇪🇬
Edraky is a fine-tuned multilingual model built on Qwen2-1.5B-Instruct, designed to provide educational support for Arabic-speaking students, with a particular focus on Egypt's 3rd preparatory curriculum. It accepts Arabic, English, and Hebrew input for flexible use in multilingual environments.
🧠 About Edraky
Edraky is part of the "إدراكي" educational initiative to democratize access to AI-powered tools for students in Egypt and the broader Arab world. By fine-tuning the powerful Qwen2 base model, Edraky delivers context-aware, curriculum-aligned, and interactive responses that help learners understand core subjects such as:
- اللغة العربية (Arabic Language)
- الدراسات الاجتماعية (Social Studies)
- العلوم (Science)
- الرياضيات (Math)
- حاسب آلي (Computer)
- اللغة الإنجليزية (English)
🚀 Key Features
- 🤖 Text Generation & Q&A: Answers student questions in an educational, child-safe manner.
- 📖 Curriculum Support: Focused especially on the 3rd preparatory grade in Egypt.
- 🌍 Multilingual Input: Supports Arabic, English, and Hebrew.
- 🔀 Open-Source: Available for research, personal, or educational use.
- 📚 Trained on curated educational prompts for logic, language understanding, and curriculum-based queries.
🧪 Training & Fine-Tuning
Base model: Qwen/Qwen2-1.5B-Instruct
Training Data Sources:
- fka/awesome-chatgpt-prompts
- gsm8k-rerun/Qwen_Qwen2.5-1.5B-Instruct
- Additional data created from Arabic curriculum-style questions and student textbooks
Training Methodology:
- Supervised fine-tuning
- Prompt-optimized inputs
- Tokenized using the Hugging Face tokenizer compatible with Qwen2 models (an illustrative fine-tuning sketch follows below)
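To illustrate this setup, here is a minimal supervised fine-tuning sketch. It is not the exact Edraky training recipe: the file name edraky_train.jsonl, the text field, and the hyperparameters are placeholder assumptions; only the base checkpoint Qwen/Qwen2-1.5B-Instruct comes from this card.

# Minimal supervised fine-tuning sketch (illustrative, not the exact Edraky recipe).
# Assumes a JSON-lines file "edraky_train.jsonl" whose "text" field already contains
# prompt/answer pairs formatted with the Qwen2 chat template.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "Qwen/Qwen2-1.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

dataset = load_dataset("json", data_files="edraky_train.jsonl", split="train")

def tokenize(batch):
    # Truncate long samples so every example fits the training context window.
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# mlm=False makes the collator copy input_ids into labels for causal-LM training.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="edraky-sft", per_device_train_batch_size=2,
                           num_train_epochs=3, learning_rate=2e-5, bf16=True),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()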
🔍 Evaluation
Model was evaluated on:
- ✔️ Accuracy for subject-specific answers
- ✔️ Perplexity for fluency and coherence
- ✔️ WER (Word Error Rate) for language understanding
Full benchmark evaluation is still in progress; results will be published soon.
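Perplexity is obtained by exponentiating the model's average cross-entropy loss on held-out text. The snippet below is a rough sketch of that calculation; the sample sentence is illustrative and not drawn from the evaluation set.

# Rough perplexity sketch: exp(mean cross-entropy loss) on a single held-out passage.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Edraky/Edraky")
tokenizer = AutoTokenizer.from_pretrained("Edraky/Edraky")

text = "التعليم حق لكل طالب في مصر."  # illustrative sentence, not from the benchmark set
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    # Passing input_ids as labels makes the model return the mean token loss.
    loss = model(**enc, labels=enc["input_ids"]).loss
print("perplexity:", math.exp(loss.item()))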
🧑‍💻 Example Usage
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("Edraky/Edraky")
tokenizer = AutoTokenizer.from_pretrained("Edraky/Edraky")

# Arabic prompt: "Briefly explain the Urabi revolution"
prompt = "اشرح الثورة العرابية بإيجاز"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate up to 150 new tokens and print the decoded answer
output = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(output[0], skip_special_tokens=True))
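Because the base model is an instruction-tuned chat model, results are usually better when the prompt is wrapped in the Qwen2 chat template. The snippet below is a hedged sketch of that pattern and reuses the model and tokenizer loaded above; the Arabic system message is only an illustrative placeholder.

# Chat-template sketch (reuses the model/tokenizer loaded above; system prompt is illustrative).
messages = [
    {"role": "system", "content": "أنت مساعد تعليمي لطلاب الصف الثالث الإعدادي."},
    {"role": "user", "content": "اشرح الثورة العرابية بإيجاز"},
]
# apply_chat_template wraps the conversation in Qwen2's special chat tokens.
chat_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
chat_output = model.generate(chat_inputs, max_new_tokens=150)
# Strip the prompt tokens before decoding so only the model's reply is printed.
print(tokenizer.decode(chat_output[0][chat_inputs.shape[-1]:], skip_special_tokens=True))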
🧑📓 Intended Use
- 💬 Classroom support AI assistant
- ✍️ Writing and summarization in Arabic
- ❓ Question answering for exam preparation
- 🔍 Fact recall for historical, literary, and social studies content
❌ Not Intended For:
- ❌ Political or religious fatwa content
- ❌ Personal decision-making
- ❌ Generating offensive or misleading answers
🌱 Future Plans
- ✅ Add voice input/output via Whisper integration
- ✅ Online quiz companion
- ✅ Add visual aids (diagrams, maps)
- ✅ Full web platform integration (see edraky.rf.gd)
📢 Maintainers
Developed by: Edraky AI Team
🌐 Website: https://edraky.rf.gd
📧 Contact: [email protected]
📜 Citation
@misc{edraky2025,
  title={Edraky: Multilingual Educational AI Model},
  author={Edraky Team},
  year={2025},
  howpublished={\url{https://huggingface.co/Edraky/Edraky}}
}
This project was created to support education in Egypt using artificial intelligence. We hope it proves useful for all students and teachers 🌟