ALLaM-Thinking-GGUF
Description
ALLaM-Thinking-GGUF is an Arabic language model optimized for step-by-step reasoning and mathematical problem-solving. The model is quantized (q4_k_m) and packaged in the GGUF format for efficient inference on consumer hardware.
Model Details
- Model Name: ALLaM-Thinking-GGUF
- Author: almaghrabima
- Languages: Arabic (primary)
- Format: GGUF (GPU/CPU inference optimized)
- Quantization: q4_k_m
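The q4_k_m entry above refers to llama.cpp's k-quant family. As a simplified illustration of the underlying idea (block-wise quantization: a shared scale per block of weights plus low-bit integers), here is a toy sketch — the real q4_k_m layout uses 256-element super-blocks with per-sub-block scales and minimums, so this is illustrative only:

```python
def quantize_block_q4(values):
    """Quantize one block of floats to signed 4-bit integers with a shared scale.

    Illustrative only: real q4_k_m uses 256-element super-blocks with
    per-sub-block scales and minimums.
    """
    scale = max(abs(v) for v in values) / 7.0 or 1.0  # map the largest value to +/-7
    quants = [max(-8, min(7, round(v / scale))) for v in values]
    return scale, quants

def dequantize_block_q4(scale, quants):
    """Recover approximate floats from the scale and 4-bit integers."""
    return [q * scale for q in quants]

weights = [0.12, -0.5, 0.33, 0.7, -0.21, 0.06, -0.64, 0.48]
scale, quants = quantize_block_q4(weights)
restored = dequantize_block_q4(scale, quants)
print(quants)  # each entry fits in 4 bits, so 8 weights pack into 4 bytes
```

The rounding error per weight is bounded by half a quantization step, which is why 4-bit models stay usable while shrinking weights roughly 4x versus f16.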
Features
- Specialized in step-by-step reasoning for mathematical problems
- Optimized for Arabic language comprehension and generation
- Efficient inference through GGUF quantization
- Suitable for educational applications and mathematical assistance
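Since the GGUF container is central here, a small sketch of its fixed header layout may help: magic bytes, version, tensor count, and metadata key-value count, all little-endian per the GGUF specification. The counts below are made-up example values, not ALLaM's real ones:

```python
import struct

def parse_gguf_header(data: bytes):
    """Parse the fixed-size GGUF header: magic, u32 version, u64 tensor count, u64 KV count."""
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}

# Build a minimal fake header for demonstration (not a usable model file).
fake = struct.pack("<4sIQQ", b"GGUF", 3, 291, 24)
print(parse_gguf_header(fake))  # → {'version': 3, 'tensors': 291, 'metadata_kv': 24}
```

The metadata KV section that follows the header is what lets llama.cpp discover the architecture, tokenizer, and quantization type without any sidecar files.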
Installation
```shell
# Clone or download the repository (Git LFS is required for the model weights)
git clone https://huggingface.co/almaghrabima/ALLaM-Thinking-GGUF

# Navigate to the downloaded directory
cd ALLaM-Thinking-GGUF
```
Usage
The model can be used with llama.cpp for local inference:

```shell
./build/bin/llama-cli -m ./ALLaM-Thinking-q4_k_m.gguf -cnv -p "Your prompt in Arabic"
```
Example
```shell
./build/bin/llama-cli -m ./ALLaM-Thinking-q4_k_m.gguf -cnv -p "في فريق مكون من 15 لاعباً، 40% منهم يسجلون الأهداف. إذا سجل كل لاعب من اللاعبين الذين يسجلون الأهداف في المتوسط 5 أهداف خلال الموسم، فكم عدد الأهداف الكلي التي سجلها اللاعبون الذين يسجلون الأهداف؟"
```

(In English: "In a team of 15 players, 40% of them score goals. If each goal-scoring player scores an average of 5 goals during the season, what is the total number of goals scored by the goal-scoring players?")
Sample Output
```
[INST] في فريق مكون من 15 لاعباً، 40% منهم يسجلون الأهداف. إذا سجل كل لاعب من اللاعبين الذين يسجلون الأهداف في المتوسط 5 أهداف خلال الموسم، فكم عدد الأهداف الكلي التي سجلها اللاعبون الذين يسجلون الأهداف؟ [/INST]

لحساب عدد الأهداف الكلي التي سجلها اللاعبون الذين يسجلون الأهداف في الفريق المكون من 15 لاعباً، نبدأ بحساب عدد اللاعبين الذين يسجلون الأهداف.

عدد اللاعبين الذين يسجلون الأهداف = إجمالي عدد اللاعبين * نسبة اللاعبين الذين يسجلون الأهداف = 15 * 0.40 = 6 لاعبين

ثم نضرب عدد اللاعبين الذين يسجلون الأهداف في متوسط عدد الأهداف التي يسجلها كل لاعب منهم خلال الموسم.

الأهداف الكلي المسجلة = عدد اللاعبين الذين يسجلون الأهداف * متوسط عدد الأهداف لكل لاعب = 6 * 5 = 30 هدفاً

لذا، سجل اللاعبون الذين يسجلون الأهداف إجمالي 30 هدفاً خلال الموسم.
```

(In English: "To find the total number of goals scored by the goal-scoring players in the 15-player team, we start by computing how many players score goals. Goal-scoring players = total players × scoring fraction = 15 × 0.40 = 6 players. We then multiply that count by the average number of goals each of them scores during the season: total goals = 6 × 5 = 30. So the goal-scoring players scored a total of 30 goals over the season.")
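The arithmetic in the sample output can be checked independently; a quick sketch of the same two steps:

```python
total_players = 15
scoring_percent = 40
goals_per_scorer = 5

# Step 1: 40% of 15 players score goals -> 6 scorers
scorers = total_players * scoring_percent // 100

# Step 2: 6 scorers averaging 5 goals each -> 30 goals in total
total_goals = scorers * goals_per_scorer

print(scorers, total_goals)  # → 6 30
```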
Advanced Options
You can customize inference parameters with additional options:
```shell
./build/bin/llama-cli -m ./ALLaM-Thinking-q4_k_m.gguf -cnv -p "Your prompt" \
  --ctx-size 2048 \
  --temp 0.7 \
  --top-p 0.9 \
  --repeat-penalty 1.1
```
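As a rough intuition for what these knobs do, here is a toy re-implementation of temperature scaling, top-p (nucleus) filtering, and repeat penalty over a hand-made logit table. This is a sketch of the general technique, not llama.cpp's actual sampler code, and the vocabulary and logits are invented:

```python
import math

def sample_filter(logits, temperature=0.7, top_p=0.9, repeat_penalty=1.1, seen=()):
    """Return the filtered, renormalized next-token distribution."""
    # Repeat penalty: push down logits of tokens already generated.
    logits = {t: ((l / repeat_penalty) if l > 0 else (l * repeat_penalty)) if t in seen else l
              for t, l in logits.items()}
    # Temperature: divide logits; <1 sharpens the distribution, >1 flattens it.
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())
    exps = {t: math.exp(l - m) for t, l in scaled.items()}
    z = sum(exps.values())
    probs = {t: e / z for t, e in exps.items()}
    # Top-p: keep the smallest set of tokens whose cumulative probability >= top_p.
    kept, cum = {}, 0.0
    for t, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[t], cum = p, cum + p
        if cum >= top_p:
            break
    z = sum(kept.values())
    return {t: p / z for t, p in kept.items()}

# Hypothetical 4-token vocabulary with made-up logits.
dist = sample_filter({"هدف": 2.0, "لاعب": 1.5, "موسم": 0.2, "فريق": -1.0})
print(sorted(dist, key=dist.get, reverse=True))  # surviving tokens, most likely first
```

With the defaults above, the two low-probability tokens fall outside the 0.9 nucleus and are never sampled, which is why top-p trims incoherent tails without fixing the candidate count the way top-k does.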
Hardware Requirements
- Minimum: 8 GB RAM
- Recommended: 16 GB RAM; a modern multi-core CPU, or a GPU with at least 8 GB of VRAM for full GPU offload
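A back-of-the-envelope way to size RAM: the quantized weights (roughly the GGUF file size) plus the KV cache, which grows with context length. The dimensions below are hypothetical placeholders for a 7B-class model, since this card does not state ALLaM's actual architecture or file size:

```python
def estimate_ram_gb(gguf_file_gb, n_layers, n_kv_heads, head_dim, ctx_len,
                    kv_bytes_per_elt=2):
    """Weights + f16 KV cache (the factor of 2 covers the K and V tensors)."""
    kv_cache_bytes = 2 * n_layers * ctx_len * n_kv_heads * head_dim * kv_bytes_per_elt
    return gguf_file_gb + kv_cache_bytes / 1024**3

# Hypothetical dimensions: 4.1 GB file, 32 layers, 32 KV heads, head dim 128.
print(round(estimate_ram_gb(4.1, 32, 32, 128, 2048), 2))  # → 5.1
```

Doubling `--ctx-size` doubles only the KV-cache term, which is why long contexts push a comfortably-fitting model past its memory budget.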
License
This model is released under the Apache 2.0 License.
Citations
If you use this model in your research or applications, please cite:
```bibtex
@misc{almaghrabima2025allam,
  author       = {Mohammed Al-Maghrabi Research},
  title        = {ALLaM-Thinking: Arabic Large Language Model with Enhanced Reasoning Capabilities},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/almaghrabima/ALLaM-Thinking}}
}
```
Acknowledgements
- This model utilizes the GGUF format developed by the llama.cpp team
- Special thanks to contributors and the Arabic NLP community