Model Card for Arabic Abductive Reasoning Model
This model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct specialized for abductive commonsense reasoning in Arabic. It is designed to generate plausible explanations for narrative scenarios.
Model Details
Model Description
This model builds on Meta's Llama 3.1 8B Instruct and is fine-tuned to understand and perform abductive reasoning in Arabic. Abductive reasoning is a form of logical inference that seeks the simplest and most likely explanation for a set of observations.
The model was trained on an Arabic translation of the ART (Abductive Reasoning in narrative Text) dataset, which contains over 20,000 commonsense narrative contexts and 200,000 corresponding explanations. This allows the model to excel at tasks requiring the generation or identification of plausible hypotheses based on incomplete information presented in Arabic text.
- Developed by: Youssef Maged
- Funded by: [More Information Needed]
- Shared by: Youssef Maged
- Model type: Decoder-only transformer-based language model.
- Language(s) (NLP): Arabic (ar)
- License: MIT
- Finetuned from model: meta-llama/Llama-3.1-8B-Instruct
Model Sources
- Repository: [Link to your Hugging Face repository]
- Paper: Abductive Commonsense Reasoning (https://openreview.net/forum?id=BygP6T4KPS)
- Demo: [More Information Needed]
Uses
Direct Use
The model is intended for direct use in generating and evaluating commonsense explanations in Arabic. It can be prompted with a scenario (a set of observations) to generate a plausible hypothesis that explains the situation.
Example Tasks:
- Abductive Natural Language Generation (αNLG): Given two observations, generate a hypothesis that explains the connection between them.
- Abductive Natural Language Inference (αNLI): Given a scenario and two candidate hypotheses, choose the more plausible one. (Illustrative prompt framings for both tasks are sketched below.)
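Both tasks can be framed as plain-text prompts. The snippet below is a minimal sketch of one possible framing; the wording is illustrative and may differ from the template used during fine-tuning.

```python
# Illustrative prompt framings for the two abductive tasks. The exact wording
# used during fine-tuning may differ from these templates.

def anlg_prompt(obs1: str, obs2: str) -> str:
    # Abductive generation: ask the model to produce an explanatory hypothesis.
    return f"Observation 1: {obs1}\nObservation 2: {obs2}\nHypothesis:"

def anli_prompt(obs1: str, obs2: str, hyp_a: str, hyp_b: str) -> str:
    # Abductive inference: ask the model to pick the more plausible hypothesis.
    return (
        f"Observation 1: {obs1}\nObservation 2: {obs2}\n"
        f"Hypothesis A: {hyp_a}\nHypothesis B: {hyp_b}\n"
        "Which hypothesis is more plausible? Answer with A or B."
    )
```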
Downstream Use
This model can serve as a foundational component for more complex applications, including:
- Advanced chatbots and virtual assistants with more human-like reasoning.
- Content analysis tools that can infer motivations or causes in text.
- Interactive storytelling and narrative generation systems.
- Educational tools for teaching critical thinking and reasoning skills in Arabic.
Out-of-Scope Use
- Not intended for making critical, high-stakes decisions without human oversight.
- Should not be used to generate malicious, harmful, or misleading content.
- Limited to commonsense knowledge and may not perform well in specialized or technical domains.
Bias, Risks, and Limitations
- Inherited Bias: May reflect cultural biases from the original English ART dataset or translation process.
- Inaccurate Explanations: Fluent but potentially illogical or factually incorrect outputs.
- Limited Scope: Restricted to patterns learned from the training data.
Recommendations
Implement content moderation and human-in-the-loop review for public-facing systems. Treat outputs as hypotheses, not facts.
How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "[Your Hugging Face Model ID]"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Example observations in Arabic. Rough English meaning:
# observation1: "Ali entered the kitchen."
# observation2: "After a short while, a smell of smoke arose."
observation1 = "علي دخل المطبخ."
observation2 = "بعد فترة وجيزة، انبعثت رائحة دخان."

prompt = f"Observation 1: {observation1}\nObservation 2: {observation2}\nHypothesis:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, num_return_sequences=1)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
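Because the base model is the Llama 3.1 Instruct variant, wrapping the prompt in the chat template may work better, depending on how the fine-tuning data was formatted. The following is a minimal sketch under that assumption.

```python
# Optional: wrap the prompt in the Llama 3.1 chat template. This assumes the
# fine-tuning kept the Instruct chat format, which may not match the actual
# training setup.
messages = [{"role": "user", "content": prompt}]
chat_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(chat_inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```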
Training Details
Training Data
Fine-tuned on an Arabic translation of the ART dataset (Bhagavatula et al., 2020), which contains roughly 20,000 narrative contexts and 200,000 hypotheses.
Training Procedure
Supervised fine-tuning (SFT) objective using instruction-style prompts.
Preprocessing
- Translated the ART dataset into Modern Standard Arabic.
- Tokenized with the Llama 3.1 tokenizer.
- Reformatted into a prompt-response format (a minimal sketch follows below).
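As a rough illustration of the reformatting step, one record could be mapped to a prompt-response pair as follows; the field names (obs1, obs2, hyp) follow the public ART release and are assumptions about how the translated data was stored, not the actual pipeline.

```python
# Sketch of the prompt-response reformatting step. Field names follow the
# public ART release and are assumptions about the translated data layout.
def to_sft_example(record: dict) -> dict:
    prompt = (
        f"Observation 1: {record['obs1']}\n"
        f"Observation 2: {record['obs2']}\n"
        "Hypothesis:"
    )
    return {"prompt": prompt, "response": record["hyp"]}
```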
Training Hyperparameters
- Training regime: bf16 mixed-precision via the Unsloth framework (see the loading sketch below)
- Platform: Google Colab with T4 GPU
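For orientation, a minimal sketch of loading the base model for fine-tuning with Unsloth follows; the sequence length, 4-bit loading, and LoRA settings are illustrative assumptions, not the recorded configuration.

```python
# Minimal Unsloth loading sketch. max_seq_length, load_in_4bit, and the LoRA
# settings are illustrative assumptions, not the recorded configuration.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,  # assumed, given the memory limits of a Colab T4
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```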
Speeds, Sizes, Times
- GPU: NVIDIA T4
- Training Time: [More Information Needed]
- Batch Size: [More Information Needed]
Evaluation
Testing Data, Factors & Metrics
Testing Data
Held-out portion of Arabic-translated ART dataset.
Factors
Optional breakdown by reasoning type: causal, temporal, social.
Metrics
- αNLI: Accuracy on two-way plausibility classification.
- αNLG: ROUGE, BLEU, and BERTScore; human evaluation is recommended. (A scoring sketch follows below.)
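The generation metrics can be computed with the Hugging Face evaluate library; the sketch below uses placeholder prediction and reference lists rather than actual outputs.

```python
# Scoring sketch using the Hugging Face `evaluate` library. `predictions` and
# `references` are placeholders for generated and gold hypotheses.
import evaluate

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")

predictions = ["..."]  # model-generated hypotheses
references = ["..."]   # gold hypotheses from the held-out Arabic ART split

print(rouge.compute(predictions=predictions, references=references))
print(bertscore.compute(predictions=predictions, references=references, lang="ar"))
```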
Results
| Metric | Test Set Result |
|---|---|
| Accuracy (αNLI) | [More Information Needed] |
| ROUGE-L (αNLG) | [More Information Needed] |
Summary
[More Information Needed]
Model Examination
[More Information Needed]
Environmental Impact
- Hardware Type: NVIDIA T4 GPU
- Hours used: [More Information Needed]
- Cloud Provider: Google Colab
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]
Technical Specifications
Model Architecture and Objective
Decoder-only transformer fine-tuned with instruction tuning for abductive reasoning.
Compute Infrastructure
Hardware
NVIDIA T4 GPU (Google Colab)
Software
- Unsloth (fine-tuning framework)
- Hugging Face Transformers
- PyTorch
Citation
```bibtex
@inproceedings{bhagavatula2020abductive,
  title={Abductive Commonsense Reasoning},
  author={Chandra Bhagavatula and Ronan Le Bras and Chaitanya Malaviya and Keisuke Sakaguchi and Ari Holtzman and Hannah Rashkin and Doug Downey and Wen-tau Yih and Yejin Choi},
  booktitle={International Conference on Learning Representations},
  year={2020},
  url={https://openreview.net/forum?id=BygP6T4KPS}
}
```
Glossary
- Abductive Reasoning: A logical inference method that seeks the most plausible explanation.
- αNLG / αNLI: Abductive natural language generation and inference tasks.
More Information
[More Information Needed]
Model Card Authors
Youssef Maged
Model Card Contact
[Your Email or Contact Information]