Update README.md
README.md CHANGED
pipeline_tag: text-generation
---

# Walia Instruction Dataset for Amharic

This repository contains the instruction-tuning datasets used in [Walia-LLM](https://aclanthology.org/2024.findings-emnlp.25/), a fine-tuned LLaMA-2 model for Amharic. The dataset was constructed by integrating task-specific and generative datasets, and it supports a variety of natural language processing tasks in Amharic.

## Dataset Summary

The Walia dataset is designed to enhance large language models for Amharic by:

- Converting existing task-specific datasets (e.g., sentiment analysis, QA, NER) into instruction format (see the conversion sketch below).
- Creating new generative datasets (e.g., poem generation, religious lyrics, story generation).
- Translating English instruction datasets (e.g., Alpaca, Dolly) into Amharic for comparative studies.

Each data point follows a structured instruction format with:

- `"instruction"` – a natural language task description,
- `"input"` – optional input text for the task,
- `"output"` – the expected model output in Amharic.
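
Converting a task-specific record into this format might look like the following minimal sketch. The source field names (`"text"`, `"label"`) and the instruction wording are illustrative assumptions, not the authors' exact template:

```python
# Hedged sketch: turning an AfriSenti-style sentiment record into the
# instruction/input/output format described above. Field names and the
# instruction wording are assumptions for illustration.
def to_instruction(record):
    return {
        "instruction": "Classify the sentiment of the following Amharic text "
                       "as positive, negative, or neutral.",
        "input": record["text"],
        "output": record["label"],
    }

example = {"text": "ጥሩ ቀን ነው።", "label": "positive"}  # "It is a good day."
print(to_instruction(example))
```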

## Supported Tasks

| Task                     | Source/Type | Notes                              |
|--------------------------|-------------|------------------------------------|
| Sentiment Analysis       | AfriSenti   | 3-class sentiment                  |
| Named Entity Recognition | MasakhaNER  | Personal name extraction           |
| News Classification      | MasakhaNews | Multilingual topic classes         |
| QA                       | AmharicQA   | Wikipedia-based                    |
| Summarization            | XL-Sum      | Amharic summaries                  |
| Machine Translation      | NLLB, WMT19 | Both directions supported          |
| Poem/Lyrics/Story Gen    | Custom      | Sourced from the web and Telegram  |
| Spelling Correction      | Synthetic   | Character perturbations (see below)|
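
The synthetic spelling-correction pairs come from perturbing characters in clean text. One plausible scheme is sketched below; the exact perturbation operations and rates used by the authors are an assumption here:

```python
import random

# Hedged sketch of character-level perturbation for synthetic
# spelling-correction data; the actual scheme used may differ.
def perturb(text: str, rate: float = 0.1, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch != " " and rng.random() < rate:
            op = rng.choice(["drop", "duplicate", "swap"])
            if op == "drop":
                continue                  # delete this character
            if op == "duplicate":
                out.append(ch)            # double this character
            elif op == "swap" and out:
                prev = out.pop()          # transpose with previous character
                out.append(ch)
                ch = prev
        out.append(ch)
    return "".join(out)

clean = "ሰላም ለዓለም"                        # "Hello, world"
noisy = perturb(clean, rate=0.3)
# Each (noisy, clean) pair then becomes an instruction sample whose input
# is the perturbed text and whose output is the original.
print(noisy, "->", clean)
```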
## Dataset Structure

```json
{
  "instruction": "Translate the following sentence to Amharic.",
  "input": "Hello, how are you?",
  "output": "ሰላም፣ እንዴት ነህ?"
}
```

## Data Statistics

- ~122,000 instruction samples for training
- ~15,000 for validation and test
- 16+ task types and instruction templates
- All responses are in Amharic (except source text in MT)

## How to Use

You can load the dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("EthioNLP/walia-amharic-instructions")
print(dataset["train"][0])
```
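
For supervised fine-tuning, each record is typically rendered into a single prompt string. A minimal sketch using an Alpaca-style template, continuing from the loading example above (the exact template used to train Walia-LLM is an assumption):

```python
# Hedged sketch: Alpaca-style prompt formatting for SFT. The template is
# illustrative; Walia-LLM's actual training template may differ.
def format_prompt(example: dict) -> str:
    if example.get("input"):
        prompt = (f"### Instruction:\n{example['instruction']}\n\n"
                  f"### Input:\n{example['input']}\n\n"
                  f"### Response:\n")
    else:
        prompt = (f"### Instruction:\n{example['instruction']}\n\n"
                  f"### Response:\n")
    return prompt + example["output"]

print(format_prompt(dataset["train"][0]))
```
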
## Applications

- Supervised fine-tuning (SFT) of LLMs for Amharic
- Cross-lingual instruction tuning experiments
- Evaluation of generative capabilities in low-resource languages

## Related Models

The dataset is used to fine-tune:

- [`EthioNLP/walia-llama-2`](https://huggingface.co/EthioNLP/walia-llama-2)
- Other LLaMA variants for Amharic

## Citation

Please cite the following paper if you use this dataset:

```bibtex
@inproceedings{azime-etal-2024-walia,
  title = "Walia-{LLM}: Enhancing {A}mharic-{LL}a{MA} by Integrating Task-Specific and Generative Datasets",
  author = "Azime, Israel Abebe and Tonja, Atnafu Lambebo and Belay, Tadesse Destaw and Fuge, Mitiku Yohannes and Wassie, Aman Kassahun and Jada, Eyasu Shiferaw and Chanie, Yonas and Sewunetie, Walelign Tewabe and Yimam, Seid Muhie",
  editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung",
  booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
  month = nov,
  year = "2024",
  address = "Miami, Florida, USA",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2024.findings-emnlp.25/",
  doi = "10.18653/v1/2024.findings-emnlp.25",
  pages = "432--444"
}
```