---
pretty_name: ShamNER
license: cc-by-4.0
task_categories:
- token-classification
language:
- ar
data_files:
  train: train.parquet
  validation: validation.parquet
  test: test.parquet
dataset_info:
  features:
  - name: doc_id
    dtype: int64
  - name: doc_name
    dtype: string
  - name: sent_id
    dtype: int64
  - name: orig_ID
    dtype: int64
  - name: round
    dtype: string
  - name: annotator
    dtype: string
  - name: text
    dtype: string
  - name: source_type
    dtype: string
  - name: spans
    list:
    - name: annotator
      dtype: string
    - name: end
      dtype: int64
    - name: label
      dtype: string
    - name: start
      dtype: int64
    - name: text
      dtype: string
  splits:
  - name: train
    num_bytes: 5148727
    num_examples: 19783
  - name: validation
    num_bytes: 328887
    num_examples: 1795
  - name: test
    num_bytes: 313228
    num_examples: 1844
  download_size: 2302809
  dataset_size: 5790842
---
# ShamNER – Spoken Arabic Named-Entity Recognition Corpus (Levantine v1.1)
ShamNER is a curated corpus of Levantine Arabic sentences annotated for named entities, plus a dual-annotated subset for checking consistency (agreement) across human annotators.
- Rounds: `pilot`, `round1`–`round5` (manual; as a rule, quality improved across rounds) and `round6` (synthetic, post-edited). The synthetic data was produced by sampling label-rich annotated spans from an MSA (Modern Standard Arabic) project and generating new sentences with an LLM while force-injecting the annotated spans. Native Arabic speakers then edited these chunks to make them as fluent and dialectal as possible; they were instructed not to touch the annotated spans, and a script verified that no spans were modified.
- Strict span-novel evaluation: validation and test contain no entity surface form that appears in train (after normalisation). This probes true generalisation.
- Tokeniser-agnostic: only raw sentences and character spans are stored; regenerate BIO tags with any tokenizer you wish.
## Quick start

```python
# Uncomment the next line if you hit a LocalFileSystem / fsspec error on Colab
# !pip install -U "datasets>=2.16.0" "fsspec>=2023.10.0"
from datasets import load_dataset

sham = load_dataset("HebArabNlpProject/ShamNER")
train_ds = sham["train"]
```

`datasets` streams the top-level `*.parquet` files automatically; use the matching `*.jsonl` files for grep-friendly inspection.
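
The metadata columns can be used to slice the corpus directly. For instance, a minimal sketch (continuing from the snippet above) that isolates the synthetic, post-edited round:

```python
# Keep only sentences from the synthetic, post-edited round6.
round6 = train_ds.filter(lambda ex: ex["round"] == "round6")
print(len(round6), round6[0]["text"])
```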
## Split Philosophy
- No duplicate documents – A document is identified by the pair `(doc_name, round)`; each such bundle is assigned to exactly one split. The rule holds at the bundle level, although individual sentences within a bundle may still contain spans that also occur in train after the post-allocation pruning described below.
- Rounds – Six annotation iterations: `pilot`, `round1`–`round5` (manual, quality improving each round) and `round6` (synthetic, then post-edited). Early rounds feed train; span-novel slices of `round5` and `round6` populate test.
- Single test set – The corpus ships one held-out test split: `test` = span-novel bundles from round 5 plus span-novel bundles from round 6. There is no separate `test_synth` file.

Span-novelty rule (relaxed): before allocation, every entity string is normalised (a minimal sketch follows the list):
- Convert to lowercase (the Latin alphabet occurs in social-media text)
- Strip Arabic diacritics
- Remove leading “ال”
- Collapse internal whitespace
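
A minimal sketch of this normalisation (my own reading of the four rules; the exact diacritic set and edge cases handled by the project's script may differ):

```python
import re

# Tanwin, harakat, shadda, sukun, and dagger alif.
ARABIC_DIACRITICS = re.compile(r"[\u064B-\u0652\u0670]")

def normalise(span_text: str) -> str:
    s = span_text.lower()                  # Latin-script spans occur in social-media text
    s = ARABIC_DIACRITICS.sub("", s)       # strip Arabic diacritics
    if s.startswith("ال"):                 # remove the leading definite article
        s = s[2:]
    return re.sub(r"\s+", " ", s).strip()  # collapse internal whitespace

print(normalise("القُدس"))  # -> "قدس"
```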
A bundle is forced into train if any of its normalised spans already occurs in train. A post-allocation pruning step then moves individual sentences from validation or test back to train only if more than 50% of their normalised spans already exist in the training set. This threshold (0.50) was chosen to route more such sentences into train, giving the model additional learning examples and leading to improved performance.
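
A sketch of that pruning decision, assuming `normalise` from the snippet above and a hypothetical `train_span_set` holding the normalised spans already allocated to train:

```python
def should_move_to_train(record, train_span_set, threshold=0.50):
    """True when more than `threshold` of the record's normalised spans are already in train."""
    spans = [normalise(record["text"][sp["start"]:sp["end"]]) for sp in record["spans"]]
    if not spans:
        return False
    seen = sum(s in train_span_set for s in spans)
    return seen / len(spans) > threshold
```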
- Tokeniser-agnostic – Each record stores only raw `text` and character-offset `spans`; no BIO arrays. Regenerate token-level labels with whichever tokenizer your model requires (see the sketch below).
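
A minimal sketch of regenerating BIO tags from the character offsets, assuming any fast Hugging Face tokenizer (the AraBERT checkpoint below is only an illustrative choice):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv2")

def to_bio(text, spans):
    enc = tokenizer(text, return_offsets_mapping=True, add_special_tokens=False)
    tags = ["O"] * len(enc["offset_mapping"])
    for sp in spans:
        first = True
        for i, (tok_start, tok_end) in enumerate(enc["offset_mapping"]):
            # A token belongs to the entity if its character range overlaps the span.
            if tok_start < sp["end"] and tok_end > sp["start"]:
                tags[i] = ("B-" if first else "I-") + sp["label"]
                first = False
    return enc.tokens(), tags

sent = sham["train"][0]
tokens, tags = to_bio(sent["text"], sent["spans"])
```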
## Split sizes
| split | sentences | files |
|---|---|---|
| train | 19 532 | train.jsonl / train.parquet |
| validation | 1 931 | validation.* |
| test | 1 931 | test.* |
| iaa_A | 5 806 | optional, dual annotation (annotator A) |
| iaa_B | 5 806 | optional, dual annotation (annotator B) |
Every sentence that appears in iaa_A.jsonl is also in the train split (with the same labels), while iaa_B.jsonl provides the alternative annotation for agreement/noise studies.
## Label inventory (computed from `unique_sentences.jsonl`)
| label | description | count |
|---|---|---|
| GPE | Geopolitical entity | 4 601 |
| PER | Person | 3 628 |
| ORG | Organisation | 1 426 |
| MISC | Catch-all category | 1 301 |
| FAC | Facility | 947 |
| TIMEX | Temporal expression | 926 |
| DUC | Product / brand | 711 |
| EVE | Event | 487 |
| LOC | Location (non-GPE, natural) | 467 |
| ANG | Language | 322 |
| WOA | Work of art | 292 |
| TTL | Title / honorific | 227 |
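
The counts can be recomputed from the loaded dataset; a quick sketch over the train split (the table above was computed from unique_sentences.jsonl, so totals may differ slightly):

```python
from collections import Counter

# Count span labels across the train split.
label_counts = Counter(
    span["label"] for record in sham["train"] for span in record["spans"]
)
print(label_counts.most_common())
```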
## File schema (`*.jsonl`)
```json
{
  "doc_id": 137,
  "doc_name": "mohamedghalie",
  "sent_id": 11,
  "orig_ID": 20653,
  "round": "round3",
  "annotator": "Rawan",
  "text": "جيب جوال أو أي اشي ضو هيك",
  "spans": [
    {"start": 4, "end": 8, "label": "DUC"}
  ]
}
```
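
Offsets index directly into `text`, and the example above is consistent with end-exclusive offsets, so span surface forms can be recovered without tokenising (assuming the record has been loaded into a dict called `record`):

```python
for sp in record["spans"]:
    # The DUC span above yields the surface form "جوال".
    print(sp["label"], record["text"][sp["start"]:sp["end"]])
```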
## Inter-annotator files

`iaa_A.jsonl` and `iaa_B.jsonl` contain parallel annotations for the same 5 806 sentences. Use them to measure agreement or to experiment with noise-robust training. The iaa_B annotations are kept out of the primary train/validation/test splits; as stated above, only the iaa_A annotations were injected into the train, dev and test sets.
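
A minimal sketch of span-level agreement between the two passes, assuming the two files align on (doc_name, sent_id); this is exact-match F1 over (start, end, label) triples, not necessarily the project's own agreement metric:

```python
import json

def load_spans(path):
    table = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            table[(rec["doc_name"], rec["sent_id"])] = {
                (sp["start"], sp["end"], sp["label"]) for sp in rec["spans"]
            }
    return table

a, b = load_spans("iaa_A.jsonl"), load_spans("iaa_B.jsonl")
matches = sum(len(a[k] & b[k]) for k in a.keys() & b.keys())
total_a = sum(len(v) for v in a.values())
total_b = sum(len(v) for v in b.values())
precision = matches / total_b if total_b else 0.0
recall = matches / total_a if total_a else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
print(f"span-level agreement F1: {f1:.3f}")
```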
© 2025 · CC BY‑4.0