IAHLT Named Entities Dataset (Arabic Subset)

האיגוד הישראלי לטכנולוגיות שפת אנוש
الرابطة الإسرائيلية لتكنولوجيا اللغة البشرية
The Israeli Association of Human Language Technologies
https://www.iahlt.org

This dataset contains named entity annotations for Arabic texts from various sources, curated as part of the IAHLT multilingual NER project. The Arabic portion is provided here as a cleaned subset intended for training and evaluation in named entity recognition tasks.

Files Included

This release includes the following JSONL files:

  • iahlt_ner_train.jsonl
  • iahlt_ner_val.jsonl
  • iahlt_ner_test.jsonl

Each file contains one JSON object per line with the following fields (a minimal parsing sketch follows the list):

  • text: the raw paragraph text
  • label: a list of triples [start, end, label] where:
    • start is the index of the first character of the entity
    • end is the index of the first character after the entity
    • label is the entity class (e.g., PER, ORG, etc.)
  • metadata: a dictionary with source and annotation metadata
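For concreteness, here is a minimal sketch of reading one split and recovering entity surface forms from the character offsets. It assumes only the file name and fields documented above:

```python
import json

# Stream the training split and print each entity's surface form,
# recovered by slicing the text with its character offsets.
with open("iahlt_ner_train.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        text = record["text"]
        for start, end, label in record["label"]:
            # `end` is the index of the first character *after* the
            # entity, so a plain slice yields the annotated span.
            print(label, repr(text[start:end]))
```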

Entity Types

The dataset includes the following entity types (summed across splits):

Entity     Count
GPE       26,767
PER       22,694
ORG       19,906
TIMEX     10,288
TTL       10,075
MISC       7,087
LOC        5,389
FAC        4,740
EVE        2,595
WOA        1,781
DUC        1,715
ANG          732
INFORMAL      10

Note: These statistics reflect a cleaned version of the dataset. Some entities and texts have been modified or removed for consistency and usability.
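These counts can be recomputed directly from the release files; a short sketch, assuming only the file names and schema described above:

```python
import json
from collections import Counter

# Tally entity types across all three splits.
counts = Counter()
for split in ("train", "val", "test"):
    with open(f"iahlt_ner_{split}.jsonl", encoding="utf-8") as f:
        for line in f:
            for _start, _end, label in json.loads(line)["label"]:
                counts[label] += 1

for label, n in counts.most_common():
    print(f"{label:<10}{n:>8,}")
```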

Annotation Notes

  • Entity spans were manually annotated at the grapheme level and then normalized using Arabic-specific punctuation and spacing rules.
  • Nested spans are allowed (see the overlap check sketched after this list).
  • The dataset was cleaned to ensure compatibility with common NER training formats.
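Because nesting is permitted, converting the annotations to a flat tagging scheme such as BIO requires detecting overlapping spans first. A minimal sketch, assuming only the [start, end, label] triples documented above (the helper name is illustrative, not part of the dataset):

```python
# Enumerate overlapping (e.g., nested) span pairs in one record's labels.
# Flat formats such as BIO cannot represent these, so they must be
# detected and resolved (for instance, by keeping the outermost span).
def overlapping_pairs(labels):
    # Sort by start ascending, then end descending, so an enclosing
    # span always precedes the spans nested inside it.
    spans = sorted(labels, key=lambda t: (t[0], -t[1]))
    for i, (s1, e1, l1) in enumerate(spans):
        for s2, e2, l2 in spans[i + 1:]:
            if s2 >= e1:
                break  # later spans start even further right
            yield (s1, e1, l1), (s2, e2, l2)

# Example: a PER span nested inside an ORG span.
print(list(overlapping_pairs([[0, 20, "ORG"], [5, 10, "PER"]])))
```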

License

This dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.

Acknowledgments

We thank all annotators and contributors who worked on this corpus.
