---
license: cc-by-4.0
language:
  - dje
task_categories:
  - token-classification
  - question-answering
task_ids:
  - named-entity-recognition
pretty_name: ZarmaNER-600
configs:
  - config_name: default
    data_files:
      - split: gold
        path: NER_GOLD/zarma_NER.jsonl
size_categories:
  - n<1K
tags:
  - low-resource-nlp
  - zarma
  - named-entity-recognition
  - principled-learning
---

# ZarmaNER-600 Dataset

## Dataset Description

ZarmaNER-600 is a gold-standard dataset for Named Entity Recognition (NER) in Zarma. This dataset contains 600 manually annotated sentences, making it the first publicly available NER corpus for Zarma. It was created to support research in low-resource NLP, particularly for sequence tagging tasks, as part of the Rule-to-Tag (R2T) framework introduced in our paper, "R2T: A Case Study in Principled Learning for Low-Resource POS Tagging."

The dataset includes annotations for four entity types: Person (PER), Location (LOC), Organization (ORG), and Date (DATE), following the standard BIO (Beginning, Inside, Outside) tagging scheme.

## Dataset Structure

The dataset is provided in JSONL format, where each line represents a single sentence with the following fields:

- `text`: the original, untokenized sentence string.
- `tokens`: a list of strings representing the tokenized sentence.
- `tags`: a parallel list of BIO tags, one per token.

### Example Entry

```json
{
  "text": "Ali ga zumbu Niamey ra.",
  "tokens": ["Ali", "ga", "zumbu", "Niamey", "ra", "."],
  "tags": ["B-PER", "O", "O", "B-LOC", "O", "O"]
}
```
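
The gold split can be read with the `datasets` library, either from the Hub or from the raw JSONL file. The sketch below assumes the repository id `Mamadou2727/Zarma_NER`; adjust it to the actual repo path if it differs.

```python
from datasets import load_dataset

# Option 1: load the gold split from the Hub
# (repo id assumed to be Mamadou2727/Zarma_NER; adjust if needed).
ds = load_dataset("Mamadou2727/Zarma_NER", split="gold")

# Option 2: load the raw JSONL file directly.
# ds = load_dataset("json", data_files="NER_GOLD/zarma_NER.jsonl", split="train")

example = ds[0]
print(example["text"])
print(list(zip(example["tokens"], example["tags"])))
```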

### Tagset

- `B-PER`: beginning of a Person entity
- `I-PER`: inside of a Person entity
- `B-LOC`: beginning of a Location entity
- `I-LOC`: inside of a Location entity
- `B-ORG`: beginning of an Organization entity
- `I-ORG`: inside of an Organization entity
- `B-DATE`: beginning of a Date entity
- `I-DATE`: inside of a Date entity
- `O`: outside (non-entity token)
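
As a minimal illustration of how the BIO scheme groups tokens into entities, the sketch below decodes a tag sequence into (entity type, token span, surface form) tuples. The helper name `bio_to_spans` is ours for illustration, not part of the dataset or the R2T codebase.

```python
def bio_to_spans(tokens, tags):
    """Decode BIO tags into (entity_type, start, end, text) tuples.

    `end` is exclusive, so tokens[start:end] is the entity surface form.
    """
    spans, start, ent_type = [], None, None
    for i, tag in enumerate(tags):
        # Close the current span when a new entity starts, an O appears,
        # or an I- tag of a different type breaks the sequence.
        if tag.startswith("B-") or tag == "O" or (tag.startswith("I-") and tag[2:] != ent_type):
            if start is not None:
                spans.append((ent_type, start, i, " ".join(tokens[start:i])))
                start, ent_type = None, None
        if tag.startswith("B-"):
            start, ent_type = i, tag[2:]
        elif tag.startswith("I-") and start is None:
            # Tolerate an I- tag without a preceding B- by opening a new span.
            start, ent_type = i, tag[2:]
    if start is not None:
        spans.append((ent_type, start, len(tags), " ".join(tokens[start:])))
    return spans

tokens = ["Ali", "ga", "zumbu", "Niamey", "ra", "."]
tags = ["B-PER", "O", "O", "B-LOC", "O", "O"]
print(bio_to_spans(tokens, tags))
# [('PER', 0, 1, 'Ali'), ('LOC', 3, 4, 'Niamey')]
```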

## Intended Use

ZarmaNER-600 is designed for:

- Evaluation: benchmarking Zarma NER systems against gold-standard annotations (see the sketch after this list).
- Training/Fine-Tuning: training or fine-tuning token-classification models for Zarma.
- Research: serving as a benchmark for low-resource NER, particularly for testing hybrid approaches such as R2T or other principled learning paradigms.
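
For the evaluation use case, a minimal sketch of entity-level scoring with the `seqeval` library is shown below. The predicted tags are placeholders; in practice they would come from the model being evaluated against the gold split.

```python
from seqeval.metrics import classification_report, f1_score

# Gold tags from the dataset and (placeholder) model predictions,
# both given as lists of per-sentence tag sequences.
gold = [["B-PER", "O", "O", "B-LOC", "O", "O"]]
pred = [["B-PER", "O", "O", "O", "O", "O"]]  # hypothetical model output

print(f"Entity-level F1: {f1_score(gold, pred):.2f}")
print(classification_report(gold, pred))
```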