Modalities: Text
Formats: Arrow
Languages: Kazakh
Libraries: Datasets
License: CC BY 4.0

KazNERD: Kazakh Named Entity Recognition Dataset

KazNERD is a corpus of 112,702 sentences extracted from television news text, annotated with 136,333 named entities across 25 entity classes. Annotation was performed by two native Kazakh speakers using the IOB2 scheme. The data are provided in the CoNLL 2002 format.
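The CoNLL 2002 format stores one token and its IOB2 tag per line, separated by whitespace, with blank lines marking sentence boundaries. A minimal Python sketch of a parser for this format (the tokens and entity labels in the sample are illustrative, not taken from the corpus):

```python
def parse_conll2002(text):
    """Parse CoNLL 2002-style text (one 'token tag' pair per line,
    blank lines between sentences) into a list of sentences,
    each a list of (token, tag) tuples."""
    sentences, current = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            # Blank line: close the current sentence, if any.
            if current:
                sentences.append(current)
                current = []
            continue
        token, tag = line.split()
        current.append((token, tag))
    if current:
        sentences.append(current)
    return sentences


# Illustrative two-sentence sample in the same layout.
sample = "Астана B-GPE\nқаласы O\n\nБүгін B-DATE\nжаңалықтар O\n"
sentences = parse_conll2002(sample)
```

Here `sentences` holds two sentences of two `(token, tag)` pairs each; a real pass over the dataset files would read the text from disk instead of a literal.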

Models

Models with the following architectures were trained on the dataset: CRF, BiLSTM-CNN-CRF, BERT, and XLM-RoBERTa. The best-performing model, XLM-RoBERTa, achieved an exact-match F1-score of 97.22% on the test set.
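Exact-match F1 counts a predicted entity as correct only when both its span boundaries and its class match the gold annotation exactly. A minimal sketch of how such a score can be computed from IOB2 tag sequences (the function names are hypothetical, not from the KazNERD codebase):

```python
def iob2_spans(tags):
    """Extract (start, end, type) entity spans from an IOB2 tag sequence,
    where end is exclusive."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:
                spans.append((start, i, etype))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is not None and tag[2:] == etype:
            continue  # span continues
        else:
            if start is not None:
                spans.append((start, i, etype))
                start, etype = None, None
    if start is not None:
        spans.append((start, len(tags), etype))
    return spans


def exact_match_f1(gold_seqs, pred_seqs):
    """Micro-averaged entity-level exact-match F1 over parallel
    lists of gold and predicted IOB2 tag sequences."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_seqs, pred_seqs):
        g, p = set(iob2_spans(gold)), set(iob2_spans(pred))
        tp += len(g & p)
        fp += len(p - g)
        fn += len(g - p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For instance, if the gold tags are `["B-GPE", "I-GPE", "O", "B-PERSON"]` and the prediction truncates the first entity to `["B-GPE", "O", "O", "B-PERSON"]`, the truncated span counts as both a false positive and a false negative, giving an F1 of 0.5. Libraries such as seqeval implement the same entity-level scoring.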

Citation

@inproceedings{yeshpanov-etal-2022-kaznerd,
    title = "{K}az{NERD}: {K}azakh Named Entity Recognition Dataset",
    author = "Yeshpanov, Rustem  and
      Khassanov, Yerbolat  and
      Varol, Huseyin Atakan",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.44",
    pages = "417--426",
    abstract = "We present the development of a dataset for Kazakh named entity recognition. The dataset was built as there is a clear need for publicly available annotated corpora in Kazakh, as well as annotation guidelines containing straightforward{---}but rigorous{---}rules and examples. The dataset annotation, based on the IOB2 scheme, was carried out on television news text by two native Kazakh speakers under the supervision of the first author. The resulting dataset contains 112,702 sentences and 136,333 annotations for 25 entity classes. State-of-the-art machine learning models to automatise Kazakh named entity recognition were also built, with the best-performing model achieving an exact match F1-score of 97.22{\%} on the test set. The annotated dataset, guidelines, and codes used to train the models are freely available for download under the CC BY 4.0 licence from https://github.com/IS2AI/KazNERD.",
}

https://github.com/IS2AI/KazNERD
