category: string
split: string
Name: string
Subsets: string
HF Link: null
Link: string
License: string
Year: int64
Language: string
Dialect: string
Domain: string
Form: string
Collection Style: null
Description: string
Volume: float64
Unit: string
Ethical Risks: null
Provider: string
Derived From: null
Paper Title: null
Paper Link: null
Script: string
Tokenized: bool
Host: string
Access: string
Cost: string
Test Split: null
Tasks: string
Venue Title: null
Venue Type: null
Venue Name: null
Authors: string
Affiliations: string
Abstract: string
Name_exist: int64
Subsets_exist: int64
HF Link_exist: null
Link_exist: int64
License_exist: int64
Year_exist: int64
Language_exist: int64
Dialect_exist: int64
Domain_exist: int64
Form_exist: int64
Collection Style_exist: null
Description_exist: int64
Volume_exist: int64
Unit_exist: int64
Ethical Risks_exist: null
Provider_exist: int64
Derived From_exist: null
Paper Title_exist: null
Paper Link_exist: null
Script_exist: int64
Tokenized_exist: int64
Host_exist: int64
Access_exist: int64
Cost_exist: int64
Test Split_exist: null
Tasks_exist: int64
Venue Title_exist: null
Venue Type_exist: null
Venue Name_exist: null
Authors_exist: int64
Affiliations_exist: int64
Abstract_exist: int64
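Each catalogue field above pairs with a companion "<Field>_exist" flag (1 when the field was annotated, 0 otherwise). As a rough illustration only, and assuming the records below are exported as one JSON object per line (the file name records.jsonl is hypothetical), a few lines of pandas can load the table and query it:

import pandas as pd

# Hypothetical export of the records below, one JSON object per line,
# using the exact column names listed in the schema above.
df = pd.read_json("records.jsonl", lines=True)

# Every annotated field has a companion "<Field>_exist" flag (1 = present).
exist_cols = [c for c in df.columns if c.endswith("_exist")]

# Per-record completeness: how many flagged fields were filled in.
df["n_annotated"] = df[exist_cols].fillna(0).sum(axis=1)

# Example query: Arabic validation-split records with an explicit license.
subset = df[(df["category"] == "ar")
            & (df["split"] == "valid")
            & (df["License_exist"] == 1)]
print(subset[["Name", "License", "Year", "n_annotated"]])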
ar
valid
101 Billion Arabic Words Dataset
[]
null
https://hf.co/datasets/ClusterlabAi/101_billion_arabic_words_dataset
Apache-2.0
2024
ar
mixed
['web pages']
text
null
The 101 Billion Arabic Words Dataset is curated by the Clusterlab team and consists of 101 billion words extracted and cleaned from web content, specifically targeting Arabic text. This dataset is intended for use in natural language processing applications, particularly in training and fine-tuning Large Language Models (LLMs).
101,000,000,000
tokens
null
['Clusterlab']
null
null
null
Arab
false
HuggingFace
Free
null
['text generation', 'language modeling']
null
null
null
['Manel Aloui', 'Hasna Chouikhi', 'Ghaith Chaabane', 'Haithem Kchaou', 'Chehir Dhaouadi']
['Clusterlab']
In recent years, Large Language Models (LLMs) have revolutionized the field of natural language processing, showcasing an impressive rise predominantly in English-centric domains. These advancements have set a global benchmark, inspiring significant efforts toward developing Arabic LLMs capable of understanding and generating the Arabic language with remarkable accuracy. Despite these advancements, a critical challenge persists: the potential bias in Arabic LLMs, primarily attributed to their reliance on datasets comprising English data that has been translated into Arabic. This reliance not only compromises the authenticity of the generated content but also reflects a broader issue—the scarcity of original quality Arabic linguistic data. This study aims to address the data scarcity in the Arab world and to encourage the development of Arabic Language Models that are true to both the linguistic and cultural nuances of the region. We undertook a large-scale data mining project, extracting a substantial volume of text from the Common Crawl WET files, specifically targeting Arabic content. The extracted data underwent a rigorous cleaning and deduplication process, using innovative techniques to ensure the integrity and uniqueness of the dataset. The result is the 101 Billion Arabic Words Dataset, the largest Arabic dataset available to date, which can significantly contribute to the development of authentic Arabic LLMs. This study not only highlights the potential for creating linguistically and culturally accurate Arabic LLMs but also sets a precedent for future research in enhancing the authenticity of Arabic language models.
1
1
null
1
1
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
0
null
null
null
1
1
1
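The record above lists this corpus as freely hosted on HuggingFace at the Link shown. As a minimal sketch (not an official loader), the datasets library can stream a few examples without downloading the full 101-billion-token corpus; the split name "train" and the record layout are assumptions to verify against the dataset card:

from datasets import load_dataset

# Repository id taken from the Link field of the record above; the split
# name "train" is an assumption to check against the dataset card.
ds = load_dataset(
    "ClusterlabAi/101_billion_arabic_words_dataset",
    split="train",
    streaming=True,
)

# Inspect the first few records to see the actual column names.
for i, example in enumerate(ds):
    print(example.keys())
    if i >= 2:
        break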
ar
valid
WinoMT
[]
null
https://github.com/gabrielStanovsky/mt_gender
MIT License
2019
multilingual
Modern Standard Arabic
['public datasets']
text
null
Evaluating Gender Bias in Machine Translation
3,888
sentences
null
[]
null
null
null
Arab
false
GitHub
Free
null
['machine translation']
null
null
null
['Gabriel Stanovsky', 'Noah A. Smith', 'Luke Zettlemoyer']
['Allen Institute for Artificial Intelligence', 'University of Washington', 'University of Washington', 'Facebook']
We present the first challenge set and evaluation protocol for the analysis of gender bias in machine translation (MT). Our approach uses two recent coreference resolution datasets composed of English sentences which cast participants into non-stereotypical gender roles (e.g., “The doctor asked the nurse to help her in the operation”). We devise an automatic gender bias evaluation method for eight target languages with grammatical gender, based on morphological analysis (e.g., the use of female inflection for the word “doctor”). Our analyses show that four popular industrial MT systems and two recent state-of-the-art academic MT models are significantly prone to gender-biased translation errors for all tested target languages. Our data and code are publicly available at https://github.com/gabrielStanovsky/mt_gender.
1
1
null
0
0
1
1
0
1
1
null
1
1
1
null
1
null
null
null
1
1
0
0
1
null
1
null
null
null
1
1
1
ar
valid
ArabicMMLU
[]
null
https://github.com/mbzuai-nlp/ArabicMMLU
CC BY-NC-SA 4.0
2024
ar
Modern Standard Arabic
['web pages']
text
null
ArabicMMLU is the first multi-task language understanding benchmark for Arabic language, sourced from school exams across diverse educational levels in different countries spanning North Africa, the Levant, and the Gulf regions. Our data comprises 40 tasks and 14,575 multiple-choice questions in Modern Standard Arabic (MSA).
14,575
sentences
null
['MBZUAI']
null
null
null
Arab
false
GitHub
Free
null
['question answering', 'multiple choice question answering']
null
null
null
['Fajri Koto', 'Haonan Li', 'Sara Shatnawi', 'Jad Doughman', 'Abdelrahman Boda Sadallah', 'Aisha Alraeesi', 'Khalid Almubarak', 'Zaid Alyafeai', 'Neha Sengupta', 'Shady Shehata', 'Nizar Habash', 'Preslav Nakov', 'Timothy Baldwin']
[]
The focus of language model evaluation has transitioned towards reasoning and knowledge-intensive tasks, driven by advancements in pretraining large models. While state-of-the-art models are partially trained on large Arabic texts, evaluating their performance in Arabic remains challenging due to the limited availability of relevant datasets. To bridge this gap, we present ArabicMMLU, the first multi-task language understanding benchmark for Arabic language, sourced from school exams across diverse educational levels in different countries spanning North Africa, the Levant, and the Gulf regions. Our data comprises 40 tasks and 14,575 multiple-choice questions in Modern Standard Arabic (MSA), and is carefully constructed by collaborating with native speakers in the region. Our comprehensive evaluations of 35 models reveal substantial room for improvement, particularly among the best open-source models. Notably, BLOOMZ, mT0, LLama2, and Falcon struggle to achieve a score of 50%, while even the top-performing Arabic-centric model only achieves a score of 62.3%.
1
1
null
1
0
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
ar
valid
CIDAR
[]
null
https://hf.co/datasets/arbml/CIDAR
CC BY-NC 4.0
2024
ar
Modern Standard Arabic
['commentary', 'LLM']
text
null
CIDAR contains 10,000 instructions and their outputs. The dataset was created by selecting around 9,109 samples from the AlpaGasus dataset and translating them into Arabic using ChatGPT. In addition, these are supplemented with around 891 Arabic grammar instructions from the website Ask the Teacher.
10,000
sentences
null
['ARBML']
null
null
null
Arab
false
HuggingFace
Free
null
['instruction tuning', 'question answering']
null
null
null
['Zaid Alyafeai', 'Khalid Almubarak', 'Ahmed Ashraf', 'Deema Alnuhait', 'Saied Alshahrani', 'Gubran A. Q. Abdulrahman', 'Gamil Ahmed', 'Qais Gawah', 'Zead Saleh', 'Mustafa Ghaleb', 'Yousef Ali', 'Maged S. Al-Shaibani']
[]
Instruction tuning has emerged as a prominent methodology for teaching Large Language Models (LLMs) to follow instructions. However, current instruction datasets predominantly cater to English or are derived from English-dominated LLMs, resulting in inherent biases toward Western culture. This bias significantly impacts the linguistic structures of non-English languages such as Arabic, which has a distinct grammar reflective of the diverse cultures across the Arab region. This paper addresses this limitation by introducing CIDAR, the first open Arabic instruction-tuning dataset culturally aligned by human reviewers. CIDAR contains 10,000 instruction and output pairs that represent the Arab region. We discuss the cultural relevance of CIDAR via the analysis and comparison to other models fine-tuned on other datasets. Our experiments show that CIDAR can help enrich research efforts in aligning LLMs with the Arabic culture.
1
1
null
1
0
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
ar
valid
Belebele
[{'Name': 'acm_Arab', 'Dialect': 'Iraq', 'Volume': 900.0, 'Unit': 'sentences'}, {'Name': 'arb_Arab', 'Dialect': 'Modern Standard Arabic', 'Volume': 900.0, 'Unit': 'sentences'}, {'Name': 'apc_Arab', 'Dialect': 'Levant', 'Volume': 900.0, 'Unit': 'sentences'}, {'Name': 'ars_Arab', 'Dialect': 'Saudi Arabia', 'Volume': 900.0, 'Unit': 'sentences'}, {'Name': 'ary_Arab', 'Dialect': 'Morocco', 'Volume': 900.0, 'Unit': 'sentences'}, {'Name': 'arz_Arab', 'Dialect': 'Egypt', 'Volume': 900.0, 'Unit': 'sentences'}]
null
https://github.com/facebookresearch/belebele
CC BY-SA 4.0
2024
multilingual
mixed
['wikipedia', 'public datasets']
text
null
A multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants.
5,400
sentences
null
['Facebook']
null
null
null
Arab
false
GitHub
Free
null
['question answering', 'multiple choice question answering']
null
null
null
['Lucas Bandarkar', 'Davis Liang', 'Benjamin Muller', 'Mikel Artetxe', 'Satya Narayan Shukla', 'Donald Husa', 'Naman Goyal', 'Abhinandan Krishnan', 'Luke Zettlemoyer', 'Madian Khabsa']
[]
We present Belebele, a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. Significantly expanding the language coverage of natural language understanding (NLU) benchmarks, this dataset enables the evaluation of text models in high-, medium-, and low-resource languages. Each question is based on a short passage from the Flores-200 dataset and has four multiple-choice answers. The questions were carefully curated to discriminate between models with different levels of general language comprehension. The English dataset on its own proves difficult enough to challenge state-of-the-art language models. Being fully parallel, this dataset enables direct comparison of model performance across all languages. We use this dataset to evaluate the capabilities of multilingual masked language models (MLMs) and large language models (LLMs). We present extensive results and find that despite significant cross-lingual transfer in English-centric LLMs, much smaller MLMs pretrained on balanced multilingual data still understand far more languages. We also observe that larger vocabulary size and conscious vocabulary construction correlate with better performance on low-resource languages. Overall, Belebele opens up new avenues for evaluating and analyzing the multilingual capabilities of NLP systems.
1
1
null
1
1
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
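The Subsets field of the Belebele record above is a list of per-variant entries with Name, Dialect, Volume, and Unit keys. A small sketch (values copied from the record) shows how the per-dialect volumes add up to the record-level Volume of 5,400 sentences:

# Values copied verbatim from the Subsets field of the Belebele record above.
subsets = [
    {"Name": "acm_Arab", "Dialect": "Iraq", "Volume": 900.0, "Unit": "sentences"},
    {"Name": "arb_Arab", "Dialect": "Modern Standard Arabic", "Volume": 900.0, "Unit": "sentences"},
    {"Name": "apc_Arab", "Dialect": "Levant", "Volume": 900.0, "Unit": "sentences"},
    {"Name": "ars_Arab", "Dialect": "Saudi Arabia", "Volume": 900.0, "Unit": "sentences"},
    {"Name": "ary_Arab", "Dialect": "Morocco", "Volume": 900.0, "Unit": "sentences"},
    {"Name": "arz_Arab", "Dialect": "Egypt", "Volume": 900.0, "Unit": "sentences"},
]

total = sum(s["Volume"] for s in subsets)
print(f"{len(subsets)} Arabic variants, {total:.0f} sentences in total")
# -> 6 Arabic variants, 5400 sentences in total (matches the Volume field)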
ar
valid
MGB-2
[]
null
https://arabicspeech.org/resources/mgb2
unknown
2019
ar
Modern Standard Arabic
['TV Channels', 'captions']
audio
null
About 1,200 hours of broadcast audio from Aljazeera TV programs, manually captioned with no timing information.
1,200
hours
null
['QCRI']
null
null
null
Arab
false
other
Upon-Request
null
['speech recognition']
null
null
null
['Ahmed Ali', 'Peter Bell', 'James Glass', 'Yacine Messaoui', 'Hamdy Mubarak', 'Steve Renals', 'Yifan Zhang']
[]
This paper describes the Arabic MGB-3 Challenge — Arabic Speech Recognition in the Wild. Unlike last year's Arabic MGB-2 Challenge, for which the recognition task was based on more than 1,200 hours broadcast TV news recordings from Aljazeera Arabic TV programs, MGB-3 emphasises dialectal Arabic using a multi-genre collection of Egyptian YouTube videos. Seven genres were used for the data collection: comedy, cooking, family/kids, fashion, drama, sports, and science (TEDx). A total of 16 hours of videos, split evenly across the different genres, were divided into adaptation, development and evaluation data sets. The Arabic MGB-Challenge comprised two tasks: A) Speech transcription, evaluated on the MGB-3 test set, along with the 10 hour MGB-2 test set to report progress on the MGB-2 evaluation; B) Arabic dialect identification, introduced this year in order to distinguish between four major Arabic dialects — Egyptian, Levantine, North African, Gulf, as well as Modern Standard Arabic. Two hours of audio per dialect were released for development and a further two hours were used for evaluation. For dialect identification, both lexical features and i-vector bottleneck features were shared with participants in addition to the raw audio recordings. Overall, thirteen teams submitted ten systems to the challenge. We outline the approaches adopted in each system, and summarise the evaluation results.
1
1
null
0
1
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
1
0
1
null
1
null
null
null
1
1
1
ar
test
ANETAC
[]
null
https://github.com/MohamedHadjAmeur/ANETAC
unknown
2019
multilingual
Modern Standard Arabic
['public datasets']
text
null
English-Arabic named entity transliteration and classification dataset
79,924
tokens
null
['USTHB University', 'University of Salford']
null
null
null
Arab
false
GitHub
Free
null
['named entity recognition', 'transliteration', 'machine translation']
null
null
null
['Mohamed Seghir Hadj Ameur', 'Farid Meziane', 'Ahmed Guessoum']
['USTHB University', 'University of Salford', 'USTHB University']
In this paper, we make freely accessible ANETAC our English-Arabic named entity transliteration and classification dataset that we built from freely available parallel translation corpora. The dataset contains 79,924 instances, each instance is a triplet (e, a, c), where e is the English named entity, a is its Arabic transliteration and c is its class that can be either a Person, a Location, or an Organization. The ANETAC dataset is mainly aimed for the researchers that are working on Arabic named entity transliteration, but it can also be used for named entity classification purposes.
1
1
null
1
0
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
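The ANETAC abstract above describes each instance as a triplet (e, a, c): an English named entity, its Arabic transliteration, and a class that is one of Person, Location, or Organization. A minimal sketch of that structure follows; the example values are illustrative, not taken from the dataset:

from typing import NamedTuple

class AnetacEntry(NamedTuple):
    english: str   # e: English named entity
    arabic: str    # a: Arabic transliteration
    label: str     # c: one of "Person", "Location", "Organization"

# Illustrative values only, not an actual row from ANETAC.
example = AnetacEntry(english="London", arabic="لندن", label="Location")
print(example)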
ar
test
TUNIZI
[]
null
https://github.com/chaymafourati/TUNIZI-Sentiment-Analysis-Tunisian-Arabizi-Dataset
unknown
2020
ar
Tunisia
['social media', 'commentary']
text
null
The first Tunisian Arabizi dataset, including 3K sentences, balanced, covering different topics, preprocessed and annotated as positive or negative.
9,210
sentences
null
['iCompass']
null
null
null
Latin
false
GitHub
Free
null
['sentiment analysis']
null
null
null
['Chayma Fourati', 'Abir Messaoudi', 'Hatem Haddad']
['iCompass', 'iCompass', 'iCompass']
On social media, Arabic people tend to express themselves in their own local dialects. More particularly, Tunisians use the informal way called "Tunisian Arabizi". Analytical studies seek to explore and recognize online opinions aiming to exploit them for planning and prediction purposes such as measuring the customer satisfaction and establishing sales and marketing strategies. However, analytical studies based on Deep Learning are data hungry. On the other hand, African languages and dialects are considered low resource languages. For instance, to the best of our knowledge, no annotated Tunisian Arabizi dataset exists. In this paper, we introduce TUNIZI a sentiment analysis Tunisian Arabizi Dataset, collected from social networks, preprocessed for analytical studies and annotated manually by Tunisian native speakers.
1
1
null
1
0
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
ar
test
Shamela
[]
null
https://github.com/OpenArabic/
unknown
2016
ar
Classical Arabic
['books']
text
null
a large-scale, historical corpus of Arabic of about 1 billion words from diverse periods of time
6,100
documents
null
[]
null
null
null
Arab
true
GitHub
Free
null
['text generation', 'language modeling', 'part of speech tagging', 'morphological analysis']
null
null
null
['Yonatan Belinkov', 'Alexander Magidow', 'Maxim Romanov', 'Avi Shmidman', 'Moshe Koppel']
[]
Arabic is a widely-spoken language with a rich and long history spanning more than fourteen centuries. Yet existing Arabic corpora largely focus on the modern period or lack sufficient diachronic information. We develop a large-scale, historical corpus of Arabic of about 1 billion words from diverse periods of time. We clean this corpus, process it with a morphological analyzer, and enhance it by detecting parallel passages and automatically dating undated texts. We demonstrate its utility with selected case-studies in which we show its application to the digital humanities.
1
1
null
0
0
1
1
1
1
1
null
1
1
1
null
0
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
ar
test
POLYGLOT-NER
[]
null
https://huggingface.co/datasets/rmyeid/polyglot_ner
unknown
2014
multilingual
Modern Standard Arabic
['wikipedia']
text
null
Polyglot-NER is a training dataset automatically generated from Wikipedia and Freebase for the task of named entity recognition. It contains the basic Wikipedia-based training data for 40 languages (with coreference resolution).
10,000,144
tokens
null
['Stony Brook University']
null
null
null
Arab
false
HuggingFace
Free
null
['named entity recognition']
null
null
null
['Rami Al-Rfou', 'Vivek Kulkarni', 'Bryan Perozzi', 'Steven Skiena']
['Stony Brook University']
The increasing diversity of languages used on the web introduces a new level of complexity to Information Retrieval (IR) systems. We can no longer assume that textual content is written in one language or even the same language family. In this paper, we demonstrate how to build massive multilingual annotators with minimal human expertise and intervention. We describe a system that builds Named Entity Recognition (NER) annotators for 40 major languages using Wikipedia and Freebase. Our approach does not require NER human annotated datasets or language specific resources like treebanks, parallel corpora, and orthographic rules. The novelty of approach lies therein - using only language agnostic techniques, while achieving competitive performance. Our method learns distributed word representations (word embeddings) which encode semantic and syntactic features of words in each language. Then, we automatically generate datasets from Wikipedia link structure and Freebase attributes. Finally, we apply two preprocessing stages (oversampling and exact surface form matching) which do not require any linguistic expertise. Our evaluation is two fold: First, we demonstrate the system performance on human annotated datasets. Second, for languages where no gold-standard benchmarks are available, we propose a new method, distant evaluation, based on statistical machine translation.
1
1
null
0
0
1
1
1
1
1
null
1
0
1
null
1
null
null
null
1
1
0
0
0
null
1
null
null
null
1
1
1
ar
test
DODa
[]
null
https://github.com/darija-open-dataset/dataset
MIT License
2021
multilingual
Morocco
['other']
text
null
DODa presents words under different spellings, offers verb-to-noun and masculine-to-feminine correspondences, and contains the conjugation of hundreds of verbs in different tenses.
10,000
tokens
null
[]
null
null
null
Arab-Latin
true
GitHub
Free
null
['transliteration', 'machine translation', 'part of speech tagging']
null
null
null
['Aissam Outchakoucht', 'Hamza Es-Samaali']
[]
Darija Open Dataset (DODa) is an open-source project for the Moroccan dialect. With more than 10,000 entries DODa is arguably the largest open-source collaborative project for Darija-English translation built for Natural Language Processing purposes. In fact, besides semantic categorization, DODa also adopts a syntactic one, presents words under different spellings, offers verb-to-noun and masculine-to-feminine correspondences, contains the conjugation of hundreds of verbs in different tenses, and many other subsets to help researchers better understand and study Moroccan dialect. This data paper presents a description of DODa, its features, how it was collected, as well as a first application in Image Classification using ImageNet labels translated to Darija. This collaborative project is hosted on GitHub platform under MIT's Open-Source license and aims to be a standard resource for researchers, students, and anyone who is interested in Moroccan Dialect
1
1
null
1
1
0
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
ar
test
LASER
[]
null
https://github.com/facebookresearch/LASER
BSD
2019
multilingual
Modern Standard Arabic
['public datasets']
text
null
Aligned sentences in 112 languages extracted from Tatoeba
8,200,000
sentences
null
['Facebook']
null
null
null
Arab
false
GitHub
Free
null
['machine translation', 'embedding evaluation']
null
null
null
['Mikel Artetxe', 'Holger Schwenk']
['University of the Basque Country', 'Facebook AI Research']
We introduce an architecture to learn joint multilingual sentence representations for 93 languages, belonging to more than 30 different families and written in 28 different scripts. Our system uses a single BiLSTM encoder with a shared BPE vocabulary for all languages, which is coupled with an auxiliary decoder and trained on publicly available parallel corpora. This enables us to learn a classifier on top of the resulting embeddings using English annotated data only, and transfer it to any of the 93 languages without any modification. Our experiments in cross-lingual natural language inference (XNLI dataset), cross-lingual document classification (MLDoc dataset) and parallel corpus mining (BUCC dataset) show the effectiveness of our approach. We also introduce a new test set of aligned sentences in 112 languages, and show that our sentence embeddings obtain strong results in multilingual similarity search even for low-resource languages. Our implementation, the pre-trained encoder and the multilingual test set are available at https://github.com/facebookresearch/LASER
1
1
null
1
0
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
ar
test
MGB-3
[]
null
https://github.com/qcri/dialectID
MIT License
2017
ar
Egypt
['social media', 'captions']
audio
null
A multi-genre collection of Egyptian YouTube videos. Seven genres were used for the data collection: comedy, cooking, family/kids, fashion, drama, sports, and science (TEDx). A total of 16 hours of videos, split evenly across the different genres
16
hours
null
['QCRI']
null
null
null
Arab
false
GitHub
Free
null
['speech recognition']
null
null
null
['Ahmed Ali', 'Stephan Vogel', 'Steve Renals']
[]
This paper describes the Arabic MGB-3 Challenge - Arabic Speech Recognition in the Wild. Unlike last year's Arabic MGB-2 Challenge, for which the recognition task was based on more than 1,200 hours broadcast TV news recordings from Aljazeera Arabic TV programs, MGB-3 emphasises dialectal Arabic using a multi-genre collection of Egyptian YouTube videos. Seven genres were used for the data collection: comedy, cooking, family/kids, fashion, drama, sports, and science (TEDx). A total of 16 hours of videos, split evenly across the different genres, were divided into adaptation, development and evaluation data sets. The Arabic MGB-Challenge comprised two tasks: A) Speech transcription, evaluated on the MGB-3 test set, along with the 10 hour MGB-2 test set to report progress on the MGB-2 evaluation; B) Arabic dialect identification, introduced this year in order to distinguish between four major Arabic dialects - Egyptian, Levantine, North African, Gulf, as well as Modern Standard Arabic. Two hours of audio per dialect were released for development and a further two hours were used for evaluation. For dialect identification, both lexical features and i-vector bottleneck features were shared with participants in addition to the raw audio recordings. Overall, thirteen teams submitted ten systems to the challenge. We outline the approaches adopted in each system, and summarise the evaluation results.
1
1
null
1
0
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
ar
test
Arap-Tweet
[]
null
https://arap.qatar.cmu.edu/templates/research.html
unknown
2018
ar
mixed
['social media']
text
null
Arap-Tweet is a large-scale, multi-dialectal Arabic Twitter corpus containing 2.4 million tweets from 11 regions across 16 countries in the Arab world. The dataset includes annotations for dialect, age group, and gender of the users.
2,400,000
sentences
null
['Hamad Bin Khalifa University', 'Carnegie Mellon University Qatar']
null
null
null
Arab
false
other
Upon-Request
null
['dialect identification', 'gender identification']
null
null
null
['Wajdi Zaghouani', 'Anis Charfi']
['Hamad Bin Khalifa University', 'Carnegie Mellon University Qatar']
In this paper, we present Arap-Tweet, which is a large-scale and multi-dialectal corpus of Tweets from 11 regions and 16 countries in the Arab world representing the major Arabic dialectal varieties. To build this corpus, we collected data from Twitter and we provided a team of experienced annotators with annotation guidelines that they used to annotate the corpus for age categories, gender, and dialectal variety. During the data collection effort, we based our search on distinctive keywords that are specific to the different Arabic dialects and we also validated the location using Twitter API. In this paper, we report on the corpus data collection and annotation efforts. We also present some issues that we encountered during these phases. Then, we present the results of the evaluation performed to ensure the consistency of the annotation. The provided corpus will enrich the limited set of available language resources for Arabic and will be an invaluable enabler for developing author profiling tools and NLP tools for Arabic.
1
1
null
0
0
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
1
0
1
null
1
null
null
null
1
1
1
ar
test
FLORES-101
[]
null
https://github.com/facebookresearch/flores/tree/main/previous_releases/flores101
CC BY-SA 4.0
2021
multilingual
Modern Standard Arabic
['wikipedia', 'books', 'news articles']
text
null
The FLORES-101 evaluation benchmark consists of 3001 sentences extracted from English Wikipedia and covers various topics and domains. These sentences have been translated into 101 languages by professional translators through a carefully controlled process.
3,001
sentences
null
['Facebook']
null
null
null
Arab
false
GitHub
Free
null
['machine translation']
null
null
null
['Naman Goyal', 'Cynthia Gao', 'Vishrav Chaudhary', 'Guillaume Wenzek', 'Da Ju', 'Sanjan Krishnan', "Marc'Aurelio Ranzato", 'Francisco Guzmán', 'Angela Fan']
['Facebook AI Research']
One of the biggest challenges hindering progress in low-resource and multilingual machine translation is the lack of good evaluation benchmarks. Current evaluation benchmarks either lack good coverage of low-resource languages, consider only restricted domains, or are low quality because they are constructed using semi-automatic procedures. In this work, we introduce the FLORES-101 evaluation benchmark, consisting of 3001 sentences extracted from English Wikipedia and covering a variety of different topics and domains. These sentences have been translated in 101 languages by professional translators through a carefully controlled process. The resulting dataset enables better assessment of model quality on the long tail of low-resource languages, including the evaluation of many-to-many multilingual translation systems, as all translations are multilingually aligned. By publicly releasing such a high-quality and high-coverage dataset, we hope to foster progress in the machine translation community and beyond.
1
1
null
0
0
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
0
0
0
null
1
null
null
null
1
1
1
ar
test
Transliteration
[]
null
https://github.com/google/transliteration
Apache-2.0
2016
multilingual
Modern Standard Arabic
['wikipedia']
text
null
Arabic-English transliteration dataset mined from Wikipedia.
15,898
tokens
null
['Google']
null
null
null
Arab-Latin
false
GitHub
Free
null
['transliteration', 'machine translation']
null
null
null
['Mihaela Rosca', 'Thomas Breuel']
['Google']
Transliteration is a key component of machine translation systems and software internationalization. This paper demonstrates that neural sequence-to-sequence models obtain state of the art or close to state of the art results on existing datasets. In an effort to make machine transliteration accessible, we open source a new Arabic to English transliteration dataset and our trained models.
1
1
null
1
0
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
ar
test
ADI-5
[{'Name': 'Egyptian', 'Dialect': 'Egypt', 'Volume': 14.4, 'Unit': 'hours'}, {'Name': 'Gulf', 'Dialect': 'Gulf', 'Volume': 14.1, 'Unit': 'hours'}, {'Name': 'Levantine', 'Dialect': 'Levant', 'Volume': 14.3, 'Unit': 'hours'}, {'Name': 'MSA', 'Dialect': 'Modern Standard Arabic', 'Volume': 14.3, 'Unit': 'hours'}, {'Name': 'North African', 'Dialect': 'North Africa', 'Volume': 14.6, 'Unit': 'hours'}]
null
https://github.com/Qatar-Computing-Research-Institute/dialectID
MIT License
2016
ar
mixed
['TV Channels']
audio
null
Broadcast speech divided across the five major Arabic varieties: Egyptian (EGY), Levantine (LAV), Gulf (GLF), North African (NOR), and Modern Standard Arabic (MSA).
74.5
hours
null
['QCRI']
null
null
null
Arab
false
GitHub
Free
null
['dialect identification']
null
null
null
['A. Ali', 'Najim Dehak', 'P. Cardinal', 'Sameer Khurana', 'S. Yella', 'James R. Glass', 'P. Bell', 'S. Renals']
[]
We investigate different approaches for dialect identification in Arabic broadcast speech, using phonetic, lexical features obtained from a speech recognition system, and acoustic features using the i-vector framework. We studied both generative and discriminate classifiers, and we combined these features using a multi-class Support Vector Machine (SVM). We validated our results on an Arabic/English language identification task, with an accuracy of 100%. We used these features in a binary classifier to discriminate between Modern Standard Arabic (MSA) and Dialectal Arabic, with an accuracy of 100%. We further report results using the proposed method to discriminate between the five most widely used dialects of Arabic: namely Egyptian, Gulf, Levantine, North African, and MSA, with an accuracy of 52%. We discuss dialect identification errors in the context of dialect code-switching between Dialectal Arabic and MSA, and compare the error pattern between manually labeled data, and the output from our classifier. We also release the train and test data as standard corpus for dialect identification.
1
1
null
1
0
0
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
ar
test
Maknuune
[]
null
https://www.palestine-lexicon.org
CC BY-SA 4.0
2022
multilingual
Palestine
['captions', 'public datasets', 'other']
text
null
A large open lexicon for the Palestinian Arabic dialect. Maknuune has over 36K entries from 17K lemmas, and 3.7K roots. All entries include diacritized Arabic orthography, phonological transcription and English glosses.
36,302
tokens
null
['New York University Abu Dhabi', 'University of Oxford', 'UNRWA']
null
null
null
Arab-Latin
true
Gdrive
Free
null
['morphological analysis', 'lexicon analysis']
null
null
null
['Shahd Dibas', 'Christian Khairallah', 'Nizar Habash', 'Omar Fayez Sadi', 'Tariq Sairafy', 'Karmel Sarabta', 'Abrar Ardah']
['NYUAD', 'University of Oxford', 'UNRWA']
We present Maknuune, a large open lexicon for the Palestinian Arabic dialect. Maknuune has over 36K entries from 17K lemmas, and 3.7K roots. All entries include diacritized Arabic orthography, phonological transcription and English glosses. Some entries are enriched with additional information such as broken plurals and templatic feminine forms, associated phrases and collocations, Standard Arabic glosses, and examples or notes on grammar, usage, or location of collected entry.
1
1
null
1
0
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
0
0
1
null
1
null
null
null
1
1
1
ar
test
EmojisAnchors
[]
null
https://codalab.lisn.upsaclay.fr/competitions/2324
custom
2022
ar
mixed
['social media', 'public datasets']
text
null
Fine-Grained Hate Speech Detection on Arabic Twitter
12,698
sentences
null
['QCRI', 'University of Pittsburgh']
null
null
null
Arab
false
CodaLab
Free
null
['offensive language detection']
null
null
null
['Hamdy Mubarak', 'Hend Al-Khalifa', 'AbdulMohsen Al-Thubaity']
['Qatar Computing Research Institute', 'King Saud University', 'King Abdulaziz City for Science and Technology (KACST)']
We introduce a generic, language-independent method to collect a large percentage of offensive and hate tweets regardless of their topics or genres. We harness the extralinguistic information embedded in the emojis to collect a large number of offensive tweets. We apply the proposed method on Arabic tweets and compare it with English tweets - analysing key cultural differences. We observed a constant usage of these emojis to represent offensiveness throughout different timespans on Twitter. We manually annotate and publicly release the largest Arabic dataset for offensive, fine-grained hate speech, vulgar and violence content. Furthermore, we benchmark the dataset for detecting offensiveness and hate speech using different transformer architectures and perform in-depth linguistic analysis. We evaluate our models on external datasets - a Twitter dataset collected using a completely different method, and a multi-platform dataset containing comments from Twitter, YouTube and Facebook, for assessing generalization capability. Competitive results on these datasets suggest that the data collected using our method captures universal characteristics of offensive language. Our findings also highlight the common words used in offensive communications, common targets for hate speech, specific patterns in violence tweets; and pinpoint common classification errors that can be attributed to limitations of NLP models. We observe that even state-of-the-art transformer models may fail to take into account culture, background and context or understand nuances present in real-world data such as sarcasm.
1
1
null
0
0
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
0
0
0
null
1
null
null
null
1
1
1
ar
test
Calliar
[]
null
https://github.com/ARBML/Calliar
MIT License
2021
ar
Modern Standard Arabic
['web pages']
images
null
Calliar is a dataset for Arabic calligraphy. The dataset consists of 2,500 JSON files that contain strokes manually annotated for Arabic calligraphy.
2,500
images
null
['ARBML']
null
null
null
Arab
false
GitHub
Free
null
['optical character recognition']
null
null
null
['Zaid Alyafeai', 'Maged S. Al-shaibani', 'Mustafa Ghaleb', 'Yousif Ahmed Al-Wajih']
['KFUPM', 'KFUPM', 'KFUPM', 'KFUPM']
Calligraphy is an essential part of the Arabic heritage and culture. It has been used in the past for the decoration of houses and mosques. Usually, such calligraphy is designed manually by experts with aesthetic insights. In the past few years, there has been a considerable effort to digitize such type of art by either taking a photo of decorated buildings or drawing them using digital devices. The latter is considered an online form where the drawing is tracked by recording the apparatus movement, an electronic pen for instance, on a screen. In the literature, there are many offline datasets collected with a diversity of Arabic styles for calligraphy. However, there is no available online dataset for Arabic calligraphy. In this paper, we illustrate our approach for the collection and annotation of an online dataset for Arabic calligraphy called Calliar that consists of 2,500 sentences. Calliar is annotated for stroke, character, word and sentence level prediction.
1
1
null
1
1
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
ar
test
LABR
[]
null
https://github.com/mohamedadaly/LABR
GPL-2.0
2015
ar
mixed
['social media', 'reviews']
text
null
A large Arabic book review dataset for sentiment analysis
63,257
sentences
null
['Cairo University']
null
null
null
Arab
false
GitHub
Free
null
['review classification', 'sentiment analysis']
null
null
null
['Mahmoud Nabil', 'Mohamed Aly', 'Amir F. Atiya']
['Cairo University', 'Cairo University', 'Cairo University']
We introduce LABR, the largest sentiment analysis dataset to-date for the Arabic language. It consists of over 63,000 book reviews, each rated on a scale of 1 to 5 stars. We investigate the properties of the dataset, and present its statistics. We explore using the dataset for two tasks: (1) sentiment polarity classification; and (2) ratings classification. Moreover, we provide standard splits of the dataset into training, validation and testing, for both polarity and ratings classification, in both balanced and unbalanced settings. We extend our previous work by performing a comprehensive analysis on the dataset. In particular, we perform an extended survey of the different classifiers typically used for the sentiment polarity classification problem. We also construct a sentiment lexicon from the dataset that contains both single and compound sentiment words and we explore its effectiveness. We make the dataset and experimental details publicly available.
1
1
null
0
0
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
0
1
1
null
1
null
null
null
1
1
1
ar
test
ACVA
[]
null
https://github.com/FreedomIntelligence/AceGPT
Apache-2.0
2023
ar
Modern Standard Arabic
['LLM']
text
null
ACVA is a Yes-No question dataset comprising over 8,000 questions generated by GPT-3.5 Turbo from 50 designed Arabic topics, to assess model alignment with Arabic values and cultures.
8,000
sentences
null
['FreedomIntelligence']
null
null
null
Arab
false
GitHub
Free
null
['question answering']
null
null
null
['Huang Huang', 'Fei Yu', 'Jianqing Zhu', 'Xuening Sun', 'Hao Cheng', 'Dingjie Song', 'Zhihong Chen', 'Abdulmohsen Alharthi', 'Bang An', 'Juncai He', 'Ziche Liu', 'Zhiyi Zhang', 'Junying Chen', 'Jianquan Li', 'Benyou Wang', 'Lian Zhang', 'Ruoyu Sun', 'Xiang Wan', 'Haizhou Li', 'Jinchao Xu']
[]
This paper is devoted to the development of a localized Large Language Model (LLM) specifically for Arabic, a language imbued with unique cultural characteristics inadequately addressed by current mainstream models. Significant concerns emerge when addressing cultural sensitivity and local values. To address this, the paper proposes a comprehensive solution that includes further pre-training with Arabic texts, Supervised Fine-Tuning (SFT) utilizing native Arabic instructions, and GPT-4 responses in Arabic, alongside Reinforcement Learning with AI Feedback (RLAIF) employing a reward model attuned to local culture and values. The goal is to cultivate culturally cognizant and value-aligned Arabic LLMs capable of accommodating the diverse, application-specific needs of Arabic-speaking communities. Comprehensive evaluations reveal that the resulting model, dubbed `AceGPT', sets the state-of-the-art standard for open Arabic LLMs across various benchmarks. Codes, data, and models are in https://github.com/FreedomIntelligence/AceGPT.
1
1
null
1
0
0
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
ar
test
ATHAR
[]
null
https://hf.co/datasets/mohamed-khalil/ATHAR
CC BY-SA 4.0
2024
multilingual
Classical Arabic
['books']
text
null
The ATHAR dataset comprises 66,000 translation pairs from Classical Arabic to English. It spans a wide array of subjects, aiming to enhance the development of NLP models specialized in Classical Arabic.
66,000
sentences
null
['ADAPT/DCU']
null
null
null
Arab
false
HuggingFace
Free
null
['machine translation']
null
null
null
['Mohammed Khalil', 'Mohammed Sabry']
['Independent Researcher', 'ADAPT/DCU']
Classical Arabic represents a significant era, encompassing the golden age of Arab culture, philosophy, and scientific literature. With a broad consensus on the importance of translating these literatures to enrich knowledge dissemination across communities, the advent of large language models (LLMs) and translation systems offers promising tools to facilitate this goal. However, we have identified a scarcity of translation datasets in Classical Arabic, which are often limited in scope and topics, hindering the development of high-quality translation systems. In response, we present the ATHAR dataset, comprising 66,000 high-quality Classical Arabic to English translation samples that cover a wide array of subjects including science, culture, and philosophy. Furthermore, we assess the performance of current state-of-the-art LLMs under various settings, concluding that there is a need for such datasets in current systems. Our findings highlight how models can benefit from fine-tuning or incorporating this dataset into their pretraining pipelines. The dataset is publicly available on the HuggingFace Data Hub at https://huggingface.co/datasets/mohamed-khalil/ATHAR.
1
1
null
1
0
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
ar
test
OpenITI-proc
[]
null
https://zenodo.org/record/2535593
CC BY 4.0
2018
ar
Classical Arabic
['public datasets', 'books']
text
null
A linguistically annotated version of the OpenITI corpus, with annotations for lemmas, POS tags, parse trees, and morphological segmentation
7,144
documents
null
[]
null
null
null
Arab
false
zenodo
Free
null
['text generation', 'language modeling']
null
null
null
['Yonatan Belinkov', 'Alexander Magidow', 'Alberto Barrón-Cedeño', 'Avi Shmidman', 'Maxim Romanov']
[]
Arabic is a widely-spoken language with a long and rich history, but existing corpora and language technology focus mostly on modern Arabic and its varieties. Therefore, studying the history of the language has so far been mostly limited to manual analyses on a small scale. In this work, we present a large-scale historical corpus of the written Arabic language, spanning 1400 years. We describe our efforts to clean and process this corpus using Arabic NLP tools, including the identification of reused text. We study the history of the Arabic language using a novel automatic periodization algorithm, as well as other techniques. Our findings confirm the established division of written Arabic into Modern Standard and Classical Arabic, and confirm other established periodizations, while suggesting that written Arabic may be divisible into still further periods of development.
1
1
null
0
0
1
1
1
1
1
null
1
1
1
null
0
null
null
null
1
1
0
0
0
null
1
null
null
null
1
1
1
ar
test
AraDangspeech
[]
null
https://github.com/UBC-NLP/Arabic-Dangerous-Dataset
unknown
2020
ar
mixed
['social media']
text
null
Dangerous speech detection
5,011
sentences
null
['The University of British Columbia']
null
null
null
Arab
false
GitHub
Free
null
['offensive language detection']
null
null
null
['Ali Alshehri', 'El Moatez Billah Nagoudi', 'Muhammad Abdul-Mageed']
['The University of British Columbia']
Social media communication has become a significant part of daily activity in modern societies. For this reason, ensuring safety in social media platforms is a necessity. Use of dangerous language such as physical threats in online environments is a somewhat rare, yet remains highly important. Although several works have been performed on the related issue of detecting offensive and hateful language, dangerous speech has not previously been treated in any significant way. Motivated by these observations, we report our efforts to build a labeled dataset for dangerous speech. We also exploit our dataset to develop highly effective models to detect dangerous content. Our best model performs at 59.60% macro F1, significantly outperforming a competitive baseline.
1
1
null
0
0
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
ar
test
Arabic-Hebrew TED Talks Parallel Corpus
[]
null
https://github.com/ajinkyakulkarni14/TED-Multilingual-Parallel-Corpus
unknown
2016
multilingual
Modern Standard Arabic
['captions', 'public datasets']
text
null
This dataset consists of 2023 TED talks with aligned Arabic and Hebrew subtitles. Sentences were rebuilt and aligned using English as a pivot to improve accuracy, offering a valuable resource for Arabic-Hebrew machine translation tasks.
225,000
sentences
null
['FBK']
null
null
null
Arab
false
GitHub
Free
null
['machine translation']
null
null
null
['Mauro Cettolo']
['Fondazione Bruno Kessler (FBK)']
We describe an Arabic-Hebrew parallel corpus of TED talks built upon WIT3, the Web inventory that repurposes the original content of the TED website in a way which is more convenient for MT researchers. The benchmark consists of about 2,000 talks, whose subtitles in Arabic and Hebrew have been accurately aligned and rearranged in sentences, for a total of about 3.5M tokens per language. Talks have been partitioned in train, development and test sets similarly in all respects to the MT tasks of the IWSLT 2016 evaluation campaign. In addition to describing the benchmark, we list the problems encountered in preparing it and the novel methods designed to solve them. Baseline MT results and some measures on sentence length are provided as an extrinsic evaluation of the quality of the benchmark.
1
1
null
0
0
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
0
0
0
null
1
null
null
null
1
1
1
ar
test
ARASPIDER
[]
null
https://github.com/ahmedheakl/AraSpider
MIT License
2024
ar
Modern Standard Arabic
['public datasets', 'LLM']
text
null
AraSpider is a translated version of the Spider dataset, which is commonly used for semantic parsing and text-to-SQL generation. The dataset includes 200 databases across 138 domains with 10,181 questions and 5,693 unique complex SQL queries.
10,181
sentences
null
['Egypt-Japan University of Science and Technology']
null
null
null
Arab
false
GitHub
Free
null
['semantic parsing', 'text to SQL']
null
null
null
['Ahmed Heakl', 'Youssef Mohamed', 'Ahmed B. Zaky']
['Egypt-Japan University of Science and Technology', 'Egypt-Japan University of Science and Technology', 'Egypt-Japan University of Science and Technology']
This study presents AraSpider, the first Arabic version of the Spider dataset, aimed at improving natural language processing (NLP) in the Arabic-speaking community. Four multilingual translation models were tested for their effectiveness in translating English to Arabic. Additionally, two models were assessed for their ability to generate SQL queries from Arabic text. The results showed that using back translation significantly improved the performance of both ChatGPT 3.5 and SQLCoder models, which are considered top performers on the Spider dataset. Notably, ChatGPT 3.5 demonstrated high-quality translation, while SQLCoder excelled in text-to-SQL tasks. The study underscores the importance of incorporating contextual schema and employing back translation strategies to enhance model performance in Arabic NLP tasks. Moreover, the provision of detailed methodologies for reproducibility and translation of the dataset into other languages highlights the research's commitment to promoting transparency and collaborative knowledge sharing in the field. Overall, these contributions advance NLP research, empower Arabic-speaking researchers, and enrich the global discourse on language comprehension and database interrogation.
1
1
null
1
0
1
1
1
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
en
test
HellaSwag
null
null
https://rowanzellers.com/hellaswag
MIT License
2019
en
null
['captions', 'public datasets', 'wikipedia']
text
null
HellaSwag is a dataset for physically situated commonsense reasoning.
70,000
sentences
null
['Allen Institute of Artificial Intelligence']
null
null
null
null
false
other
Free
null
['natural language inference']
null
null
null
['Rowan Zellers', 'Ari Holtzman', 'Yonatan Bisk', 'Ali Farhadi', 'Yejin Choi']
['University of Washington', 'Allen Institute of Artificial Intelligence']
Recent work by Zellers et al. (2018) introduced a new task of commonsense natural language inference: given an event description such as "A woman sits at a piano," a machine must select the most likely followup: "She sets her fingers on the keys." With the introduction of BERT, near human-level performance was reached. Does this mean that machines can perform human level commonsense inference? In this paper, we show that commonsense inference still proves difficult for even state-of-the-art models, by presenting HellaSwag, a new challenge dataset. Though its questions are trivial for humans (>95% accuracy), state-of-the-art models struggle (<48%). We achieve this via Adversarial Filtering (AF), a data collection paradigm wherein a series of discriminators iteratively select an adversarial set of machine-generated wrong answers. AF proves to be surprisingly robust. The key insight is to scale up the length and complexity of the dataset examples towards a critical 'Goldilocks' zone wherein generated text is ridiculous to humans, yet often misclassified by state-of-the-art models. Our construction of HellaSwag, and its resulting difficulty, sheds light on the inner workings of deep pretrained models. More broadly, it suggests a new path forward for NLP research, in which benchmarks co-evolve with the evolving state-of-the-art in an adversarial way, so as to present ever-harder challenges.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
en
test
GPQA
null
null
https://github.com/idavidrein/gpqa/
CC BY 4.0
2023
en
null
['other']
text
null
A challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. The questions are designed to be difficult for both state-of-the-art AI systems and skilled non-experts, even with access to the web.
448
sentences
null
['New York University', 'Cohere', 'Anthropic']
null
null
null
null
false
GitHub
Free
null
['multiple choice question answering']
null
null
null
['David Rein', 'Betty Li Hou', 'Asa Cooper Stickland', 'Jackson Petty', 'Richard Yuanzhe Pang', 'Julien Dirani', 'Julian Michael', 'Samuel R. Bowman']
['New York University', 'Cohere', 'Anthropic, PBC']
We present GPQA, a challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. We ensure that the questions are high-quality and extremely difficult: experts who have or are pursuing PhDs in the corresponding domains reach 65% accuracy (74% when discounting clear mistakes the experts identified in retrospect), while highly skilled non-expert validators only reach 34% accuracy, despite spending on average over 30 minutes with unrestricted access to the web (i.e., the questions are "Google-proof"). The questions are also difficult for state-of-the-art AI systems, with our strongest GPT-4 based baseline achieving 39% accuracy. If we are to use future AI systems to help us answer very hard questions, for example, when developing new scientific knowledge, we need to develop scalable oversight methods that enable humans to supervise their outputs, which may be difficult even if the supervisors are themselves skilled and knowledgeable. The difficulty of GPQA both for skilled non-experts and frontier AI systems should enable realistic scalable oversight experiments, which we hope can help devise ways for human experts to reliably get truthful information from AI systems that surpass human capabilities.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
en
test
GoEmotions
null
null
https://github.com/google-research/google-research/tree/master/goemotions
Apache-2.0
2020
en
null
['social media']
text
null
A large-scale, manually annotated dataset of 58,009 English Reddit comments. The comments are labeled for 27 fine-grained emotion categories or Neutral, designed for emotion classification and understanding tasks. The dataset was curated to balance sentiment and reduce profanity and harmful content.
58,009
sentences
null
['Google Research']
null
null
null
null
false
GitHub
Free
null
['emotion classification']
null
null
null
['Dorottya Demszky', 'Dana Movshovitz-Attias', 'Jeongwoo Ko', 'Alan Cowen', 'Gaurav Nemade', 'Sujith Ravi']
['Stanford Linguistics', 'Google Research', 'Amazon Alexa']
Understanding emotion expressed in language has a wide range of applications, from building empathetic chatbots to detecting harmful online behavior. Advancement in this area can be improved using large-scale datasets with a fine-grained typology, adaptable to multiple downstream tasks. We introduce GoEmotions, the largest manually annotated dataset of 58k English Reddit comments, labeled for 27 emotion categories or Neutral. We demonstrate the high quality of the annotations via Principal Preserved Component Analysis. We conduct transfer learning experiments with existing emotion benchmarks to show that our dataset generalizes well to other domains and different emotion taxonomies. Our BERT-based model achieves an average F1-score of .46 across our proposed taxonomy, leaving much room for improvement.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
en
test
SQuAD 2.0
null
null
https://rajpurkar.github.io/SQuAD-explorer/
CC BY-SA 4.0
2018
en
null
['wikipedia']
text
null
A version of the Stanford Question Answering Dataset (SQuAD) that combines existing SQuAD 1.1 data with over 50,000 new, unanswerable questions written adversarially by crowdworkers.
151,054
sentences
null
['Stanford University']
null
null
null
null
false
GitHub
Free
null
['question answering']
null
null
null
['Pranav Rajpurkar', 'Robin Jia', 'Percy Liang']
['Stanford University']
Extractive reading comprehension systems can often locate the correct answer to a question in a context document, but they also tend to make unreliable guesses on questions for which the correct answer is not stated in the context. Existing datasets either focus exclusively on answerable questions, or use automatically generated unanswerable questions that are easy to identify. To address these weaknesses, we present SQuAD 2.0, the latest version of the Stanford Question Answering Dataset (SQuAD). SQuAD 2.0 combines existing SQuAD data with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD 2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. SQuAD 2.0 is a challenging natural language understanding task for existing models: a strong neural system that gets 86% F1 on SQuAD 1.1 achieves only 66% F1 on SQuAD 2.0.
1
null
null
0
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
en
test
LAMBADA
null
null
https://huggingface.co/datasets/cimec/lambada
unknown
2016
en
null
['books', 'public datasets']
text
null
A dataset of narrative passages designed for a word prediction task. The key characteristic is that human subjects can easily guess the final word of a passage when given the full context, but find it nearly impossible when only shown the last sentence.
10,022
documents
null
['University of Trento', 'University of Amsterdam']
null
null
null
null
false
HuggingFace
Free
null
['word prediction']
null
null
null
['Denis Paperno', 'Germán Kruszewski', 'Angeliki Lazaridou', 'Quan Ngoc Pham', 'Raffaella Bernardi', 'Sandro Pezzelle', 'Marco Baroni', 'Gemma Boleda', 'Raquel Fernández']
['University of Trento', 'University of Amsterdam']
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
0
1
1
null
1
null
null
null
1
1
1
en
test
ClimbMix
null
null
https://huggingface.co/datasets/nvidia/ClimbMix
CC BY-NC 4.0
2,025
en
null
['web pages', 'LLM', 'other']
text
null
ClimbMix is a compact 400-billion-token dataset designed for efficient language model pre-training. It was created by applying the CLIMB framework to find an optimal mixture from the ClimbLab corpus (derived from Nemotron-CC and smollm-corpus), delivering superior performance under an equal token budget.
400,000,000,000
tokens
null
['NVIDIA']
null
null
null
null
true
HuggingFace
Free
null
['language modeling']
null
null
null
['Shizhe Diao', 'Yu Yang', 'Yonggan Fu', 'Xin Dong', 'Dan Su', 'Markus Kliegl', 'Zijia Chen', 'Peter Belcak', 'Yoshi Suhara', 'Hongxu Yin', 'Mostofa Patwary', 'Yingyan (Celine) Lin', 'Jan Kautz', 'Pavlo Molchanov']
['NVIDIA']
Pre-training datasets are typically collected from web content and lack inherent domain divisions. For instance, widely used datasets like Common Crawl do not include explicit domain labels, while manually curating labeled datasets such as The Pile is labor-intensive. Consequently, identifying an optimal pre-training data mixture remains a challenging problem, despite its significant benefits for pre-training performance. To address these challenges, we propose CLustering-based Iterative Data Mixture Bootstrapping (CLIMB), an automated framework that discovers, evaluates, and refines data mixtures in a pre-training setting. Specifically, CLIMB embeds and clusters large-scale datasets in a semantic space and then iteratively searches for optimal mixtures using a smaller proxy model and a predictor. When continuously trained on 400B tokens with this mixture, our 1B model exceeds the state-of-the-art Llama-3.2-1B by 2.0%. Moreover, we observe that optimizing for a specific domain (e.g., Social Sciences) yields a 5% improvement over random sampling. Finally, we introduce ClimbLab, a filtered 1.2-trillion-token corpus with 20 clusters as a research playground, and ClimbMix, a compact yet powerful 400-billion-token dataset designed for efficient pre-training that delivers superior performance under an equal token budget. We analyze the final data mixture, elucidating the characteristics of an optimal data mixture. Our data is available at: https://research.nvidia.com/labs/lpr/climb/
1
null
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
en
test
RACE
null
null
http://www.cs.cmu.edu/~glai1/data/race/
custom
2,017
en
null
['web pages']
text
null
A large-scale reading comprehension dataset collected from English exams for middle and high school Chinese students. It consists of nearly 28,000 passages and 100,000 multiple-choice questions designed by human experts to evaluate understanding and reasoning abilities, covering a variety of topics.
97,687
sentences
null
['Carnegie Mellon University']
null
null
null
null
false
other
Free
null
['multiple choice question answering']
null
null
null
['Guokun Lai', 'Qizhe Xie', 'Hanxiao Liu', 'Yiming Yang', 'Eduard Hovy']
['Carnegie Mellon University']
We present RACE, a new dataset for benchmark evaluation of methods in the reading comprehension task. Collected from the English exams for middle and high school Chinese students in the age range between 12 to 18, RACE consists of near 28,000 passages and near 100,000 questions generated by human experts (English instructors), and covers a variety of topics which are carefully designed for evaluating the students' ability in understanding and reasoning. In particular, the proportion of questions that requires reasoning is much larger in RACE than that in other benchmark datasets for reading comprehension, and there is a significant gap between the performance of the state-of-the-art models (43%) and the ceiling human performance (95%). We hope this new dataset can serve as a valuable resource for research and evaluation in machine comprehension. The dataset is freely available at http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at https://github.com/qizhex/RACE_AR_baselines.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
en
test
MMLU-Pro
null
null
https://huggingface.co/datasets/TIGER-Lab/MMLU-Pro
MIT License
2,024
en
null
['public datasets', 'web pages', 'LLM']
text
null
An enhanced version of the MMLU benchmark, MMLU-Pro features more challenging, reasoning-focused questions with an expanded choice set of ten options. It was created by filtering trivial and noisy questions from MMLU and integrating new questions, followed by expert review, to be more discriminative and robust.
12,032
sentences
null
['University of Waterloo', 'University of Toronto', 'Carnegie Mellon University']
null
null
null
null
false
HuggingFace
Free
null
['multiple choice question answering']
null
null
null
['Yubo Wang', 'Xueguang Ma', 'Ge Zhang', 'Yuansheng Ni', 'Abhranil Chandra', 'Shiguang Guo', 'Weiming Ren', 'Aaran Arulraj', 'Xuan He', 'Ziyan Jiang', 'Tianle Li', 'Max Ku', 'Kai Wang', 'Alex Zhuang', 'Rongqi Fan', 'Xiang Yue', 'Wenhu Chen']
['University of Waterloo', 'University of Toronto', 'Carnegie Mellon University']
In the age of large-scale language models, benchmarks like the Massive Multitask Language Understanding (MMLU) have been pivotal in pushing the boundaries of what AI can achieve in language comprehension and reasoning across diverse domains. However, as models continue to improve, their performance on these benchmarks has begun to plateau, making it increasingly difficult to discern differences in model capabilities. This paper introduces MMLU-Pro, an enhanced dataset designed to extend the mostly knowledge-driven MMLU benchmark by integrating more challenging, reasoning-focused questions and expanding the choice set from four to ten options. Additionally, MMLU-Pro eliminates the trivial and noisy questions in MMLU. Our experimental results show that MMLU-Pro not only raises the challenge, causing a significant drop in accuracy by 16% to 33% compared to MMLU but also demonstrates greater stability under varying prompts. With 24 different prompt styles tested, the sensitivity of model scores to prompt variations decreased from 4-5% in MMLU to just 2% in MMLU-Pro. Additionally, we found that models utilizing Chain of Thought (CoT) reasoning achieved better performance on MMLU-Pro compared to direct answering, which is in stark contrast to the findings on the original MMLU, indicating that MMLU-Pro includes more complex reasoning questions. Our assessments confirm that MMLU-Pro is a more discriminative benchmark to better track progress in the field.
1
null
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
en
test
BoolQ
null
null
https://github.com/google-research-datasets/boolean-questions
CC BY-SA 3.0
2,019
en
null
['wikipedia', 'web pages']
text
null
A reading comprehension dataset of 16,000 naturally occurring yes/no questions. Questions are gathered from unprompted Google search queries and paired with a Wikipedia paragraph containing the answer. The dataset is designed to be challenging, requiring complex, non-factoid inference.
16,000
sentences
null
['Google AI']
null
null
null
null
false
GitHub
Free
null
['question answering']
null
null
null
['Christopher Clark', 'Kenton Lee', 'Ming-Wei Chang', 'Tom Kwiatkowski', 'Michael Collins', 'Kristina Toutanova']
['Paul G. Allen School of CSE, University of Washington', 'Google AI Language']
In this paper we study yes/no questions that are naturally occurring --- meaning that they are generated in unprompted and unconstrained settings. We build a reading comprehension dataset, BoolQ, of such questions, and show that they are unexpectedly challenging. They often query for complex, non-factoid information, and require difficult entailment-like inference to solve. We also explore the effectiveness of a range of transfer learning baselines. We find that transferring from entailment data is more effective than transferring from paraphrase or extractive QA data, and that it, surprisingly, continues to be very beneficial even when starting from massive pre-trained language models such as BERT. Our best method trains BERT on MultiNLI and then re-trains it on our train set. It achieves 80.4% accuracy compared to 90% accuracy of human annotators (and 62% majority-baseline), leaving a significant gap for future work.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
en
test
GSM8K
null
null
https://github.com/openai/grade-school-math
MIT License
2,021
en
null
['other']
text
null
GSM8K is a dataset of 8.5K high quality grade school math problems created by human problem writers. The dataset is designed to have high linguistic diversity while relying on relatively simple grade school math concepts.
8,500
sentences
null
['OpenAI']
null
null
null
null
false
GitHub
Free
null
['question answering']
null
null
null
['Karl Cobbe', 'Vineet Kosaraju', 'Mohammad Bavarian', 'Mark Chen', 'Heewoo Jun', 'Łukasz Kaiser', 'Matthias Plappert', 'Jerry Tworek', 'Jacob Hilton', 'Reiichiro Nakano', 'Christopher Hesse', 'John Schulman']
['OpenAI']
State-of-the-art language models can match human performance on many tasks, but they still struggle to robustly perform multi-step mathematical reasoning. To diagnose the failures of current models and support research, we introduce GSM8K, a dataset of 8.5K high quality linguistically diverse grade school math word problems. We find that even the largest transformer models fail to achieve high test performance, despite the conceptual simplicity of this problem distribution. To increase performance, we propose training verifiers to judge the correctness of model completions. At test time, we generate many candidate solutions and select the one ranked highest by the verifier. We demonstrate that verification significantly improves performance on GSM8K, and we provide strong empirical evidence that verification scales more effectively with increased data than a finetuning baseline.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
en
test
HotpotQA
null
null
https://hotpotqa.github.io
CC BY-SA 4.0
2,018
en
null
['wikipedia']
text
null
A large-scale dataset with 113k Wikipedia-based question-answer pairs. It requires reasoning over multiple supporting documents, features diverse questions, provides sentence-level supporting facts for explainability, and includes factoid comparison questions to test systems' ability to extract and compare facts.
112,779
sentences
null
['Carnegie Mellon University', 'Stanford University', 'Mila, Université de Montréal', 'Google AI']
null
null
null
null
true
GitHub
Free
null
['question answering']
null
null
null
['Zhilin Yang', 'Peng Qi', 'Saizheng Zhang', 'Yoshua Bengio', 'William W. Cohen', 'Ruslan Salakhutdinov', 'Christopher D. Manning']
['Carnegie Mellon University', 'Stanford University', 'Mila, Université de Montréal', 'Google AI']
Existing question answering (QA) datasets fail to train QA systems to perform complex reasoning and provide explanations for answers. We introduce HotpotQA, a new dataset with 113k Wikipedia-based question-answer pairs with four key features: (1) the questions require finding and reasoning over multiple supporting documents to answer; (2) the questions are diverse and not constrained to any pre-existing knowledge bases or knowledge schemas; (3) we provide sentence-level supporting facts required for reasoning, allowing QA systems to reason with strong supervision and explain the predictions; (4) we offer a new type of factoid comparison questions to test QA systems' ability to extract relevant facts and perform necessary comparison. We show that HotpotQA is challenging for the latest QA systems, and the supporting facts enable models to improve performance and make explainable predictions.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
en
test
SQuAD
null
null
https://rajpurkar.github.io/SQuAD-explorer/
CC BY-SA 4.0
2,016
en
null
['wikipedia']
text
null
A reading comprehension dataset consisting of over 100,000 questions posed by crowdworkers on a set of Wikipedia articles. The answer to each question is a segment of text, or span, from the corresponding reading passage.
107,785
sentences
null
['Stanford University']
null
null
null
null
false
GitHub
Free
null
['question answering']
null
null
null
['Pranav Rajpurkar', 'Jian Zhang', 'Konstantin Lopyrev', 'Percy Liang']
['Stanford University']
We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at https://stanford-qa.com
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
en
test
RefinedWeb
null
null
https://huggingface.co/datasets/tiiuae/falcon-refinedweb
ODC-By
2,023
en
null
['web pages']
text
null
A large-scale, five trillion token English pretraining dataset derived from CommonCrawl. It was created using extensive filtering and deduplication to demonstrate that high-quality web data alone can produce models that outperform those trained on curated corpora. A 600 billion token extract is publicly available.
600,000,000,000
tokens
null
['Technology Innovation Institute']
null
null
null
null
false
HuggingFace
Free
null
['language modeling']
null
null
null
['Guilherme Penedo', 'Quentin Malartic', 'Daniel Hesslow', 'Ruxandra Cojocaru', 'Alessandro Cappelli', 'Hamza Alobeidli', 'Baptiste Pannier', 'Ebtesam Almazrouei', 'Julien Launay']
['LightOn', 'Technology Innovation Institute', 'LPENS, École normale supérieure']
Large language models are commonly trained on a mixture of filtered web data and curated high-quality corpora, such as social media conversations, books, or technical papers. This curation process is believed to be necessary to produce performant models with broad zero-shot generalization abilities. However, as larger models requiring pretraining on trillions of tokens are considered, it is unclear how scalable is curation and whether we will run out of unique high-quality data soon. At variance with previous beliefs, we show that properly filtered and deduplicated web data alone can lead to powerful models; even significantly outperforming models from the state-of-the-art trained on The Pile. Despite extensive filtering, the high-quality data we extract from the web is still plentiful, and we are able to obtain five trillion tokens from CommonCrawl. We publicly release an extract of 600 billion tokens from our RefinedWeb dataset, and 1.3/7.5B parameters language models trained on it.
1
null
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
en
test
MMLU
null
null
https://github.com/hendrycks/test
MIT License
2,021
en
null
['web pages', 'books']
text
null
The MMLU dataset is a collection of 57 tasks covering a wide range of subjects, including elementary mathematics, US history, computer science, law, and more. The dataset is designed to measure a text model's multitask accuracy and requires models to possess extensive world knowledge and problem-solving ability.
15,908
sentences
null
['UC Berkeley', 'Columbia University', 'UChicago', 'UIUC']
null
null
null
null
false
GitHub
Free
null
['multiple choice question answering']
null
null
null
['Dan Hendrycks', 'Collin Burns', 'Steven Basart', 'Andy Zou', 'Mantas Mazeika', 'Dawn Song', 'Jacob Steinhardt']
['UC Berkeley', 'Columbia University', 'UChicago', 'UIUC']
We propose a new test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability. We find that while most recent models have near random-chance accuracy, the very largest GPT-3 model improves over random chance by almost 20 percentage points on average. However, on every one of the 57 tasks, the best models still need substantial improvements before they can reach expert-level accuracy. Models also have lopsided performance and frequently do not know when they are wrong. Worse, they still have near-random accuracy on some socially important subjects such as morality and law. By comprehensively evaluating the breadth and depth of a model's academic and professional understanding, our test can be used to analyze models across many tasks and to identify important shortcomings.
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
en
test
PIQA
null
null
http://yonatanbisk.com/piqa
AFL-3.0
2,019
en
null
['web pages']
text
null
A benchmark dataset for physical commonsense reasoning, presented as multiple-choice question answering. It contains goal-solution pairs inspired by how-to instructions from instructables.com, designed to test a model's understanding of physical properties, affordances, and object manipulation. The dataset was cleaned of artifacts using the AFLite algorithm.
21,000
sentences
null
['Allen Institute for Artificial Intelligence', 'Microsoft Research AI', 'Carnegie Mellon University', 'University of Washington']
null
null
null
null
false
GitHub
Free
null
['question answering', 'multiple choice question answering', 'commonsense reasoning']
null
null
null
['Yonatan Bisk', 'Rowan Zellers', 'Ronan Le Bras', 'Jianfeng Gao', 'Yejin Choi']
['Allen Institute for Artificial Intelligence', 'Microsoft Research AI', 'Carnegie Mellon University', 'Paul G. Allen School for Computer Science and Engineering, University of Washington']
To apply eyeshadow without a brush, should I use a cotton swab or a toothpick? Questions requiring this kind of physical commonsense pose a challenge to today's natural language understanding systems. While recent pretrained models (such as BERT) have made progress on question answering over more abstract domains - such as news articles and encyclopedia entries, where text is plentiful - in more physical domains, text is inherently limited due to reporting bias. Can AI systems learn to reliably answer physical common-sense questions without experiencing the physical world? In this paper, we introduce the task of physical commonsense reasoning and a corresponding benchmark dataset Physical Interaction: Question Answering or PIQA. Though humans find the dataset easy (95% accuracy), large pretrained models struggle (77%). We provide analysis about the dimensions of knowledge that existing models lack, which offers significant opportunities for future research.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
0
1
1
null
1
null
null
null
1
1
1
en
test
BRIGHT
null
null
https://github.com/xlang-ai/BRIGHT
CC BY 4.0
2,025
en
null
['web pages', 'public datasets', 'LLM']
text
null
BRIGHT is a new benchmark for reasoning-intensive retrieval. It consists of 12 datasets from diverse and advanced domains where relevance between queries and documents requires intensive reasoning beyond simple keyword or semantic matching.
1,384
sentences
null
['The University of Hong Kong', 'Princeton University', 'Stanford University', 'University of Washington', 'Google Cloud AI Research']
null
null
null
null
false
GitHub
Free
null
['information retrieval', 'question answering']
null
null
null
['Hongjin Su', 'Howard Yen', 'Mengzhou Xia', 'Weijia Shi', 'Niklas Muennighoff', 'Han-yu Wang', 'Haisu Liu', 'Quan Shi', 'Zachary S. Siegel', 'Michael Tang', 'Ruoxi Sun', 'Jinsung Yoon', 'Sercan Ö. Arık', 'Danqi Chen', 'Tao Yu']
['The University of Hong Kong', 'Princeton University', 'Stanford University', 'University of Washington', 'Google Cloud AI Research']
Existing retrieval benchmarks primarily consist of information-seeking queries (e.g., aggregated questions from search engines) where keyword or semantic-based retrieval is usually sufficient. However, many complex real-world queries require in-depth reasoning to identify relevant documents that go beyond surface form matching. For example, finding documentation for a coding question requires understanding the logic and syntax of the functions involved. To better benchmark retrieval on such challenging queries, we introduce BRIGHT, the first text retrieval benchmark that requires intensive reasoning to retrieve relevant documents. Our dataset consists of 1,384 real-world queries spanning diverse domains, such as economics, psychology, mathematics, and coding. These queries are drawn from naturally occurring and carefully curated human data. Extensive evaluation reveals that even state-of-the-art retrieval models perform poorly on BRIGHT. The leading model on the MTEB leaderboard (Muennighoff et al., 2023) SFR-Embedding-Mistral (Meng et al., 2024), which achieves a score of 59.0 nDCG@10, produces a score of nDCG@10 of 18.3 on BRIGHT. We show that incorporating explicit reasoning about the query improves retrieval performance by up to 12.2 points. Moreover, incorporating retrieved documents from the top-performing retriever boosts question-answering performance. We believe that BRIGHT paves the way for future research on retrieval systems in more realistic and challenging settings.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
en
test
HLE
null
null
https://lastexam.ai
MIT License
2,025
en
null
['other']
text
null
Humanity's Last Exam (HLE) is a dataset of 2,500 challenging questions designed to assess the capabilities of large language models (LLMs). The questions are diverse, covering a wide range of topics and requiring different reasoning abilities. The dataset is still under development and accepting new questions.
2,500
sentences
null
['Center for AI Safety', 'Scale AI']
null
null
null
null
false
other
Free
null
['question answering', 'multiple choice question answering']
null
null
null
['Long Phan', 'Alice Gatti', 'Ziwen Han', 'Nathaniel Li', 'Josephina Hu', 'Hugh Zhang', 'Sean Shi', 'Michael Choi', 'Anish Agrawal', 'Arnav Chopra', 'Adam Khoja', 'Ryan Kim', 'Richard Ren', 'Jason Hausenloy', 'Oliver Zhang', 'Mantas Mazeika', 'Summer Yue', 'Alexandr Wang', 'Dan Hendrycks']
['Center for AI Safety', 'Scale AI']
Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities. However, benchmarks are not keeping pace in difficulty: LLMs now achieve over 90% accuracy on popular benchmarks like MMLU, limiting informed measurement of state-of-the-art LLM capabilities. In response, we introduce Humanity's Last Exam (HLE), a multi-modal benchmark at the frontier of human knowledge, designed to be the final closed-ended academic benchmark of its kind with broad subject coverage. HLE consists of 2,500 questions across dozens of subjects, including mathematics, humanities, and the natural sciences. HLE is developed globally by subject-matter experts and consists of multiple-choice and short-answer questions suitable for automated grading. Each question has a known solution that is unambiguous and easily verifiable, but cannot be quickly answered via internet retrieval. State-of-the-art LLMs demonstrate low accuracy and calibration on HLE, highlighting a significant gap between current LLM capabilities and the expert human frontier on closed-ended academic questions. To inform research and policymaking upon a clear understanding of model capabilities, we publicly release HLE at https://lastexam.ai.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
en
test
TinyStories
null
null
https://huggingface.co/datasets/roneneldan/TinyStories
CDLA-SHARING-1.0
2,023
en
null
['LLM']
text
null
TinyStories is a synthetic dataset of short stories generated by GPT-3.5 and GPT-4. The stories are designed to be simple, using only words that a typical 3 to 4-year-old understands. It is intended to train and evaluate small language models on their ability to generate coherent text and demonstrate reasoning.
2,141,709
documents
null
['Microsoft Research']
null
null
null
null
false
HuggingFace
Free
null
['language modeling', 'text generation', 'instruction tuning']
null
null
null
['Ronen Eldan', 'Yuanzhi Li']
['Microsoft Research']
Language models (LMs) are powerful tools for natural language processing, but they often struggle to produce coherent and fluent text when they are small. Models with around 125M parameters such as GPT-Neo (small) or GPT-2 (small) can rarely generate coherent and consistent English text beyond a few words even after extensive training. This raises the question of whether the emergence of the ability to produce coherent English text only occurs at larger scales (with hundreds of millions of parameters or more) and complex architectures (with many layers of global attention). In this work, we introduce TinyStories, a synthetic dataset of short stories that only contain words that a typical 3 to 4-year-old usually understands, generated by GPT-3.5 and GPT-4. We show that TinyStories can be used to train and evaluate LMs that are much smaller than the state-of-the-art models (below 10 million total parameters), or have much simpler architectures (with only one transformer block), yet still produce fluent and consistent stories with several paragraphs that are diverse and have almost perfect grammar, and demonstrate reasoning capabilities. We also introduce a new paradigm for the evaluation of language models: We suggest a framework which uses GPT-4 to grade the content generated by these models as if those were stories written by students and graded by a (human) teacher. This new paradigm overcomes the flaws of standard benchmarks which often require the model's output to be very structured, and moreover provides a multidimensional score for the model, providing scores for different capabilities such as grammar, creativity and consistency. We hope that TinyStories can facilitate the development, analysis and research of LMs, especially for low-resource or specialized domains, and shed light on the emergence of language capabilities in LMs.
1
null
null
1
0
1
1
null
1
1
null
1
0
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
en
test
WinoGrande
null
null
http://winogrande.allenai.org
CC BY 4.0
2,019
en
null
['web pages']
text
null
WinoGrande is a large-scale dataset of 44,000 problems inspired by the original Winograd Schema Challenge (WSC). The dataset was constructed through a carefully designed crowdsourcing procedure followed by a systematic bias reduction.
43,972
sentences
null
['Allen Institute for Artificial Intelligence', 'University of Washington']
null
null
null
null
false
other
Free
null
['commonsense reasoning']
null
null
null
['Keisuke Sakaguchi', 'Ronan Le Bras', 'Chandra Bhagavatula', 'Yejin Choi']
['Allen Institute for Artificial Intelligence', 'University of Washington']
The Winograd Schema Challenge (WSC) (Levesque, Davis, and Morgenstern 2011), a benchmark for commonsense reasoning, is a set of 273 expert-crafted pronoun resolution problems originally designed to be unsolvable for statistical models that rely on selectional preferences or word associations. However, recent advances in neural language models have already reached around 90% accuracy on variants of WSC. This raises an important question whether these models have truly acquired robust commonsense capabilities or whether they rely on spurious biases in the datasets that lead to an overestimation of the true capabilities of machine commonsense. To investigate this question, we introduce WinoGrande, a large-scale dataset of 44k problems, inspired by the original WSC design, but adjusted to improve both the scale and the hardness of the dataset. The key steps of the dataset construction consist of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel AfLite algorithm that generalizes human-detectable word associations to machine-detectable embedding associations. The best state-of-the-art methods on WinoGrande achieve 59.4-79.1%, which are 15-35% below human performance of 94.0%, depending on the amount of the training data allowed. Furthermore, we establish new state-of-the-art results on five related benchmarks - WSC (90.1%), DPR (93.1%), COPA (90.6%), KnowRef (85.6%), and Winogender (97.1%). These results have dual implications: on one hand, they demonstrate the effectiveness of WinoGrande when used as a resource for transfer learning. On the other hand, they raise a concern that we are likely to be overestimating the true capabilities of machine commonsense across all these benchmarks. We emphasize the importance of algorithmic bias reduction in existing and future benchmarks to mitigate such overestimation.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
en
test
SciQ
null
null
https://huggingface.co/datasets/allenai/sciq
CC BY-NC 3.0
2,017
en
null
['books', 'web pages']
text
null
SciQ is a dataset containing 13,679 multiple-choice science exam questions. It was created using a novel crowdsourcing method that leverages a large corpus of domain-specific text (science textbooks) and a model trained on existing questions to suggest document selections and answer distractors, aiding human workers in the question generation process.
13,679
sentences
null
['Allen Institute for Artificial Intelligence']
null
null
null
null
false
HuggingFace
Free
null
['multiple choice question answering', 'question answering']
null
null
null
['Johannes Welbl', 'Nelson F. Liu', 'Matt Gardner']
['Allen Institute for Artificial Intelligence', 'University of Washington', 'University College London']
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
0
1
1
null
1
null
null
null
1
1
1
en
test
TriviaQA
null
null
http://nlp.cs.washington.edu/triviaqa
Apache-2.0
2,017
en
null
['wikipedia', 'web pages']
text
null
TriviaQA is a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions.
650,000
documents
null
['University of Washington']
null
null
null
null
false
other
Free
null
['question answering', 'information retrieval']
null
null
null
['Mandar Joshi', 'Eunsol Choi', 'Daniel S. Weld', 'Luke Zettlemoyer']
['Allen Institute for Artificial Intelligence', 'University of Washington']
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
jp
test
JEC
null
null
https://github.com/tmu-nlp/autoJQE
unknown
2,022
jp
null
['public datasets']
text
null
A quality estimation (QE) dataset created for building an automatic evaluation model for Japanese Grammatical Error Correction (GEC).
4,391
sentences
null
['Tokyo Metropolitan University', 'RIKEN']
null
null
null
mixed
false
GitHub
Free
null
['grammatical error correction']
null
null
null
['Daisuke Suzuki', 'Yujin Takahashi', 'Ikumi Yamashita', 'Taichi Aida', 'Tosho Hirasawa', 'Michitaka Nakatsuji', 'Masato Mita', 'Mamoru Komachi']
['Tokyo Metropolitan University', 'RIKEN']
In grammatical error correction (GEC), automatic evaluation is an important factor for research and development of GEC systems. Previous studies on automatic evaluation have demonstrated that quality estimation models built from datasets with manual evaluation can achieve high performance in automatic evaluation of English GEC without using reference sentences. However, quality estimation models have not yet been studied in Japanese, because there are no datasets for constructing quality estimation models. Therefore, in this study, we created a quality estimation dataset with manual evaluation to build an automatic evaluation model for Japanese GEC. Moreover, we conducted a meta-evaluation to verify the dataset's usefulness in building the Japanese quality estimation model.
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
0
0
0
null
1
null
null
null
1
1
1
jp
test
JParaCrawl
null
null
http://www.kecl.ntt.co.jp/icl/lirg/jparacrawl
custom
2,020
multilingual
null
['web pages']
text
null
JParaCrawl is a large web-based English-Japanese parallel corpus that was created by crawling the web and finding English-Japanese bitexts. It contains around 8.7 million parallel sentences.
8,763,995
sentences
null
['NTT']
null
null
null
mixed
false
other
Free
null
['machine translation']
null
null
null
['Makoto Morishita', 'Jun Suzuki', 'Masaaki Nagata']
['NTT Corporation']
Recent machine translation algorithms mainly rely on parallel corpora. However, since the availability of parallel corpora remains limited, only some resource-rich language pairs can benefit from them. We constructed a parallel corpus for English-Japanese, for which the amount of publicly available parallel corpora is still limited. We constructed the parallel corpus by broadly crawling the web and automatically aligning parallel sentences. Our collected corpus, called JParaCrawl, amassed over 8.7 million sentence pairs. We show how it includes a broader range of domains and how a neural machine translation model trained with it works as a good pre-trained model for fine-tuning specific domains. The pre-training and fine-tuning approaches achieved or surpassed performance comparable to model training from the initial state and reduced the training time. Additionally, we trained the model with an in-domain dataset and JParaCrawl to show how we achieved the best performance with them. JParaCrawl and the pre-trained models are freely available online for research purposes.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
jp
test
KaoKore
null
null
https://github.com/rois-codh/kaokore
CC BY-SA 4.0
2,020
jp
null
['web pages']
images
null
KaoKore is a dataset of 5,552 face images extracted from pre-modern Japanese artwork from the 16th to 17th centuries. It is derived from the 'Collection of Facial Expressions' dataset and provides labels for gender and social status, along with official train/dev/test splits for classification and generative tasks.
5,552
images
null
['Google Brain', 'ROIS-DS Center for Open Data in the Humanities', 'NII', 'University of Cambridge', 'MILA', 'Université de Montréal']
null
null
null
mixed
false
GitHub
Free
null
['gender identification', 'other']
null
null
null
['Yingtao Tian', 'Chikahiko Suzuki', 'Tarin Clanuwat', 'Mikel Bober-Irizar', 'Alex Lamb', 'Asanobu Kitamoto']
['Google Brain', 'ROIS-DS Center for Open Data in the Humanities', 'NII', 'University of Cambridge', 'MILA', 'Université de Montréal']
From classifying handwritten digits to generating strings of text, the datasets which have received long-time focus from the machine learning community vary greatly in their subject matter. This has motivated a renewed interest in building datasets which are socially and culturally relevant, so that algorithmic research may have a more direct and immediate impact on society. One such area is in history and the humanities, where better and relevant machine learning models can accelerate research across various fields. To this end, newly released benchmarks and models have been proposed for transcribing historical Japanese cursive writing, yet for the field as a whole using machine learning for historical Japanese artworks still remains largely uncharted. To bridge this gap, in this work we propose a new dataset KaoKore which consists of faces extracted from pre-modern Japanese artwork. We demonstrate its value as both a dataset for image classification as well as a creative and artistic dataset, which we explore using generative models. Dataset available at https://github.com/rois-codh/kaokore
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
jp
test
llm-japanese-dataset v0
null
null
https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset
CC BY-SA 4.0
2,023
multilingual
null
['public datasets', 'wikipedia', 'news articles', 'LLM']
text
null
A Japanese chat dataset of approximately 8.4 million records, created for tuning large language models. It is composed of various sub-datasets covering tasks like translation, knowledge-based Q&A, summarization, and more, derived from sources like Wikipedia, WordNet, and other publicly available corpora.
8,393,726
sentences
null
['The University of Tokyo']
null
null
null
mixed
false
HuggingFace
Free
null
['machine translation', 'text generation', 'instruction tuning']
null
null
null
['Masanori HIRANO', 'Masahiro SUZUKI', 'Hiroki SAKAJI']
['The University of Tokyo']
This study constructed a Japanese chat dataset for tuning large language models (LLMs), which consist of about 8.4 million records. Recently, LLMs have been developed and gaining popularity. However, high-performing LLMs are usually mainly for English. There are two ways to support languages other than English by those LLMs: constructing LLMs from scratch or tuning existing models. However, in both ways, datasets are necessary parts. In this study, we focused on supporting Japanese in those LLMs and making a dataset for training or tuning LLMs in Japanese. The dataset we constructed consisted of various tasks, such as translation and knowledge tasks. In our experiment, we tuned an existing LLM using our dataset and evaluated the performance qualitatively. The results suggest that our dataset is possibly beneficial for LLMs. However, we also revealed some difficulties in constructing LLMs in languages other than English.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
jp
test
JaLeCoN
null
null
https://github.com/naist-nlp/jalecon
CC BY-NC-SA 3.0
2,023
jp
null
['news articles', 'public datasets', 'web pages']
text
null
JaLeCoN is a Dataset of Japanese Lexical Complexity for Non-Native Readers. It can be used to train or evaluate Japanese lexical complexity prediction models.
600
sentences
null
['NAIST']
null
null
null
mixed
false
GitHub
Free
null
['other']
null
null
null
['Yusuke Ide', 'Masato Mita', 'Adam Nohejl', 'Hiroki Ouchi', 'Taro Watanabe']
['NAIST', 'CyberAgent Inc.', 'RIKEN']
Lexical complexity prediction (LCP) is the task of predicting the complexity of words in a text on a continuous scale. It plays a vital role in simplifying or annotating complex words to assist readers. To study lexical complexity in Japanese, we construct the first Japanese LCP dataset. Our dataset provides separate complexity scores for Chinese/Korean annotators and others to address the readers' L1-specific needs. In the baseline experiment, we demonstrate the effectiveness of a BERT-based system for Japanese LCP.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
jp
test
JaQuAD
null
null
https://github.com/SkelterLabsInc/JaQuAD
CC BY-SA 3.0
2,022
jp
null
['wikipedia']
text
null
JaQuAD is a Japanese Question Answering dataset consisting of 39,696 extractive question-answer pairs on Japanese Wikipedia articles. The dataset was annotated by humans and is available on GitHub.
39,696
sentences
null
['Skelter Labs']
null
null
null
mixed
false
GitHub
Free
null
['question answering']
null
null
null
['ByungHoon So', 'Kyuhong Byun', 'Kyungwon Kang', 'Seongjin Cho']
['Skelter Labs']
Question Answering (QA) is a task in which a machine understands a given document and a question to find an answer. Despite impressive progress in the NLP area, QA is still a challenging problem, especially for non-English languages due to the lack of annotated datasets. In this paper, we present the Japanese Question Answering Dataset, JaQuAD, which is annotated by humans. JaQuAD consists of 39,696 extractive question-answer pairs on Japanese Wikipedia articles. We finetuned a baseline model which achieves 78.92% for F1 score and 63.38% for EM on test set. The dataset and our experiments are available at https://github.com/SkelterLabsInc/JaQuAD.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
jp
test
JAFFE
null
null
https://zenodo.org/record/3451524
custom
2,021
jp
null
['other']
images
null
The Japanese Female Facial Expression (JAFFE) dataset is a set of 213 images depicting facial expressions posed by 10 Japanese women. The set includes six basic facial expressions plus a neutral face. The dataset also includes semantic ratings for each image from 60 Japanese female observers.
213
images
null
['Kyushu University', 'Advanced Telecommunications Research Institute International']
null
null
null
mixed
false
zenodo
Free
null
['other']
null
null
null
['Michael J. Lyons']
['Ritsumeikan University']
Twenty-five years ago, my colleagues Miyuki Kamachi and Jiro Gyoba and I designed and photographed JAFFE, a set of facial expression images intended for use in a study of face perception. In 2019, without seeking permission or informing us, Kate Crawford and Trevor Paglen exhibited JAFFE in two widely publicized art shows. In addition, they published a nonfactual account of the images in the essay "Excavating AI: The Politics of Images in Machine Learning Training Sets." The present article recounts the creation of the JAFFE dataset and unravels each of Crawford and Paglen's fallacious statements. I also discuss JAFFE more broadly in connection with research on facial expression, affective computing, and human-computer interaction.
1
null
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
jp
test
JaFIn
null
null
https://huggingface.co/datasets/Sakaji-Lab/JaFIn
CC BY-NC-SA 4.0
2,024
jp
null
['wikipedia', 'web pages']
text
null
JaFIn is a Japanese financial instruction dataset that was manually curated from various sources, including government websites, Wikipedia, and financial institutions.
1,490
sentences
null
['Hokkaido University', 'University of Tokyo']
null
null
null
mixed
false
HuggingFace
Free
null
['instruction tuning', 'question answering']
null
null
null
['Kota Tanabe', 'Masahiro Suzuki', 'Hiroki Sakaji', 'Itsuki Noda']
['Hokkaido University', 'University of Tokyo']
We construct an instruction dataset for the large language model (LLM) in the Japanese finance domain. Domain adaptation of language models, including LLMs, is receiving more attention as language models become more popular. This study demonstrates the effectiveness of domain adaptation through instruction tuning. To achieve this, we propose an instruction tuning data in Japanese called JaFIn, the Japanese Financial Instruction Dataset. JaFIn is manually constructed based on multiple data sources, including Japanese government websites, which provide extensive financial knowledge. We then utilize JaFIn to apply instruction tuning for several LLMs, demonstrating that our models specialized in finance have better domain adaptability than the original models. The financial-specialized LLMs created were evaluated using a quantitative Japanese financial benchmark and qualitative response comparisons, showing improved performance over the originals.
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
0
0
0
null
1
null
null
null
1
1
1
jp
test
Japanese Fake News Dataset
null
null
https://hkefka385.github.io/dataset/fakenews-japanese/
CC BY-NC-ND 4.0
2,022
jp
null
['news articles', 'social media']
text
null
The first Japanese fake news dataset, annotated with a novel, fine-grained scheme. It goes beyond factuality to include disseminator's intent, harm, target, and purpose, based on 307 news stories from Twitter verified by Fact Check Initiative Japan.
307
documents
null
['SANKEN Osaka University', 'NAIST']
null
null
null
mixed
false
GitHub
Free
null
['fake news detection']
null
null
null
['Taichi Murayama', 'Shohei Hisada', 'Makoto Uehara', 'Shoko Wakamiya', 'Eiji Aramaki']
['SANKEN Osaka University', 'NARA Institute of Science and Technology']
Fake news provokes many societal problems; therefore, there has been extensive research on fake news detection tasks to counter it. Many fake news datasets were constructed as resources to facilitate this task. Contemporary research focuses almost exclusively on the factuality aspect of the news. However, this aspect alone is insufficient to explain "fake news," which is a complex phenomenon that involves a wide range of issues. To fully understand the nature of each instance of fake news, it is important to observe it from various perspectives, such as the intention of the false news disseminator, the harmfulness of the news to our society, and the target of the news. We propose a novel annotation scheme with fine-grained labeling based on detailed investigations of existing fake news datasets to capture these various aspects of fake news. Using the annotation scheme, we construct and publish the first Japanese fake news dataset. The annotation scheme is expected to provide an in-depth understanding of fake news. We plan to build datasets for both Japanese and other languages using our scheme. Our Japanese dataset is published at https://hkefka385.github.io/dataset/fakenews-japanese/.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
jp
test
JMultiWOZ
null
null
https://github.com/nu-dialogue/jmultiwoz
CC BY-SA 4.0
2,024
jp
null
['web pages']
text
null
JMultiWOZ is the first Japanese language large-scale multi-domain task-oriented dialogue dataset. It contains 4,246 conversations spanning six travel-related domains: tourist attractions, accommodation, restaurants, shopping facilities, taxis, and weather. It provides dialogue state annotations for benchmarking dialogue state tracking and response generation.
52,405
sentences
null
['Nagoya University']
null
null
null
mixed
false
GitHub
Free
null
['other']
null
null
null
['Atsumoto Ohashi', 'Ryu Hirai', 'Shinya Iizuka', 'Ryuichiro Higashinaka']
['Nagoya University']
Dialogue datasets are crucial for deep learning-based task-oriented dialogue system research. While numerous English language multi-domain task-oriented dialogue datasets have been developed and contributed to significant advancements in task-oriented dialogue systems, such a dataset does not exist in Japanese, and research in this area is limited compared to that in English. In this study, towards the advancement of research and development of task-oriented dialogue systems in Japanese, we constructed JMultiWOZ, the first Japanese language large-scale multi-domain task-oriented dialogue dataset. Using JMultiWOZ, we evaluated the dialogue state tracking and response generation capabilities of the state-of-the-art methods on the existing major English benchmark dataset MultiWOZ2.2 and the latest large language model (LLM)-based methods. Our evaluation results demonstrated that JMultiWOZ provides a benchmark that is on par with MultiWOZ2.2. In addition, through evaluation experiments of interactive dialogues with the models and human participants, we identified limitations in the task completion capabilities of LLMs in Japanese.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
jp
test
Japanese Web Corpus
null
null
https://github.com/llm-jp/llm-jp-corpus
unknown
2,024
jp
null
['web pages', 'public datasets']
text
null
A large-scale Japanese web corpus created from 21 Common Crawl snapshots (crawled between 2020 and 2023). The corpus consists of approximately 312.1 billion characters from 173 million pages.
173,350,375
documents
null
['Tokyo Institute of Technology']
null
null
null
mixed
false
GitHub
Free
null
['language modeling']
null
null
null
['Naoaki Okazaki', 'Kakeru Hattori', 'Hirai Shota', 'Hiroki Iida', 'Masanari Ohi', 'Kazuki Fujii', 'Taishi Nakamura', 'Mengsay Loem', 'Rio Yokota', 'Sakae Mizuki']
['Tokyo Institute of Technology']
Open Japanese large language models (LLMs) have been trained on the Japanese portions of corpora such as CC-100, mC4, and OSCAR. However, these corpora were not created for the quality of Japanese texts. This study builds a large Japanese web corpus by extracting and refining text from the Common Crawl archive (21 snapshots of approximately 63.4 billion pages crawled between 2020 and 2023). This corpus consists of approximately 312.1 billion characters (approximately 173 million pages), which is the largest of all available training corpora for Japanese LLMs, surpassing CC-100 (approximately 25.8 billion characters), mC4 (approximately 239.7 billion characters) and OSCAR 23.10 (approximately 74 billion characters). To confirm the quality of the corpus, we performed continual pre-training on Llama 2 7B, 13B, 70B, Mistral 7B v0.1, and Mixtral 8x7B Instruct as base LLMs and gained consistent (6.6-8.1 points) improvements on Japanese benchmark datasets. We also demonstrate that the improvement on Llama 2 13B brought from the presented corpus was the largest among those from other existing corpora.
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
0
1
1
null
1
null
null
null
1
1
1
jp
test
J-CRe3
null
null
https://github.com/riken-grp/J-CRe3
CC BY-SA 4.0
2,024
jp
null
['other']
text
null
A Japanese multimodal dataset containing egocentric video and dialogue audio of real-world conversations between two people acting as a master and an assistant robot at home.
11,000
images
null
['RIKEN']
null
null
null
mixed
false
GitHub
Free
null
['other']
null
null
null
['Nobuhiro Ueda', 'Hideko Habe', 'Yoko Matsui', 'Akishige Yuguchi', 'Seiya Kawano', 'Yasutomo Kawanishi', 'Sadao Kurohashi', 'Koichiro Yoshino']
['Kyoto University', 'Guardian Robot Project, R-IH, RIKEN', 'Tokyo University of Science', 'Nara Institute of Science and Technology', 'National Institute of Informatics']
Understanding expressions that refer to the physical world is crucial for such human-assisting systems in the real world, as robots that must perform actions that are expected by users. In real-world reference resolution, a system must ground the verbal information that appears in user interactions to the visual information observed in egocentric views. To this end, we propose a multimodal reference resolution task and construct a Japanese Conversation dataset for Real-world Reference Resolution (J-CRe3). Our dataset contains egocentric video and dialogue audio of real-world conversations between two people acting as a master and an assistant robot at home. The dataset is annotated with crossmodal tags between phrases in the utterances and the object bounding boxes in the video frames. These tags include indirect reference relations, such as predicate-argument structures and bridging references as well as direct reference relations. We also constructed an experimental model and clarified the challenges in multimodal reference resolution tasks.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
jp
test
JSUT
null
null
https://sites.google.com/site/shinnosuketakamichi/publication/jsut
CC BY-SA 4.0
2,017
jp
null
['wikipedia', 'public datasets']
audio
null
The corpus consists of 10 hours of reading-style speech data and its transcription and covers all of the main pronunciations of daily-use Japanese characters.
10
hours
null
[]
null
null
null
mixed
false
other
Free
null
['speech recognition']
null
null
null
['Ryosuke Sonobe', 'Shinnosuke Takamichi', 'Hiroshi Saruwatari']
['University of Tokyo']
Thanks to improvements in machine learning techniques including deep learning, a free large-scale speech corpus that can be shared between academic institutions and commercial companies has an important role. However, such a corpus for Japanese speech synthesis does not exist. In this paper, we designed a novel Japanese speech corpus, named the "JSUT corpus," that is aimed at achieving end-to-end speech synthesis. The corpus consists of 10 hours of reading-style speech data and its transcription and covers all of the main pronunciations of daily-use Japanese characters. In this paper, we describe how we designed and analyzed the corpus. The corpus is freely available online.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
jp
test
Japanese Word Similarity Dataset
null
null
https://github.com/tmu-nlp/JapaneseWordSimilarityDataset
CC BY-SA 3.0
2,018
jp
null
['public datasets']
text
null
A Japanese word similarity dataset (JWSD) containing 4,851 word pairs with human-annotated similarity scores. The dataset includes various parts of speech (nouns, verbs, adjectives, adverbs) and covers both common and rare words, designed for evaluating Japanese distributed word representations.
4,851
sentences
null
['Tokyo Metropolitan University']
null
null
null
mixed
false
GitHub
Free
null
['word similarity']
null
null
null
['Yuya Sakaizawa', 'Mamoru Komachi']
['Tokyo Metropolitan University']
An evaluation of distributed word representation is generally conducted using a word similarity task and/or a word analogy task. There are many datasets readily available for these tasks in English. However, evaluating distributed representation in languages that do not have such resources (e.g., Japanese) is difficult. Therefore, as a first step toward evaluating distributed representations in Japanese, we constructed a Japanese word similarity dataset. To the best of our knowledge, our dataset is the first resource that can be used to evaluate distributed representations in Japanese. Moreover, our dataset contains various parts of speech and includes rare words in addition to common words.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
jp
test
STAIR Captions
null
null
http://captions.stair.center/
CC BY 4.0
2,017
jp
null
['captions', 'public datasets']
text
null
A large-scale image caption dataset in Japanese, based on the COCO dataset. It contains 820,310 Japanese captions for 164,062 images, collected via crowdsourcing.
820,310
sentences
null
['The University of Tokyo', 'National Institute of Informatics']
null
null
null
mixed
false
other
Free
null
['image captioning']
null
null
null
['Yuya Yoshikawa', 'Yutaro Shigeto', 'Akikazu Takeuchi']
['The University of Tokyo', 'National Institute of Informatics']
In recent years, automatic generation of image descriptions (captions), that is, image captioning, has attracted a great deal of attention. In this paper, we particularly consider generating Japanese captions for images. Since most available caption datasets have been constructed for English language, there are few datasets for Japanese. To tackle this problem, we construct a large-scale Japanese image caption dataset based on images from MS-COCO, which is called STAIR Captions. STAIR Captions consists of 820,310 Japanese captions for 164,062 images. In the experiment, we show that a neural network trained using STAIR Captions can generate more natural and better Japanese captions, compared to those generated using English-Japanese machine translation after generating English captions.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
jp
test
Arukikata Travelogue
null
null
https://www.nii.ac.jp/news/release/2022/1124.html
custom
2,023
jp
null
['web pages']
text
null
A Japanese text dataset with over 31 million characters, comprising 4,672 domestic and 9,607 overseas travelogues from the Arukikata website. It was created to provide a shared resource for research, ensuring transparency and reproducibility in analyzing human-place interactions from text.
14,279
documents
null
['Arukikata Co., Ltd.']
null
null
null
mixed
false
other
Free
null
['other']
null
null
null
['Hiroki Ouchi', 'Hiroyuki Shindo', 'Shoko Wakamiya', 'Yuki Matsuda', 'Naoya Inoue', 'Shohei Higashiyama', 'Satoshi Nakamura', 'Taro Watanabe']
['NAIST', 'JAIST', 'NICT', 'RIKEN']
We have constructed Arukikata Travelogue Dataset and released it free of charge for academic research. This dataset is a Japanese text dataset with a total of over 31 million words, comprising 4,672 Japanese domestic travelogues and 9,607 overseas travelogues. Before providing our dataset, there was a scarcity of widely available travelogue data for research purposes, and each researcher had to prepare their own data. This hinders the replication of existing studies and fair comparative analysis of experimental results. Our dataset enables any researchers to conduct investigation on the same data and to ensure transparency and reproducibility in research. In this paper, we describe the academic significance, characteristics, and prospects of our dataset.
1
null
null
0
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
jp
test
JTubeSpeech
null
null
https://github.com/sarulab-speech/jtubespeech
Apache-2.0
2,021
jp
null
['social media']
audio
null
A large-scale Japanese speech corpus collected from YouTube videos and their subtitles. It is designed for both automatic speech recognition (ASR) and automatic speaker verification (ASV) tasks, containing over 1,300 hours for ASR and 900 hours for ASV.
1,300
hours
null
['The University of Tokyo', 'Technical University of Munich', 'Tokyo Metropolitan University', 'Carnegie Mellon University']
null
null
null
mixed
false
GitHub
Free
null
['speech recognition', 'speaker identification']
null
null
null
['Shinnosuke Takamichi', 'Ludwig Kürzinger', 'Takaaki Saeki', 'Sayaka Shiota', 'Shinji Watanabe']
['The University of Tokyo', 'Technical University of Munich', 'Tokyo Metropolitan University', 'Carnegie Mellon University']
In this paper, we construct a new Japanese speech corpus called "JTubeSpeech." Although recent end-to-end learning requires large-size speech corpora, open-sourced such corpora for languages other than English have not yet been established. In this paper, we describe the construction of a corpus from YouTube videos and subtitles for speech recognition and speaker verification. Our method can automatically filter the videos and subtitles with almost no language-dependent processes. We consistently employ Connectionist Temporal Classification (CTC)-based techniques for automatic speech recognition (ASR) and a speaker variation-based method for automatic speaker verification (ASV). We build 1) a large-scale Japanese ASR benchmark with more than 1,300 hours of data and 2) 900 hours of data for Japanese ASV.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
jp
test
JCoLA
null
null
https://github.com/osekilab/JCoLA
custom
2,023
jp
null
['books']
text
null
JCoLA (Japanese Corpus of Linguistic Acceptability) consists of 10,020 sentences annotated with binary acceptability judgments. The sentences are manually extracted from linguistics textbooks, handbooks, and journal articles, and are split into in-domain and out-of-domain data.
10,020
sentences
null
['The University of Tokyo']
null
null
null
mixed
false
GitHub
Free
null
['linguistic acceptability']
null
null
null
['Taiga Someya', 'Yushi Sugimoto', 'Yohei Oseki']
['The University of Tokyo']
Neural language models have exhibited outstanding performance in a range of downstream tasks. However, there is limited understanding regarding the extent to which these models internalize syntactic knowledge, so that various datasets have recently been constructed to facilitate syntactic evaluation of language models across languages. In this paper, we introduce JCoLA (Japanese Corpus of Linguistic Acceptability), which consists of 10,020 sentences annotated with binary acceptability judgments. Specifically, those sentences are manually extracted from linguistics textbooks, handbooks and journal articles, and split into in-domain data (86 %; relatively simple acceptability judgments extracted from textbooks and handbooks) and out-of-domain data (14 %; theoretically significant acceptability judgments extracted from journal articles), the latter of which is categorized by 12 linguistic phenomena. We then evaluate the syntactic knowledge of 9 different types of Japanese language models on JCoLA. The results demonstrated that several models could surpass human performance for the in-domain data, while no models were able to exceed human performance for the out-of-domain data. Error analyses by linguistic phenomena further revealed that although neural language models are adept at handling local syntactic dependencies like argument structure, their performance wanes when confronted with long-distance syntactic dependencies like verbal agreement and NPI licensing.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
jp
test
JESC
null
null
https://nlp.stanford.edu/projects/jesc
CC BY-SA 4.0
2,018
multilingual
null
['captions', 'TV Channels']
text
null
JESC is a large Japanese-English parallel corpus covering the underrepresented domain of conversational dialogue. It consists of more than 3.2 million examples, making it the largest freely available dataset of its kind.
3,240,661
sentences
null
['Stanford University', 'Rakuten Institute of Technology', 'Google Brain']
null
null
null
mixed
false
other
Free
null
['machine translation']
null
null
null
['Reid Pryzant', 'Youngjoo Chung', 'Dan Jurafsky', 'Denny Britz']
['Stanford University', 'Rakuten Institute of Technology', 'Google Brain']
In this paper we describe the Japanese-English Subtitle Corpus (JESC). JESC is a large Japanese-English parallel corpus covering the underrepresented domain of conversational dialogue. It consists of more than 3.2 million examples, making it the largest freely available dataset of its kind. The corpus was assembled by crawling and aligning subtitles found on the web. The assembly process incorporates a number of novel preprocessing elements to ensure high monolingual fluency and accurate bilingual alignments. We summarize its contents and evaluate its quality using human experts and baseline machine translation (MT) systems.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
jp
test
JDocQA
null
null
https://github.com/mizuumi/JDocQA
CC BY-SA 3.0
2,024
jp
null
['web pages']
images
null
JDocQA is a large-scale Japanese document-based QA dataset, comprising 5,504 documents in PDF format and 11,600 annotated question-and-answer instances. It requires both visual and textual information to answer questions, and includes multiple question categories and unanswerable questions to mitigate model hallucination.
11,600
sentences
null
['Nara Institute of Science and Technology', 'RIKEN', 'ATR']
null
null
null
mixed
false
GitHub
Free
null
['question answering']
null
null
null
['Eri Onami', 'Shuhei Kurita', 'Taiki Miyanishi', 'Taro Watanabe']
['Nara Institute of Science and Technology', 'RIKEN', 'ATR']
Document question answering is a task of question answering on given documents such as reports, slides, pamphlets, and websites, and it is a truly demanding task as paper and electronic forms of documents are so common in our society. This is known as a quite challenging task because it requires not only text understanding but also understanding of figures and tables, and hence visual question answering (VQA) methods are often examined in addition to textual approaches. We introduce Japanese Document Question Answering (JDocQA), a large-scale document-based QA dataset, essentially requiring both visual and textual information to answer questions, which comprises 5,504 documents in PDF format and annotated 11,600 question-and-answer instances in Japanese. Each QA instance includes references to the document pages and bounding boxes for the answer clues. We incorporate multiple categories of questions and unanswerable questions from the document for realistic question-answering applications. We empirically evaluate the effectiveness of our dataset with text-based large language models (LLMs) and multimodal models. Incorporating unanswerable questions in finetuning may contribute to harnessing the so-called hallucination generation.
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
0
0
0
null
1
null
null
null
1
1
1
jp
test
Jamp
null
null
https://github.com/tomo-ut/temporalNLI_dataset
CC BY-SA 4.0
2,023
jp
null
['public datasets']
text
null
Jamp is a Japanese Natural Language Inference (NLI) benchmark focused on temporal inference. It was created using a template-based approach, generating diverse examples by combining templates derived from formal semantics test suites with a Japanese case frame dictionary, allowing for controlled distribution of inference patterns.
10,094
sentences
null
['The University of Tokyo']
null
null
null
mixed
false
GitHub
Free
null
['natural language inference']
null
null
null
['Tomoki Sugimoto', 'Yasumasa Onoe', 'Hitomi Yanaka']
['The University of Tokyo', 'The University of Texas at Austin']
Natural Language Inference (NLI) tasks involving temporal inference remain challenging for pre-trained language models (LMs). Although various datasets have been created for this task, they primarily focus on English and do not address the need for resources in other languages. It is unclear whether current LMs realize the generalization capacity for temporal inference across languages. In this paper, we present Jamp, a Japanese NLI benchmark focused on temporal inference. Our dataset includes a range of temporal inference patterns, which enables us to conduct fine-grained analysis. To begin the data annotation process, we create diverse inference templates based on the formal semantics test suites. We then automatically generate diverse NLI examples by using the Japanese case frame dictionary and well-designed templates while controlling the distribution of inference patterns and gold labels. We evaluate the generalization capacities of monolingual/multilingual LMs by splitting our dataset based on tense fragments (i.e., temporal inference patterns). Our findings demonstrate that LMs struggle with specific linguistic phenomena, such as habituality, indicating that there is potential for the development of more effective NLI models across languages.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
1
1
1
1
1
null
1
null
null
null
1
1
1
ru
test
RuBQ
null
null
https://github.com/vladislavneon/RuBQ
CC BY-SA 4.0
2,020
multilingual
null
['web pages', 'wikipedia']
text
null
RuBQ is a Russian knowledge base question answering (KBQA) dataset that consists of 1,500 Russian questions of varying complexity along with their English machine translations, corresponding SPARQL queries, answers, as well as a subset of Wikidata covering entities with Russian labels.
1,500
sentences
null
['JetBrains Research', 'ITMO University', 'Ural Federal University']
null
null
null
null
false
GitHub
Free
null
['named entity recognition', 'machine translation']
null
null
null
['Vladislav Korablinov', 'Pavel Braslavski']
['ITMO University, Saint Petersburg, Russia', 'JetBrains Research, Saint Petersburg, Russia', 'Ural Federal University, Yekaterinburg, Russia']
The paper presents RuBQ, the first Russian knowledge base question answering (KBQA) dataset. The high-quality dataset consists of 1,500 Russian questions of varying complexity, their English machine translations, SPARQL queries to Wikidata, reference answers, as well as a Wikidata sample of triples containing entities with Russian labels. The dataset creation started with a large collection of question-answer pairs from online quizzes. The data underwent automatic filtering, crowd-assisted entity linking, automatic generation of SPARQL queries, and their subsequent in-house verification.
1
null
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
ru
test
RuCoLA
null
null
https://github.com/RussianNLP/RuCoLA
Apache-2.0
2,022
ru
null
['books', 'wikipedia', 'public datasets', 'LLM']
text
null
RuCoLA is a dataset of Russian sentences labeled as acceptable or not. It consists of 9.8k in-domain sentences from linguistic publications and 3.6k out-of-domain sentences produced by generative models.
13,445
sentences
null
['RussianNLP']
null
null
null
null
false
GitHub
Free
null
['linguistic acceptability']
null
null
null
['Vladislav Mikhailov', 'Tatiana Shamardina', 'Max Ryabinin', 'Alena Pestova', 'Ivan Smurov', 'Ekaterina Artemova']
['SberDevices', 'ABBYY', 'HSE University', 'Yandex', "Huawei Noah's Ark Lab", 'Center for Information and Language Processing (CIS), MaiNLP lab, LMU Munich, Germany']
Linguistic acceptability (LA) attracts the attention of the research community due to its many uses, such as testing the grammatical knowledge of language models and filtering implausible texts with acceptability classifiers. However, the application scope of LA in languages other than English is limited due to the lack of high-quality resources. To this end, we introduce the Russian Corpus of Linguistic Acceptability (RuCoLA), built from the ground up under the well-established binary LA approach. RuCoLA consists of $9.8$k in-domain sentences from linguistic publications and $3.6$k out-of-domain sentences produced by generative models. The out-of-domain set is created to facilitate the practical use of acceptability for improving language generation. Our paper describes the data collection protocol and presents a fine-grained analysis of acceptability classification experiments with a range of baseline approaches. In particular, we demonstrate that the most widely used language models still fall behind humans by a large margin, especially when detecting morphological and semantic errors. We release RuCoLA, the code of experiments, and a public leaderboard (rucola-benchmark.com) to assess the linguistic competence of language models for Russian.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
ru
test
Slovo
null
null
https://github.com/hukenovs/slovo
custom
2,024
ru
null
['web pages']
videos
null
A Russian Sign Language (RSL) video dataset containing 20,000 FullHD recordings of 1,000 isolated RSL gestures from 194 signers. The data was collected via crowdsourcing platforms and covers frequently used words like food, animals, emotions, and colors.
20,000
sentences
null
['SaluteDevices']
null
null
null
null
false
GitHub
Free
null
['sign language recognition']
null
null
null
['Kapitanov Alexander', 'Kvanchiani Karina', 'Nagaev Alexander', 'Petrova Elizaveta']
['SaluteDevices, Russia']
One of the main challenges of the sign language recognition task is the difficulty of collecting a suitable dataset due to the gap between hard-of-hearing and hearing societies. In addition, the sign language in each country differs significantly, which obliges the creation of new data for each of them. This paper presents the Russian Sign Language (RSL) video dataset Slovo, produced using crowdsourcing platforms. The dataset contains 20,000 FullHD recordings, divided into 1,000 classes of isolated RSL gestures received by 194 signers. We also provide the entire dataset creation pipeline, from data collection to video annotation, with the following demo application. Several neural networks are trained and evaluated on the Slovo to demonstrate its teaching ability. Proposed data and pre-trained models are publicly available.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
ru
test
WikiOmnia
null
null
https://huggingface.co/datasets/RussianNLP/wikiomnia
Apache-2.0
2,022
ru
null
['wikipedia']
text
null
WikiOmnia is a large-scale Russian question-answering dataset generated from the summary sections of all Russian Wikipedia articles. It was created using a fully automated pipeline with ruGPT-3XL and ruT5-large models and includes both a raw version and a filtered, high-quality version.
3,560,000
sentences
null
['Artificial Intelligence Research Institute (AIRI)', 'SberDevices']
null
null
null
null
false
HuggingFace
Free
null
['question answering', 'information retrieval']
null
null
null
['Dina Pisarevskaya', 'Tatiana Shavrina']
['Independent Researcher', 'Artificial Intelligence Research Institute (AIRI)', 'SberDevices']
The General QA field has been developing the methodology referencing the Stanford Question answering dataset (SQuAD) as the significant benchmark. However, compiling factual questions is accompanied by time- and labour-consuming annotation, limiting the training data's potential size. We present the WikiOmnia dataset, a new publicly available set of QA-pairs and corresponding Russian Wikipedia article summary sections, composed with a fully automated generative pipeline. The dataset includes every available article from Wikipedia for the Russian language. The WikiOmnia pipeline is available open-source and is also tested for creating SQuAD-formatted QA on other domains, like news texts, fiction, and social media. The resulting dataset includes two parts: raw data on the whole Russian Wikipedia (7,930,873 QA pairs with paragraphs for ruGPT-3 XL and 7,991,040 QA pairs with paragraphs for ruT5-large) and cleaned data with strict automatic verification (over 160,000 QA pairs with paragraphs for ruGPT-3 XL and over 3,400,000 QA pairs with paragraphs for ruT5-large).
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
0
1
1
null
1
null
null
null
1
1
1
ru
test
DaNetQA
null
null
https://github.com/PragmaticsLab/DaNetQA
CC0
2,020
ru
null
['wikipedia']
text
null
DaNetQA is a question-answering dataset for the Russian language. It comprises natural yes/no questions paired with a paragraph from Wikipedia and an answer derived from the paragraph. The task is to take both the question and a paragraph as input and come up with a yes/no answer.
2,691
sentences
null
['Pragmatics Lab']
null
null
null
null
false
GitHub
Free
null
['question answering']
null
null
null
['Taisia Glushkova', 'Alexey Machnev', 'Alena Fenogenova', 'Tatiana Shavrina', 'Ekaterina Artemova', 'Dmitry I. Ignatov']
['National Research University Higher School of Economics', 'Sberbank']
DaNetQA, a new question-answering corpus, follows (Clark et al., 2019) design: it comprises natural yes/no questions. Each question is paired with a paragraph from Wikipedia and an answer, derived from the paragraph. The task is to take both the question and a paragraph as input and come up with a yes/no answer, i.e. to produce a binary output. In this paper, we present a reproducible approach to DaNetQA creation and investigate transfer learning methods for task and language transferring. For task transferring we leverage three similar sentence modelling tasks: 1) a corpus of paraphrases, Paraphraser, 2) an NLI task, for which we use the Russian part of XNLI, 3) another question answering task, SberQUAD. For language transferring we use English to Russian translation together with multilingual language fine-tuning.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
0
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
ru
test
RuSemShift
null
null
https://github.com/juliarodina/RuSemShift
CC BY-SA 4.0
2,020
ru
null
['public datasets']
text
null
A large-scale manually annotated test set for the task of semantic change modeling in Russian for two long-term time period pairs: from the pre-Soviet through the Soviet times and from the Soviet through the post-Soviet times.
7,846
sentences
null
['National Research University Higher School of Economics', 'University of Oslo']
null
null
null
null
false
GitHub
Free
null
['word similarity']
null
null
null
['Julia Rodina', 'Andrey Kutuzov']
['National Research University Higher School of Economics', 'University of Oslo']
We present RuSemShift, a large-scale manually annotated test set for the task of semantic change modeling in Russian for two long-term time period pairs: from the pre-Soviet through the Soviet times and from the Soviet through the post-Soviet times. Target words were annotated by multiple crowd-source workers. The annotation process was organized following the DURel framework and was based on sentence contexts extracted from the Russian National Corpus. Additionally, we report the performance of several distributional approaches on RuSemShift, achieving promising results, which at the same time leave room for other researchers to improve.
1
null
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
ru
test
REPA
null
null
https://huggingface.co/datasets/RussianNLP/repa
MIT License
2,025
ru
null
['public datasets', 'LLM']
text
null
The Russian Error tyPes Annotation dataset (REPA) consists of 1k user queries and 2k LLM-generated responses. Human annotators labeled each response pair expressing their preferences across ten specific error types, as well as selecting an overall preference.
1,003
sentences
null
['HSE University']
null
null
null
null
false
HuggingFace
Free
null
['question answering', 'instruction tuning']
null
null
null
['Alexander Pugachev', 'Alena Fenogenova', 'Vladislav Mikhailov', 'Ekaterina Artemova']
['HSE University', 'SaluteDevices', 'University of Oslo', 'Toloka AI']
Recent advances in large language models (LLMs) have introduced the novel paradigm of using LLMs as judges, where an LLM evaluates and scores the outputs of another LLM, which often correlates highly with human preferences. However, the use of LLM-as-a-judge has been primarily studied in English. In this paper, we evaluate this framework in Russian by introducing the Russian Error tyPes Annotation dataset (REPA), a dataset of 1k user queries and 2k LLM-generated responses. Human annotators labeled each response pair expressing their preferences across ten specific error types, as well as selecting an overall preference. We rank six generative LLMs across the error types using three rating systems based on human preferences. We also evaluate responses using eight LLM judges in zero-shot and few-shot settings. We describe the results of analyzing the judges and position and length biases. Our findings reveal a notable gap between LLM judge performance in Russian and English. However, rankings based on human and LLM preferences show partial alignment, suggesting that while current LLM judges struggle with fine-grained evaluation in Russian, there is potential for improvement.
1
null
null
0
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
0
1
1
null
1
null
null
null
1
1
1
ru
test
SberQuAD
null
null
https://github.com/sberbank-ai/data-science-journey-2017
Apache-2.0
2,020
ru
null
['wikipedia']
text
null
SberQuAD is a large-scale Russian reading comprehension dataset, analogous to SQuAD. It contains over 50,000 question-answer pairs created by crowdworkers based on paragraphs from Russian Wikipedia articles. The task is to find a contiguous text span in the paragraph that answers the question.
50,364
sentences
null
['Sberbank']
null
null
null
null
false
GitHub
Free
null
['question answering']
null
null
null
['Pavel Efimov', 'Andrey Chertok', 'Leonid Boytsov', 'Pavel Braslavski']
['Saint Petersburg State University', 'Sberbank', 'Ural Federal University', 'JetBrains Research']
SberQuAD -- a large scale analog of Stanford SQuAD in the Russian language - is a valuable resource that has not been properly presented to the scientific community. We fill this gap by providing a description, a thorough analysis, and baseline experimental results.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
ru
test
RuBLiMP
null
null
https://github.com/RussianNLP/RuBLiMP
Apache-2.0
2,024
ru
null
['wikipedia', 'news articles', 'public datasets']
text
null
The Russian Benchmark of Linguistic Minimal Pairs (RuBLiMP) includes 45,000 pairs of sentences that differ in grammaticality and isolate a morphological, syntactic, or semantic phenomenon. It is created by applying linguistic perturbations to automatically annotated sentences from open text corpora and decontaminating test data.
45,000
sentences
null
['University of Edinburgh', 'HSE University', 'University of Groningen', 'Ghent University', 'SaluteDevices', 'Toloka AI', 'University of Oslo']
null
null
null
null
false
GitHub
Free
null
['linguistic acceptability', 'morphological analysis']
null
null
null
['Ekaterina Taktasheva', 'Maxim Bazhukov', 'Kirill Koncha', 'Alena Fenogenova', 'Ekaterina Artemova', 'Vladislav Mikhailov']
['University of Edinburgh', 'HSE University', 'University of Groningen', 'Ghent University', 'SaluteDevices', 'Toloka AI', 'University of Oslo']
Minimal pairs are a well-established approach to evaluating the grammatical knowledge of language models. However, existing resources for minimal pairs address a limited number of languages and lack diversity of language-specific grammatical phenomena. This paper introduces the Russian Benchmark of Linguistic Minimal Pairs (RuBLiMP), which includes 45k pairs of sentences that differ in grammaticality and isolate a morphological, syntactic, or semantic phenomenon. In contrast to existing benchmarks of linguistic minimal pairs, RuBLiMP is created by applying linguistic perturbations to automatically annotated sentences from open text corpora and carefully curating test data. We describe the data collection protocol and present the results of evaluating 25 language models in various scenarios. We find that the widely used language models for Russian are sensitive to morphological and agreement-oriented contrasts but fall behind humans on phenomena requiring understanding of structural relations, negation, transitivity, and tense. RuBLiMP, the codebase, and other materials are publicly available.
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
ru
test
Golos
null
null
https://github.com/sberdevices/golos
custom
2,021
ru
null
['other']
audio
null
Golos is a large Russian speech dataset consisting of 1240 hours of manually annotated audio. It was collected using crowd-sourcing and studio recordings with far-field settings.
1,240
hours
null
['Sber']
null
null
null
null
false
GitHub
Free
null
['speech recognition']
null
null
null
['Nikolay Karpov', 'Alexander Denisenko', 'Fedor Minkin']
['Sber, Russia']
This paper introduces a novel Russian speech dataset called Golos, a large corpus suitable for speech research. The dataset mainly consists of recorded audio files manually annotated on the crowd-sourcing platform. The total duration of the audio is about 1240 hours. We have made the corpus freely available to download, along with the acoustic model with CTC loss prepared on this corpus. Additionally, transfer learning was applied to improve the performance of the acoustic model. In order to evaluate the quality of the dataset with the beam-search algorithm, we have built a 3-gram language model on the open Common Crawl dataset. The total word error rate (WER) metrics turned out to be about 3.3% and 11.5%.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
ru
test
RFSD
null
null
https://github.com/irlcode/RFSD
CC BY 4.0
2,025
ru
null
['web pages']
text
null
The Russian Financial Statements Database (RFSD) is an open, harmonized collection of annual unconsolidated financial statements of the universe of Russian firms from 2011–2023.
56,150,173
documents
null
['European University at Saint Petersburg']
null
null
null
null
false
GitHub
Free
null
['other']
null
null
null
['Sergey Bondarkov', 'Viktor Ledenev', 'Dmitry Skougarevskiy']
['European University at Saint Petersburg']
The Russian Financial Statements Database (RFSD) is an open, harmonized collection of annual unconsolidated financial statements of the universe of Russian firms in 2011-2023. It is the first open data set with information on every active firm in the country, including non-filing firms. With 56.6 million geolocated firm-year observations gathered from two official sources, the RFSD features multiple end-user quality-of-life improvements such as data imputation, statement articulation, harmonization across data providers and formats, and data enrichment. Extensive internal and external validation shows that most statements articulate well while their aggregates display higher correlation with the regional GDP than the previous gridded GDP data products. We also examine the direction and magnitude of the reporting bias by comparing the universe of firms that are required to file with the actual filers. The RFSD can be used in various economic applications as diverse as calibration of micro-founded models, estimation of markups and productivity, or assessing industry organization and market power.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
ru
test
RuDSI
null
null
https://github.com/kategavrishina/RuDSI
unknown
2,022
ru
null
['public datasets', 'wikipedia']
text
null
A benchmark dataset for word sense induction (WSI) in Russian. It was created using manual annotation and semi-automatic clustering of Word Usage Graphs (WUGs) from the Russian National Corpus. The sense inventories are data-driven, not based on predefined dictionaries.
840
sentences
null
['HSE University']
null
null
null
null
false
GitHub
Free
null
['other']
null
null
null
['Anna Aksenova', 'Ekaterina Gavrishina', 'Elisey Rykov', 'Andrey Kutuzov']
['National Research University Higher School of Economics', 'University of Oslo']
We present RuDSI, a new benchmark for word sense induction (WSI) in Russian. The dataset was created using manual annotation and semi-automatic clustering of Word Usage Graphs (WUGs). Unlike prior WSI datasets for Russian, RuDSI is completely data-driven (based on texts from Russian National Corpus), with no external word senses imposed on annotators. Depending on the parameters of graph clustering, different derivative datasets can be produced from raw annotation. We report the performance that several baseline WSI methods obtain on RuDSI and discuss possibilities for improving these scores.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
ru
test
HeadlineCause
null
null
https://github.com/IlyaGusev/HeadlineCause
CC0
2,021
multilingual
null
['news articles', 'public datasets', 'web pages']
text
null
A dataset for detecting implicit causal relations between pairs of news headlines in English and Russian. The pairs are labeled through crowdsourcing and vary from unrelated to having causation or refutation relations.
9,553
sentences
null
['Moscow Institute of Physics and Technology', 'Yandex']
null
null
null
null
false
GitHub
Free
null
['natural language inference']
null
null
null
['Ilya Gusev', 'Alexey Tikhonov']
['Moscow Institute of Physics and Technology', 'Yandex']
Detecting implicit causal relations in texts is a task that requires both common sense and world knowledge. Existing datasets are focused either on commonsense causal reasoning or explicit causal relations. In this work, we present HeadlineCause, a dataset for detecting implicit causal relations between pairs of news headlines. The dataset includes over 5000 headline pairs from English news and over 9000 headline pairs from Russian news labeled through crowdsourcing. The pairs vary from totally unrelated or belonging to the same general topic to the ones including causation and refutation relations. We also present a set of models and experiments that demonstrates the dataset validity, including a multilingual XLM-RoBERTa based model for causality detection and a GPT-2 based model for possible effects prediction.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
ru
test
RusTitW
null
null
https://github.com/markovivl/SynthText
Apache-2.0
2,023
ru
null
['public datasets']
images
null
A large-scale, human-labeled dataset for Russian text recognition in natural images (text-in-the-wild). It contains over 13k real-world images. The paper also presents a larger synthetic dataset (over 900k images) and the code for its generation.
900,701
images
null
['AIRI', 'SberAI']
null
null
null
null
false
GitHub
Free
null
['optical character recognition']
null
null
null
['Igor Markov', 'Sergey Nesteruk', 'Andrey Kuznetsov', 'Denis Dimitrov']
['AIRI', 'SberAI']
Information surrounds people in modern life. Text is a very efficient type of information that people use for communication for centuries. However, automated text-in-the-wild recognition remains a challenging problem. The major limitation for a DL system is the lack of training data. For the competitive performance, training set must contain many samples that replicate the real-world cases. While there are many high-quality datasets for English text recognition; there are no available datasets for Russian language. In this paper, we present a large-scale human-labeled dataset for Russian text recognition in-the-wild. We also publish a synthetic dataset and code to reproduce the generation process
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
ru
test
CRAFT
null
null
unknown
2,025
ru
null
['web pages']
images
null
A dataset of 200,000 image-text pairs focusing on the Russian cultural code, designed to enhance text-to-image models' understanding of Russian, Soviet, and post-Soviet culture. It was created by crawling the web, followed by manual filtering and detailed human captioning to ensure quality and cultural relevance.
200,000
images
null
['SberAI']
null
null
null
null
false
other
Free
null
['other']
null
null
null
['Viacheslav Vasilev', 'Vladimir Arkhipkin', 'Julia Agafonova', 'Tatiana Nikulina', 'Evelina Mironova', 'Alisa Shichanina', 'Nikolai Gerasimenko', 'Mikhail Shoytov', 'Denis Dimitrov']
['SberAI', 'Moscow Institute of Physics and Technology (MIPT)', 'Information Technologies, Mechanics and Optics University (ITMO University)', 'Artificial Intelligence Research Institute (AIRI)']
Despite the fact that popular text-to-image generation models cope well with international and general cultural queries, they have a significant knowledge gap regarding individual cultures. This is due to the content of existing large training datasets collected on the Internet, which are predominantly based on Western European or American popular culture. Meanwhile, the lack of cultural adaptation of the model can lead to incorrect results, a decrease in the generation quality, and the spread of stereotypes and offensive content. In an effort to address this issue, we examine the concept of cultural code and recognize the critical importance of its understanding by modern image generation models, an issue that has not been sufficiently addressed in the research community to date. We propose the methodology for collecting and processing the data necessary to form a dataset based on the cultural code, in particular the Russian one. We explore how the collected data affects the quality of generations in the national domain and analyze the effectiveness of our approach using the Kandinsky 3.1 text-to-image model. Human evaluation results demonstrate an increase in the level of awareness of Russian culture in the model.
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
ru
test
Gazeta
null
null
https://github.com/IlyaGusev/gazeta
unknown
2,021
ru
null
['news articles', 'web pages']
text
null
Gazeta is the first dataset for summarizing Russian news, consisting of over 63,000 article-summary pairs collected from the Gazeta.ru news website. The paper describes the dataset's properties and benchmarks several extractive and abstractive models, demonstrating that a pretrained mBART model is particularly effective for this task.
63,435
documents
null
['Moscow Institute of Physics and Technology']
null
null
null
null
false
GitHub
Free
null
['summarization']
null
null
null
['Ilya Gusev']
['Moscow Institute of Physics and Technology']
Automatic text summarization has been studied in a variety of domains and languages. However, this does not hold for the Russian language. To overcome this issue, we present Gazeta, the first dataset for summarization of Russian news. We describe the properties of this dataset and benchmark several extractive and abstractive models. We demonstrate that the dataset is a valid task for methods of text summarization for Russian. Additionally, we prove the pretrained mBART model to be useful for Russian text summarization.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
ru
test
RuBia
null
null
https://github.com/vergrig/RuBia-Dataset
CC BY-SA 4.0
2,024
ru
null
['social media']
text
null
RuBia is a Russian language bias detection dataset divided into 4 domains: gender, nationality, socio-economic status, and diverse. Each example consists of two sentences, one reinforcing a harmful stereotype and the other contradicting it. The data was written by volunteers and validated by crowdworkers.
1,989
sentences
null
['HSE University']
null
null
null
null
false
GitHub
Free
null
['other']
null
null
null
['Veronika Grigoreva', 'Anastasiia Ivanova', 'Ilseyar Alimova', 'Ekaterina Artemova']
['Queen’s University', 'Higher School of Economics', 'Wildberries', 'Linguistic Convergence Laboratory', 'Toloka AI']
Warning: this work contains upsetting or disturbing content. Large language models (LLMs) tend to learn the social and cultural biases present in the raw pre-training data. To test if an LLM's behavior is fair, functional datasets are employed, and due to their purpose, these datasets are highly language and culture-specific. In this paper, we address a gap in the scope of multilingual bias evaluation by presenting a bias detection dataset specifically designed for the Russian language, dubbed as RuBia. The RuBia dataset is divided into 4 domains: gender, nationality, socio-economic status, and diverse, each of the domains is further divided into multiple fine-grained subdomains. Every example in the dataset consists of two sentences with the first reinforcing a potentially harmful stereotype or trope and the second contradicting it. These sentence pairs were first written by volunteers and then validated by native-speaking crowdsourcing workers. Overall, there are nearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To illustrate the dataset's purpose, we conduct a diagnostic evaluation of state-of-the-art or near-state-of-the-art LLMs and discuss the LLMs' predisposition to social biases.
1
null
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
ru
test
RusCode
null
null
https://github.com/ai-forever/RusCode
MIT License
2,025
multilingual
null
['other']
text
null
RusCode is a benchmark dataset designed to evaluate the quality of text-to-image generation models on their understanding of the Russian cultural code. It consists of 1250 text prompts in Russian, with English translations, covering 19 categories that represent Russian visual culture, including art, folklore, famous personalities, and scientific achievements.
1,250
sentences
null
['SberAI']
null
null
null
null
false
GitHub
Free
null
['instruction tuning']
null
null
null
['Viacheslav Vasilev', 'Julia Agafonova', 'Nikolai Gerasimenko', 'Alexander Kapitanov', 'Polina Mikhailova', 'Evelina Mironova', 'Denis Dimitrov']
['SberAI', 'MIPT', 'ITMO University', 'SberDevices', 'AIRI']
Text-to-image generation models have gained popularity among users around the world. However, many of these models exhibit a strong bias toward English-speaking cultures, ignoring or misrepresenting the unique characteristics of other language groups, countries, and nationalities. The lack of cultural awareness can reduce the generation quality and lead to undesirable consequences such as unintentional insult, and the spread of prejudice. In contrast to the field of natural language processing, cultural awareness in computer vision has not been explored as extensively. In this paper, we strive to reduce this gap. We propose a RusCode benchmark for evaluating the quality of text-to-image generation containing elements of the Russian cultural code. To do this, we form a list of 19 categories that best represent the features of Russian visual culture. Our final dataset consists of 1250 text prompts in Russian and their translations into English. The prompts cover a wide range of topics, including complex concepts from art, popular culture, folk traditions, famous people's names, natural objects, scientific achievements, etc. We present the results of a human evaluation of the side-by-side comparison of Russian visual concepts representations using popular generative models.
1
null
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
ru
test
Russian Jeopardy
null
null
https://github.com/evrog/Russian-QA-Jeopardy
custom
2,024
ru
null
['TV Channels']
text
null
A dataset of 29,375 Russian quiz-show questions of the Jeopardy! type, collected from the ChGK database. The questions are fact-oriented and suitable for open-domain question answering tasks.
29,375
sentences
null
['Tyumen State University']
null
null
null
null
false
GitHub
Free
null
['question answering']
null
null
null
['Elena Mikhalkova', 'Alexander Khlyupin']
['Tyumen State University']
Question answering (QA) is one of the most common NLP tasks that relates to named entity recognition, fact extraction, semantic search and some other fields. In industry, it is much appreciated in chatbots and corporate information systems. It is also a challenging task that attracted the attention of a very general audience at the quiz show Jeopardy! In this article we describe a Jeopardy!-like Russian QA data set collected from the official Russian quiz database Chgk (che ge ka). The data set includes 379,284 quiz-like questions with 29,375 from the Russian analogue of Jeopardy! - "Own Game". We observe its linguistic features and the related QA-task. We conclude about perspectives of a QA competition based on the data set collected from this database.
1
null
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
ru
test
Russian Multimodal Summarization
null
null
https://github.com/iis-research-team/summarization-dataset
unknown
2,024
ru
null
['web pages']
text
null
A multimodal dataset of 420 Russian-language scientific papers from 7 domains (Economics, History, IT, Journalism, Law, Linguistics, Medicine). The dataset includes texts, abstracts, tables, and figures, designed for the task of automatic text summarization.
420
documents
null
['Novosibirsk State University', 'A.P. Ershov Institute of Informatics Systems']
null
null
null
null
false
GitHub
Free
null
['summarization']
null
null
null
['Tsanda Alena', 'Bruches Elena']
['Novosibirsk State University', 'A.P. Ershov Institute of Informatics Systems']
The paper discusses the creation of a multimodal dataset of Russian-language scientific papers and testing of existing language models for the task of automatic text summarization. A feature of the dataset is its multimodal data, which includes texts, tables and figures. The paper presents the results of experiments with two language models: Gigachat from SBER and YandexGPT from Yandex. The dataset consists of 420 papers and is publicly available on https://github.com/iis-research-team/summarization-dataset.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
ru
test
NEREL
null
null
https://github.com/nerel-ds/NEREL
unknown
2,021
ru
null
['news articles', 'wikipedia']
text
null
NEREL is a Russian dataset for named entity recognition and relation extraction. It contains 56K annotated named entities and 39K annotated relations.
56,000
tokens
null
['HSE University']
null
null
null
null
false
GitHub
Free
null
['named entity recognition', 'relation extraction']
null
null
null
['Natalia Loukachevitch', 'Ekaterina Artemova', 'Tatiana Batura', 'Pavel Braslavski', 'Ilia Denisov', 'Vladimir Ivanov', 'Suresh Manandhar', 'Alexander Pugachev', 'Elena Tutubalina']
['Lomonosov Moscow State University', 'HSE University', "Huawei Noah's Ark lab", 'Novosibirsk State University', 'Ural Federal University', 'Innopolis University', 'Kazan Federal University', 'Sber AI', 'Wiseyak']
In this paper, we present NEREL, a Russian dataset for named entity recognition and relation extraction. NEREL is significantly larger than existing Russian datasets: to date it contains 56K annotated named entities and 39K annotated relations. Its important difference from previous datasets is annotation of nested named entities, as well as relations within nested entities and at the discourse level. NEREL can facilitate development of novel models that can extract relations between nested named entities, as well as relations on both sentence and document levels. NEREL also contains the annotation of events involving named entities and their roles in the events. The NEREL collection is available via https://github.com/nerel-ds/NEREL.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
HISTOIRESMORALES
null
null
https://github.com/upunaprosk/histoires-morales
MIT License
2,025
fr
null
['public datasets']
text
null
A French dataset for assessing moral alignment in LLMs, derived from the English MORALSTORIES dataset. It consists of 12,000 short narratives describing social situations, moral norms, intentions, and corresponding moral/immoral actions and consequences, adapted to the French cultural context.
12,000
sentences
null
['Laboratoire Hubert Curien']
null
null
null
null
false
GitHub
Free
null
['topic classification', 'natural language inference', 'linguistic acceptability']
null
null
null
['Thibaud Leteno', 'Irina Proskurina', 'Antoine Gourru', 'Julien Velcin', 'Charlotte Laclau', 'Guillaume Metzler', 'Christophe Gravier']
['Laboratoire Hubert Curien', 'Université Lumière Lyon 2', 'Université Claude Bernard Lyon 1', 'ERIC', 'Télécom Paris', 'Institut Polytechnique de Paris']
Aligning language models with human values is crucial, especially as they become more integrated into everyday life. While models are often adapted to user preferences, it is equally important to ensure they align with moral norms and behaviours in real-world social situations. Despite significant progress in languages like English and Chinese, French has seen little attention in this area, leaving a gap in understanding how LLMs handle moral reasoning in this language. To address this gap, we introduce Histoires Morales, a French dataset derived from Moral Stories, created through translation and subsequently refined with the assistance of native speakers to guarantee grammatical accuracy and adaptation to the French cultural context. We also rely on annotations of the moral values within the dataset to ensure their alignment with French norms. Histoires Morales covers a wide range of social situations, including differences in tipping practices, expressions of honesty in relationships, and responsibilities toward animals. To foster future research, we also conduct preliminary experiments on the alignment of multilingual models on French and English data and the robustness of the alignment. We find that while LLMs are generally aligned with human moral norms by default, they can be easily influenced with user-preference optimization for both moral and immoral data.
1
null
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
PxSLU
null
null
https://doi.org/10.5281/zenodo.6482586
CC BY 4.0
2,022
fr
null
['books']
audio
null
A spoken medical drug prescription corpus in French, containing about 4 hours of transcribed and annotated dialogues. The data was acquired from 55 expert and non-expert participants interacting with a dialogue system on a smartphone.
4
hours
null
['Univ. Grenoble Alpes']
null
null
null
null
false
zenodo
Free
null
['speech recognition', 'other']
null
null
null
['AliCan Kocabiyikoglu', 'François Portet', 'Prudence Gibert', 'Hervé Blanchon', 'Jean-Marc Babouchkine', 'Gaëtan Gavazzi']
['Univ. Grenoble Alpes', 'CHU Grenoble Alpes', 'Calystene SA', 'Clinique de médecine gériatrique']
Spoken medical dialogue systems are increasingly attracting interest to enhance access to healthcare services and improve quality and traceability of patient care. In this paper, we focus on medical drug prescriptions acquired on smartphones through spoken dialogue. Such systems would facilitate the traceability of care and would free clinicians' time. However, there is a lack of speech corpora to develop such systems since most of the related corpora are in text form and in English. To facilitate the research and development of spoken medical dialogue systems, we present, to the best of our knowledge, the first spoken medical drug prescriptions corpus, named PxSLU. It contains 4 hours of transcribed and annotated dialogues of drug prescriptions in French acquired through an experiment with 55 participants experts and non-experts in prescriptions. We also present some experiments that demonstrate the interest of this corpus for the evaluation and development of medical dialogue systems.
1
null
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
French COVID19 Lockdown Twitter Dataset
null
null
https://github.com/calciu/COVID19-LockdownFr
CC BY-NC-SA 4.0
2,020
multilingual
null
['social media']
text
null
A multilingual Twitter dataset, primarily in French, collected during the first COVID-19 lockdown in France starting March 17, 2020. The dataset, collected via Twitter's API using #ConfinementJourXx hashtags, includes sentiment and emotion annotations derived from various lexicons.
2,474,086
sentences
null
['University of Picardie', 'Paris-Nanterre University', 'Lille University']
null
null
null
null
false
GitHub
Free
null
['sentiment analysis']
null
null
null
['Sophie Balech', 'Christophe Benavent', 'Mihai Calciu']
['University of Picardie', 'Paris-Nanterre University', 'Lille University']
In this paper, we present a mainly French coronavirus Twitter dataset that we have been continuously collecting since lockdown restrictions have been enacted in France (in March 17, 2020). We offer our datasets and sentiment analysis annotations to the research community at https://github.com/calciu/COVID19-LockdownFr. They have been obtained using high performance computing (HPC) capabilities of our university's datacenter. We think that our contribution can facilitate analysis of online conversation dynamics reflecting people sentiments when facing severe home confinement restrictions determined by the outbreak of this world wide epidemic. We hope that our contribution will help decode shared experience and mood but also test the sensitivity of sentiment measurement instruments and incite the development of new instruments, methods and approaches.
1
null
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
MEDIA with Intents
null
null
https://github.com/Ala-Na/media_benchmark_intent_annotations
unknown
2,024
fr
null
['public datasets']
audio
null
An enhanced version of the French MEDIA Spoken Language Understanding (SLU) dataset. Originally annotated only for slots (named entities), this version adds intent classification labels using a semi-automatic tri-training methodology. The corpus consists of recorded human-machine dialogues for hotel booking.
70
hours
null
['Université Paris-Saclay', 'Avignon Université', 'LIA', 'QWANT']
null
null
null
null
false
GitHub
Free
null
['speech recognition', 'intent classification']
null
null
null
['Nadège Alavoine', 'Gaëlle Laperrière', 'Christophe Servan', 'Sahar Ghannay', 'Sophie Rosset']
['Université Paris-Saclay', 'Avignon Université', 'QWANT']
Intent classification and slot-filling are essential tasks of Spoken Language Understanding (SLU). In most SLU systems, those tasks are realized by independent modules. For about fifteen years, models achieving both of them jointly and exploiting their mutual enhancement have been proposed. A multilingual module using a joint model was envisioned to create a touristic dialogue system for a European project, HumanE-AI-Net. A combination of multiple datasets, including the MEDIA dataset, was suggested for training this joint model. The MEDIA SLU dataset is a French dataset distributed since 2005 by ELRA, mainly used by the French research community and free for academic research since 2020. Unfortunately, it is annotated only in slots but not intents. An enhanced version of MEDIA annotated with intents has been built to extend its use to more tasks and use cases. This paper presents the semi-automatic methodology used to obtain this enhanced version. In addition, we present the first results of SLU experiments on this enhanced dataset using joint models for intent classification and slot-filling.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
FrenchMedMCQA
null
null
https://github.com/qanastek/FrenchMedMCQA
Apache-2.0
2,023
fr
null
['web pages']
text
null
FrenchMedMCQA is a multiple-choice question answering dataset in French for the medical domain. It contains 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers.
3,105
sentences
null
['Avignon University', 'Nantes University', 'Zenidoc']
null
null
null
null
false
GitHub
Free
null
['multiple choice question answering']
null
null
null
['Yanis Labrak', 'Adrien Bazoge', 'Richard Dufour', 'Béatrice Daille', 'Pierre-Antoine Gourraud', 'Emmanuel Morin', 'Mickael Rouvier']
['LIA - Avignon University', 'LS2N - Nantes University', 'CHU de Nantes - La clinique des données - Nantes University', 'Zenidoc']
This paper introduces FrenchMedMCQA, the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers. Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s). We also propose first baseline models to automatically process this MCQA task in order to report on the current performances and to highlight the difficulty of the task. A detailed analysis of the results showed that it is necessary to have representations adapted to the medical domain or to the MCQA task: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. Corpus, models and tools are available online.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
FRASIMED
null
null
https://doi.org/10.5281/zenodo.8355629
CC BY 4.0
2,024
fr
null
['public datasets']
text
null
A French annotated corpus comprising 2,051 synthetic clinical cases for Named Entity Recognition (NER) and Entity Linking. It was created by projecting annotations from two Spanish corpora, CANTEMIST and DISTEMIST, using a crosslingual BERT-based method, followed by manual correction. The annotations are linked to medical terminologies like ICD-O and SNOMED-CT.
24,037
tokens
null
['University Hospitals of Geneva', 'University of Geneva']
null
null
null
null
false
zenodo
Free
null
['named entity recognition']
null
null
null
['Jamil Zaghir', 'Mina Bjelogrlic', 'Jean-Philippe Goldman', 'Soukaïna Aananou', 'Christophe Gaudet-Blavignac', 'Christian Lovis']
['Division of Medical Information Sciences (SIMED), University Hospitals of Geneva, Geneva, Switzerland', 'Department of Radiology and Medical Informatics, University of Geneva, Geneva, Switzerland']
Natural language processing (NLP) applications such as named entity recognition (NER) for low-resource corpora do not benefit from recent advances in the development of large language models (LLMs) where there is still a need for larger annotated datasets. This research article introduces a methodology for generating translated versions of annotated datasets through crosslingual annotation projection. Leveraging a language agnostic BERT-based approach, it is an efficient solution to increase low-resource corpora with few human efforts and by only using already available open data resources. Quantitative and qualitative evaluations are often lacking when it comes to evaluating the quality and effectiveness of semi-automatic data generation strategies. The evaluation of our crosslingual annotation projection approach showed both effectiveness and high accuracy in the resulting dataset. As a practical application of this methodology, we present the creation of French Annotated Resource with Semantic Information for Medical Entities Detection (FRASIMED), an annotated corpus comprising 2'051 synthetic clinical cases in French. The corpus is now available for researchers and practitioners to develop and refine French natural language processing (NLP) applications in the clinical field (https://zenodo.org/record/8355629), making it the largest open annotated corpus with linked medical concepts in French.
1
null
null
1
1
0
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
FQuAD1.1
null
null
https://fquad.illuin.tech
CC BY-NC-SA 3.0
2,020
fr
null
['wikipedia']
text
null
FQuAD is a French native Question Answering dataset composed of question-answer pairs on a set of Wikipedia articles, with 25,000+ samples in version 1.0 and 60,000+ samples in version 1.1. The dataset is designed for training and evaluating question answering models in French.
62,003
sentences
null
['Illuin Technology']
null
null
null
null
false
other
Free
null
['question answering']
null
null
null
["Martin d'Hoffschmidt", 'Wacim Belblidia', 'Tom Brendlé', 'Quentin Heinrich', 'Maxime Vidal']
['Illuin Technology', 'ETH Zurich']
Recent advances in the field of language modeling have improved state-of-the-art results on many Natural Language Processing tasks. Among them, Reading Comprehension has made significant progress over the past few years. However, most results are reported in English since labeled resources available in other languages, such as French, remain scarce. In the present work, we introduce the French Question Answering Dataset (FQuAD). FQuAD is a French Native Reading Comprehension dataset of questions and answers on a set of Wikipedia articles that consists of 25,000+ samples for the 1.0 version and 60,000+ samples for the 1.1 version. We train a baseline model which achieves an F1 score of 92.2 and an exact match ratio of 82.1 on the test set. In order to track the progress of French Question Answering models we propose a leader-board and we have made the 1.0 version of our dataset freely available at https://illuin-tech.github.io/FQuAD-explorer/.
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
0
1
0
null
1
null
null
null
1
1
1
fr
test
FIJO
null
null
https://github.com/iid-ulaval/FIJO-code
CC BY-NC-SA 4.0
2,022
fr
null
['web pages']
text
null
FIJO (French Insurance Job Offer) is a public dataset containing 867 French job offers from the insurance domain. A subset of 47 job ads is annotated with four types of soft skills (Thoughts, Results, Relational, Personal). The dataset was created to facilitate research in automatic skill recognition.
932
tokens
null
['Université Laval']
null
null
null
null
false
GitHub
Free
null
['named entity recognition']
null
null
null
['David Beauchemin', 'Julien Laumonier', 'Yvan Le Ster', 'Marouane Yassine']
['Departement of Computer Science and Software Engineering, Université Laval, Québec, Canada', 'Institute Intelligence and Data, Université Laval, Québec, Canada']
Understanding the evolution of job requirements is becoming more important for workers, companies and public organizations to follow the fast transformation of the employment market. Fortunately, recent natural language processing (NLP) approaches allow for the development of methods to automatically extract information from job ads and recognize skills more precisely. However, these efficient approaches need a large amount of annotated data from the studied domain which is difficult to access, mainly due to intellectual property. This article proposes a new public dataset, FIJO, containing insurance job offers, including many soft skill annotations. To understand the potential of this dataset, we detail some characteristics and some limitations. Then, we present the results of skill detection algorithms using a named entity recognition approach and show that transformers-based models have good token-wise performances on this dataset. Lastly, we analyze some errors made by our best model to emphasize the difficulties that may arise when applying NLP approaches.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
BSARD
null
null
https://doi.org/10.5281/zenodo.5217310
CC BY-NC-SA 4.0
2,022
fr
null
['books']
text
null
A dataset of French legal questions posed by Belgian citizens, each labelled with relevant articles from Belgian legislation.
1,108
sentences
null
['Maastricht University']
null
null
null
null
false
zenodo
Free
null
['information retrieval', 'question answering']
null
null
null
['Antoine Louis', 'Gerasimos Spanakis']
['Maastricht University']
Statutory article retrieval is the task of automatically retrieving law articles relevant to a legal question. While recent advances in natural language processing have sparked considerable interest in many legal tasks, statutory article retrieval remains primarily untouched due to the scarcity of large-scale and high-quality annotated datasets. To address this bottleneck, we introduce the Belgian Statutory Article Retrieval Dataset (BSARD), which consists of 1,100+ French native legal questions labeled by experienced jurists with relevant articles from a corpus of 22,600+ Belgian law articles. Using BSARD, we benchmark several state-of-the-art retrieval approaches, including lexical and dense architectures, both in zero-shot and supervised setups. We find that fine-tuned dense retrieval models significantly outperform other systems. Our best performing baseline achieves 74.8% R@100, which is promising for the feasibility of the task and indicates there is still room for improvement. By the specificity of the domain and addressed task, BSARD presents a unique challenge problem for future research on legal information retrieval. Our dataset and source code are publicly available.
1
null
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
FairTranslate
null
null
https://github.com/fanny-jourdan/FairTranslate
CC BY 4.0
2,025
multilingual
null
['LLM']
text
null
An English-French dataset for evaluating non-binary gender bias in machine translation. It contains 2,418 human-annotated sentence pairs about occupations, with metadata for gender (male, female, inclusive), stereotype alignment, and ambiguity, designed to test LLMs' handling of inclusive language like the singular 'they'.
2,418
sentences
null
['IRT Saint Exupéry']
null
null
null
null
false
GitHub
Free
null
['machine translation', 'gender identification']
null
null
null
['Fanny Jourdan', 'Yannick Chevalier', 'Cécile Favre']
['IRT Saint Exupery', 'Université Lumière Lyon 2', 'Université Claude Bernard Lyon 1, ERIC']
Large Language Models (LLMs) are increasingly leveraged for translation tasks but often fall short when translating inclusive language -- such as texts containing the singular 'they' pronoun or otherwise reflecting fair linguistic protocols. Because these challenges span both computational and societal domains, it is imperative to critically evaluate how well LLMs handle inclusive translation with a well-founded framework. This paper presents FairTranslate, a novel, fully human-annotated dataset designed to evaluate non-binary gender biases in machine translation systems from English to French. FairTranslate includes 2418 English-French sentence pairs related to occupations, annotated with rich metadata such as the stereotypical alignment of the occupation, grammatical gender indicator ambiguity, and the ground-truth gender label (male, female, or inclusive). We evaluate four leading LLMs (Gemma2-2B, Mistral-7B, Llama3.1-8B, Llama3.3-70B) on this dataset under different prompting procedures. Our results reveal substantial biases in gender representation across LLMs, highlighting persistent challenges in achieving equitable outcomes in machine translation. These findings underscore the need for focused strategies and interventions aimed at ensuring fair and inclusive language usage in LLM-based translation systems. We make the FairTranslate dataset publicly available on Hugging Face, and disclose the code for all experiments on GitHub.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1