category
string
split
string
Name
string
Subsets
string
HF Link
null
Link
string
License
string
Year
int64
Language
string
Dialect
string
Domain
string
Form
string
Collection Style
null
Description
string
Volume
float64
Unit
string
Ethical Risks
null
Provider
string
Derived From
null
Paper Title
null
Paper Link
null
Script
string
Tokenized
bool
Host
string
Access
string
Cost
string
Test Split
null
Tasks
string
Venue Title
null
Venue Type
null
Venue Name
null
Authors
string
Affiliations
string
Abstract
string
Name_exist
int64
Subsets_exist
int64
HF Link_exist
null
Link_exist
int64
License_exist
int64
Year_exist
int64
Language_exist
int64
Dialect_exist
int64
Domain_exist
int64
Form_exist
int64
Collection Style_exist
null
Description_exist
int64
Volume_exist
int64
Unit_exist
int64
Ethical Risks_exist
null
Provider_exist
int64
Derived From_exist
null
Paper Title_exist
null
Paper Link_exist
null
Script_exist
int64
Tokenized_exist
int64
Host_exist
int64
Access_exist
int64
Cost_exist
int64
Test Split_exist
null
Tasks_exist
int64
Venue Title_exist
null
Venue Type_exist
null
Venue Name_exist
null
Authors_exist
int64
Affiliations_exist
int64
Abstract_exist
int64
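The field/dtype listing above can be collected into a single mapping for programmatic use. A minimal Python sketch — the dict names and the use of `"object"` as a stand-in for the null-typed columns are assumptions, not part of the listing:

```python
# Record-level fields that precede the metadata fields and have no
# "_exist" companion in the listing.
RECORD_FIELDS = {"category": "string", "split": "string"}

# Metadata fields and their dtypes, as listed above. Columns shown
# as "null" in the listing are mapped to "object" here (assumption).
BASE_FIELDS = {
    "Name": "string", "Subsets": "string", "HF Link": "object",
    "Link": "string", "License": "string", "Year": "int64",
    "Language": "string", "Dialect": "string", "Domain": "string",
    "Form": "string", "Collection Style": "object",
    "Description": "string", "Volume": "float64", "Unit": "string",
    "Ethical Risks": "object", "Provider": "string",
    "Derived From": "object", "Paper Title": "object",
    "Paper Link": "object", "Script": "string", "Tokenized": "bool",
    "Host": "string", "Access": "string", "Cost": "string",
    "Test Split": "object", "Tasks": "string",
    "Venue Title": "object", "Venue Type": "object",
    "Venue Name": "object", "Authors": "string",
    "Affiliations": "string", "Abstract": "string",
}

# Each base field has a companion "<field>_exist" flag. The listing
# types these as int64 where the base dtype is concrete and null
# otherwise; for simplicity this sketch uses int64 throughout.
EXIST_FIELDS = {f"{name}_exist": "int64" for name in BASE_FIELDS}
```

The full column set for one record is then `RECORD_FIELDS | BASE_FIELDS | EXIST_FIELDS`, matching the order of the listing.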
fr
test
20min-XD
null
null
https://github.com/ZurichNLP/20min-XD
custom
2025
multilingual
null
['news articles']
text
null
A French-German, document-level comparable corpus of news articles from the Swiss online news outlet 20 Minuten/20 minutes. It contains 15,000 article pairs from 2015-2024, automatically aligned based on semantic similarity, exhibiting a broad spectrum of cross-lingual similarity.
15,000
documents
null
[' University of Zurich', '20 Minuten (TX Group)']
null
null
null
null
false
GitHub
Free
null
['machine translation', 'other']
null
null
null
['Michelle Wastl', 'Jannis Vamvas', 'Selena Calleri', 'Rico Sennrich']
['Department of Computational Linguistics, University of Zurich', '20 Minuten (TX Group)']
We present 20min-XD (20 Minuten cross-lingual document-level), a French-German, document-level comparable corpus of news articles, sourced from the Swiss online news outlet 20 Minuten/20 minutes. Our dataset comprises around 15,000 article pairs spanning 2015 to 2024, automatically aligned based on semantic similarity. We detail the data collection process and alignment methodology. Furthermore, we provide a qualitative and quantitative analysis of the corpus. The resulting dataset exhibits a broad spectrum of cross-lingual similarity, ranging from near-translations to loosely related articles, making it valuable for various NLP applications and broad linguistically motivated studies. We publicly release the dataset in document- and sentence-aligned versions and code for the described experiments.
1
null
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
Alloprof
null
null
https://huggingface.co/datasets/antoinelb7/alloprof
MIT License
2023
multilingual
null
['web pages']
text
null
A French question-answering dataset from the Alloprof educational help website. It contains 29,349 questions from K-12 students and their explanations, often including images and links to 2,596 reference pages, covering various school subjects like math, French, and science.
29,349
sentences
null
['Alloprof', 'Mila']
null
null
null
null
false
HuggingFace
Free
null
['question answering', 'information retrieval']
null
null
null
['Antoine Lefebvre-Brossard', 'Stephane Gazaille', 'Michel C. Desmarais']
['Mila-Quebec AI Institute', 'Polytechnique Montréal']
Teachers and students are increasingly relying on online learning resources to supplement the ones provided in school. This increase in the breadth and depth of available resources is a great thing for students, but only provided they are able to find answers to their queries. Question-answering and information retrieval systems have benefited from public datasets to train and evaluate their algorithms, but most of these datasets have been in English text written by and for adults. We introduce a new public French question-answering dataset collected from Alloprof, a Quebec-based primary and high-school help website, containing 29 349 questions and their explanations in a variety of school subjects from 10 368 students, with more than half of the explanations containing links to other questions or some of the 2 596 reference pages on the website. We also present a case study of this dataset in an information retrieval task. This dataset was collected on the Alloprof public forum, with all questions verified for their appropriateness and the explanations verified both for their appropriateness and their relevance to the question. To predict relevant documents, architectures using pre-trained BERT models were fine-tuned and evaluated. This dataset will allow researchers to develop question-answering, information retrieval and other algorithms specifically for the French speaking education context. Furthermore, the range of language proficiency, images, mathematical symbols and spelling mistakes will necessitate algorithms based on a multimodal comprehension. The case study we present as a baseline shows an approach that relies on recent techniques provides an acceptable performance level, but more work is necessary before it can reliably be used and trusted in a production setting.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
FREDSum
null
null
https://github.com/linto-ai/FREDSum
CC BY-SA 4.0
2023
fr
null
['TV Channels', 'web pages']
text
null
A dataset of manually transcribed and annotated French political debates from 1974-2023. It is designed for multi-party dialogue summarization and includes abstractive/extractive summaries, topic segmentation, and abstractive communities annotations to support research in this area.
142
documents
null
['Linagora Labs']
null
null
null
null
false
GitHub
Free
null
['summarization', 'speech recognition']
null
null
null
['Virgile Rennard', 'Guokan Shang', 'Damien Grari', 'Julie Hunter', 'Michalis Vazirgiannis']
['Linagora, France', 'École Polytechnique', 'Grenoble Ecole de Management']
Recent advances in deep learning, and especially the invention of encoder-decoder architectures, has significantly improved the performance of abstractive summarization systems. The majority of research has focused on written documents, however, neglecting the problem of multi-party dialogue summarization. In this paper, we present a dataset of French political debates for the purpose of enhancing resources for multi-lingual dialogue summarization. Our dataset consists of manually transcribed and annotated political debates, covering a range of topics and perspectives. We highlight the importance of high quality transcription and annotations for training accurate and effective dialogue summarization models, and emphasize the need for multilingual resources to support dialogue summarization in non-English languages. We also provide baseline experiments using state-of-the-art methods, and encourage further research in this area to advance the field of dialogue summarization. Our dataset will be made publicly available for use by the research community.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
Vibravox
null
null
https://huggingface.co/datasets/Cnam-LMSSC/vibravox
CC BY 4.0
2024
fr
null
['wikipedia']
audio
null
Vibravox is a GDPR-compliant dataset containing audio recordings of French speech using five different body-conduction audio sensors and a reference airborne microphone. It includes 45 hours of speech per sensor from 188 participants under various acoustic conditions, with linguistic and phonetic transcriptions.
273.72
hours
null
['LMSSC']
null
null
null
null
false
HuggingFace
Free
null
['speaker identification', 'speech recognition']
null
null
null
['Julien Hauret', 'Malo Olivier', 'Thomas Joubaud', 'Christophe Langrenne', 'Sarah Poirée', 'Véronique Zimpfer', 'Éric Bavu']
['Laboratoire de Mécanique des Structures et des Systèmes Couplés, Conservatoire national des arts et métiers, HESAM Université, 75003 Paris, France', 'Department of Acoustics and Soldier Protection, French-German Research Institute of Saint-Louis (ISL)']
Vibravox is a dataset compliant with the General Data Protection Regulation (GDPR) containing audio recordings using five different body-conduction audio sensors: two in-ear microphones, two bone conduction vibration pickups, and a laryngophone. The dataset also includes audio data from an airborne microphone used as a reference. The Vibravox corpus contains 45 hours per sensor of speech samples and physiological sounds recorded by 188 participants under different acoustic conditions imposed by a high order ambisonics 3D spatializer. Annotations about the recording conditions and linguistic transcriptions are also included in the corpus. We conducted a series of experiments on various speech-related tasks, including speech recognition, speech enhancement, and speaker verification. These experiments were carried out using state-of-the-art models to evaluate and compare their performances on signals captured by the different audio sensors offered by the Vibravox dataset, with the aim of gaining a better grasp of their individual characteristics.
1
null
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
MTNT
null
null
https://github.com/pmichel31415/mtnt
MIT License
2018
multilingual
null
['social media', 'commentary']
text
null
A benchmark dataset for Machine Translation of Noisy Text (MTNT), consisting of noisy comments on Reddit and professionally sourced translations. It includes English comments translated into French and Japanese, as well as French and Japanese comments translated into English, on the order of 7k-37k sentences per language pair.
37,930
sentences
null
['Carnegie Mellon University']
null
null
null
null
false
GitHub
Free
null
['machine translation']
null
null
null
['Paul Michel', 'Graham Neubig']
['Language Technologies Institute', 'Carnegie Mellon University']
Noisy or non-standard input text can cause disastrous mistranslations in most modern Machine Translation (MT) systems, and there has been growing research interest in creating noise-robust MT systems. However, as of yet there are no publicly available parallel corpora of naturally occurring noisy inputs and translations, and thus previous work has resorted to evaluating on synthetically created datasets. In this paper, we propose a benchmark dataset for Machine Translation of Noisy Text (MTNT), consisting of noisy comments on Reddit (www.reddit.com) and professionally sourced translations. We commissioned translations of English comments into French and Japanese, as well as French and Japanese comments into English, on the order of 7k-37k sentences per language pair. We qualitatively and quantitatively examine the types of noise included in this dataset, then demonstrate that existing MT models fail badly on a number of noise-related phenomena, even after performing adaptation on a small training set of in-domain data. This indicates that this dataset can provide an attractive testbed for methods tailored to handling noisy text in MT. The data is publicly available at www.cs.cmu.edu/~pmichel1/mtnt/.
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
0
1
1
null
1
null
null
null
1
1
1
fr
test
PIAF
null
null
https://github.com/etalab/piaf
MIT License
2020
fr
null
['wikipedia']
text
null
PIAF is a French Question Answering dataset that was collected through a participatory approach. The dataset consists of question-answer pairs extracted from Wikipedia articles.
3,835
sentences
null
['Etalab']
null
null
null
null
false
GitHub
Free
null
['question answering']
null
null
null
['Rachel Keraron', 'Guillaume Lancrenon', 'Mathilde Bras', 'Frédéric Allary', 'Gilles Moyse', 'Thomas Scialom', 'Edmundo-Pavel Soriano-Morales', 'Jacopo Staiano']
['reciTAL', 'Etalab', 'Sorbonne Université']
Motivated by the lack of data for non-English languages, in particular for the evaluation of downstream tasks such as Question Answering, we present a participatory effort to collect a native French Question Answering Dataset. Furthermore, we describe and publicly release the annotation tool developed for our collection effort, along with the data obtained and preliminary baselines.
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
FrenchToxicityPrompts
null
null
https://download.europe.naverlabs.com/FrenchToxicityPrompts/
CC BY-SA 4.0
2024
fr
null
['social media', 'public datasets']
text
null
A dataset of 50,000 naturally occurring French prompts and their continuations, annotated with toxicity scores from a widely used toxicity classifier. It is designed to evaluate and mitigate toxicity in French language models.
50,000
sentences
null
['NAVER LABS Europe']
null
null
null
null
false
other
Free
null
['offensive language detection']
null
null
null
['Caroline Brun', 'Vassilina Nikoulina']
['NAVER LABS Europe']
Large language models (LLMs) are increasingly popular but are also prone to generating bias, toxic or harmful language, which can have detrimental effects on individuals and communities. Although most effort is put to assess and mitigate toxicity in generated content, it is primarily concentrated on English, while it's essential to consider other languages as well. For addressing this issue, we create and release FrenchToxicityPrompts, a dataset of 50K naturally occurring French prompts and their continuations, annotated with toxicity scores from a widely used toxicity classifier. We evaluate 14 different models from four prevalent open-sourced families of LLMs against our dataset to assess their potential toxicity across various dimensions. We hope that our contribution will foster future research on toxicity detection and mitigation beyond English.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
OBSINFOX
null
null
https://github.com/obs-info/obsinfox
CC BY-NC 4.0
2024
fr
null
['news articles']
text
null
A corpus of 100 French press documents from 17 unreliable sources. The documents were annotated by 8 human annotators using 11 labels (e.g., FakeNews, Subjective, Exaggeration) to analyze the characteristics of fake news.
100
documents
null
['Observatoire']
null
null
null
null
false
GitHub
Free
null
['fake news detection', 'topic classification']
null
null
null
['Benjamin Icard', 'François Maine', 'Morgane Casanova', 'Géraud Faye', 'Julien Chanson', 'Guillaume Gadek', 'Ghislain Atemezing', 'François Bancilhon', 'Paul Égré']
['Sorbonne Université', 'Institut Jean-Nicod', 'Freedom Partners', 'Université de Rennes', 'Airbus Defence and Space', 'Université Paris-Saclay', 'Mondeca', 'European Union Agency for Railways', 'Observatoire des Médias']
We present a corpus of 100 documents, OBSINFOX, selected from 17 sources of French press considered unreliable by expert agencies, annotated using 11 labels by 8 annotators. By collecting more labels than usual, by more annotators than is typically done, we can identify features that humans consider as characteristic of fake news, and compare them to the predictions of automated classifiers. We present a topic and genre analysis using Gate Cloud, indicative of the prevalence of satire-like text in the corpus. We then use the subjectivity analyzer VAGO, and a neural version of it, to clarify the link between ascriptions of the label Subjective and ascriptions of the label Fake News. The annotated dataset is available online at the following url: https://github.com/obs-info/obsinfox Keywords: Fake News, Multi-Labels, Subjectivity, Vagueness, Detail, Opinion, Exaggeration, French Press
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
CFDD
null
null
https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1
CC BY-NC-SA 4.0
2023
fr
null
['captions', 'public datasets', 'web pages']
text
null
The Claire French Dialogue Dataset (CFDD) is a corpus containing roughly 160 million words from transcripts and stage plays in French.
160,000,000
tokens
null
['LINAGORA Labs']
null
null
null
null
false
HuggingFace
Free
null
['language modeling', 'text generation']
null
null
null
['Julie Hunter', 'Jérôme Louradour', 'Virgile Rennard', 'Ismaïl Harrando', 'Guokan Shang', 'Jean-Pierre Lorré']
['LINAGORA']
We present the Claire French Dialogue Dataset (CFDD), a resource created by members of LINAGORA Labs in the context of the OpenLLM France initiative. CFDD is a corpus containing roughly 160 million words from transcripts and stage plays in French that we have assembled and publicly released in an effort to further the development of multilingual, open source language models. This paper describes the 24 individual corpora of which CFDD is composed and provides links and citations to their original sources. It also provides our proposed breakdown of the full CFDD dataset into eight categories of subcorpora and describes the process we followed to standardize the format of the final dataset. We conclude with a discussion of similar work and future directions.
1
null
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
FREEMmax
null
null
https://github.com/FreEM-corpora/FreEMmax_OA
custom
2022
fr
null
['web pages', 'public datasets']
text
null
FREEMmax is a large corpus of Early Modern French (16th-18th centuries), with some texts extending to the 1920s. It aggregates texts from various sources, including institutional databases, research projects, and web scraping, covering diverse genres like literature, correspondence, and plays.
185,643,482
tokens
null
['Inria', 'Sorbonne Université', 'Université de Genève', 'LIGM', 'Université Gustave Eiffel', 'CNRS']
null
null
null
null
false
zenodo
Free
null
['language modeling']
null
null
null
['Simon Gabay', 'Pedro Ortiz Suarez', 'Alexandre Bartz', 'Alix Chague', 'Rachel Bawden', 'Philippe Gambette', 'Benoît Sagot']
['Inria', 'Sorbonne Université', 'Université de Genève', 'LIGM', 'Université Gustave Eiffel', 'CNRS']
Language models for historical states of language are becoming increasingly important to allow the optimal digitisation and analysis of old textual sources. Because these historical states are at the same time more complex to process and more scarce in the corpora available, specific efforts are necessary to train natural language processing (NLP) tools adapted to the data. In this paper, we present our efforts to develop NLP tools for Early Modern French (historical French from the 16$^\text{th}$ to the 18$^\text{th}$ centuries). We present the $\text{FreEM}_{\text{max}}$ corpus of Early Modern French and D'AlemBERT, a RoBERTa-based language model trained on $\text{FreEM}_{\text{max}}$. We evaluate the usefulness of D'AlemBERT by fine-tuning it on a part-of-speech tagging task, outperforming previous work on the test set. Importantly, we find evidence for the transfer learning capacity of the language model, since its performance on lesser-resourced time periods appears to have been boosted by the more resourced ones. We release D'AlemBERT and the open-sourced subpart of the $\text{FreEM}_{\text{max}}$ corpus.
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
fr
test
FQuAD2.0
null
null
https://huggingface.co/datasets/illuin/fquad
CC BY-NC-SA 3.0
2021
fr
null
['wikipedia']
text
null
A French Question Answering dataset that extends FQuAD1.1 with over 17,000 adversarially created unanswerable questions. The questions are extracted from Wikipedia articles, and the total dataset comprises almost 80,000 questions. It is designed to train models to distinguish answerable from unanswerable questions.
79,768
sentences
null
['Illuin Technology']
null
null
null
null
false
HuggingFace
Free
null
['question answering']
null
null
null
['Quentin Heinrich', 'Gautier Viaud', 'Wacim Belblidia']
['Illuin Technology']
Question Answering, including Reading Comprehension, is one of the NLP research areas that has seen significant scientific breakthroughs over the past few years, thanks to the concomitant advances in Language Modeling. Most of these breakthroughs, however, are centered on the English language. In 2020, as a first strong initiative to bridge the gap to the French language, Illuin Technology introduced FQuAD1.1, a French Native Reading Comprehension dataset composed of 60,000+ questions and answers samples extracted from Wikipedia articles. Nonetheless, Question Answering models trained on this dataset have a major drawback: they are not able to predict when a given question has no answer in the paragraph of interest, therefore making unreliable predictions in various industrial use-cases. In the present work, we introduce FQuAD2.0, which extends FQuAD with 17,000+ unanswerable questions, annotated adversarially, in order to be similar to answerable ones. This new dataset, comprising a total of almost 80,000 questions, makes it possible to train French Question Answering models with the ability of distinguishing unanswerable questions from answerable ones. We benchmark several models with this dataset: our best model, a fine-tuned CamemBERT-large, achieves a F1 score of 82.3% on this classification task, and a F1 score of 83% on the Reading Comprehension task.
1
null
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
0
1
1
null
1
null
null
null
1
1
1
multi
test
XNLI
[{'Name': 'en', 'Volume': 7500.0, 'Unit': 'sentences', 'Language': 'English'}, {'Name': 'fr', 'Volume': 7500.0, 'Unit': 'sentences', 'Language': 'French'}, {'Name': 'es', 'Volume': 7500.0, 'Unit': 'sentences', 'Language': 'Spanish'}, {'Name': 'de', 'Volume': 7500.0, 'Unit': 'sentences', 'Language': 'German'}, {'Name': 'el', 'Volume': 7500.0, 'Unit': 'sentences', 'Language': 'Greek'}, {'Name': 'bg', 'Volume': 7500.0, 'Unit': 'sentences', 'Language': 'Bulgarian'}, {'Name': 'ru', 'Volume': 7500.0, 'Unit': 'sentences', 'Language': 'Russian'}, {'Name': 'tr', 'Volume': 7500.0, 'Unit': 'sentences', 'Language': 'Turkish'}, {'Name': 'ar', 'Volume': 7500.0, 'Unit': 'sentences', 'Language': 'Arabic'}, {'Name': 'vi', 'Volume': 7500.0, 'Unit': 'sentences', 'Language': 'Vietnamese'}, {'Name': 'th', 'Volume': 7500.0, 'Unit': 'sentences', 'Language': 'Thai'}, {'Name': 'zh', 'Volume': 7500.0, 'Unit': 'sentences', 'Language': 'Chinese'}, {'Name': 'hi', 'Volume': 7500.0, 'Unit': 'sentences', 'Language': 'Hindi'}, {'Name': 'sw', 'Volume': 7500.0, 'Unit': 'sentences', 'Language': 'Swahili'}, {'Name': 'ur', 'Volume': 7500.0, 'Unit': 'sentences', 'Language': 'Urdu'}]
null
https://github.com/facebookresearch/XNLI
CC BY-NC 4.0
2018
['English', 'French', 'Spanish', 'German', 'Greek', 'Bulgarian', 'Russian', 'Turkish', 'Arabic', 'Vietnamese', 'Thai', 'Chinese', 'Hindi', 'Swahili', 'Urdu']
null
['public datasets']
text
null
Evaluation set for NLI by extending the development and test sets of the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15 languages
112,500
sentences
null
['Facebook']
null
null
null
null
false
GitHub
Free
null
['natural language inference']
null
null
null
['Alexis Conneau', 'Guillaume Lample', 'Ruty Rinott', 'Adina Williams', 'Samuel R. Bowman', 'Holger Schwenk', 'Veselin Stoyanov']
['Facebook AI Research', 'New York University']
State-of-the-art natural language processing systems rely on supervision in the form of annotated data to learn competent models. These models are generally trained on data in a single language (usually English), and cannot be directly used beyond that language. Since collecting data in every language is not realistic, there has been a growing interest in cross-lingual language understanding (XLU) and low-resource cross-language transfer. In this work, we construct an evaluation set for XLU by extending the development and test sets of the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15 languages, including low-resource languages such as Swahili and Urdu. We hope that our dataset, dubbed XNLI, will catalyze research in cross-lingual sentence understanding by providing an informative standard evaluation task. In addition, we provide several baselines for multilingual sentence understanding, including two based on machine translation systems, and two that use parallel data to train aligned multilingual bag-of-words and LSTM encoders. We find that XNLI represents a practical and challenging evaluation suite, and that directly translating the test data yields the best performance among available baselines.
1
1
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
0
0
0
null
1
null
null
null
1
1
1
multi
test
X-stance
[{'Name': 'DE', 'Volume': 40200.0, 'Unit': 'sentences', 'Language': 'German'}, {'Name': 'FR', 'Volume': 14129.0, 'Unit': 'sentences', 'Language': 'French'}, {'Name': 'IT', 'Volume': 1172.7, 'Unit': 'sentences', 'Language': 'Italian'}]
null
http://doi.org/10.5281/zenodo.3831317
CC BY-NC 4.0
2020
['German', 'French', 'Italian']
null
['commentary']
text
null
A large-scale, multilingual (German, French, Italian) dataset for stance detection. It contains over 67,000 comments from Swiss political candidates on more than 150 political issues, formatted as question-comment pairs. The dataset is designed for cross-lingual and cross-target evaluation.
55,502
sentences
null
['University of Zurich']
null
null
null
null
false
zenodo
Free
null
['stance detection']
null
null
null
['Jannis Vamvas', 'Rico Sennrich']
['University of Zurich', 'University of Edinburgh']
We extract a large-scale stance detection dataset from comments written by candidates of elections in Switzerland. The dataset consists of German, French and Italian text, allowing for a cross-lingual evaluation of stance detection. It contains 67 000 comments on more than 150 political issues (targets). Unlike stance detection models that have specific target issues, we use the dataset to train a single model on all the issues. To make learning across targets possible, we prepend to each instance a natural question that represents the target (e.g. "Do you support X?"). Baseline results from multilingual BERT show that zero-shot cross-lingual and cross-target transfer of stance detection is moderately successful with this approach.
1
1
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
DiS-ReX
[{'Name': 'English', 'Language': 'English', 'Volume': 532499.0, 'Unit': 'sentences'}, {'Name': 'French', 'Language': 'French', 'Unit': 'sentences', 'Volume': 409087.0}, {'Name': 'Spanish', 'Language': 'Spanish', 'Unit': 'sentences', 'Volume': 456418.0}, {'Volume': 438315.0, 'Language': 'German', 'Name': 'German', 'Unit': 'sentences'}]
null
https://github.com/dair-iitd/DiS-ReX
unknown
2021
['English', 'German', 'Spanish', 'French']
null
['wikipedia']
text
null
DiS-ReX is a multilingual dataset for distantly supervised relation extraction (DS-RE) spanning English, German, Spanish, and French. It contains over 1.5 million sentences aligned with DBpedia, featuring 36 relation classes and a 'no relation' class, designed to be a challenging benchmark.
1,836,319
sentences
null
['Indian Institute of Technology']
null
null
null
null
false
GitHub
Free
null
['relation extraction']
null
null
null
['Abhyuday Bhartiya', 'Kartikeya Badola', 'Mausam']
['Indian Institute of Technology']
Distant supervision (DS) is a well established technique for creating large-scale datasets for relation extraction (RE) without using human annotations. However, research in DS-RE has been mostly limited to the English language. Constraining RE to a single language inhibits utilization of large amounts of data in other languages which could allow extraction of more diverse facts. Very recently, a dataset for multilingual DS-RE has been released. However, our analysis reveals that the proposed dataset exhibits unrealistic characteristics such as 1) lack of sentences that do not express any relation, and 2) all sentences for a given entity pair expressing exactly one relation. We show that these characteristics lead to a gross overestimation of the model performance. In response, we propose a new dataset, DiS-ReX, which alleviates these issues. Our dataset has more than 1.5 million sentences, spanning across 4 languages with 36 relation classes + 1 no relation (NA) class. We also modify the widely used bag attention models by encoding sentences using mBERT and provide the first benchmark results on multilingual DS-RE. Unlike the competing dataset, we show that our dataset is challenging and leaves enough room for future research to take place in this field.
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
RELX
[{'Name': 'English', 'Volume': 502.0, 'Unit': 'sentences', 'Language': 'English'}, {'Name': 'French', 'Volume': 502.0, 'Unit': 'sentences', 'Language': 'French'}, {'Name': 'German', 'Volume': 502.0, 'Unit': 'sentences', 'Language': 'German'}, {'Name': 'Spanish', 'Volume': 502.0, 'Unit': 'sentences', 'Language': 'Spanish'}, {'Name': 'Turkish', 'Volume': 502.0, 'Unit': 'sentences', 'Language': 'Turkish'}]
null
https://github.com/boun-tabi/RELX
MIT License
2020
['English', 'French', 'German', 'Spanish', 'Turkish']
null
['public datasets']
text
null
A public benchmark dataset for cross-lingual relation classification in English, French, German, Spanish, and Turkish. It contains 502 parallel sentences created by selecting a subset from the KBP-37 test set and having them professionally translated and annotated.
2,510
sentences
null
['Boğaziçi University']
null
null
null
null
false
GitHub
Free
null
['cross-lingual information retrieval']
null
null
null
['Abdullatif Köksal', 'Arzucan Özgür']
['Department of Computer Engineering, Boğaziçi University']
Relation classification is one of the key topics in information extraction, which can be used to construct knowledge bases or to provide useful information for question answering. Current approaches for relation classification are mainly focused on the English language and require lots of training data with human annotations. Creating and annotating a large amount of training data for low-resource languages is impractical and expensive. To overcome this issue, we propose two cross-lingual relation classification models: a baseline model based on Multilingual BERT and a new multilingual pretraining setup, which significantly improves the baseline with distant supervision. For evaluation, we introduce a new public benchmark dataset for cross-lingual relation classification in English, French, German, Spanish, and Turkish, called RELX. We also provide the RELX-Distant dataset, which includes hundreds of thousands of sentences with relations from Wikipedia and Wikidata collected by distant supervision for these languages. Our code and data are available at: https://github.com/boun-tabi/RELX
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
MultiSubs
[{'Language': 'English', 'Name': 'English', 'Volume': 2159635.0, 'Unit': 'sentences'}, {'Name': 'Spanish', 'Language': 'Spanish', 'Volume': 2159635.0, 'Unit': 'sentences'}, {'Name': 'Portuguese', 'Language': 'Portuguese', 'Volume': 1796095.0, 'Unit': 'sentences'}, {'Name': 'French', 'Volume': 1063071.0, 'Language': 'French', 'Unit': 'sentences'}, {'Name': 'German', 'Volume': 384480.0, 'Language': 'German', 'Unit': 'sentences'}]
null
https://doi.org/10.5281/zenodo.5034604
CC BY 4.0
2022
['English', 'Spanish', 'Portuguese', 'French', 'German']
null
['TV Channels', 'public datasets']
text
null
A large-scale multimodal and multilingual dataset of images aligned to text fragments from movie subtitles. It aims to facilitate research on grounding words to images in their contextual usage in language. The images are aligned to text fragments rather than whole sentences, and the parallel texts are multilingual.
5,403,281
sentences
null
['Imperial College London', 'Federal University of Mato Grosso']
null
null
null
null
false
zenodo
Free
null
['machine translation', 'fill-in-the-blank']
null
null
null
['Josiah Wang', 'Pranava Madhyastha', 'Josiel Figueiredo', 'Chiraag Lala', 'Lucia Specia']
['Imperial College London', 'Federal University of Mato Grosso']
This paper introduces a large-scale multimodal and multilingual dataset that aims to facilitate research on grounding words to images in their contextual usage in language. The dataset consists of images selected to unambiguously illustrate concepts expressed in sentences from movie subtitles. The dataset is a valuable resource as (i) the images are aligned to text fragments rather than whole sentences; (ii) multiple images are possible for a text fragment and a sentence; (iii) the sentences are free-form and real-world like; (iv) the parallel texts are multilingual. We set up a fill-in-the-blank game for humans to evaluate the quality of the automatic image selection process of our dataset. We show the utility of the dataset on two automatic tasks: (i) fill-in-the-blank; (ii) lexical translation. Results of the human evaluation and automatic models demonstrate that images can be a useful complement to the textual context. The dataset will benefit research on visual grounding of words especially in the context of free-form sentences, and can be obtained from https://doi.org/10.5281/zenodo.5034604 under a Creative Commons licence.
1
1
null
1
1
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
MEE
[{'Name': 'English', 'Language': 'English', 'Volume': 13000.0, 'Unit': 'documents'}, {'Name': 'Portuguese', 'Language': 'Portuguese', 'Volume': 1500.0, 'Unit': 'documents'}, {'Name': 'Spanish', 'Language': 'Spanish', 'Volume': 3268.0, 'Unit': 'documents'}, {'Volume': 4479.0, 'Language': 'Polish', 'Name': 'Polish', 'Unit': 'documents'}, {'Unit': 'documents', 'Name': 'Turkish', 'Volume': 4480.0, 'Language': 'Turkish'}, {'Name': 'Hindi', 'Language': 'Hindi', 'Volume': 1499.0, 'Unit': 'documents'}, {'Name': 'Japanese', 'Language': 'Japanese', 'Volume': 1500.0, 'Unit': 'documents'}, {'Name': 'Korean', 'Volume': 1500.0, 'Unit': 'documents', 'Language': 'Korean'}]
null
unknown
2,022
['English', 'Spanish', 'Portuguese', 'Polish', 'Turkish', 'Hindi', 'Korean', 'Japanese']
null
['wikipedia']
text
null
A large-scale Multilingual Event Extraction (MEE) dataset covering 8 typologically different languages. Sourced from Wikipedia, it provides comprehensive annotations for entity mentions, event triggers, and event arguments across diverse topics like politics, technology, and military.
31,226
documents
null
['University of Oregon', 'Adobe Research']
null
null
null
null
false
other
Free
null
['named entity recognition']
null
null
null
['Amir Pouran Ben Veyseh', 'Javid Ebrahimi', 'Franck Dernoncourt', 'Thien Huu Nguyen']
['Department of Computer Science, University of Oregon', 'Adobe Research']
Event Extraction (EE) is one of the fundamental tasks in Information Extraction (IE) that aims to recognize event mentions and their arguments (i.e., participants) from text. Due to its importance, extensive methods and resources have been developed for Event Extraction. However, one limitation of current research for EE involves the under-exploration for non-English languages in which the lack of high-quality multilingual EE datasets for model training and evaluation has been the main hindrance. To address this limitation, we propose a novel Multilingual Event Extraction dataset (MEE) that provides annotation for more than 50K event mentions in 8 typologically different languages. MEE comprehensively annotates data for entity mentions, event triggers and event arguments. We conduct extensive experiments on the proposed dataset to reveal challenges and opportunities for multilingual EE.
1
1
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
0
1
1
null
1
null
null
null
1
1
1
multi
test
XCOPA
[{'Name': 'Estonian', 'Volume': 600.0, 'Unit': 'sentences', 'Language': 'Estonian'}, {'Name': 'Haitian Creole', 'Volume': 600.0, 'Unit': 'sentences', 'Language': 'Haitian Creole'}, {'Name': 'Indonesian', 'Volume': 600.0, 'Unit': 'sentences', 'Language': 'Indonesian'}, {'Name': 'Italian', 'Volume': 600.0, 'Unit': 'sentences', 'Language': 'Italian'}, {'Name': 'Eastern Apurímac Quechua', 'Volume': 600.0, 'Unit': 'sentences', 'Language': 'Eastern Apurímac Quechua'}, {'Name': 'Kiswahili', 'Volume': 600.0, 'Unit': 'sentences', 'Language': 'Swahili'}, {'Name': 'Tamil', 'Volume': 600.0, 'Unit': 'sentences', 'Language': 'Tamil'}, {'Name': 'Thai', 'Volume': 600.0, 'Unit': 'sentences', 'Language': 'Thai'}, {'Name': 'Turkish', 'Volume': 600.0, 'Unit': 'sentences', 'Language': 'Turkish'}, {'Name': 'Vietnamese', 'Volume': 600.0, 'Unit': 'sentences', 'Language': 'Vietnamese'}, {'Name': 'Mandarin Chinese', 'Volume': 600.0, 'Unit': 'sentences', 'Language': 'Chinese'}]
null
https://github.com/cambridgeltl/xcopa
CC BY 4.0
2,020
['Indonesian', 'Italian', 'Swahili', 'Thai', 'Turkish', 'Vietnamese', 'Chinese', 'Estonian', 'Haitian Creole', 'Eastern Apurímac Quechua', 'Tamil']
null
['public datasets']
text
null
XCOPA is a typologically diverse multilingual dataset for causal commonsense reasoning. It was created by translating and re-annotating the English COPA dataset's validation and test sets into 11 languages. The task is to choose the more plausible cause or effect for a given premise.
6,600
sentences
null
['Cambridge']
null
null
null
null
false
GitHub
Free
null
['commonsense reasoning']
null
null
null
['Edoardo M. Ponti', 'Goran Glavaš', 'Olga Majewska', 'Qianchu Liu', 'Ivan Vulić', 'Anna Korhonen']
['Language Technology Lab, TAL, University of Cambridge, UK', 'Data and Web Science Group, University of Mannheim, Germany']
In order to simulate human language capacity, natural language processing systems must be able to reason about the dynamics of everyday situations, including their possible causes and effects. Moreover, they should be able to generalise the acquired world knowledge to new languages, modulo cultural differences. Advances in machine reasoning and cross-lingual transfer depend on the availability of challenging evaluation benchmarks. Motivated by both demands, we introduce Cross-lingual Choice of Plausible Alternatives (XCOPA), a typologically diverse multilingual dataset for causal commonsense reasoning in 11 languages, which includes resource-poor languages like Eastern Apurímac Quechua and Haitian Creole. We evaluate a range of state-of-the-art models on this novel dataset, revealing that the performance of current methods based on multilingual pretraining and zero-shot fine-tuning falls short compared to translation-based transfer. Finally, we propose strategies to adapt multilingual models to out-of-sample resource-lean languages where only a small corpus or a bilingual dictionary is available, and report substantial improvements over the random baseline. The XCOPA dataset is freely available at github.com/cambridgeltl/xcopa.
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
MLQA
[{'Name': 'en', 'Volume': 12738.0, 'Unit': 'sentences', 'Language': 'English'}, {'Name': 'ar', 'Volume': 5852.0, 'Unit': 'sentences', 'Language': 'Arabic'}, {'Name': 'de', 'Volume': 5029.0, 'Unit': 'sentences', 'Language': 'German'}, {'Name': 'vi', 'Volume': 6006.0, 'Unit': 'sentences', 'Language': 'Vietnamese'}, {'Name': 'es', 'Volume': 5770.0, 'Unit': 'sentences', 'Language': 'Spanish'}, {'Name': 'zh', 'Volume': 5852.0, 'Unit': 'sentences', 'Language': 'Simplified Chinese'}, {'Name': 'hi', 'Volume': 5425.0, 'Unit': 'sentences', 'Language': 'Hindi'}]
null
https://github.com/facebookresearch/mlqa
CC BY-SA 3.0
2,020
['English', 'Arabic', 'German', 'Vietnamese', 'Spanish', 'Simplified Chinese', 'Hindi']
null
['wikipedia']
text
null
MLQA is a multi-way aligned extractive question answering evaluation benchmark in 7 languages, built from Wikipedia articles. It has over 12K instances in English and 5K in each other language, with each instance parallel between 4 languages on average.
46,461
documents
null
['Facebook']
null
null
null
null
false
GitHub
Free
null
['question answering']
null
null
null
['Patrick Lewis', 'Barlas Oğuz', 'Ruty Rinott', 'S. Riedel', 'Holger Schwenk']
['Facebook AI Research', 'University College London']
Question answering (QA) models have shown rapid progress enabled by the availability of large, high-quality benchmark datasets. Such annotated datasets are difficult and costly to collect, and rarely exist in languages other than English, making training QA systems in other languages challenging. An alternative to building large monolingual training datasets is to develop cross-lingual systems which can transfer to a target language without requiring training data in that language. In order to develop such systems, it is crucial to invest in high quality multilingual evaluation benchmarks to measure progress. We present MLQA, a multi-way aligned extractive QA evaluation benchmark intended to spur research in this area. MLQA contains QA instances in 7 languages, namely English, Arabic, German, Spanish, Hindi, Vietnamese and Simplified Chinese. It consists of over 12K QA instances in English and 5K in each other language, with each QA instance being parallel between 4 languages on average. MLQA is built using a novel alignment context strategy on Wikipedia articles, and serves as a cross-lingual extension to existing extractive QA datasets. We evaluate current state-of-the-art cross-lingual representations on MLQA, and also provide machine-translation-based baselines. In all cases, transfer results are shown to be significantly behind training-language performance.
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
M2DS
[{'Name': 'English', 'Volume': 67000.0, 'Unit': 'documents', 'Language': 'English'}, {'Name': 'Tamil', 'Volume': 32000.0, 'Unit': 'documents', 'Language': 'Tamil'}, {'Name': 'Japanese', 'Volume': 29000.0, 'Unit': 'documents', 'Language': 'Japanese'}, {'Name': 'Korean', 'Volume': 27000.0, 'Unit': 'documents', 'Language': 'Korean'}, {'Name': 'Sinhala', 'Volume': 23500.0, 'Unit': 'documents', 'Language': 'Sinhala'}]
null
https://huggingface.co/datasets/KushanH/m2ds
unknown
2,024
['English', 'Tamil', 'Japanese', 'Korean', 'Sinhala']
null
['news articles', 'public datasets']
text
null
M2DS is a multilingual multi-document summarization (MDS) dataset. It contains 180,000 news articles from the BBC, organized into 51,500 clusters across five languages: English, Japanese, Korean, Tamil, and Sinhala. The data covers the period from 2010 to 2023.
180,000
documents
null
['University of Moratuwa', 'ConscientAI']
null
null
null
null
false
HuggingFace
Free
null
['summarization']
null
null
null
['Kushan Hewapathirana', 'Nisansa de Silva', 'C.D. Athuraliya']
['Dept. of Computer Science & Engineering, University of Moratuwa, Sri Lanka', 'ConscientAI, Sri Lanka']
In the rapidly evolving digital era, there is an increasing demand for concise information as individuals seek to distil key insights from various sources. Recent attention from researchers on Multi-document Summarisation (MDS) has resulted in diverse datasets covering customer reviews, academic papers, medical and legal documents, and news articles. However, the English-centric nature of these datasets has created a conspicuous void for multilingual datasets in today's globalised digital landscape, where linguistic diversity is celebrated. Media platforms such as British Broadcasting Corporation (BBC) have disseminated news in 20+ languages for decades. With only 380 million people speaking English natively as their first language, accounting for less than 5% of the global population, the vast majority primarily relies on other languages. These facts underscore the need for inclusivity in MDS research, utilising resources from diverse languages. Recognising this gap, we present the Multilingual Dataset for Multi-document Summarisation (M2DS), which, to the best of our knowledge, is the first dataset of its kind. It includes document-summary pairs in five languages from BBC articles published during the 2010-2023 period. This paper introduces M2DS, emphasising its unique multilingual aspect, and includes baseline scores from state-of-the-art MDS models evaluated on our dataset.
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
XOR-TyDi
[{'Name': 'Ar', 'Volume': 17218.0, 'Unit': 'sentences', 'Language': 'Arabic'}, {'Name': 'Bn', 'Volume': 2682.0, 'Unit': 'sentences', 'Language': 'Bengali'}, {'Name': 'Fi', 'Volume': 9132.0, 'Unit': 'sentences', 'Language': 'Finnish'}, {'Name': 'Ja', 'Volume': 6531.0, 'Unit': 'sentences', 'Language': 'Japanese'}, {'Name': 'Ko', 'Volume': 2433.0, 'Unit': 'sentences', 'Language': 'Korean'}, {'Name': 'Ru', 'Volume': 8787.0, 'Unit': 'sentences', 'Language': 'Russian'}, {'Name': 'Te', 'Volume': 6276.0, 'Unit': 'sentences', 'Language': 'Telugu'}]
null
https://nlp.cs.washington.edu/xorqa/
CC BY-SA 4.0
2,021
['Arabic', 'Bengali', 'Finnish', 'Japanese', 'Korean', 'Russian', 'Telugu']
null
['public datasets']
text
null
XOR-TyDi QA brings together information-seeking questions, open-retrieval QA, and multilingual QA to create a multilingual open-retrieval QA dataset that enables cross-lingual answer retrieval. It consists of questions written by information-seeking native speakers in 7 typologically diverse languages and answer annotations that are retrieved from multilingual document collections.
53,059
sentences
null
[]
null
null
null
null
false
other
Free
null
['cross-lingual information retrieval', 'question answering']
null
null
null
['Akari Asai', 'Jungo Kasai', 'Jonathan H. Clark', 'Kenton Lee', 'Eunsol Choi', 'Hannaneh Hajishirzi']
['University of Washington', 'Google Research', 'The University of Texas at Austin', 'Allen Institute for AI']
Multilingual question answering tasks typically assume answers exist in the same language as the question. Yet in practice, many languages face both information scarcity -- where languages have few reference articles -- and information asymmetry -- where questions reference concepts from other cultures. This work extends open-retrieval question answering to a cross-lingual setting enabling questions from one language to be answered via answer content from another language. We construct a large-scale dataset built on questions from TyDi QA lacking same-language answers. Our task formulation, called Cross-lingual Open Retrieval Question Answering (XOR QA), includes 40k information-seeking questions from across 7 diverse non-English languages. Based on this dataset, we introduce three new tasks that involve cross-lingual document retrieval using multi-lingual and English resources. We establish baselines with state-of-the-art machine translation systems and cross-lingual pretrained models. Experimental results suggest that XOR QA is a challenging task that will facilitate the development of novel techniques for multilingual question answering. Our data and code are available at https://nlp.cs.washington.edu/xorqa.
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
0
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
Multilingual Hate Speech Detection Dataset
[{'Name': 'Arabic', 'Volume': 5790.0, 'Unit': 'sentences', 'Language': 'Arabic'}, {'Name': 'English', 'Volume': 96323.0, 'Unit': 'sentences', 'Language': 'English'}, {'Name': 'German', 'Volume': 6155.0, 'Unit': 'sentences', 'Language': 'German'}, {'Name': 'Indonesian', 'Volume': 13882.0, 'Unit': 'sentences', 'Language': 'Indonesian'}, {'Name': 'Italian', 'Volume': 9560.0, 'Unit': 'sentences', 'Language': 'Italian'}, {'Name': 'Polish', 'Volume': 9788.0, 'Unit': 'sentences', 'Language': 'Polish'}, {'Name': 'Portuguese', 'Volume': 5670.0, 'Unit': 'sentences', 'Language': 'Portuguese'}, {'Name': 'Spanish', 'Volume': 11365.0, 'Unit': 'sentences', 'Language': 'Spanish'}, {'Name': 'French', 'Volume': 1220.0, 'Unit': 'sentences', 'Language': 'French'}]
null
https://github.com/hate-alert/DE-LIMIT
MIT License
2,020
['Arabic', 'English', 'German', 'Indonesian', 'Italian', 'Polish', 'Portuguese', 'Spanish', 'French']
null
['public datasets', 'social media']
text
null
A large-scale multilingual hate speech dataset combining 16 publicly available sources across 9 languages, used to benchmark hate speech detection models in both low-resource and high-resource settings.
159,753
sentences
null
['Indian Institute of Technology Kharagpur']
null
null
null
null
false
GitHub
Free
null
['offensive language detection']
null
null
null
['Sai Saket Aluru', 'Binny Mathew', 'Punyajoy Saha', 'Animesh Mukherjee']
['Indian Institute of Technology Kharagpur']
Hate speech detection is a challenging problem with most of the datasets available in only one language: English. In this paper, we conduct a large scale analysis of multilingual hate speech in 9 languages from 16 different sources. We observe that in low resource setting, simple models such as LASER embedding with logistic regression performs the best, while in high resource setting BERT based models perform better. In case of zero-shot classification, languages such as Italian and Portuguese achieve good results. Our proposed framework could be used as an efficient solution for low-resource languages. These models could also act as good baselines for future multilingual hate speech detection tasks. We have made our code and experimental settings public for other researchers at https://github.com/punyajoy/DE-LIMIT.
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
MINION
[{'Name': 'English', 'Volume': 13000.0, 'Unit': 'documents', 'Language': 'English'}, {'Name': 'Spanish', 'Volume': 1500.0, 'Unit': 'documents', 'Language': 'Spanish'}, {'Name': 'Portuguese', 'Volume': 3268.0, 'Unit': 'documents', 'Language': 'Portuguese'}, {'Name': 'Polish', 'Volume': 4479.0, 'Unit': 'documents', 'Language': 'Polish'}, {'Name': 'Turkish', 'Volume': 4480.0, 'Unit': 'documents', 'Language': 'Turkish'}, {'Name': 'Hindi', 'Volume': 1499.0, 'Unit': 'documents', 'Language': 'Hindi'}, {'Name': 'Japanese', 'Volume': 1500.0, 'Unit': 'documents', 'Language': 'Japanese'}, {'Name': 'Korean', 'Volume': 1500.0, 'Unit': 'documents', 'Language': 'Korean'}]
null
unknown
2,022
['English', 'Spanish', 'Portuguese', 'Polish', 'Turkish', 'Hindi', 'Japanese', 'Korean']
null
['wikipedia']
text
null
MINION is a large-scale, multilingual dataset for Event Detection (ED). It contains over 50,000 manually annotated event triggers in 8 languages (English, Spanish, Portuguese, Polish, Turkish, Hindi, Japanese, Korean) sourced from Wikipedia articles. The annotation schema is a pruned version of the ACE 2005 ontology.
31,226
documents
null
['University of Oregon']
null
null
null
null
false
other
Free
null
['other']
null
null
null
['Amir Pouran Ben Veyseh', 'Minh Van Nguyen', 'Franck Dernoncourt', 'Thien Huu Nguyen']
['Dept. of Computer and Information Science, University of Oregon, Eugene, OR, USA', 'Adobe Research, Seattle, WA, USA']
Event Detection (ED) is the task of identifying and classifying trigger words of event mentions in text. Despite considerable research efforts in recent years for English text, the task of ED in other languages has been significantly less explored. Switching to non-English languages, important research questions for ED include how well existing ED models perform on different languages, how challenging ED is in other languages, and how well ED knowledge and annotation can be transferred across languages. To answer those questions, it is crucial to obtain multilingual ED datasets that provide consistent event annotation for multiple languages. There exist some multilingual ED datasets; however, they tend to cover a handful of languages and mainly focus on popular ones. Many languages are not covered in existing multilingual ED datasets. In addition, the current datasets are often small and not accessible to the public. To overcome those shortcomings, we introduce a new large-scale multilingual dataset for ED (called MINION) that consistently annotates events for 8 different languages; 5 of them have not been supported by existing multilingual datasets. We also perform extensive experiments and analysis to demonstrate the challenges and transferability of ED across languages in MINION that in all call for more research effort in this area.
1
1
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
0
1
1
null
1
null
null
null
1
1
1
multi
test
SEAHORSE
[{'Name': 'de', 'Language': 'German', 'Volume': 14591.0, 'Unit': 'sentences'}, {'Name': 'en', 'Language': 'English', 'Volume': 22339.0, 'Unit': 'sentences'}, {'Name': 'es', 'Language': 'Spanish', 'Volume': 14749.0, 'Unit': 'sentences'}, {'Name': 'ru', 'Language': 'Russian', 'Volume': 14542.0, 'Unit': 'sentences'}, {'Name': 'tr', 'Language': 'Turkish', 'Volume': 15418.0, 'Unit': 'sentences'}, {'Name': 'vi', 'Language': 'Vietnamese', 'Volume': 15006.0, 'Unit': 'sentences'}]
null
https://goo.gle/seahorse
CC BY 4.0
2,023
['German', 'English', 'Spanish', 'Russian', 'Turkish', 'Vietnamese']
null
['public datasets']
text
null
SEAHORSE is a large-scale dataset for multilingual, multifaceted summarization evaluation. It consists of 96,645 summaries with human ratings along 6 quality dimensions: comprehensibility, repetition, grammar, attribution, main ideas, and conciseness. It covers 6 languages, 9 systems, and 4 summarization datasets.
96,645
sentences
null
['Google']
null
null
null
null
false
GitHub
Free
null
['summarization']
null
null
null
['Elizabeth Clark', 'Shruti Rijhwani', 'Sebastian Gehrmann', 'Joshua Maynez', 'Roee Aharoni', 'Vitaly Nikolaev', 'Thibault Sellam', 'Aditya Siddhant', 'Dipanjan Das', 'Ankur P. Parikh']
['Google DeepMind', 'Google Research']
Reliable automatic evaluation of summarization systems is challenging due to the multifaceted and subjective nature of the task. This is especially the case for languages other than English, where human evaluations are scarce. In this work, we introduce SEAHORSE, a dataset for multilingual, multifaceted summarization evaluation. SEAHORSE consists of 96K summaries with human ratings along 6 dimensions of text quality: comprehensibility, repetition, grammar, attribution, main ideas, and conciseness, covering 6 languages, 9 systems and 4 datasets. As a result of its size and scope, SEAHORSE can serve both as a benchmark to evaluate learnt metrics, as well as a large-scale resource for training such metrics. We show that metrics trained with SEAHORSE achieve strong performance on the out-of-domain meta-evaluation benchmarks TRUE (Honovich et al., 2022) and mFACE (Aharoni et al., 2022). We make the SEAHORSE dataset and metrics publicly available for future research on multilingual and multifaceted summarization evaluation.
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
Mintaka
[{'Name': 'English', 'Language': 'English', 'Volume': 20000.0, 'Unit': 'sentences'}, {'Name': 'Arabic', 'Language': 'Arabic', 'Volume': 20000.0, 'Unit': 'sentences'}, {'Name': 'French', 'Language': 'French', 'Volume': 20000.0, 'Unit': 'sentences'}, {'Name': 'German', 'Volume': 20000.0, 'Language': 'German', 'Unit': 'sentences'}, {'Name': 'Hindi', 'Volume': 20000.0, 'Language': 'Hindi', 'Unit': 'sentences'}, {'Name': 'Italian', 'Volume': 20000.0, 'Language': 'Italian', 'Unit': 'sentences'}, {'Name': 'Japanese', 'Language': 'Japanese', 'Volume': 20000.0, 'Unit': 'sentences'}, {'Name': 'Portuguese', 'Language': 'Portuguese', 'Volume': 20000.0, 'Unit': 'sentences'}, {'Name': 'Spanish', 'Language': 'Spanish', 'Volume': 20000.0, 'Unit': 'sentences'}]
null
https://github.com/amazon-research/mintaka
CC BY 4.0
2,022
['English', 'Arabic', 'French', 'German', 'Hindi', 'Italian', 'Japanese', 'Portuguese', 'Spanish']
null
['wikipedia']
text
null
Mintaka is a large, complex, naturally-elicited, and multilingual question answering dataset. It contains 20,000 English question-answer pairs, which have been translated into 8 other languages, totaling 180,000 samples. The dataset is annotated with Wikidata entities and includes 8 types of complex questions.
180,000
sentences
null
['Amazon']
null
null
null
null
false
GitHub
Free
null
['question answering']
null
null
null
['Priyanka Sen', 'Alham Fikri Aji', 'Amir Saffari']
['Amazon Alexa AI']
We introduce Mintaka, a complex, natural, and multilingual dataset designed for experimenting with end-to-end question-answering models. Mintaka is composed of 20,000 question-answer pairs collected in English, annotated with Wikidata entities, and translated into Arabic, French, German, Hindi, Italian, Japanese, Portuguese, and Spanish for a total of 180,000 samples. Mintaka includes 8 types of complex questions, including superlative, intersection, and multi-hop questions, which were naturally elicited from crowd workers. We run baselines over Mintaka, the best of which achieves 38% hits@1 in English and 31% hits@1 multilingually, showing that existing models have room for improvement. We release Mintaka at https://github.com/amazon-research/mintaka.
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
Multi2WOZ
[{'Name': 'Arabic', 'Language': 'Arabic', 'Volume': 29500.0, 'Unit': 'sentences'}, {'Name': 'Chinese', 'Language': 'Chinese', 'Volume': 29500.0, 'Unit': 'sentences'}, {'Name': 'German', 'Language': 'German', 'Volume': 29500.0, 'Unit': 'sentences'}, {'Name': 'Russian', 'Language': 'Russian', 'Volume': 29500.0, 'Unit': 'sentences'}]
null
https://github.com/umanlp/Multi2WOZ
MIT License
2,022
['Arabic', 'Chinese', 'German', 'Russian']
null
['public datasets']
text
null
A multilingual, multi-domain task-oriented dialog (TOD) dataset in Arabic, Chinese, German, and Russian. It was created by translating and manually post-editing the 2,000 development and test dialogs from the English MultiWOZ 2.1 dataset, enabling reliable cross-lingual transfer evaluation.
118,000
sentences
null
['University of Mannheim']
null
null
null
null
false
GitHub
Free
null
['instruction tuning']
null
null
null
['Chia-Chien Hung', 'Anne Lauscher', 'Ivan Vulić', 'Simone Paolo Ponzetto', 'Goran Glavaš']
['Data and Web Science Group, University of Mannheim, Germany', 'MilaNLP, Bocconi University, Italy', 'LTL, University of Cambridge, UK', 'CAIDAS, University of Würzburg, Germany']
Research on (multi-domain) task-oriented dialog (TOD) has predominantly focused on the English language, primarily due to the shortage of robust TOD datasets in other languages, preventing the systematic investigation of cross-lingual transfer for this crucial NLP application area. In this work, we introduce Multi2WOZ, a new multilingual multi-domain TOD dataset, derived from the well-established English dataset MultiWOZ, that spans four typologically diverse languages: Chinese, German, Arabic, and Russian. In contrast to concurrent efforts, Multi2WOZ contains gold-standard dialogs in target languages that are directly comparable with development and test portions of the English dataset, enabling reliable and comparative estimates of cross-lingual transfer performance for TOD. We then introduce a new framework for multilingual conversational specialization of pretrained language models (PrLMs) that aims to facilitate cross-lingual transfer for arbitrary downstream TOD tasks. Using such conversational PrLMs specialized for concrete target languages, we systematically benchmark a number of zero-shot and few-shot cross-lingual transfer approaches on two standard TOD tasks: Dialog State Tracking and Response Retrieval. Our experiments show that, in most setups, the best performance entails the combination of (I) conversational specialization in the target language and (ii) few-shot transfer for the concrete TOD task. Most importantly, we show that our conversational specialization in the target language allows for an exceptionally sample-efficient few-shot transfer for downstream TOD tasks.
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
MTOP
[{'Name': 'English', 'Volume': 22288.0, 'Unit': 'sentences', 'Language': 'English'}, {'Name': 'German', 'Volume': 18788.0, 'Unit': 'sentences', 'Language': 'German'}, {'Name': 'French', 'Volume': 16584.0, 'Unit': 'sentences', 'Language': 'French'}, {'Name': 'Spanish', 'Volume': 15459.0, 'Unit': 'sentences', 'Language': 'Spanish'}, {'Name': 'Hindi', 'Volume': 16131.0, 'Unit': 'sentences', 'Language': 'Hindi'}, {'Name': 'Thai', 'Volume': 15195.0, 'Unit': 'sentences', 'Language': 'Thai'}]
null
https://fb.me/mtop_dataset
unknown
2,021
['English', 'German', 'French', 'Spanish', 'Hindi', 'Thai']
null
['other']
text
null
MTOP is a multilingual, almost-parallel dataset for task-oriented semantic parsing. It comprises 100k annotated utterances in 6 languages (English, German, French, Spanish, Hindi, Thai) across 11 domains. The dataset is designed to handle complex, nested queries through a compositional representation scheme.
104,445
sentences
null
['Facebook']
null
null
null
null
true
other
Free
null
['named entity recognition', 'intent classification']
null
null
null
['Haoran Li', 'Abhinav Arora', 'Shuohui Chen', 'Anchit Gupta', 'Sonal Gupta', 'Yashar Mehdad']
['Facebook']
Scaling semantic parsing models for task-oriented dialog systems to new languages is often expensive and time-consuming due to the lack of available datasets. Available datasets suffer from several shortcomings: a) they contain few languages b) they contain small amounts of labeled examples per language c) they are based on the simple intent and slot detection paradigm for non-compositional queries. In this paper, we present a new multilingual dataset, called MTOP, comprising of 100k annotated utterances in 6 languages across 11 domains. We use this dataset and other publicly available datasets to conduct a comprehensive benchmarking study on using various state-of-the-art multilingual pre-trained models for task-oriented semantic parsing. We achieve an average improvement of +6.3 points on Slot F1 for the two existing multilingual datasets, over best results reported in their experiments. Furthermore, we demonstrate strong zero-shot performance using pre-trained models combined with automatic translation and alignment, and a proposed distant supervision method to reduce the noise in slot label projection.
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
X-RiSAWOZ
[{'Name': 'Chinese', 'Language': 'Chinese', 'Volume': 18000.0, 'Unit': 'sentences'}, {'Name': 'English', 'Language': 'English', 'Volume': 18000.0, 'Unit': 'sentences'}, {'Name': 'French', 'Language': 'French', 'Volume': 18000.0, 'Unit': 'sentences'}, {'Name': 'Hindi', 'Language': 'Hindi', 'Volume': 18000.0, 'Unit': 'sentences'}, {'Name': 'Korean', 'Language': 'Korean', 'Volume': 18000.0, 'Unit': 'sentences'}]
null
https://github.com/stanford-oval/dialogues
custom
2,023
['Chinese', 'English', 'French', 'Hindi', 'Korean']
null
['public datasets']
text
null
A multi-domain, large-scale, and high-quality task-oriented dialogue benchmark, produced by translating the Chinese RiSAWOZ data to four diverse languages: English, French, Hindi, and Korean; and one code-mixed English-Hindi language. It is an end-to-end dataset for building fully-functioning agents.
90,000
sentences
null
['Stanford University']
null
null
null
null
false
GitHub
Free
null
['instruction tuning']
null
null
null
['Mehrad Moradshahi', 'Tianhao Shen', 'Kalika Bali', 'Monojit Choudhury', 'Gaël de Chalendar', 'Anmol Goel', 'Sungkyun Kim', 'Prashant Kodali', 'Ponnurangam Kumaraguru', 'Nasredine Semmar', 'Sina J. Semnani', 'Jiwon Seo', 'Vivek Seshadri', 'Manish Shrivastava', 'Michael Sun', 'Aditya Yadavalli', 'Chaobin You', 'Deyi Xiong', 'Monica S. Lam']
['Stanford University', 'Tianjin University', 'Microsoft', 'Université Paris-Saclay', 'International Institute of Information Technology, Hyderabad', 'Hanyang University', 'Karya Inc.']
Task-oriented dialogue research has mainly focused on a few popular languages like English and Chinese, due to the high dataset creation cost for a new language. To reduce the cost, we apply manual editing to automatically translated data. We create a new multilingual benchmark, X-RiSAWOZ, by translating the Chinese RiSAWOZ to 4 languages: English, French, Hindi, Korean; and a code-mixed English-Hindi language. X-RiSAWOZ has more than 18,000 human-verified dialogue utterances for each language, and unlike most multilingual prior work, is an end-to-end dataset for building fully-functioning agents. The many difficulties we encountered in creating X-RiSAWOZ led us to develop a toolset to accelerate the post-editing of a new language dataset after translation. This toolset improves machine translation with a hybrid entity alignment technique that combines neural with dictionary-based methods, along with many automated and semi-automated validation checks. We establish strong baselines for X-RiSAWOZ by training dialogue agents in the zero- and few-shot settings where limited gold data is available in the target language. Our results suggest that our translation and post-editing methodology and toolset can be used to create new high-quality multilingual dialogue agents cost-effectively. Our dataset, code, and toolkit are released open-source.
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
PRESTO
[{'Language': 'German', 'Name': 'German', 'Volume': 83584.0, 'Unit': 'sentences'}, {'Name': 'English', 'Unit': 'sentences', 'Language': 'English', 'Volume': 95671.0}, {'Unit': 'sentences', 'Name': 'Spanish', 'Language': 'Spanish', 'Volume': 96164.0}, {'Volume': 95870.0, 'Unit': 'sentences', 'Language': 'French', 'Name': 'French'}, {'Unit': 'sentences', 'Name': 'Hindi', 'Volume': 72107.0, 'Language': 'Hindi'}, {'Name': 'Japanese', 'Volume': 109528.0, 'Unit': 'sentences', 'Language': 'Japanese'}]
null
https://github.com/google-research-datasets/presto
CC BY 4.0
2,023
['German', 'English', 'Spanish', 'French', 'Hindi', 'Japanese']
null
['other']
text
null
PRESTO is a public, multilingual dataset of over 550K contextual conversations between humans and virtual assistants for parsing realistic task-oriented dialogs. It contains challenges like disfluencies, code-switching, and user revisions, and provides structured context (contacts, lists) for each example across six languages.
552,924
sentences
null
['Google Inc.']
null
null
null
null
false
GitHub
Free
null
['intent classification', 'instruction tuning']
null
null
null
['Rahul Goel', 'Waleed Ammar', 'Aditya Gupta', 'Siddharth Vashishtha', 'Motoki Sano', 'Faiz Surani', 'Max Chang', 'HyunJeong Choe', 'David Greene', 'Kyle He', 'Rattima Nitisaroj', 'Anna Trukhina', 'Shachi Paul', 'Pararth Shah', 'Rushin Shah', 'Zhou Yu']
['Google Inc.', 'University of Rochester', 'University of California, Santa Barbara', 'Columbia University']
Research interest in task-oriented dialogs has increased as systems such as Google Assistant, Alexa and Siri have become ubiquitous in everyday life. However, the impact of academic research in this area has been limited by the lack of datasets that realistically capture the wide array of user pain points. To enable research on some of the more challenging aspects of parsing realistic conversations, we introduce PRESTO, a public dataset of over 550K contextual multilingual conversations between humans and virtual assistants. PRESTO contains a diverse array of challenges that occur in real-world NLU tasks such as disfluencies, code-switching, and revisions. It is the only large scale human generated conversational parsing dataset that provides structured context such as a user's contacts and lists for each example. Our mT5 model based baselines demonstrate that the conversational phenomenon present in PRESTO are challenging to model, which is further pronounced in a low-resource setup.
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
LAHM
[{'Name': 'English', 'Volume': 105120.0, 'Unit': 'sentences', 'Language': 'English'}, {'Name': 'Hindi', 'Volume': 32734.0, 'Unit': 'sentences', 'Language': 'Hindi'}, {'Name': 'Arabic', 'Volume': 5394.0, 'Unit': 'sentences', 'Language': 'Arabic'}, {'Name': 'French', 'Volume': 20809.0, 'Unit': 'sentences', 'Language': 'French'}, {'Name': 'German', 'Volume': 8631.0, 'Unit': 'sentences', 'Language': 'German'}, {'Name': 'Spanish', 'Volume': 55148.0, 'Unit': 'sentences', 'Language': 'Spanish'}]
null
unknown
2,023
['English', 'Hindi', 'Arabic', 'French', 'German', 'Spanish']
null
['social media', 'news articles', 'public datasets']
text
null
A large-scale, semi-supervised dataset for multilingual and multi-domain hate speech identification. It contains nearly 300k tweets across 6 languages (English, Hindi, Arabic, French, German, Spanish) and 5 domains (Abuse, Racism, Sexism, Religious Hate, Extremism), created using a 3-layer annotation pipeline.
227,836
sentences
null
['Logically.ai']
null
null
null
null
false
other
Free
null
['offensive language detection']
null
null
null
['Ankit Yadav', 'Shubham Chandel', 'Sushant Chatufale', 'Anil Bandhakavi']
['Logically.ai']
Current research on hate speech analysis is typically oriented towards monolingual and single classification tasks. In this paper, we present a new multilingual hate speech analysis dataset for English, Hindi, Arabic, French, German and Spanish languages for multiple domains across hate speech - Abuse, Racism, Sexism, Religious Hate and Extremism. To the best of our knowledge, this paper is the first to address the problem of identifying various types of hate speech in these five wide domains in these six languages. In this work, we describe how we created the dataset, created annotations at high level and low level for different domains and how we use it to test the current state-of-the-art multilingual and multitask learning approaches. We evaluate our dataset in various monolingual, cross-lingual and machine translation classification settings and compare it against open source English datasets that we aggregated and merged for this task. Then we discuss how this approach can be used to create large scale hate-speech datasets and how to leverage our annotations in order to improve hate speech detection and classification in general.
1
1
null
0
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
0
0
0
null
1
null
null
null
1
1
1
multi
test
MARC
[{'Name': 'English', 'Volume': 2100000.0, 'Unit': 'sentences', 'Language': 'English'}, {'Name': 'Japanese', 'Volume': 2100000.0, 'Unit': 'sentences', 'Language': 'Japanese'}, {'Name': 'German', 'Volume': 2100000.0, 'Unit': 'sentences', 'Language': 'German'}, {'Name': 'French', 'Volume': 2100000.0, 'Unit': 'sentences', 'Language': 'French'}, {'Name': 'Spanish', 'Volume': 2100000.0, 'Unit': 'sentences', 'Language': 'Spanish'}, {'Name': 'Chinese', 'Volume': 2100000.0, 'Unit': 'sentences', 'Language': 'Chinese'}]
null
https://registry.opendata.aws/amazon-reviews-ml
custom
2,020
['Japanese', 'English', 'German', 'French', 'Spanish', 'Chinese']
null
['reviews']
text
null
Amazon product reviews dataset for multilingual text classification. The dataset contains reviews in English, Japanese, German, French, Chinese and Spanish, collected between November 1, 2015 and November 1, 2019.
12,600,000
sentences
null
['Amazon']
null
null
null
null
false
other
Free
null
['sentiment analysis', 'review classification']
null
null
null
['Phillip Keung', 'Yichao Lu', 'Gyorgy Szarvas', 'Noah A. Smith']
['Amazon', 'University of Washington']
We present the Multilingual Amazon Reviews Corpus (MARC), a large-scale collection of Amazon reviews for multilingual text classification. The corpus contains reviews in English, Japanese, German, French, Spanish, and Chinese, which were collected between 2015 and 2019. Each record in the dataset contains the review text, the review title, the star rating, an anonymized reviewer ID, an anonymized product ID, and the coarse-grained product category (e.g., 'books', 'appliances', etc.) The corpus is balanced across the 5 possible star ratings, so each rating constitutes 20% of the reviews in each language. For each language, there are 200,000, 5,000, and 5,000 reviews in the training, development, and test sets, respectively. We report baseline results for supervised text classification and zero-shot cross-lingual transfer learning by fine-tuning a multilingual BERT model on reviews data. We propose the use of mean absolute error (MAE) instead of classification accuracy for this task, since MAE accounts for the ordinal nature of the ratings.
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1
multi
test
MLSUM
[{'Name': 'FR', 'Volume': 424763.0, 'Unit': 'documents', 'Language': 'French'}, {'Name': 'DE', 'Volume': 242982.0, 'Unit': 'documents', 'Language': 'German'}, {'Name': 'ES', 'Volume': 290645.0, 'Unit': 'documents', 'Language': 'Spanish'}, {'Name': 'RU', 'Volume': 27063.0, 'Unit': 'documents', 'Language': 'Russian'}, {'Name': 'TR', 'Volume': 273617.0, 'Unit': 'documents', 'Language': 'Turkish'}]
null
https://github.com/recitalAI/MLSUM
custom
2,020
['French', 'German', 'Spanish', 'Russian', 'Turkish']
null
['news articles', 'web pages']
text
null
MLSUM is a large-scale multilingual summarization dataset with over 1.5 million article/summary pairs in French, German, Spanish, Russian, and Turkish. Collected from online newspapers, it is designed to complement the English CNN/Daily Mail dataset, enabling new research in cross-lingual summarization.
1,259,070
documents
null
['reciTAL', 'Sorbonne Université', 'CNRS']
null
null
null
null
false
GitHub
Free
null
['summarization']
null
null
null
['Thomas Scialom', 'Paul-Alexis Dray', 'Sylvain Lamprier', 'Benjamin Piwowarski', 'Jacopo Staiano']
['reciTAL, Paris, France', 'Sorbonne Université, CNRS, LIP6, F-75005 Paris, France', 'CNRS, France']
We present MLSUM, the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, Spanish, Russian, Turkish. Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community. We report cross-lingual comparative analyses based on state-of-the-art systems. These highlight existing biases which motivate the use of a multi-lingual dataset.
1
1
null
1
0
1
1
null
1
1
null
1
1
1
null
1
null
null
null
null
1
1
1
1
null
1
null
null
null
1
1
1