- mt
- ga
pretty_name: multi_eup
---
## Dataset Description

- **Homepage:**
- **Repository:** [Multi-EuP Dataset repository](https://github.com/jrnlp/Multi-EuP)
- **Paper:** [Multi-EuP: The Multilingual European Parliament Dataset for Analysis of Bias in Information Retrieval](https://arxiv.org/pdf/2311.01870.pdf)
- **Leaderboard:** Coming soon
- **Point of Contact:** [Jinrui Yang](mailto:[email protected])

### Dataset Summary

Multi-EuP is a new multilingual benchmark dataset comprising 22K documents collected from the European Parliament and spanning 24 languages. It is designed to investigate fairness in multilingual information retrieval (IR), supporting analysis of both language and demographic bias in a ranking context. It offers an authentic multilingual corpus, with topics translated into all 24 languages and cross-lingual relevance judgments, along with rich demographic information associated with its documents, facilitating the study of demographic bias.
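
As a quick start, here is a minimal loading sketch using the `datasets` library. The dataset identifier `Jinruiy/Multi-EuP` is an assumption inferred from this card's namespace, and the split and field names depend on the release, so verify them on the Hub.

```python
# Minimal loading sketch; the dataset id below is an assumption inferred
# from this card's namespace -- verify the exact identifier on the Hub.
from datasets import load_dataset

ds = load_dataset("Jinruiy/Multi-EuP")  # hypothetical dataset id
print(ds)              # available splits and features
print(ds["train"][0])  # first record; field names depend on the release
```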

### Languages

| Language | ISO code | Countries where official lang. | Native Usage | Total Usage | # Docs | Words per Doc (mean/median) |
|----------|----------|--------------------------------|--------------|-------------|--------|-----------------------------|
| English | EN | United Kingdom, Ireland, Malta | 13% | 51% | 7123 | 286/200 |
| German | DE | Germany, Belgium, Luxembourg | 16% | 32% | 3433 | 180/164 |
| French | FR | France, Belgium, Luxembourg | 12% | 26% | 2779 | 296/223 |
| Italian | IT | Italy | 13% | 16% | 1829 | 190/175 |
| Spanish | ES | Spain | 8% | 15% | 2371 | 232/198 |
| Polish | PL | Poland | 8% | 9% | 1841 | 155/148 |
| Romanian | RO | Romania | 5% | 5% | 794 | 186/172 |
| Dutch | NL | Netherlands, Belgium | 4% | 5% | 897 | 184/170 |
| Greek | EL | Greece, Cyprus | 3% | 4% | 707 | 209/205 |
| Hungarian | HU | Hungary | 3% | 3% | 614 | 126/128 |
| Portuguese | PT | Portugal | 2% | 3% | 1176 | 179/167 |
| Czech | CS | Czech Republic | 2% | 3% | 397 | 167/149 |
| Swedish | SV | Sweden | 2% | 3% | 531 | 175/165 |
| Bulgarian | BG | Bulgaria | 2% | 2% | 408 | 196/178 |
| Danish | DA | Denmark | 1% | 1% | 292 | 218/198 |
| Finnish | FI | Finland | 1% | 1% | 405 | 94/87 |
| Slovak | SK | Slovakia | 1% | 1% | 348 | 151/158 |
| Lithuanian | LT | Lithuania | 1% | 1% | 115 | 142/127 |
| Croatian | HR | Croatia | <1% | <1% | 524 | 183/164 |
| Slovene | SL | Slovenia | <1% | <1% | 270 | 201/163 |
| Estonian | ET | Estonia | <1% | <1% | 58 | 160/158 |
| Latvian | LV | Latvia | <1% | <1% | 89 | 111/123 |
| Maltese | MT | Malta | <1% | <1% | 178 | 117/115 |
| Irish | GA | Ireland | <1% | <1% | 33 | 198/172 |

*Table 1: Multi-EuP statistics, broken down by language: ISO language code; EU member states using the language officially; proportion of the EU population speaking the language natively and in total; number of debate speech documents in Multi-EuP; and words per document (mean/median).*
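
If the corpus is available locally, the `# Docs` and `Words per Doc` columns of Table 1 can be recomputed along the following lines. This is a sketch only: the file name `documents.jsonl` and the column names `language` and `text` are assumptions about the released format.

```python
# Sketch of recomputing the "# Docs" and "Words per Doc" columns of Table 1.
# The file name and the "language"/"text" column names are assumptions.
import pandas as pd

df = pd.read_json("documents.jsonl", lines=True)  # hypothetical file
df["n_words"] = df["text"].str.split().str.len()  # crude whitespace tokenization
stats = df.groupby("language")["n_words"].agg(
    n_docs="count", mean_words="mean", median_words="median"
)
print(stats.sort_values("n_docs", ascending=False).round(1))
```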

## Dataset Structure

The Multi-EuP dataset contains two files: the debate corpus and the MEP (Member of the European Parliament) information file.
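
Studying demographic bias requires linking the two files. Here is a minimal sketch of such a join, in which the file names and the `mep_id` join key are assumptions about the release format rather than documented fields.

```python
# Sketch of linking speeches to speaker demographics for bias analysis.
# File names and the "mep_id" join key are assumptions, not documented fields.
import pandas as pd

debates = pd.read_json("debate_corpus.jsonl", lines=True)  # hypothetical file
meps = pd.read_json("mep_info.jsonl", lines=True)          # hypothetical file

# Attach speaker attributes (e.g. nationality, gender) to each speech.
linked = debates.merge(meps, on="mep_id", how="left")
print(linked.head())
```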

### Data Instances

For each instance, there is a string for the article, a string for the highlights, and a string for the id. See the [CNN / Daily Mail dataset viewer](https://huggingface.co/datasets/viewer/?dataset=cnn_dailymail&config=3.0.0) to explore more examples.

```
{'id': '0054d6d30dbcad772e20b22771153a2a9cbeaf62',
 'article': '(CNN) -- An American woman died aboard a cruise ship that docked at Rio de Janeiro on Tuesday, the same ship on which 86 passengers previously fell ill, according to the state-run Brazilian news agency, Agencia Brasil. The American tourist died aboard the MS Veendam, owned by cruise operator Holland America. Federal Police told Agencia Brasil that forensic doctors were investigating her death. The ship's doctors told police that the woman was elderly and suffered from diabetes and hypertension, according the agency. The other passengers came down with diarrhea prior to her death during an earlier part of the trip, the ship's doctors said. The Veendam left New York 36 days ago for a South America tour.',
 'highlights': 'The elderly woman suffered from diabetes and hypertension, ship's doctors say .\nPreviously, 86 passengers had fallen ill on the ship, Agencia Brasil says .'}
```

### Data Fields

- `id`: a string containing the hexadecimal-formatted SHA-1 hash of the URL where the story was retrieved from
- `article`: a string containing the body of the news article
- `highlights`: a string containing the highlight of the article as written by the article author
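
Given this description of `id`, an instance id can be reproduced as the hex-encoded SHA-1 digest of the source URL, as sketched below; the URL shown is a hypothetical placeholder, since the real source URLs are not listed on this card.

```python
# Reproducing an instance id as the hex-encoded SHA-1 digest of the source URL.
# The URL below is a hypothetical placeholder, not a real source URL.
import hashlib

url = "https://www.cnn.com/some/story/path"
story_id = hashlib.sha1(url.encode("utf-8")).hexdigest()
print(story_id)  # a 40-character hex string, like the 'id' shown above
```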


## Dataset Creation

### Curation Rationale

Version 1.0.0 aimed to support supervised neural methodologies for machine reading and question answering with a large amount of real natural-language training data, releasing about 313k unique articles and nearly 1M Cloze-style questions to go with the articles. Versions 2.0.0 and 3.0.0 changed the structure of the dataset to support summarization rather than question answering. Version 3.0.0 provided a non-anonymized version of the data, whereas both previous versions were preprocessed to replace named entities with unique identifier labels.

### Source Data

#### Initial Data Collection and Normalization

The code for the non-anonymized version is made publicly available by Abigail See of Stanford University, Peter J. Liu of Google Brain, and Christopher D. Manning of Stanford University at <https://github.com/abisee/cnn-dailymail>. The work at Stanford University was supported by the DARPA DEFT Program (AFRL contract no. FA8750-13-2-0040).

### Licensing Information

The CNN / Daily Mail dataset version 1.0.0 is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).

### Citation Information

```
@inproceedings{see-etal-2017-get,
    title = "Get To The Point: Summarization with Pointer-Generator Networks",
    author = "See, Abigail and
      Liu, Peter J. and
      Manning, Christopher D.",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/P17-1099",
    doi = "10.18653/v1/P17-1099",
    pages = "1073--1083",
    abstract = "Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.",
}
```