---
license: cc-by-nc-4.0
task_categories:
- translation
language:
- am
- ar
- ay
- bm
- bbj
- bn
- bg
- ca
- cs
- ku
- da
- de
- el
- en
- et
- ee
- fil
- fi
- fr
- fon
- gu
- ha
- he
- hi
- hu
- ig
- id
- it
- ja
- kk
- km
- ko
- lv
- lt
- lg
- luo
- mk
- mos
- my
- nl
- ne
- or
- pa
- pcm
- fa
- pl
- pt
- mg
- ro
- ru
- es
- sr
- sq
- sw
- sv
- tet
- tn
- tr
- tw
- ur
- wo
- yo
- zh
- zu
multilinguality:
- translation
- multilingual
pretty_name: PolyNewsParallel
size_categories:
- 1K<n<10K
---

- **Point of Contact:** [Andreea Iana](https://andreeaiana.github.io/)
- **License:** [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/)

### Dataset Summary

PolyNewsParallel is a multilingual parallel dataset containing news titles for 833 language pairs. It covers 65 languages and 17 scripts.

### Uses

This dataset can be used for domain adaptation of language models or machine translation.

### Languages

There are 65 languages available:

| **Code**   | **Language**               | **Script**         |
|:-----------|:---------------------------|:-------------------|
| amh\_Ethi | Amharic | Ethiopic |
| arb\_Arab | Modern Standard Arabic | Arabic |
| ayr\_Latn | Central Aymara | Latin |
| bam\_Latn | Bambara | Latin |
| bbj\_Latn | Ghomálá’ | Latin |
| ben\_Beng | Bengali | Bengali |
| bul\_Cyrl | Bulgarian | Cyrillic |
| cat\_Latn | Catalan | Latin |
| ces\_Latn | Czech | Latin |
| ckb\_Arab | Central Kurdish | Arabic |
| dan\_Latn | Danish | Latin |
| deu\_Latn | German | Latin |
| ell\_Grek | Greek | Greek |
| eng\_Latn | English | Latin |
| est\_Latn | Estonian | Latin |
| ewe\_Latn | Éwé | Latin |
| fil\_Latn | Filipino | Latin |
| fin\_Latn | Finnish | Latin |
| fon\_Latn | Fon | Latin |
| fra\_Latn | French | Latin |
| guj\_Gujr | Gujarati | Gujarati |
| hau\_Latn | Hausa | Latin |
| heb\_Hebr | Hebrew | Hebrew |
| hin\_Deva | Hindi | Devanagari |
| hun\_Latn | Hungarian | Latin |
| ibo\_Latn | Igbo | Latin |
| ind\_Latn | Indonesian | Latin |
| ita\_Latn | Italian | Latin |
| jpn\_Jpan | Japanese | Japanese |
| kaz\_Cyrl | Kazakh | Cyrillic |
| khm\_Khmr | Khmer | Khmer |
| kor\_Hang | Korean | Hangul |
| lav\_Latn | Latvian | Latin |
| lit\_Latn | Lithuanian | Latin |
| lug\_Latn | Ganda | Latin |
| luo\_Latn | Luo | Latin |
| mkd\_Cyrl | Macedonian | Cyrillic |
| mos\_Latn | Mossi | Latin |
| mya\_Mymr | Burmese | Myanmar |
| nld\_Latn | Dutch | Latin |
| npi\_Deva | Nepali | Devanagari |
| ory\_Orya | Odia | Oriya |
| pan\_Guru | Eastern Panjabi | Gurmukhi |
| pcm\_Latn | Nigerian Pidgin | Latin |
| pes\_Arab | Western Persian | Arabic |
| plt\_Latn | Malagasy | Latin |
| pol\_Latn | Polish | Latin |
| por\_Latn | Portuguese | Latin |
| ron\_Latn | Romanian | Latin |
| rus\_Cyrl | Russian | Cyrillic |
| spa\_Latn | Spanish | Latin |
| sqi\_Latn | Albanian | Latin |
| srp\_Latn | Serbian | Latin |
| swe\_Latn | Swedish | Latin |
| swh\_Latn | Swahili | Latin |
| tet\_Latn | Tetun | Latin |
| tsn\_Latn | Tswana | Latin |
| tur\_Latn | Turkish | Latin |
| twi\_Latn | Twi | Latin |
| urd\_Arab | Urdu | Arabic |
| wol\_Latn | Wolof | Latin |
| yor\_Latn | Yorùbá | Latin |
| zho\_Hans | Chinese | Han (Simplified) |
| zho\_Hant | Chinese | Han (Traditional) |
| zul\_Latn | Zulu | Latin |

The heatmap shows the language pairs available, as well as the number of articles per language pair.
*PolyNewsParallel: number of texts per language pair.*
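Each language pair is exposed as its own dataset configuration. Judging from the example configuration `eng_Latn-ron_Latn` used in this card, a configuration name appears to join the two FLORES-200-style codes from the table above with a hyphen; the helper below is a hypothetical convenience, not part of the dataset's API.

```python
# Hypothetical helper: build a PolyNewsParallel configuration name from two
# FLORES-200-style language codes, as inferred from the example config
# 'eng_Latn-ron_Latn' in this card.
def pair_config(src_code: str, tgt_code: str) -> str:
    return f"{src_code}-{tgt_code}"

config_name = pair_config("eng_Latn", "ron_Latn")
print(config_name)  # eng_Latn-ron_Latn

# The pair could then be loaded with (requires network access):
# from datasets import load_dataset
# data = load_dataset("aiana94/polynews-parallel", config_name)
```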
## Dataset Structure

### Data Instances

```
>>> from datasets import load_dataset
>>> data = load_dataset('aiana94/polynews-parallel', 'eng_Latn-ron_Latn')
# Please specify the language pair code.

# A data point example is below:
{
  "src": "They continue to support the view that this decision will have a lasting negative impact on the rule of law in the country. ",
  "tgt": "Ei continuă să creadă că această decizie va avea efecte negative pe termen lung asupra statului de drept în țară. ",
  "provenance": "globalvoices"
}
```

### Data Fields

- src (string): source news text
- tgt (string): target news text
- provenance (string): source dataset for the news example

### Data Splits

For all language pairs, there is only the `train` split.

## Dataset Creation

### Curation Rationale

Multiple multilingual, human-translated datasets containing news texts have been released in recent years. However, these datasets are stored in different formats on various websites, and many contain numerous near-duplicates. With PolyNewsParallel, we aim to provide an easily accessible, unified, and deduplicated parallel dataset that combines these disparate data sources, and which can be used for domain adaptation of language models or machine translation in both high-resource and low-resource languages.

### Source Data

The source data consists of the following multilingual news datasets:

- [GlobalVoices](https://opus.nlpl.eu/GlobalVoices/corpus/version/GlobalVoices) (v2018q4)
- [WMT-News](https://opus.nlpl.eu/WMT-News/corpus/version/WMT-News) (v2019)
- [MAFAND](https://huggingface.co/datasets/masakhane/mafand) (`train` split)

#### Data Collection and Processing

We processed the data using a **working script** that covers the entire processing pipeline. It can be found [here](https://github.com/andreeaiana/nase/script/polynews). The data processing pipeline consists of:

1. Downloading the WMT-News and GlobalVoices news from OPUS.
2. Loading the MAFAND datasets from the Hugging Face Hub (only the `train` splits).
3. Concatenating, per language, all news texts from the source datasets.
4. Data cleaning (e.g., removal of exact duplicates, short texts, and texts in other scripts).
5. [MinHash near-deduplication](https://github.com/bigcode-project/bigcode-dataset/blob/main/near_deduplication/minhash_deduplication.py) per language.

### Annotations

We augment the original samples with a `provenance` annotation, which specifies the original data source from which a particular example stems.

#### Personal and Sensitive Information

The data is sourced from newspapers and contains mentions of public figures and individuals.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

Users should keep in mind that the dataset contains short news texts (mostly titles), which might limit the applicability of systems developed on it to other domains.

## Additional Information

### Licensing Information

The dataset is released under the [CC BY-NC Attribution-NonCommercial 4.0 International license](https://creativecommons.org/licenses/by-nc/4.0/).

### Citation Information

**BibTeX:**

[More Information Needed]
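The MinHash near-deduplication step in the processing pipeline above uses the linked bigcode implementation. Purely as an illustration of the underlying idea, here is a minimal self-contained sketch; the shingle size, permutation count, and hash function are illustrative choices, not the pipeline's actual parameters.

```python
import hashlib

def shingles(text: str, n: int = 3) -> set:
    # Word n-grams ("shingles") of the lowercased text.
    toks = text.lower().split()
    return {" ".join(toks[i:i + n]) for i in range(max(1, len(toks) - n + 1))}

def minhash(shingle_set: set, num_perm: int = 64) -> list:
    # One seeded hash per "permutation"; the signature keeps each minimum.
    sig = []
    for seed in range(num_perm):
        sig.append(min(
            int.from_bytes(hashlib.md5(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingle_set
        ))
    return sig

def estimated_jaccard(sig_a: list, sig_b: list) -> float:
    # The fraction of matching signature positions estimates Jaccard similarity.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

near_a = minhash(shingles("breaking news from the summit today"))
near_b = minhash(shingles("breaking news from the summit this morning"))
other = minhash(shingles("an entirely unrelated sentence about football"))

# Near-duplicate titles score higher than unrelated ones; pairs above a
# chosen similarity threshold would be collapsed to a single entry.
```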