Neural Machine Translation parallel corpora
Introduction
We use OpusTools to extract resources from the OPUS project, a well-known platform for parallel corpora, and build a multilingual dataset. Specifically, we collect parallel corpora from prominent projects within OPUS, including NLLB, CCMatrix, and OpenSubtitles.
This collection process yields a corpus of more than 3 TB, covering 60 languages and over 1,900 language pairs.
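The extraction can be scripted with the OpusTools Python package (`pip install opustools`). The sketch below pulls one sub-corpus for one language pair in Moses plain-text format; the corpus name, output file names, and download directory are illustrative choices, not fixed by our pipeline.

```python
# Minimal sketch: download and align one OPUS sub-corpus with OpusTools.
# Corpus name, output paths, and download_dir are illustrative assumptions.
from opustools import OpusRead

reader = OpusRead(
    directory='OpenSubtitles',           # OPUS sub-corpus (also: NLLB, CCMatrix)
    source='en',
    target='zh',
    write_mode='moses',                  # plain-text Moses format
    write=['opensub.en', 'opensub.zh'],  # one sentence per line, line-aligned
    download_dir='raw_data',
)
reader.printPairs()  # downloads the archives and writes the aligned pair files
```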
Preprocessing
High-quality parallel corpora are essential for training strong NMT models. However, raw data, such as web-crawled content, often contains substantial noise, including length inconsistencies, irrelevant content, and sensitive material, which can degrade model performance. To address this, we adopt a six-stage cleaning pipeline inspired by previous work to ensure high-quality multilingual parallel corpora.
1. Text Extraction and Preprocessing
- Compressed files for each target language pair are obtained from the OPUS corpus.
- After decompression, only customized plain-text files in Moses format are retained.
- For example, for English-Chinese translation, only `.en` and `.zh` files are preserved (see the extraction sketch after this list).
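A minimal sketch of this step, assuming the OPUS archives are standard ZIP files; the archive path and output directory are hypothetical.

```python
# Sketch: unpack an OPUS archive and keep only the Moses plain-text
# language files (e.g. *.en and *.zh). Paths are illustrative.
import zipfile

def extract_moses_pair(archive: str, src: str, tgt: str, out_dir: str) -> None:
    """Extract only the .src/.tgt plain-text files; skip everything else."""
    keep = ('.' + src, '.' + tgt)
    with zipfile.ZipFile(archive) as zf:
        for name in zf.namelist():
            if name.endswith(keep):  # retain only the two language files
                zf.extract(name, out_dir)

extract_moses_pair('raw_data/opensub.en-zh.txt.zip', 'en', 'zh', 'extracted')
```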
2. Proportion of Characters
To remove noisy or irrelevant sentences, we apply the following filters (a code sketch follows the list):
- Punctuation ratio filtering: Sentences in which punctuation makes up more than 50% of the characters are discarded.
- Rule-based filtering: Sentences containing only spaces, invalid UTF-8 characters, or excessively long tokens (e.g., DNA-like sequences) are removed.
- Character ratio filtering: Sentences with a low proportion of target-language characters are eliminated to ensure relevance.
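A sketch of the character-proportion filters described above. The 50% punctuation threshold comes from the list; the 0.3 target-character floor and the CJK predicate are illustrative assumptions.

```python
# Character-proportion filters. string.punctuation covers ASCII only;
# a production filter would also consult Unicode punctuation categories.
import string

PUNCT = set(string.punctuation)

def punctuation_ratio_ok(sent: str, max_ratio: float = 0.5) -> bool:
    """Keep sentences whose punctuation share is at most max_ratio."""
    if not sent:
        return False
    return sum(ch in PUNCT for ch in sent) / len(sent) <= max_ratio

def target_char_ratio_ok(sent: str, is_target_char, min_ratio: float = 0.3) -> bool:
    """Keep sentences dominated by target-language characters (assumed 0.3 floor)."""
    if not sent:
        return False
    return sum(is_target_char(ch) for ch in sent) / len(sent) >= min_ratio

# Example: Chinese target side, CJK Unified Ideographs as target characters.
is_cjk = lambda ch: '\u4e00' <= ch <= '\u9fff'
assert target_char_ratio_ok('这是一个例子。', is_cjk)
```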
3. Data Length Filtering
Sentence length inconsistencies are controlled as follows (a code sketch follows the list):
- Tokenization: Both source and target texts are tokenized with SentencePiece.
- Length ratio filtering: Sentence pairs in which one side exceeds three times the length of the other are removed.
- Extreme-length removal: Documents with an average line length below 10 words, as well as lines longer than 250 characters, are discarded.
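A sketch of the tokenization and length-ratio filter, assuming a pretrained SentencePiece model at the hypothetical path `spm.model`; the 3x ratio bound follows the list above.

```python
# Length-ratio filter using SentencePiece tokenization.
# 'spm.model' is a hypothetical path to a trained SentencePiece model.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file='spm.model')

def length_ratio_ok(src: str, tgt: str, max_ratio: float = 3.0) -> bool:
    """Drop pairs where one side is over max_ratio times longer than the other."""
    src_len = len(sp.encode(src, out_type=str))
    tgt_len = len(sp.encode(tgt, out_type=str))
    if min(src_len, tgt_len) == 0:
        return False
    return max(src_len, tgt_len) / min(src_len, tgt_len) <= max_ratio
```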
4. Sensitive Word Filtering
To prevent the model from learning harmful language (a code sketch follows the list):
- A predefined list of sensitive words is used.
- Sentences in which sensitive words occur at high frequency (e.g., a sensitive-token ratio above 0.5) are removed.
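A sketch of this filter. The word list is a placeholder for the predefined list, and we read "frequency > 0.5" as the share of sensitive tokens within a sentence.

```python
# Sensitive-word filter. SENSITIVE is a placeholder; the real pipeline
# uses a predefined list. The 0.5 threshold follows the description above.
SENSITIVE = {'badword1', 'badword2'}  # hypothetical entries

def sensitive_ratio_ok(sent: str, max_ratio: float = 0.5) -> bool:
    """Keep sentences whose sensitive-token share is at most max_ratio."""
    tokens = sent.split()
    if not tokens:
        return False
    hits = sum(tok.lower() in SENSITIVE for tok in tokens)
    return hits / len(tokens) <= max_ratio
```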
5. Duplication Removal
- Duplicate sentence pairs are identified and removed with a deduplication script (sketched below).
- Only the first occurrence of each pair is retained.
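A minimal sketch of first-occurrence deduplication over (source, target) pairs; hashing each pair keeps memory bounded on large corpora.

```python
# First-occurrence deduplication over aligned sentence pairs.
import hashlib

def dedupe(pairs):
    """Yield each (src, tgt) pair once, keeping its first occurrence."""
    seen = set()
    for src, tgt in pairs:
        key = hashlib.sha1(f'{src}\t{tgt}'.encode('utf-8')).digest()
        if key not in seen:
            seen.add(key)
            yield src, tgt

pairs = [('Hello.', '你好。'), ('Hello.', '你好。'), ('Bye.', '再见。')]
assert list(dedupe(pairs)) == [('Hello.', '你好。'), ('Bye.', '再见。')]
```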
6. Normalization
- Text is normalized to unify punctuation, numbers, and spacing (a code sketch follows the list).
- Unicode standardization ensures consistent symbol encoding.
- All quotation marks are normalized to a single standard format to reduce vocabulary size and improve model efficiency.
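A sketch of the normalization step: NFKC Unicode standardization plus a quotation-mark mapping. The exact quote inventory is an illustrative assumption.

```python
# Normalization: NFKC unifies digit/width/symbol variants; the quote map
# below is an assumed inventory of marks folded to a single ASCII form.
import unicodedata

QUOTES = {'\u201c': '"', '\u201d': '"', '\u2018': "'", '\u2019': "'",
          '\u00ab': '"', '\u00bb': '"'}

def normalize(sent: str) -> str:
    """Apply NFKC and map curly/angled quotes to ASCII quotes."""
    sent = unicodedata.normalize('NFKC', sent)
    return ''.join(QUOTES.get(ch, ch) for ch in sent)

assert normalize('“Ｈello”') == '"Hello"'
```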
Citation Information
If you use this dataset, please cite our paper: https://arxiv.org/abs/2505.14256
```bibtex
@misc{zhu2025fuximtsparsifyinglargelanguage,
  title={FuxiMT: Sparsifying Large Language Models for Chinese-Centric Multilingual Machine Translation},
  author={Shaolin Zhu and Tianyu Dong and Bo Li and Deyi Xiong},
  year={2025},
  eprint={2505.14256},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2505.14256},
}
```