---
dataset_info:
  features:
  - name: en
    dtype: string
  - name: vi
    dtype: string
  splits:
  - name: train
    num_bytes: 536891664
    num_examples: 2977999
  - name: dev
    num_bytes: 3341942
    num_examples: 18719
  - name: test
    num_bytes: 3633646
    num_examples: 19151
  download_size: 317794951
  dataset_size: 543867252
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: dev
    path: data/dev-*
  - split: test
    path: data/test-*
license: apache-2.0
---
|
|
|
- The original dataset is the high-quality work of VinAI Research.
- I simply processed and reformatted the data into the standard Hugging Face `datasets` structure, making it accessible for pretraining compact models (see the loading sketch below).
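Once reformatted this way, the parallel corpus can be loaded directly with the `datasets` library. This is a minimal sketch; the repository id below is a placeholder, since the card does not state the actual Hub id.

```python
from datasets import load_dataset

# Placeholder repository id; substitute this dataset's actual Hugging Face Hub id.
dataset = load_dataset("username/PhoMT-en-vi")

# The card metadata declares three splits (train, dev, test), each with
# parallel "en" and "vi" string fields.
print(dataset)              # DatasetDict with the three splits
print(dataset["train"][0])  # one parallel pair, e.g. {"en": "...", "vi": "..."}
```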
|
|
|
Please cite the original PhoMT paper if you use this dataset:

```bibtex
@inproceedings{PhoMT,
    title     = {{PhoMT: A High-Quality and Large-Scale Benchmark Dataset for Vietnamese-English Machine Translation}},
    author    = {Long Doan and Linh The Nguyen and Nguyen Luong Tran and Thai Hoang and Dat Quoc Nguyen},
    booktitle = {Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
    year      = {2021},
    pages     = {4495--4503}
}
```