PRIM: Towards Practical In-Image Multilingual Machine Translation (EMNLP 2025 Main)
📄Paper arXiv | 💻Code GitHub | 🤗Benchmark HuggingFace | 🤗Model HuggingFace
Introduction
This repository provides the MTedIIMT training set, which is introduced in our paper PRIM: Towards Practical In-Image Multilingual Machine Translation.
The text in the source and target images is derived from MTed [1], while the background images are captured from TED talks and rendered using the TRDG toolkit [2]. We sincerely thank the authors of these works for their valuable contributions.
Extraction
Each language-pair directory (e.g., ./en-de) contains multiple compressed parts (.part-aa, .part-ab, …) along with a checksum file (SHA256SUMS.txt).
The split parts must be concatenated back into a single `.tar.gz` archive before extraction. You can concatenate and extract in one step:
```shell
cd ./en-de
cat MTedIIMT_subset_en_de_lmdb.tar.gz.part-* | tar -xzf - -C /path/to/dest
```
- `part-*` expands to all split files in order (`aa`, `ab`, `ac`, …).
- Use `-C /path/to/dest` to specify the target extraction directory (create it beforehand).
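Before concatenating, it is worth verifying the downloaded parts against `SHA256SUMS.txt`. The snippet below is a self-contained sketch of that check: it builds a dummy part file and checksum list so it runs anywhere, whereas in the actual language-pair directory you would simply run `sha256sum -c SHA256SUMS.txt`.

```shell
# Self-contained sketch: create a dummy split part, record its digest,
# then verify it -- the same `sha256sum -c` check used on the real parts.
dir=$(mktemp -d)
cd "$dir"
printf 'dummy part data' > MTedIIMT_subset_en_de_lmdb.tar.gz.part-aa
sha256sum MTedIIMT_subset_en_de_lmdb.tar.gz.part-aa > SHA256SUMS.txt
sha256sum -c SHA256SUMS.txt
```

`sha256sum -c` prints one `OK` line per listed file; any `FAILED` line means that part should be re-downloaded before concatenation and extraction.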
Citation
If you find our work helpful, please consider citing our paper:
@misc{tian2025primpracticalinimagemultilingual,
title={PRIM: Towards Practical In-Image Multilingual Machine Translation},
author={Yanzhi Tian and Zeming Liu and Zhengyang Liu and Chong Feng and Xin Li and Heyan Huang and Yuhang Guo},
year={2025},
eprint={2509.05146},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2509.05146},
}
[1] The Multitarget TED Talks Task. http://www.cs.jhu.edu/~kevinduh/a/multitarget-tedtalks/
[2] TRDG (TextRecognitionDataGenerator). https://github.com/Belval/TextRecognitionDataGenerator