PRIM: Towards Practical In-Image Multilingual Machine Translation (EMNLP 2025 Main)
📄 Paper: arXiv | 💻 Code: GitHub | 🤗 Benchmark: HuggingFace | 🤗 Model: HuggingFace
Introduction
This repository provides the MTedIIMT training set introduced in our paper PRIM: Towards Practical In-Image Multilingual Machine Translation.
The text in the source and target images is derived from MTed [1], while the background images are captured from TED videos, with the text rendered onto them using the TRDG toolkit [2]. We sincerely thank the authors of these works for their valuable contributions.
Extraction
Each language-pair directory (e.g., ./en-de) contains multiple compressed parts (.part-aa, .part-ab, …) along with a checksum file (SHA256SUMS.txt).
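Before concatenating, you may want to verify the downloaded parts against the checksum file (a minimal sketch, assuming SHA256SUMS.txt follows the standard `sha256sum` format):

```bash
cd ./en-de
sha256sum -c SHA256SUMS.txt  # every listed part should report "OK"
```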
The split parts must be concatenated back into a single .tar.gz stream before extraction; concatenation and extraction can be done in one step:
```bash
cd ./en-de
cat MTedIIMT_subset_en_de_lmdb.tar.gz.part-* | tar -xzf - -C /path/to/dest
```
- `part-*` expands to all split files in order (`aa`, `ab`, `ac`, …).
- Use `-C /path/to/dest` to specify the target extraction directory (create it beforehand, e.g. `mkdir -p /path/to/dest`).
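The `_lmdb` suffix in the archive name indicates the extracted data is an LMDB database. Below is a minimal sketch for inspecting it with the `lmdb` Python package; the database path is a hypothetical placeholder and the key/value layout is not documented here, so adapt both to what you find after extraction.

```python
# Minimal sketch: open the extracted LMDB read-only and inspect a few records.
# Requires the `lmdb` package (pip install lmdb). DB_PATH is a hypothetical
# placeholder -- point it at the directory produced by the extraction step.
import lmdb

DB_PATH = "/path/to/dest/MTedIIMT_subset_en_de_lmdb"  # assumed layout

env = lmdb.open(DB_PATH, readonly=True, lock=False, readahead=False)
with env.begin() as txn:
    # Print the first few keys and value sizes to discover the record schema.
    for i, (key, value) in enumerate(txn.cursor()):
        print(key, len(value))
        if i >= 4:
            break
env.close()
```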
Citation
If you find our work helpful, we would greatly appreciate it if you could cite our paper:
```bibtex
@inproceedings{tian-etal-2025-prim,
  title = "{PRIM}: Towards Practical In-Image Multilingual Machine Translation",
  author = "Tian, Yanzhi and
    Liu, Zeming and
    Liu, Zhengyang and
    Feng, Chong and
    Li, Xin and
    Huang, Heyan and
    Guo, Yuhang",
  editor = "Christodoulopoulos, Christos and
    Chakraborty, Tanmoy and
    Rose, Carolyn and
    Peng, Violet",
  booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
  month = nov,
  year = "2025",
  address = "Suzhou, China",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2025.emnlp-main.691/",
  pages = "13693--13708",
  ISBN = "979-8-89176-332-6",
  abstract = "In-Image Machine Translation (IIMT) aims to translate images containing texts from one language to another. Current research of end-to-end IIMT mainly conducts on synthetic data, with simple background, single font, fixed text position, and bilingual translation, which can not fully reflect real world, causing a significant gap between the research and practical conditions. To facilitate research of IIMT in real-world scenarios, we explore Practical In-Image Multilingual Machine Translation (IIMMT). In order to convince the lack of publicly available data, we annotate the PRIM dataset, which contains real-world captured one-line text images with complex background, various fonts, diverse text positions, and supports multilingual translation directions. We propose an end-to-end model VisTrans to handle the challenge of practical conditions in PRIM, which processes visual text and background information in the image separately, ensuring the capability of multilingual translation while improving the visual quality. Experimental results indicate the VisTrans achieves a better translation quality and visual effect compared to other models. The code and dataset are available at: https://github.com/BITHLP/PRIM."
}
```
[1] The Multitarget TED Talks Task. http://www.cs.jhu.edu/~kevinduh/a/multitarget-tedtalks/
[2] TextRecognitionDataGenerator (TRDG). https://github.com/Belval/TextRecognitionDataGenerator