---
license: mit
task_categories:
- translation
language:
- en
- de
- fr
- cs
- ro
- ru
pretty_name: PRIM
---

# PRIM: Towards Practical In-Image Multilingual Machine Translation (EMNLP 2025 Main)

> [!NOTE]
> 📄Paper [arXiv](https://arxiv.org/abs/2509.05146) | 💻Code [GitHub](https://github.com/BITHLP/PRIM) | 🤗Training set [HuggingFace](https://huggingface.co/datasets/yztian/MTedIIMT) | 🤗Model [HuggingFace](https://huggingface.co/yztian/VisTrans)

## Introduction

This repository provides the **PRIM benchmark**, introduced in our paper PRIM: Towards Practical In-Image Multilingual Machine Translation. PRIM (**Pr**actical In-**I**mage **M**ultilingual Machine Translation) is the first publicly available benchmark for in-image machine translation **captured from real-world images**.

The source images are collected from [1] and [2]. We sincerely thank the authors of these datasets for making their data available.

## Citation

If you find our work helpful, we would greatly appreciate it if you could cite our paper:

```bibtex
@misc{tian2025primpracticalinimagemultilingual,
      title={PRIM: Towards Practical In-Image Multilingual Machine Translation},
      author={Yanzhi Tian and Zeming Liu and Zhengyang Liu and Chong Feng and Xin Li and Heyan Huang and Yuhang Guo},
      year={2025},
      eprint={2509.05146},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.05146},
}
```

----------

[1] Modal Contrastive Learning Based End-to-End Text Image Machine Translation

[2] MIT-10M: A Large Scale Parallel Corpus of Multilingual Image Translation