---
license: mit
library_name: transformers
pipeline_tag: image-text-to-text
---

This repository contains the MangaLMM model described in the paper [MangaVQA and MangaLMM: A Benchmark and Specialized Model for Multimodal Manga Understanding](https://huggingface.co/papers/2505.20298).

Code: https://github.com/manga109/MangaLMM

Official demo: https://huggingface.co/spaces/yuki-imajuku/MangaLMM-Demo
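A minimal sketch of how a checkpoint with this `pipeline_tag` is typically queried via the `transformers` image-text-to-text pipeline. The model id, helper names, and chat-message format here are assumptions for illustration; consult the GitHub repository above for the official inference code.

```python
# Hypothetical repo id; substitute the actual Hub id of this repository.
MODEL_ID = "manga109/MangaLMM"

def build_messages(image_path: str, question: str) -> list:
    """Pair a manga page image with a free-form question in chat format."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_path},
                {"type": "text", "text": question},
            ],
        }
    ]

def ask(image_path: str, question: str, max_new_tokens: int = 128) -> str:
    """Run one MangaVQA-style query through the image-text-to-text pipeline."""
    # Imported lazily so the message-building helper stays dependency-free.
    from transformers import pipeline

    pipe = pipeline("image-text-to-text", model=MODEL_ID)
    out = pipe(text=build_messages(image_path, question),
               max_new_tokens=max_new_tokens)
    return out[0]["generated_text"]
```

For example, `ask("page_012.png", "What does the character in the top-left panel say?")` would download the checkpoint on first use and return the model's answer as a string.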