Abstract
A novel token-level watermarking technique for autoregressive image generation models achieves reliable detection through tokenizer finetuning that improves reverse cycle-consistency, combined with a watermark synchronization layer.
Watermarking the outputs of generative models has emerged as a promising approach for tracking their provenance. Despite significant interest in autoregressive image generation models and their potential for misuse, no prior work has attempted to watermark their outputs at the token level. In this work, we present the first such approach by adapting language model watermarking techniques to this setting. We identify a key challenge: the lack of reverse cycle-consistency (RCC), wherein re-tokenizing generated image tokens significantly alters the token sequence, effectively erasing the watermark. To address this and to make our method robust to common image transformations, neural compression, and removal attacks, we introduce (i) a custom tokenizer-detokenizer finetuning procedure that improves RCC, and (ii) a complementary watermark synchronization layer. As our experiments demonstrate, our approach enables reliable and robust watermark detection with theoretically grounded p-values.
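The abstract describes adapting language-model watermarking to image tokens and reports detection with theoretically grounded p-values, where the key obstacle is reverse cycle-consistency (RCC): re-tokenizing a generated image should reproduce the original token sequence. The paper's actual scheme is not spelled out here, so the sketch below is illustrative only: it assumes a Kirchenbauer-style "green-list" watermark (a pseudo-random vocabulary split seeded by the previous token, with detection via an exact binomial p-value) and a simple position-wise RCC rate; the seeding rule, `gamma`, and all function names are hypothetical.

```python
# Illustrative sketch, NOT the paper's method. Assumes a green-list
# token watermark: each step's vocabulary is pseudo-randomly split
# (seeded by the previous token) and generation is biased toward the
# "green" half; detection counts green tokens and computes a p-value.
import math
import random


def green_mask(prev_token: int, vocab_size: int, gamma: float = 0.5) -> set:
    """Hypothetical seeding rule: green list derived from the previous token."""
    rng = random.Random(prev_token)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(gamma * vocab_size)])


def watermark_pvalue(tokens, vocab_size, gamma=0.5):
    """Exact binomial tail p-value for the observed number of green tokens.

    Under H0 (no watermark), each token independently lands in the green
    list with probability gamma, so P(X >= hits) is a binomial tail sum.
    """
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:])
        if cur in green_mask(prev, vocab_size, gamma)
    )
    n = len(tokens) - 1
    p = sum(
        math.comb(n, k) * gamma ** k * (1 - gamma) ** (n - k)
        for k in range(hits, n + 1)
    )
    return hits, n, p


def rcc_rate(original_tokens, retokenized_tokens):
    """Reverse cycle-consistency: fraction of positions preserved after
    detokenize -> re-tokenize (sequences assumed aligned, equal length).
    Without the paper's RCC finetuning this rate is low, erasing the mark."""
    matches = sum(a == b for a, b in zip(original_tokens, retokenized_tokens))
    return matches / max(len(original_tokens), 1)
```

A low `rcc_rate` directly degrades `watermark_pvalue`: every token flipped by re-tokenization is a coin toss for the detector, which is why the paper pairs the statistical test with RCC finetuning and a synchronization layer.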
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Autoregressive Images Watermarking through Lexical Biasing: An Approach Resistant to Regeneration Attack (2025)
- Training-Free Watermarking for Autoregressive Image Generation (2025)
- A Watermark for Auto-Regressive Image Generation Models (2025)
- VIDSTAMP: A Temporally-Aware Watermark for Ownership and Integrity in Video Diffusion Models (2025)
- Video Signature: In-generation Watermarking for Latent Video Diffusion Models (2025)
- GaussMarker: Robust Dual-Domain Watermark for Diffusion Models (2025)
- Forging and Removing Latent-Noise Diffusion Watermarks Using a Single Image (2025)