arXiv:2510.03289

Why mask diffusion does not work

Published on Sep 29

Abstract

The main advantages of diffusion language models over autoregressive (AR) models lie in their ability to support parallel generation and bidirectional attention, enabling a more controllable generation process. In recent years, open-source mask diffusion language models have emerged, most of which are based on a variant known as absorbing diffusion. However, this paper demonstrates why mask diffusion faces inherent difficulties in achieving parallel generation and bidirectional attention. We also propose the most effective training and inference strategies for mask diffusion.

AI-generated summary

Diffusion language models offer parallel generation and bidirectional attention, but mask diffusion faces inherent challenges in achieving both; effective training and inference strategies are proposed.
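As an illustration only (not taken from the paper), the sketch below shows the parallel-unmasking inference loop commonly used with absorbing/mask diffusion language models: start from an all-[MASK] sequence, predict every position in parallel with a bidirectional model, commit only the most confident predictions, and repeat. The model (toy_denoiser), the token ids, and the unmasking schedule are all hypothetical stand-ins.

import numpy as np

VOCAB_SIZE = 100
MASK_ID = VOCAB_SIZE       # hypothetical id of the [MASK] / absorbing token
SEQ_LEN = 16
NUM_STEPS = 4              # number of parallel unmasking rounds

rng = np.random.default_rng(0)

def toy_denoiser(tokens):
    # Stand-in for a bidirectional mask-diffusion LM: returns per-position
    # probabilities over the vocabulary for the whole (partially masked) sequence.
    logits = rng.normal(size=(len(tokens), VOCAB_SIZE))
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return probs / probs.sum(axis=-1, keepdims=True)

tokens = np.full(SEQ_LEN, MASK_ID)            # start from a fully masked sequence
for step in range(NUM_STEPS):
    probs = toy_denoiser(tokens)              # parallel prediction for every position
    masked = np.where(tokens == MASK_ID)[0]
    if masked.size == 0:
        break
    conf = probs[masked].max(axis=-1)         # model confidence at each masked slot
    k = max(1, int(np.ceil(masked.size / (NUM_STEPS - step))))  # unmask a fraction per round
    chosen = masked[np.argsort(-conf)[:k]]    # commit the most confident positions
    tokens[chosen] = probs[chosen].argmax(axis=-1)

print(tokens)                                  # all positions filled after the last round

Committing several tokens per round is what makes generation parallel rather than strictly left-to-right; the paper's argument concerns the difficulties mask diffusion runs into with exactly this kind of parallel, bidirectional decoding.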

