weight tying

#2
by spawn99

The paper describes LLaDA as a 'vanilla Transformer' predicting masked tokens, and my inference code expects logits over the vocabulary at each diffusion step. Given that model.transformer.ff_out.weight exists but lm_head.weight doesn't, is ff_out the output projection for LLaDA?
And is there weight tying going on, despite the 'vanilla' description in the paper?
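For context, here is a minimal sketch of how one could check this directly by comparing the output projection against the token embedding. Only model.transformer.ff_out comes from the observation above; the embedding attribute name (transformer.wte) and the checkpoint id are assumptions and may need adjusting (e.g. via print(model) or model.named_parameters()).

```python
import torch
from transformers import AutoModel

# Assumed checkpoint id; swap in the checkpoint you are actually loading.
model = AutoModel.from_pretrained(
    "GSAI-ML/LLaDA-8B-Base",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)

ff_out = model.transformer.ff_out.weight  # output projection mentioned above
wte = model.transformer.wte.weight        # assumed name for the token embedding

# If the weights were tied, they would share storage (and be numerically equal).
print("same storage (tied):", ff_out.data_ptr() == wte.data_ptr())
print("same shape:", tuple(ff_out.shape) == tuple(wte.shape))
print("numerically equal:", torch.equal(ff_out, wte))
```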

GSAI-ML org

Thank you for your attention!

The LLaDA model is derived by removing the causal mask from an autoregressive model (as detailed in https://github.com/ML-GSAI/LLaDA/blob/main/GUIDELINES.md), so we call it a vanilla Transformer. By using this term, we emphasize that, unlike other diffusion language models, LLaDA does not require feeding the timestep into the network. I'm not sure if I've fully understood your question. More discussion is welcome.
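To illustrate the "no timestep input" point, here is a hedged sketch of a single denoising step: one forward pass maps a partially masked token sequence directly to per-position vocabulary logits, with no timestep argument. The mask token id and checkpoint id below are assumptions; the real values come from the model's config and the GUIDELINES linked above.

```python
import torch
from transformers import AutoModel, AutoTokenizer

MASK_ID = 126336  # assumed mask token id; check the model config / GUIDELINES

tokenizer = AutoTokenizer.from_pretrained("GSAI-ML/LLaDA-8B-Base", trust_remote_code=True)
model = AutoModel.from_pretrained(
    "GSAI-ML/LLaDA-8B-Base", trust_remote_code=True, torch_dtype=torch.bfloat16
)

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
# Append a few masked positions to be predicted in this step.
masked = torch.cat([ids, torch.full((1, 4), MASK_ID)], dim=1)

# One plain Transformer forward pass: (batch, seq_len, vocab) logits, no timestep.
logits = model(masked).logits
pred = logits[0, ids.shape[1]:].argmax(dim=-1)
print(tokenizer.decode(pred))
```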
