Why Does o_proj Project to hidden_size When num_heads × head_dim ≠ hidden_size?

#3
by ccosmos - opened

Typically, hidden_size is calculated as num_attention_heads × head_dim, but this model’s configuration is as follows:

num_attention_heads: 32

head_dim: 128

hidden_size: 1792

Since num_attention_heads × head_dim = 32 × 128 = 4096 ≠ 1792, why does o_proj project back down to hidden_size instead of keeping the width of the attention output?
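For concreteness, here is a minimal sketch of the shapes I mean, assuming a standard Llama-style attention module with separate q_proj/k_proj/v_proj/o_proj linear layers (illustrative only, not the model's actual code, and ignoring any grouped-query-attention detail for k/v):

```python
import torch
import torch.nn as nn

# Config values from this model
hidden_size = 1792
num_heads = 32
head_dim = 128

# q/k/v project the residual stream (1792) into the attention space (32 * 128 = 4096)
q_proj = nn.Linear(hidden_size, num_heads * head_dim, bias=False)  # 1792 -> 4096
k_proj = nn.Linear(hidden_size, num_heads * head_dim, bias=False)  # 1792 -> 4096
v_proj = nn.Linear(hidden_size, num_heads * head_dim, bias=False)  # 1792 -> 4096
# o_proj maps the concatenated head outputs back to 1792 so the residual add works
o_proj = nn.Linear(num_heads * head_dim, hidden_size, bias=False)  # 4096 -> 1792

x = torch.randn(1, 10, hidden_size)                    # (batch, seq, hidden_size)
q = q_proj(x).view(1, 10, num_heads, head_dim)         # heads live in the wider 4096-dim space
# ... attention over q/k/v happens here ...
attn_out = torch.randn(1, 10, num_heads * head_dim)    # stand-in for concatenated head outputs
y = o_proj(attn_out)                                   # (1, 10, 1792), matches the residual stream
```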

Oh, I'm also curious about that.

Are you referring to the Mi:dm 2.0 Mini model?

Mi:dm 2.0 Mini was trained through multiple rounds of knowledge distillation and pruning, starting from the Mi:dm-Base (11.5B) model.
To make the model lighter without sacrificing performance, we ran several experiments, keeping some parameters fixed and varying others.
The total parameter count, number of layers, hidden size, and other architectural parameters were determined from those results.

In short, this architecture was chosen as the outcome of experiments aimed at minimizing performance degradation while reducing model size.
Given that it’s a lightweight model, we prioritized maintaining strong performance over inference-time optimizations.
Thank you.
