arXiv:2502.07553

Attention Learning is Needed to Efficiently Learn Parity Function

Published on Feb 11, 2025

Abstract

AI-generated summary: Transformers demonstrate superior parameter efficiency compared to feed-forward neural networks (FFNNs) in learning low-sensitivity functions such as the $k$-parity problem.

Transformers, with their attention mechanisms, have emerged as the state-of-the-art architecture for sequence modeling and empirically outperform feed-forward neural networks (FFNNs) across many fields, such as natural language processing and computer vision. However, their generalization ability, particularly for low-sensitivity functions, remains less studied. We bridge this gap by analyzing transformers on the $k$-parity problem. Daniely and Malach (NeurIPS 2020) show that FFNNs with one hidden layer and $O(nk^7 \log k)$ parameters can learn $k$-parity, where the input length $n$ is typically much larger than $k$. In this paper, we prove that FFNNs require at least $\Omega(n)$ parameters to learn $k$-parity, whereas transformers need only $O(k)$ parameters, well below the lower bound that any FFNN must satisfy. We further prove that this parameter efficiency cannot be achieved with fixed attention heads. Our work establishes transformers as theoretically superior to FFNNs for learning the parity function, showing how their attention mechanisms enable parameter-efficient generalization on functions with low sensitivity.
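For concreteness, the sketch below spells out the $k$-parity target the abstract refers to, assuming the standard formulation in which a hidden subset $S$ of $k$ out of the $n$ input coordinates determines the label as the parity of the bits indexed by $S$. The code and its names (k_parity, sample_parity_task) are illustrative and not taken from the paper.

```python
import numpy as np

# Illustrative sketch (not from the paper): the k-parity target function.
# A hidden subset S of k coordinates is fixed; the label of an input
# x in {0,1}^n is the parity (XOR) of the bits indexed by S.

def k_parity(x, S):
    """Return +1 if the bits of x indexed by S have even parity, else -1."""
    return 1 - 2 * (int(np.sum(x[list(S)])) % 2)

def sample_parity_task(n=100, k=3, num_samples=5, seed=0):
    """Draw a random hidden set S of size k and label uniform inputs by k-parity."""
    rng = np.random.default_rng(seed)
    S = set(rng.choice(n, size=k, replace=False))   # hidden relevant coordinates
    X = rng.integers(0, 2, size=(num_samples, n))   # uniform inputs in {0,1}^n
    y = np.array([k_parity(x, S) for x in X])
    return X, y, S

if __name__ == "__main__":
    X, y, S = sample_parity_task()
    print("hidden set S:", sorted(S))
    print("labels:", y)
    # Flipping any coordinate outside S never changes the label, so only k of
    # the n inputs matter -- the sense in which k-parity is a low-sensitivity
    # target relative to n when k << n.
```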
