Skip a Layer or Loop it? Test-Time Depth Adaptation of Pretrained LLMs
Abstract
Chain-of-layers (CoLa) models, built by selectively skipping and repeating the layers of a pretrained large language model and optimized per sample with Monte Carlo Tree Search, enable dynamic depth adaptation that improves both efficiency and accuracy across diverse tasks.
Can a pretrained neural network adapt its architecture to different inputs without any finetuning? Do we need all layers for simple tasks, and are they adequate for challenging tasks? We found that the layers of a pretrained large language model (LLM) can be manipulated as separate modules to build a better and even shallower model customized for each test sample. In particular, each layer from the pretrained model can be skipped/pruned or repeated multiple times, as in a recurrent neural network (RNN), and stacked with others in arbitrary orders, yielding a chain-of-layers (CoLa) per sample. This compositional space greatly expands the scope of existing works on looped/recurrent pretrained modules, layer pruning, and early-exit networks. We develop a Monte Carlo Tree Search (MCTS) protocol to explore and identify the optimal CoLa for each sample from math and commonsense reasoning benchmarks. Compared to a static model of fixed depth, CoLa allows shortcut paths (fast thinking), recurrence of the same layer(s) (slow thinking), and combinations of both, offering more flexible, dynamic architectures for different inputs. We conduct an extensive analysis of the MCTS-optimized CoLa, which leads to two key findings: (1) for >75% of samples with correct predictions by the original LLM, we can find a shorter CoLa, suggesting a large space for improving inference efficiency; (2) for >60% of samples with originally incorrect predictions, we can identify a CoLa achieving correct predictions, suggesting a large space for performance enhancement. Our results highlight the shortcomings of using a fixed architecture of pretrained LLMs for inference on different samples and pave the way to unlocking the generalization power of test-time depth adaptation.
Community
We found that the layers of a pretrained large language model (LLM) can be manipulated as separate modules to build a better and even shallower model customized for each test sample. In particular, each layer from a pretrained LLM can be skipped or repeated multiple times, as in a recurrent neural network (RNN), and stacked with others in arbitrary orders, yielding a chain-of-layers (CoLa) per sample. This compositional space significantly expands the scope of existing works on looped/recurrent pretrained modules, layer pruning, and early-exit networks.
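To make the compositional space concrete, below is a minimal sketch (not the authors' implementation) of how a per-sample CoLa path could be executed over a stack of pretrained blocks. The simplified forward(hidden_states) interface, the toy blocks, and the `run_cola` helper are illustrative assumptions; real decoder layers also take attention masks, position information, and KV caches.

```python
# Minimal sketch: executing a chain-of-layers (CoLa) path over pretrained blocks.
# Assumption: each block maps hidden_states -> hidden_states; `run_cola`,
# `blocks`, and the toy dimensions are placeholders, not the paper's code.
import torch
import torch.nn as nn

def run_cola(layers: nn.ModuleList, hidden: torch.Tensor, path: list[int]) -> torch.Tensor:
    """Apply pretrained layers in the order given by `path`.

    `path` may skip indices (pruning), repeat an index (RNN-style recurrence),
    or reorder indices, e.g. [0, 1, 1, 1, 7] for an 8-layer model.
    """
    for idx in path:
        hidden = layers[idx](hidden)
    return hidden

# Toy usage with stand-in blocks; a real model would supply its own decoder layers.
blocks = nn.ModuleList([nn.Sequential(nn.Linear(64, 64), nn.GELU()) for _ in range(8)])
x = torch.randn(2, 16, 64)                          # (batch, seq_len, hidden_dim)
full_depth = run_cola(blocks, x, list(range(8)))    # original static architecture
shallow    = run_cola(blocks, x, [0, 1, 4, 7])      # fast thinking: skip layers
looped     = run_cola(blocks, x, [0, 1, 1, 1, 7])   # slow thinking: loop a layer
```

The question is then which path to use for each individual input, which is where the MCTS protocol described next comes in.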
We develop a Monte Carlo Tree Search (MCTS) protocol to explore and identify the optimal CoLa for each sample from math and commonsense reasoning benchmarks. Compared to a static model of a fixed depth, CoLa allows shortcut paths (fast thinking), recurrence of the same layer(s) (slow thinking), and combining both, offering more flexible, dynamic architectures for different inputs. Specifically,
We introduce a new dimension of generalization that turns a static pretrained LLM into dynamic architectures of adaptive depths without training any parameter: for different test samples/tasks, the pretrained layers can be skipped, repeated, and assembled to create better (more accurate and/or shallower) CoLa models without further training.
We develop an MCTS protocol for efficient architecture search of CoLa with adaptive depth for each sample (a toy sketch of such a search loop is included at the end of this post). An in-depth analysis of patterns in the resulting CoLa models offers critical insights into the importance and redundancy of layers at different depths of pretrained/finetuned models of different sizes, which also vary across tasks of different difficulty levels.
We conduct an extensive analysis of the MCTS-optimized CoLa, which leads to two key findings:
(1) For >75% of samples with correct predictions by the original LLM, we can find shorter CoLa, suggesting a large space for improving inference efficiency;
(2) For >60% of samples with originally incorrect predictions, we can identify CoLa achieving correct predictions, suggesting a large space of performance enhancement.
Our results highlight the shortcomings of using a fixed architecture of pre-trained LLMs for inference on different samples and pave the way to unlock the generalization power of test-time depth adaptation.
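As mentioned above, here is a toy sketch of the kind of MCTS loop one could run over CoLa paths. It is not the paper's Algorithm 1: the action space (append one layer index), the depth cap, the UCB constant, and the black-box `evaluate(path)` reward (e.g., answer correctness on held-out input(s) minus a small penalty per layer) are all illustrative assumptions, and the simulation step is replaced by directly scoring the partial path, since every prefix is itself a runnable architecture.

```python
# Toy MCTS-style search over CoLa paths; not the paper's Algorithm 1.
# `evaluate(path) -> float` is a hypothetical black-box reward, e.g. answer
# correctness on held-out input(s) minus a small penalty per layer used.
import math
import random

NUM_LAYERS, MAX_DEPTH, C_UCT = 8, 12, 1.4

class Node:
    def __init__(self, path, parent=None):
        self.path, self.parent = path, parent
        self.children, self.visits, self.value = {}, 0, 0.0

def ucb(parent, child):
    # Upper-confidence bound used in the selection step.
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + C_UCT * math.sqrt(
        math.log(parent.visits) / child.visits)

def mcts(evaluate, iterations=200):
    root = Node(path=[])
    best_path = list(range(NUM_LAYERS))          # original static architecture
    best_reward = evaluate(best_path)

    for _ in range(iterations):
        # Selection: descend via UCB while the node is fully expanded.
        node = root
        while node.children and len(node.children) == NUM_LAYERS:
            node = max(node.children.values(), key=lambda c: ucb(node, c))
        # Expansion: append one untried layer index, unless at the depth cap.
        untried = [a for a in range(NUM_LAYERS) if a not in node.children]
        if untried and len(node.path) < MAX_DEPTH:
            action = random.choice(untried)
            node = node.children.setdefault(action, Node(node.path + [action], parent=node))
        # "Simulation": score the partial path directly (every prefix is runnable).
        reward = evaluate(node.path)
        if reward > best_reward:
            best_path, best_reward = node.path, reward
        # Backpropagation: update statistics along the visited branch.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent

    return best_path
```

A reward that combines correctness with a per-layer penalty steers this search toward the shorter-but-still-correct paths in the spirit of finding (1) above; all names here are placeholders.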
Hi @zhoutianyi, thank you for sharing your paper--very interesting findings! I had a quick question about Algorithm 1. The simulation step says that it will "evaluate path accuracy on held-out input(s)". How does the algorithm actually select those held-out inputs? I'm guessing they need to be (closely?) related to the current input that CoLa is being applied to, but I couldn't find the details or maybe have misunderstood something.
Thanks!
Hi @myeesw, it is great to hear that you find our discoveries interesting. The "held-out input(s)" here refer to inputs from held-out test sets that have not been used for model training. We will make this clearer in later versions. In this work, we do not explore generalizing the CoLa found for one input to other (similar) inputs, but what you mention is exactly what we are trying now, and we will share the results in later preprints. Good point! Thanks!
Hi @zhoutianyi, thanks for sharing your paper.
One question about Table 1: Do you search distinct layer inference strategies, or a single strategy, for different datasets?