Dataset columns:
- forum_id: string (length 8–20)
- forum_title: string (length 1–899)
- forum_authors: sequence (length 0–174)
- forum_abstract: string (length 0–4.69k)
- forum_keywords: sequence (length 0–35)
- forum_pdf_url: string (length 38–50)
- forum_url: string (length 40–52)
- note_id: string (length 8–20)
- note_type: string (6 distinct values)
- note_created: int64 (range 1,360B–1,737B)
- note_replyto: string (length 4–20)
- note_readers: sequence (length 1–8)
- note_signatures: sequence (length 1–2)
- venue: string (349 distinct values)
- year: string (12 distinct values)
- note_text: string (length 10–56.5k)
03EkqSCKuO
Port-Hamiltonian Architectural Bias for Long-Range Propagation in Deep Graph Networks
[]
The dynamics of information diffusion within graphs is a critical open issue that heavily influences graph representation learning, especially when considering long-range propagation. This calls for principled approaches that control and regulate the degree of propagation and dissipation of information throughout the neural flow. Motivated by this, we introduce port-Hamiltonian Deep Graph Networks, a novel framework that models neural information flow in graphs by building on the laws of conservation of Hamiltonian dynamical systems. We reconcile under a single theoretical and practical framework both non-dissipative long-range propagation and non-conservative behaviors, introducing tools from mechanical systems to gauge the equilibrium between the two components. Our approach can be applied to general message-passing architectures, and it provides theoretical guarantees on information conservation in time. Empirical results prove the effectiveness of our port-Hamiltonian scheme in pushing simple graph convolutional architectures to state-of-the-art performance in long-range benchmarks.
[ "graph representation learning", "long-range propagation", "ordinary differential equations" ]
https://openreview.net/pdf?id=03EkqSCKuO
https://openreview.net/forum?id=03EkqSCKuO
S2CUtq5iKs
official_comment
1,732,019,839,720
d6JJf0KjwN
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission7738/Authors" ]
ICLR.cc/2025/Conference
2025
title: Rebuttal Part 2 comment: **Regarding the ablation studies**

We appreciate the Reviewer’s constructive feedback. To address their suggestion and enhance the quality of our work, we have included an additional benchmark and an ablation study in Appendix D.4 that examines the impact of different dissipative components on the Minesweeper task from Platonov et al. (2023). While the results indicate that certain driving components perform better than others for this task, we recommend performing model selection to identify the optimal components based on the specific data setting. For convenience, we present the results for this task in the table below.

| **Model** | **Train Score (ROC-AUC ↑)** | **Test Score (ROC-AUC ↑)** |
|-------------------------------------|--------------------------------|-------------------------------|
| Top-6 models from Luo et al. (2024) | | |
| GraphGPS | - | 90.75 ± 0.89 |
| SGFormer | - | 91.42 ± 0.41 |
| Polynormer | - | 97.49 ± 0.48 |
| GAT | - | 97.73 ± 0.73 |
| GraphSAGE | - | 0.9777 ± 0.0062 |
| GCN | - | **0.9786 ± 0.0024** |
| **Our - no driving forces** | | |
| PH-DGN$_{\text{C}}$ | 0.9978 ± 0.0005 | **0.9845 ± 0.0021** |
| **Our - with driving forces** | | |
| *PH-DGN* | | |
| *Dampening* / *External Force* | | |
| -- / MLP4-Sin | 0.9937 ± 0.0038 | 0.9661 ± 0.0057 |
| -- / DGN-tanh | 0.9928 ± 0.0010 | 0.9720 ± 0.0042 |
| param / -- | 0.9979 ± 0.0005 | **0.9842 ± 0.0021** |
| param / MLP4-Sin | 0.9955 ± 0.0021 | 0.9686 ± 0.0052 |
| param / DGN-tanh | 0.9930 ± 0.0019 | 0.9727 ± 0.0029 |
| MLP4-ReLU / -- | 0.9962 ± 0.0057 | 0.9533 ± 0.0065 |
| MLP4-ReLU / MLP4-Sin | 0.9993 ± 0.0003 | 0.9567 ± 0.0064 |
| MLP4-ReLU / DGN-tanh | 0.9789 ± 0.0024 | 0.9541 ± 0.0066 |
| DGN-ReLU / -- | 0.9496 ± 0.0017 | 0.9342 ± 0.0061 |
| DGN-ReLU / MLP4-Sin | 0.9561 ± 0.0048 | 0.9387 ± 0.0055 |
| DGN-ReLU / DGN-tanh | 0.9501 ± 0.0047 | 0.9332 ± 0.0084 |

Platonov et al. A critical look at the evaluation of GNNs under heterophily: Are we really making progress? ICLR 2023.

**Regarding hyperparameter choices**

As detailed in Appendix C, our experiments adhered to the established procedures for each task to ensure fair evaluation and reproducibility. For hyperparameters specific to our PH-DGN, such as the step size $\epsilon$ and the number of layers, we selected values from a thorough and reasonable range, taking into account factors like the average graph diameter in the training set. Lastly, we highlight that we conducted a thorough model selection over a comprehensive grid to minimize the risk of suboptimal performance. We have included this discussion in the revised manuscript. Thank you.

**Regarding the problems tackled by PH-DGN**

The main objective of our work is to design the information flow within a graph as a solution of a port-Hamiltonian system to ***mitigate the challenge of long-range propagation in DGNs.*** Throughout Section 2, we provide theoretical statements to support the claim that our PH-DGN can effectively learn and propagate long-range dependencies between nodes. Afterward, in Section 3 we empirically support our theoretical findings by evaluating our PH-DGN on the graph transfer task, graph property prediction, and the long-range benchmark, all specifically designed to test the model's capabilities in the long-range regime. In summary, we believe that our method is best suited to problems that require the exploitation of long-range dependencies to be solved effectively.
As an example, PH-DGN is beneficial for solving shortest-path-based problems, e.g., computing the diameter of a graph (see Section 3.3), or molecular tasks in which far-away nodes interact to determine the overall function of the molecule (see Section 3.4). Furthermore, as emerged from our experiments in Section 3.4 and in the newly added Appendix D.4, the use of driving forces can lead to better performance on real-world tasks. The driving forces act as an adaptive filtering mechanism that removes noisy information. Meanwhile, a purely conservative approach (i.e., __without__ driving forces) can have improved utility for tasks that require preserving ***all*** information, like the Minesweeper task.
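For illustration, the snippet below sketches how a dampening term and an external force could slot into one discretized port-Hamiltonian-style graph update. The (q, p) state split, the tanh coupling, the explicit-Euler step `eps`, and the `dampening` / `external_force` hooks are assumptions made for this sketch and do not reproduce the exact PH-DGN equations; setting both hooks to `None` mimics a purely conservative configuration such as PH-DGN$_C$.

```python
import torch
import torch.nn as nn


class PortHamiltonianStep(nn.Module):
    """One explicit-Euler step of a port-Hamiltonian-flavoured graph update.

    Illustrative sketch only: the node state is split into (q, p) halves, a
    conservative-style core couples them through the node's own state and its
    aggregated neighbourhood, and optional dampening / external-force modules
    act on p. Names, the coupling, and the discretization are assumptions,
    not the paper's exact definition.
    """

    def __init__(self, dim, eps=0.1, dampening=None, external_force=None):
        super().__init__()
        self.eps = eps
        self.W = nn.Linear(dim, dim, bias=False)   # acts on a node's own state
        self.V = nn.Linear(dim, dim, bias=False)   # acts on aggregated neighbours
        self.dampening = dampening                 # e.g. nn.Linear(dim, dim), or None
        self.external_force = external_force       # e.g. a small MLP on q, or None

    def forward(self, q, p, edge_index):
        src, dst = edge_index                      # (2, E) long tensor of edges
        agg = torch.zeros_like(q).index_add_(0, dst, q[src])
        dq = torch.tanh(self.W(p))                 # conservative coupling: q follows p
        dp = -torch.tanh(self.W(q) + self.V(agg))  # ... and p reacts to q + neighbours
        if self.dampening is not None:             # non-conservative: removes energy
            dp = dp - self.dampening(p)
        if self.external_force is not None:        # non-conservative: injects energy
            dp = dp + self.external_force(q)
        return q + self.eps * dq, p + self.eps * dp
```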
R125jFd7tn
official_comment
1,732,673,929,850
9JFBNqaAUN
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission7738/Reviewer_7PXw" ]
ICLR.cc/2025/Conference
2025
title: Rebuttal Response comment: Thank you for the thorough rebuttal and explanations! They increased my understanding of the work.
O9fpxBdnrZ
official_comment
1,732,285,259,487
03EkqSCKuO
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission7738/Authors" ]
ICLR.cc/2025/Conference
2025
title: Rebuttal Summary comment: We sincerely thank the reviewers for their thoughtful and detailed feedback, as well as for recognizing the key strengths of our work. We are happy to read that the reviewers found that our paper provides an ***attractive*** and ***novel*** methodology (Revs. X6rz, 7PXw) for the incorporation of port-Hamiltonian dynamics in graph representation learning. We are also grateful for the acknowledgment of the ***clarity*** (Revs. X6rz, XxSd, 7PXw) and ***technical soundness*** (Rev. X6rz) of our work, which reports ***strong theoretical results*** (Revs. X6rz, XxSd, 7PXw) and ***thorough*** (Rev. X6rz) and ***superb*** (Rev. 7PXw) experimental validation showing the practical benefits (Revs. X6rz, XxSd) for long-range propagation.

We are also thankful for the constructive feedback, which has further improved the quality of our paper. Specifically:

**Additional Experiments:**
- Following Reviewer **X6rz**’s and Reviewer **XxSd**’s suggestions, we have added a novel experiment on a recent benchmark to: (i) appreciate the impact of the different dissipative components in our approach, and (ii) highlight the utility of the purely conservative version of PH-DGN (i.e., PH-DGN$_C$). To further address point (ii), we have also included an additional ablation study on the graph transfer task, demonstrating the improved utility of PH-DGN$_C$ in tasks that require preserving all information.

**Revisions to the Paper:**
- Following Reviewer **XxSd**’s and Reviewer **7PXw**’s suggestions, we incorporated additional theoretical guarantees on long-range propagation that also account for the presence of non-conservative forces.
- In Appendix A.3, we clarified that the assumption on the structure of the W and V matrices does not restrict the final implementation of the $p$ and $q$ components.
- We improved the discussion on the choice of the hyperparameters.
- We improved the clarity of Tables 2 and 5 and provided a deeper clarification of the usefulness of the proposed model with respect to existing techniques in Section 3.4.
- We clarified the goal of the graph transfer task.
- We revised Table 6 to clarify that driving forces do not allow for purely Hamiltonian conservation.

----

As the author-reviewer discussion draws to a close, we would like to thank the reviewers again for their invaluable feedback and the positive assessment of our paper. We have done our best to provide a detailed response to the reviewers’ comments, and we hope these revisions address all concerns while further emphasizing the significance and robustness of our contributions. In particular, we would greatly appreciate hearing from Reviewer **XxSd** whether they were satisfied with our responses. We hope that this is the case, and, if so, we would like to kindly ask the reviewer to consider revising their score. Thank you all again for your constructive feedback.
Lw6pbguCws
official_comment
1,733,157,356,146
nvIbV12O4i
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission7738/Authors" ]
ICLR.cc/2025/Conference
2025
comment: We thank the Reviewer for the quick response to our message. We noticed that the response may contain several typos, which made some parts challenging to interpret. Below, we provide our reply based on our best understanding of the concerns raised.

As for the under-estimation of the general **PH-DGN** (i.e., the one with driving forces) in the ablation study in Appendix D.4, we want to emphasize again that *the goal of this study was not to optimize performance but rather to investigate how different driving forces contribute to solving the task under a constant starting point*, i.e., the hyperparameters selected for PH-DGN$_C$. We do acknowledge that there may be a configuration of shared hyperparameters that could lead to better performance on the validation set for the general PH-DGN. However, while we agree that this is crucial for optimization purposes, it is outside the scope of our ablation, which is to show the effect of individual forces relative to the same purely conservative regime in the Minesweeper task.

Moreover, it seems from the comment that the Reviewer may believe we are not retraining the general PH-DGN after selecting the hyperparameters for PH-DGN$_C$. However, as we have previously explained, this is not the case. In fact, we retrain the model to optimize its learnable parameters using the selected shared hyperparameters to ensure a fair evaluation.

Finally, since this is an ablation study and optimizing performance is not critical for its purpose, we believe the Reviewer’s concerns, while valid, may not significantly impact the evaluation of our contributions. In light of this, we kindly encourage the Reviewer to consider this context when assessing their score.
LM5DS4s349
official_comment
1,732,021,563,638
nvIbV12O4i
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission7738/Authors" ]
ICLR.cc/2025/Conference
2025
title: Rebuttal Part 4 comment: **- Regarding the comparison clarification**

Again, we thank the Reviewer for their effort, and we refer them to the revised manuscript, which now contains a deeper clarification of the usefulness of the proposed model with respect to existing methods.

**- Regarding Adam citation**

We included the citations for the optimization strategies we employed in the revised manuscript.

**- Regarding n. layer typo**

We corrected the typo in the revised manuscript. Thank you.

**- Regarding Table 6**

We thank the Reviewer for the comment. Appendix Table 6 serves as a high-level comparison with related works on Hamiltonian-inspired DGNs. We opted to report our framework as a single row in the table for simplicity, since in our PH-DGN the driving forces can be turned on and off depending on the specific needs of the problem. Indeed, from a high-level perspective, the conservative approach can be seen as a subset of the full port-Hamiltonian approach, which explains why, in the single-row scenario, we marked the Hamiltonian conservation column. Following the Reviewer’s suggestion, we revised Table 6 to clarify that driving forces do not allow for purely Hamiltonian conservation.
KCRLEboAzz
official_comment
1,732,679,901,180
dvcmzyF55B
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission7738/Reviewer_XxSd" ]
ICLR.cc/2025/Conference
2025
comment: Thank you for the explanation. Although I am not perfectly confident, I think this protocol may carry a slight risk of overfitting, as the re-trained learnable parameters implicitly depend on the validation dataset through the choice of hyperparameters of PH-DGN$_{C}$. However, the protocol itself looks OK because the test dataset is not used for choosing learnable parameters and hyperparameters.
J7rugzU8yy
official_comment
1,733,150,503,659
nvIbV12O4i
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission7738/Reviewer_XxSd" ]
ICLR.cc/2025/Conference
2025
comment: I thank the authors for the further responses to my questions.

> Regarding the potential risk of overfitting in this protocol, we kindly ask the Reviewer to elaborate further on their argument.

First, I realize that the Dampening and the External forcing do not have hyperparameters, which I overlooked in my last comment. I agree with the authors that we do not have to choose hyperparameters of these components using the validation dataset.

Still, I think there is a possibility that the performance of PH-DGN could be underestimated by the authors' protocol. In order to find the best hyperparameters in the set of possible hyperparameters (which we denote by $\Theta$), we need to search *all* of the learnable parameter space for each hyperparameter $\theta \in \Theta$ (using the training dataset), and then choose the best hyperparameter (using the validation dataset). However, in the authors' protocol, since we only search part of the learnable parameter space when choosing the hyperparameter, we could fail to find the best model.

-------------------

For example, for simplicity, suppose we only have two learnable parameters --- $w$ for PH-DGN$_C$ and $w'$ for the Dampening and the External forcing --- and one hyperparameter $\theta$ (shared by PH-DGN$_C$ and PH-DGN), which takes only two values $\theta=0, 1$.

For PH-DGN$_C$, we assume:
- When we fix $\theta=0$, the model achieves the best performance $p_0$ at $w=a_0$,
- When we fix $\theta=1$, the model achieves the best performance $p_1$ at $w=a_1$,

where $p_0 > p_1$.

For PH-DGN, we assume:
- When we fix $\theta=0$, the model achieves the best performance $q_0$ at $(w, w') = (a_0, b_0)$,
- When we fix $\theta=1$, the model achieves the best performance $q_1$ at $(w, w') = (a_1, b_1)$,

where $q_0 < q_1$.

Then, the best-performing PH-DGN is $(w, w') = (a_1, b_1)$ with $\theta=1$, which achieves $q_1$. However, if we follow the authors' protocol, we first choose $\theta=0$ because $p_0 > p_1$, and then choose the learnable parameters $(w, w') = (a_0, b_0)$, obtaining a sub-optimal PH-DGN that achieves $q_0$.
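The distinction between the two protocols can be spelled out in a few lines of code. The numbers below are the illustrative $p_0 > p_1$ and $q_0 < q_1$ values from the example above, not measured results.

```python
# Toy reproduction of the example above: validation scores indexed by the
# shared hyperparameter theta. All values are illustrative, not measurements.
perf_C  = {0: 0.98, 1: 0.97}   # best score of PH-DGN_C for each theta (p0 > p1)
perf_PH = {0: 0.95, 1: 0.99}   # best score of PH-DGN   for each theta (q0 < q1)

# Full (joint) search: pick theta by PH-DGN's own validation performance.
theta_joint = max(perf_PH, key=perf_PH.get)       # -> theta = 1, PH-DGN reaches q1

# Two-stage protocol: pick theta via PH-DGN_C, then retrain PH-DGN with it fixed.
theta_staged = max(perf_C, key=perf_C.get)        # -> theta = 0

print(perf_PH[theta_joint], perf_PH[theta_staged])  # 0.99 vs 0.95: PH-DGN under-estimated
```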
Fe0GVofRt9
official_review
1,729,822,898,690
03EkqSCKuO
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission7738/Reviewer_7PXw" ]
ICLR.cc/2025/Conference
2025
summary: This work provides a novel methodology for the incorporation of port-Hamiltonian dynamics in graph representation learning. The central model, called a port-Hamiltonian Deep Graph Network (PH-DGN), is introduced first in a purely conservative setting using only the Hamiltonian and no non-conservative terms. Several theorems are developed to show that this conservative case leads to long-range data propagation between graph nodes, where the graph dynamics exhibit no energy loss and gradients do not vanish since the backward sensitivity matrix is bounded below. Dissipative forces are then added to form the full port-Hamiltonian model; these are two additional networks that may be trained to capture non-conservative behavior. Several experiments follow, including a showcase of energy conservation and sensitivity to empirically verify the theoretical work, and a graph transfer problem and a graph property prediction problem to compare performance against other graph models on benchmark tasks that require long-range information propagation.
soundness: 4
presentation: 3
contribution: 3
strengths:
- There is very clear motivation to this work, and it builds nicely upon other references.
- The proposed port-Hamiltonian approach is an original and clever way to allow for non-conservative dynamics in a graph network while still maintaining long-range message passing.
- The theoretical results for the conservative case are strong, and their motivation and interpretation are presented nicely.
- The experimental setup is superb; the care taken to ensure ease of replication is applauded. Model details are presented very clearly, and choices are explained well for each step of the setup.
- A strong suite of models is compared against, with many different competitors and approaches. The consistently favorable results lend a great deal of strength to the claims of the proposed method's performance.
- The appendices are comprehensive for both proofs and experimental setup, and they cleared up many of the questions I had on an initial read.
weaknesses:
- The majority of the theoretical results are developed for the conservative case. This makes sense in context, as conservative long-range message passing is stated as a goal, but I would also be quite interested to see what could be proven for the fully general port-Hamiltonian case.
- In the explanation of Theorem 2.3, the statement that "the final representation of each node retains its complete past" seems somewhat strong. While I understand that the BSM result shows the influence of the entire past history on the current state, this statement as written seems to imply something stronger and could perhaps be made clearer.
- The dissipative force terms are added in some experiments to great success, but the explanations of their performance are more intuitive and are not supported by hard data in the paper. There may be a great opportunity here for visualization to support the intuitive claims.

There are two very minor typos:
- In the first paragraph of Section 2, "node states' in graph" has an unnecessary apostrophe.
- In Appendix D.3, "on a grid of n. layers" has an unnecessary period.
questions:
- How would classical GCN aggregation interact with Theorem 2.4? Can the bound be easily extended for that case?
- In Section 3.1, you mention that the growing behavior can be controlled by regularizing the weight matrices or using normalized aggregation functions. Did you try this? How are the empirical results?
- Have you examined the interpretability of a trained PH-DGN? In particular, do the learned non-conservative forces make sense for the associated problem?
flag_for_ethics_review: ['No ethics review needed.']
rating: 8
confidence: 3
code_of_conduct: Yes
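For context, the non-vanishing-gradient property mentioned in the summary can be stated schematically as follows; the notation is illustrative, and the paper's theorem is the authoritative statement.

```latex
% Schematic form of the lower-bounded backward sensitivity matrix (BSM):
% for node states $x^{(t)}$ evolved by the conservative flow, there is a
% constant $c > 0$, independent of the propagation horizon, such that
\[
\left\| \frac{\partial x^{(T)}}{\partial x^{(t)}} \right\| \;\ge\; c \;>\; 0
\qquad \text{for all } t \le T,
\]
% so gradients flowing from the readout back to early layers cannot vanish,
% which is the mechanism behind the long-range propagation claims.
```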
9lI1rHmIvj
official_comment
1,732,549,003,684
znEvBo7vK5
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission7738/Authors" ]
ICLR.cc/2025/Conference
2025
title: Thank you comment: We sincerely thank the Reviewer for their thoughtful feedback and their positive evaluation of our work, as well as for the increased score.
9JFBNqaAUN
official_comment
1,732,021,824,868
Fe0GVofRt9
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission7738/Authors" ]
ICLR.cc/2025/Conference
2025
title: Rebuttal Part 2 comment: **Regarding controlling the upper bound Appendix Th. A.1**

We thank the Reviewer for the question. In our experiments, we tested two aggregation schemes, i.e., the one implemented in Eq. 6 and the classical GCN aggregation. Although Theorem A.1 theoretically indicates a potential increase in the sensitivity measure, we did not observe this issue with either aggregation method during our experiments. Furthermore, we found that incorporating the GCN aggregation scheme did not consistently lead to improved performance. Hence, we did not see the necessity to include norm-constraining regularizers on the weights during training in our experiments. We recommend that practitioners treat the aggregation scheme as a hyperparameter to be selected via model selection (a minimal illustration of the two schemes follows this comment).

**Regarding the interpretability of a trained PH-DGN**

We thank the Reviewer for the comment. Our experimental suite was deliberately designed to demonstrate the long-range capabilities of our approach and, as such, we did not focus on examining the interpretability of the trained model. Interpreting the dynamics of the model is inherently challenging due to the lack of ground truth for what constitutes the "true" flow of information across the graph, especially on tasks like predicting 3D properties of peptides. Without this reference, it becomes difficult to disentangle how the model processes and propagates information or to verify whether it aligns with any hypothesized dynamics. This challenge is further amplified by the fact that, in our experiments, the driving forces are modeled by complex neural networks, making it difficult to deduce clear intuitions about how these forces operate within the model. These limitations highlight the need for future work to explore the interpretability of such methods.
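As a concrete illustration of the two aggregation options referenced above, here is a hypothetical helper contrasting plain sum aggregation with GCN-style symmetric normalization; the function name and interface are assumptions for this sketch, not the paper's code.

```python
import torch


def aggregate(x, edge_index, deg, scheme="sum"):
    """Neighbourhood aggregation treated as a hyperparameter (hypothetical helper).

    'sum' is plain message summation; 'gcn' applies the symmetric
    1/sqrt(d_i * d_j) normalisation of Kipf & Welling. Which variant works
    better is task-dependent, hence selecting it via model selection.
    """
    src, dst = edge_index                       # (2, E) long tensor of edges
    msg = x[src]                                # messages from source nodes
    if scheme == "gcn":
        msg = msg / torch.sqrt(deg[src] * deg[dst]).unsqueeze(-1)
    return torch.zeros_like(x).index_add_(0, dst, msg)
```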
7bnoXBWIn9
official_comment
1,732,713,919,006
KCRLEboAzz
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission7738/Authors" ]
ICLR.cc/2025/Conference
2025
comment: We thank the Reviewer for positively evaluating our experimental protocol.

Regarding the potential risk of overfitting in this protocol, we kindly ask the Reviewer to elaborate further on their argument. To further clarify our approach, as stated in previous responses, after selecting the shared hyperparameters we do not perform an additional model selection for the driving forces. Thus, the validation set is not used for additional tuning, i.e., it is not used multiple times for the same purpose. Our goal in this ablation is not to optimize performance but rather to investigate how different driving forces contribute to solving the task under a constant starting point, i.e., the hyperparameters selected for PH-DGN$_C$.

We hope this clarification addresses your concern and further highlights the rationale behind our experimental design.
5uvZaoIX0C
official_comment
1,732,578,944,721
SYAaRs1WdH
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission7738/Reviewer_XxSd" ]
ICLR.cc/2025/Conference
2025
comment: I thank the authors for the quick responses.

**3) Regarding the usefulness of PH-DGN on the real dataset**

Thank you for the explanation. Let me take time to consider whether the rationale is reasonable.

----------------

**Regarding choosing the hyperparameters of PH-DGN$_C$ in the Minesweeper task**

> Dampening and external forces do not play any role in the selection of the PH-DGN hyperparameters, since such components are not employed in the purely conservative setting.

Thank you for the explanation. I understand this point. My question was about the evaluation protocol of PH-DGN (i.e., the model with dampening and external force, which have learnable parameters). I thought that to evaluate PH-DGN, the authors (1) first choose the hyperparameters that PH-DGN has in common with PH-DGN$_C$ using the training and validation datasets, and then (2) learn the parameters that are specific to PH-DGN. However, since the training and validation datasets are already used in the first stage, we only have the test dataset for the second stage (2), which I think carries a risk of information leakage. Let me know if I have misunderstood something.

The other questions are OK for me. I am sorry for the short answers as the deadline is approaching.
5Le4FyHyzD
official_comment
1,732,020,204,116
nvIbV12O4i
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission7738/Authors" ]
ICLR.cc/2025/Conference
2025
title: Rebuttal Part 1 comment: We thank the Reviewer for the extensive feedback on our manuscript and for acknowledging that we ***carefully explained*** the knowledge behind our method and that we ***theoretically show*** and empirically validate its ***usefulness*** in the long-range regime. Below, we address each of your comments, for which we are grateful. We found them helpful for further improving the quality of our paper, and we hope that you are satisfied with our response. We hope that, in light of our clarifications and modifications to the paper, you will consider revising your score.

In particular, following the Reviewer’s suggestions, we highlight that the revised version of the paper now contains additional theoretical guarantees on long-range propagation that also account for the presence of non-conservative forces, as well as new experiments that demonstrate the usefulness of the fully conservative PH-DGN (and, of course, additional clarifications to all of the Reviewer’s questions).

**1) Regarding the theoretical guarantees for the general PH-DGN**

We appreciate the Reviewer’s effort in improving the quality of our work. We note that the main goal of our paper is to reconcile, under a single framework, strong theoretical guarantees of conservation for non-dissipative long-range propagation with non-conservative behaviors that can potentially improve performance on the downstream task. Indeed, without driving forces, the ability of the system to model complex nonlinear dynamics is restricted in real-world scenarios, as empirically shown in Table 2. To further accommodate the Reviewer’s suggestion, we derived additional theoretical results on the effects of the port-Hamiltonian components on information propagation in Appendix B.7. In particular, we note that the sensitivity matrix can be linearly decomposed into the conservative and dissipative terms. Assuming that the driving forces and their derivatives are bounded, the self-influence of a node after one update can be constrained within fixed upper and lower bounds, which are mediated (among others) by the step size $\epsilon$. Additionally, we demonstrate that a similar upper bound applies to the influence between neighboring nodes. These results indicate that, under mild assumptions, the port-Hamiltonian components theoretically support long-range propagation.

**2) Regarding Tables 2 and 5**

We thank the Reviewer for the feedback. Both Tables 2 and 5 contain results for the LRGB benchmark. Due to submission length constraints, we decided to report in the main text (i.e., Table 2) only a selection of the considered baselines, while the appendix (i.e., Table 5) reports the full list of baselines. In our revised paper, we have improved the clarity of both tables by better specifying which result comes from which paper and whether or not the model uses positional/structural encodings. We are open to other suggestions for improving the clarity of the tables.
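The decomposition described in point 1) can be written schematically as follows; the notation is illustrative and simplified with respect to the actual statements in Appendix B.7.

```latex
% One Euler step of size \epsilon with a conservative term and a dissipative
% (driving-force) term acting on the node state x_u:
\[
x_u^{(t+\epsilon)} = x_u^{(t)}
  + \epsilon\, f_{\mathrm{cons}}\big(x^{(t)}\big)_u
  + \epsilon\, f_{\mathrm{diss}}\big(x^{(t)}\big)_u ,
\]
% so the node-wise sensitivity splits linearly into the two contributions,
\[
\frac{\partial x_u^{(t+\epsilon)}}{\partial x_u^{(t)}}
  = I
  + \epsilon\,\frac{\partial f_{\mathrm{cons}}(x^{(t)})_u}{\partial x_u^{(t)}}
  + \epsilon\,\frac{\partial f_{\mathrm{diss}}(x^{(t)})_u}{\partial x_u^{(t)}} ,
\]
% and if both Jacobians are bounded in norm by constants C_cons and C_diss,
% the reverse and standard triangle inequalities give
\[
1 - \epsilon\,(C_{\mathrm{cons}} + C_{\mathrm{diss}})
\;\le\;
\left\| \frac{\partial x_u^{(t+\epsilon)}}{\partial x_u^{(t)}} \right\|
\;\le\;
1 + \epsilon\,(C_{\mathrm{cons}} + C_{\mathrm{diss}}) .
\]
```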
02kZwCo0C3
SAIL: Self-improving Efficient Online Alignment of Large Language Models
[]
Reinforcement Learning from Human Feedback (RLHF) is a critical method for aligning large language models (LLMs) with human preferences. However, existing offline alignment approaches, such as DPO, IPO, and SLiC, rely heavily on static datasets of human preferences, often leading to suboptimal performance. Recent efforts in the literature have moved towards online RLHF methods, but they lack a unified framework and suffer from distribution shift issues. In this work, we formalize online LLM alignment as a bilevel optimization problem. By reducing this formulation to a more computationally efficient single-level first-order method, utilizing reward-policy equivalence, we propose SAIL (Self-improving Efficient Online Alignment). SAIL generates new samples and iteratively refines model alignment through online exploration and regulation of preference labels. This enables continuous, self-improving alignment and generalizes prior online RLHF methods as special cases. Compared to state-of-the-art RLHF methods, SAIL delivers significant performance gains, with up to 11.6\% improvement in win rate and a 3.6-point increase in evaluation rewards, while maintaining low computational overhead.
[ "RLHF", "Alignment", "Online Alignment", "Self-Play" ]
https://openreview.net/pdf?id=02kZwCo0C3
https://openreview.net/forum?id=02kZwCo0C3
uFzhESKso6
official_comment
1,732,371,694,970
d6EdV2MJYf
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer ZoUS (1/3) comment:
> Limited Exploration of Alternative Utility Functions: The method relies on the Bradley-Terry preference model, which may not be optimal for all RLHF applications. Future work could benefit from exploring alternative utility models that account for more nuanced preference data. SAIL currently relies on the Bradley-Terry preference model. Have you considered experimenting with other preference models, and do you anticipate any impact on alignment performance if different utility functions are used?

**Response:** Thank you for this insightful comment. Indeed, our current method relies on the Bradley-Terry (BT) preference model, and exploring alternative preference models is an exciting direction for future work. Since this is one of the initial works establishing a rigorous foundation for iterative RLHF, we focused on fundamental methods to clearly convey the core idea of Bilevel RLHF.

Our work reveals a crucial insight: the BT preference model plays a critical role in ensuring strong concavity of the lower-level problem within our bilevel optimization framework. This mathematical property enables us to derive a closed-form solution, which is key to simplifying the bilevel problem into single-level optimization using the DPO trick. However, this approach may not readily extend to more complex or non-convex preference models, as they could introduce additional optimization challenges.

We agree that extending the framework to accommodate alternative utility functions, particularly those capable of capturing more nuanced or domain-specific preferences, is a valuable research direction. Exploring these extensions could uncover interesting trade-offs between expressiveness, computational feasibility, and alignment performance, and we plan to address this in future work.

> Scalability Concerns for Larger Models: Although the paper demonstrates SAIL’s effectiveness on LLMs with up to 8B parameters, additional scaling experiments would strengthen the paper's claims about computational efficiency for significantly larger models. The paper demonstrates SAIL's efficiency with models up to 8B parameters. Could you share any considerations or expected challenges for scaling SAIL to significantly larger models, such as those with over 100B parameters?

**Response:** Thank you for this insightful question regarding the scalability of SAIL to larger models exceeding 100B parameters. We would like to share our considerations and expected challenges:

1. **Primary Overhead Sources:** For the main SAIL methods—**SAIL-PP** and **SAIL-PR**—the major overhead compared to standard DPO comes from response generation and reward evaluation. The additional gradient terms computed (as per Equations (9) and (13)) are low-dimensional relative to the model parameters or inputs. This results in minimal time and memory overhead, even for models with over 100B parameters.

2. **Challenges Similar to Online RLHF Training:** Scaling SAIL to larger models involves challenges common to most online RLHF training methods. To achieve computational efficiency and enable training on machines with limited resources, we recommend using **Parameter-Efficient Fine-Tuning (PEFT)** techniques not only for training but also during generation, as we have implemented in our code.

3. **Technical Considerations:** There may be additional overhead when switching between training and generation modes, as well as interfacing with the reward model.
Utilizing an optimized training framework that minimizes these overheads is crucial. Our current implementation adapts TRL's `DPOTrainer`, but it is not fully optimized or tested for models larger than 100B parameters. Further optimization is needed to handle the increased scale effectively. We believe that with these considerations and optimizations, SAIL can be effectively scaled to significantly larger models while maintaining computational efficiency.
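As a rough illustration of the kind of setup point 2 refers to (PEFT adapters reused for both training and generation), here is a hypothetical TRL + LoRA configuration. It is not SAIL's implementation: the model and dataset names are placeholders, and argument names such as `processing_class` may differ across TRL releases.

```python
# Hypothetical DPO + LoRA setup; not SAIL's code. Model/dataset are placeholders.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2-0.5B-Instruct"        # small placeholder model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# LoRA adapters keep the trainable parameter count small, which is what makes
# alternating between generation and training feasible on limited hardware.
peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-lora", per_device_train_batch_size=2),
    train_dataset=train_dataset,
    processing_class=tokenizer,
    peft_config=peft_config,   # with PEFT, the frozen base acts as the reference model
)
trainer.train()
```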
sLPl1rDy74
official_comment
1,732,395,397,144
Yqbllggrmw
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response Status Update to Reviewer 7i95 comment: Thank you for your detailed comments. We are currently running the requested experiments and will post our complete responses with results soon. We appreciate your patience.
r0xTzOrbHO
official_comment
1,732,752,325,077
ID6MmtL62c
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Authors" ]
ICLR.cc/2025/Conference
2025
title: Looking Forward to Your Review of Our Responses comment: Thank you so much for your insightful and constructive feedback on our work. We have provided detailed responses to your valuable comments, including new experimental results on AlpacaEval 2.0 length-controlled win-rates with additional model architectures, as well as the requested Arena-Hard benchmark and ARC-Challenge evaluations. As we are nearing the end of the author-reviewer discussion period, we would be very grateful if you could take a moment to review our responses. We truly value your expertise and would welcome any additional thoughts or questions you may have. We are here to address any remaining concerns and continue this productive discussion. Thank you again for your time and dedication in helping us improve our work.
ltMqz7hL5U
official_comment
1,732,618,365,347
1QaKMNvqWa
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Reviewer_urgR" ]
ICLR.cc/2025/Conference
2025
comment: Thank you for addressing my questions. I have no further inquiries and will maintain my current rating.
d6EdV2MJYf
official_review
1,730,490,911,716
02kZwCo0C3
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Reviewer_ZoUS" ]
ICLR.cc/2025/Conference
2025
summary: The paper introduces SAIL (Self-improving Efficient Online Alignment), an approach for online reinforcement learning from human feedback (RLHF) that aims to align large language models (LLMs) with human preferences. SAIL addresses limitations in offline RLHF methods by framing online LLM alignment as a bilevel optimization problem, which it reduces to a single-level first-order optimization method to enhance computational efficiency. The approach allows for continuous model improvement by generating samples iteratively, regulating preferences, and exploring online feedback. SAIL's self-improvement mechanism enables it to reduce reliance on preference oracles, thus allowing for more scalable alignment. Empirical evaluations demonstrate significant performance improvements over standard RLHF baselines. soundness: 4 presentation: 4 contribution: 3 strengths: 1. **Innovative Formulation**: The paper provides a novel formulation of online RLHF through bilevel optimization, enhancing computational efficiency by reducing this problem to a single-level optimization, which is a significant advancement for practical LLM training. 2. **Effective Self-improvement Mechanism**: SAIL effectively addresses challenges related to reliance on preference oracles, making online alignment more feasible by leveraging the model's self-generated responses for iterative improvement. 3. **Comprehensive Evaluation**: The paper includes extensive experiments that demonstrate substantial improvements in evaluation reward, win rate, and efficiency over other methods like DPO, supporting SAIL's efficacy and computational advantage. 4. **Scalability and Adaptability**: SAIL’s approach to handling distribution shifts and reducing oracle reliance presents a promising method for more scalable RLHF applications, especially for emerging large-scale LLMs. 5. **Detailed Experiment Design and Baselines**: The experiment section is well-structured, covering a range of metrics (reward-margin, eval-reward, win rate) and configurations (SAIL-PR, SAIL-PP, SAIL-DP), providing insights into the trade-offs and performance across different setups. weaknesses: 1. **Limited Exploration of Alternative Utility Functions**: The method relies on the Bradley-Terry preference model, which may not be optimal for all RLHF applications. Future work could benefit from exploring alternative utility models that account for more nuanced preference data. 2. **Scalability Concerns for Larger Models**: Although the paper demonstrates SAIL’s effectiveness on LLMs with up to 8B parameters, additional scaling experiments would strengthen the paper's claims about computational efficiency for significantly larger models. 3. **Dependency on Initial Offline Dataset**: While SAIL reduces oracle dependency, it still relies on an initial offline dataset to bootstrap alignment. Further discussion on managing this dependency, especially when starting with limited labeled data, could be beneficial. 4. **Potential Overfitting in SAIL-DP**: The paper mentions that SAIL-DP shows signs of overfitting on in-distribution responses, suggesting that the method may benefit from more refined regularization techniques to ensure robust generalization to out-of-distribution samples. questions: 1. The paper demonstrates SAIL's efficiency with models up to 8B parameters. Could you share any considerations or expected challenges for scaling SAIL to significantly larger models, such as those with over 100B parameters? 2. 
SAIL currently relies on the Bradley-Terry preference model. Have you considered experimenting with other preference models, and do you anticipate any impact on alignment performance if different utility functions are used? 3. SAIL-DP seems to show some overfitting on in-distribution responses. Could you discuss any regularization techniques you considered or plans to mitigate this, particularly to enhance generalization to out-of-distribution data? 4. Given the dependence on an initial offline dataset, how does SAIL perform in situations with minimal or noisy initial data? Are there strategies within the current framework to mitigate issues arising from a limited initial dataset? 5. Could you provide more detail on the computational costs of SAIL, particularly in comparison with other RLHF approaches? How does the single-level optimization approach compare in terms of resource requirements, and what practical considerations should be kept in mind when implementing it? flag_for_ethics_review: ['No ethics review needed.'] rating: 8 confidence: 4 code_of_conduct: Yes
02kZwCo0C3
SAIL: Self-improving Efficient Online Alignment of Large Language Models
[]
Reinforcement Learning from Human Feedback (RLHF) is a critical method for aligning large language models (LLMs) with human preferences. However, existing offline alignment approaches, such as DPO, IPO, and SLiC, rely heavily on static datasets of human preferences, often leading to suboptimal performance. Recent efforts in the literature have moved towards online RLHF methods, but they lack a unified framework and suffer from distribution shift issues. In this work, we formalize online LLM alignment as a bilevel optimization problem. By reducing this formulation to a more computationally efficient single-level first-order method, utilizing reward-policy equivalence, we propose SAIL (Self-improving Efficient Online Alignment).SAIL generates new samples and iteratively refines model alignment through online exploration and regulation of preference labels. This enables continuous, self-improving alignment and generalizes prior online RLHF methods as special cases. Compared to state-of-the-art RLHF methods, SAIL delivers significant performance gains, with up to 11.6\% improvement in win rate and a 3.6-point increase in evaluation rewards, while maintaining low computational overhead.
[ "RLHF", "Alignment", "Online Alignment", "Self-Play" ]
https://openreview.net/pdf?id=02kZwCo0C3
https://openreview.net/forum?id=02kZwCo0C3
ZwEcy0FU5x
official_comment
1,732,371,368,955
BU6la6v4Ci
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer Rdtx (1/2) comment: > As a practitioner, at least the presentation/writing wasn't clear enough to agree that SAIL provides a unified framework for those who might want to consider using online RLHF in future works. I would personally suggest adding a section explains about how one could use SAIL instead of iterative DPO methods, as well as a huge emphasis on how the provided code could be used. **Response:** Thank you for this valuable suggestion. We will enhance the manuscript by adding a paragraph that addresses the limitations of current online iterative RLHF methods. In the final draft, we will expand upon the following points to better articulate the significance of SAIL: - We will emphasize that iterative methods fail to account for interdependencies during the reward learning phase, specifically the dependence on policy-generated trajectories, which results in distribution shift. - To address these dependencies in a principled manner, we demonstrate the necessity of reformulating the alignment problem as a bilevel optimization problem, as expressed in equation (3). - However, bilevel optimization presents significant computational challenges because it requires complex second-order information, making it computationally intensive. - To overcome this, we leverage RLHF's special structure and the closed-form solution of the KL-regularized problem to transform it into a single-level problem without compromising generality, leading to our proposed SAIL approach. - Finally, we develop a self-improvement mechanism that replaces the human-in-the-loop component by utilizing the implicit reward function as defined in equation (11). > There is a huge emphasis on trying to improve reward models (on RewardBench) to mitigated reward model overoptimization & train better LMs. I am curious if given a fixed budget/time limit, whether one should try to employ online RLHF methods or try to enhance reward models in general. **Response:** Thank you for raising this insightful point. Indeed, there has been significant emphasis on improving reward models (through initiatives such as RewardBench and newer VLM reward benchmarks, e.g., from AllenAI), which has successfully addressed certain issues such as length bias. While we acknowledge the value of this approach in addressing specific challenges, we believe the underlying issue is more fundamental and encompasses response quality more broadly. The effectiveness of reward models is intrinsically dependent on training with optimal or high-quality response pairs. However, this presents a significant challenge, as it necessitates training on an extensive corpus of responses to ensure comprehensive coverage. Our proposed bilevel optimization framework addresses this challenge by providing an efficient mechanism for concurrent training of the reward model and policy. This approach enables dynamic collection of task-relevant response pairs, resulting in more targeted and effective training. > I would suggest adding an explanation of what is the limitation of online RLHF methods that the paper could not address. For example, it is still unclear on what is the best practice to "whether to discard instances from a preference dataset that have a subtle difference on the preference strength" or "would it be beneficial to employ more models when gathering responses when consisting a preference dataset". **Response:** Thank you for this valuable suggestion regarding the limitations of online RLHF methods. 
We will include a comprehensive discussion of these limitations in the revised manuscript. Our theoretical insights and experimental analysis reveal an important finding: preference datasets containing diverse responses yield more informative gradients, which are essential for effective model updates. Conversely, responses with only subtle differences in preference strength generate minimal gradients, resulting in negligible improvements. Our work leaves several promising directions unexplored. One particularly intriguing possibility is the development of a curriculum-based approach that initially leverages diverse responses and progressively incorporates responses with closer preference values. Such an approach could optimize the learning process by capitalizing on response diversity in early stages while refining alignment as the model converges. This aligns with the natural progression we observe in model training, where response similarity tends to increase as the model approaches convergence, particularly in scenarios with low uncertainty in optimal response generation. This area represents a promising avenue for future research.
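To make the self-improvement mechanism from our first point above more concrete, below is a minimal, illustrative sketch (not our exact implementation; all helper and variable names are placeholders, and it assumes Hugging Face-style causal LMs that expose `.logits`) of how a DPO-style implicit reward can be used to rank two candidate responses without querying a preference oracle:

```python
import torch

def sequence_logprob(model, input_ids, attention_mask, labels):
    # Sum of token log-probabilities of the response under `model`.
    # Prompt positions are assumed to be masked with -100 in `labels`.
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits[:, :-1]
    labels = labels[:, 1:]
    mask = labels != -100
    token_logps = torch.log_softmax(logits, dim=-1).gather(
        -1, labels.clamp(min=0).unsqueeze(-1)).squeeze(-1)
    return (token_logps * mask).sum(-1)

@torch.no_grad()
def self_label(policy, ref_policy, batch_a, batch_b, beta=0.1):
    # Implicit reward r(x, y) = beta * [log pi(y|x) - log pi_ref(y|x)];
    # the response with the higher implicit reward is treated as "chosen".
    r_a = beta * (sequence_logprob(policy, **batch_a) - sequence_logprob(ref_policy, **batch_a))
    r_b = beta * (sequence_logprob(policy, **batch_b) - sequence_logprob(ref_policy, **batch_b))
    return r_a >= r_b  # True where response A is preferred over response B
```

In the setups that do not query the reward oracle, self-generated labels of this kind stand in for the preference feedback during the online updates.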
02kZwCo0C3
SAIL: Self-improving Efficient Online Alignment of Large Language Models
[]
Reinforcement Learning from Human Feedback (RLHF) is a critical method for aligning large language models (LLMs) with human preferences. However, existing offline alignment approaches, such as DPO, IPO, and SLiC, rely heavily on static datasets of human preferences, often leading to suboptimal performance. Recent efforts in the literature have moved towards online RLHF methods, but they lack a unified framework and suffer from distribution shift issues. In this work, we formalize online LLM alignment as a bilevel optimization problem. By reducing this formulation to a more computationally efficient single-level first-order method, utilizing reward-policy equivalence, we propose SAIL (Self-improving Efficient Online Alignment).SAIL generates new samples and iteratively refines model alignment through online exploration and regulation of preference labels. This enables continuous, self-improving alignment and generalizes prior online RLHF methods as special cases. Compared to state-of-the-art RLHF methods, SAIL delivers significant performance gains, with up to 11.6\% improvement in win rate and a 3.6-point increase in evaluation rewards, while maintaining low computational overhead.
[ "RLHF", "Alignment", "Online Alignment", "Self-Play" ]
https://openreview.net/pdf?id=02kZwCo0C3
https://openreview.net/forum?id=02kZwCo0C3
Yqbllggrmw
official_review
1,729,772,623,071
02kZwCo0C3
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Reviewer_7i95" ]
ICLR.cc/2025/Conference
2025
summary: The paper addresses the limitations of traditional reinforcement learning from human feedback (RLHF) methods for aligning large language models (LLMs) with human preferences. The authors propose a unified framework for online RLHF formulated as a bilevel optimization problem, which they simplify to a single-level method for efficiency. This approach, called SAIL, allows for continuous model improvement through online exploration and iterative refinement of preference labels, mitigating issues related to distribution shifts and reducing reliance on static preference oracles. Experimental results demonstrate significant performance gains, with SAIL outperforming state-of-the-art RLHF methods. soundness: 3 presentation: 2 contribution: 3 strengths: (1) The paper introduces a novel unified framework for online RLHF that effectively addresses the challenges of static datasets and distribution shifts. (2) By reducing a bilevel optimization problem to a single-level method, SAIL maintains theoretical benefits while significantly lowering computational costs, making it more practical for real-world applications. (3) The self-improving aspect of SAIL allows models to iteratively enhance alignment without extensive supervision, addressing the challenge of needing constant access to human preference data. (4) Extensive experiments validate the effectiveness of SAIL, showing substantial improvements in performance metrics compared to existing methods, thus showcasing its applicability across various datasets. I would consider rescoring if the authors can address my concerns. weaknesses: (1) The method does not improve much in the AlpacaEval 2.0 Score. The author should give a detailed explanation. And why not use metrics like length-controlled win rate? (2) Authors should compare more advanced preference optimization algorithms like ORPO and SimPO. And current results are not impressive for the alignment community. (3) Why did the author just include MMLU as the downstream task metric? They should incorporate more tasks (e.g., ARC-Challenge) like the similar self-improvement work SPIN (ICML24) to better illustrate their contribution. (4) In the alignment area, it's better to conduct experiments on the Arena-Hard benchmark since it's a common benchmark for evaluating alignment ability. questions: See the weaknesses section. flag_for_ethics_review: ['No ethics review needed.'] rating: 3 confidence: 4 code_of_conduct: Yes
02kZwCo0C3
SAIL: Self-improving Efficient Online Alignment of Large Language Models
[]
Reinforcement Learning from Human Feedback (RLHF) is a critical method for aligning large language models (LLMs) with human preferences. However, existing offline alignment approaches, such as DPO, IPO, and SLiC, rely heavily on static datasets of human preferences, often leading to suboptimal performance. Recent efforts in the literature have moved towards online RLHF methods, but they lack a unified framework and suffer from distribution shift issues. In this work, we formalize online LLM alignment as a bilevel optimization problem. By reducing this formulation to a more computationally efficient single-level first-order method, utilizing reward-policy equivalence, we propose SAIL (Self-improving Efficient Online Alignment).SAIL generates new samples and iteratively refines model alignment through online exploration and regulation of preference labels. This enables continuous, self-improving alignment and generalizes prior online RLHF methods as special cases. Compared to state-of-the-art RLHF methods, SAIL delivers significant performance gains, with up to 11.6\% improvement in win rate and a 3.6-point increase in evaluation rewards, while maintaining low computational overhead.
[ "RLHF", "Alignment", "Online Alignment", "Self-Play" ]
https://openreview.net/pdf?id=02kZwCo0C3
https://openreview.net/forum?id=02kZwCo0C3
YBF0htDOcP
official_comment
1,732,372,662,980
01R8mdOaXU
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Authors" ]
ICLR.cc/2025/Conference
2025
title: Status Update on Response Progress comment: We have now posted detailed responses to questions that do not heavily depend on experimental validation. We are diligently working on the remaining experimental evaluations and will provide comprehensive results, along with any necessary response updates, in the coming days. We sincerely appreciate your thoughtful feedback and understanding as we work to thoroughly address all comments and strengthen our paper.
02kZwCo0C3
SAIL: Self-improving Efficient Online Alignment of Large Language Models
[]
Reinforcement Learning from Human Feedback (RLHF) is a critical method for aligning large language models (LLMs) with human preferences. However, existing offline alignment approaches, such as DPO, IPO, and SLiC, rely heavily on static datasets of human preferences, often leading to suboptimal performance. Recent efforts in the literature have moved towards online RLHF methods, but they lack a unified framework and suffer from distribution shift issues. In this work, we formalize online LLM alignment as a bilevel optimization problem. By reducing this formulation to a more computationally efficient single-level first-order method, utilizing reward-policy equivalence, we propose SAIL (Self-improving Efficient Online Alignment).SAIL generates new samples and iteratively refines model alignment through online exploration and regulation of preference labels. This enables continuous, self-improving alignment and generalizes prior online RLHF methods as special cases. Compared to state-of-the-art RLHF methods, SAIL delivers significant performance gains, with up to 11.6\% improvement in win rate and a 3.6-point increase in evaluation rewards, while maintaining low computational overhead.
[ "RLHF", "Alignment", "Online Alignment", "Self-Play" ]
https://openreview.net/pdf?id=02kZwCo0C3
https://openreview.net/forum?id=02kZwCo0C3
QM5flvOQTS
official_comment
1,733,083,173,316
r0xTzOrbHO
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Authors" ]
ICLR.cc/2025/Conference
2025
title: Time-Critical: Your Review of Our Responses Would Be Greatly Appreciated comment: We are nearing the end of the discussion period, and we wanted to reach out once more about our detailed responses to your insightful comments. We greatly value your thorough review and have worked diligently to address each of your concerns, including conducting additional experiments on AlpacaEval 2.0, Arena-Hard benchmark, and ARC-Challenge as per your suggestions. Your expertise and perspective have been crucial in strengthening our work, and we would deeply appreciate if you could take a moment to review our detailed responses before the discussion period ends.
02kZwCo0C3
SAIL: Self-improving Efficient Online Alignment of Large Language Models
[]
Reinforcement Learning from Human Feedback (RLHF) is a critical method for aligning large language models (LLMs) with human preferences. However, existing offline alignment approaches, such as DPO, IPO, and SLiC, rely heavily on static datasets of human preferences, often leading to suboptimal performance. Recent efforts in the literature have moved towards online RLHF methods, but they lack a unified framework and suffer from distribution shift issues. In this work, we formalize online LLM alignment as a bilevel optimization problem. By reducing this formulation to a more computationally efficient single-level first-order method, utilizing reward-policy equivalence, we propose SAIL (Self-improving Efficient Online Alignment).SAIL generates new samples and iteratively refines model alignment through online exploration and regulation of preference labels. This enables continuous, self-improving alignment and generalizes prior online RLHF methods as special cases. Compared to state-of-the-art RLHF methods, SAIL delivers significant performance gains, with up to 11.6\% improvement in win rate and a 3.6-point increase in evaluation rewards, while maintaining low computational overhead.
[ "RLHF", "Alignment", "Online Alignment", "Self-Play" ]
https://openreview.net/pdf?id=02kZwCo0C3
https://openreview.net/forum?id=02kZwCo0C3
OQRvgef8Aj
official_comment
1,732,371,761,120
uFzhESKso6
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer ZoUS (2/3) comment: > Dependency on Initial Offline Dataset: While SAIL reduces oracle dependency, it still relies on an initial offline dataset to bootstrap alignment. Further discussion on managing this dependency, especially when starting with limited labeled data, could be beneficial. Given the dependence on an initial offline dataset, how does SAIL perform in situations with minimal or noisy initial data? Are there strategies within the current framework to mitigate issues arising from a limited initial dataset? **Response:** Thank you for bringing up this important consideration. While SAIL does depend on an initial offline dataset to bootstrap alignment, it requires less initial data compared to standard DPO. This is because SAIL is designed to address the suboptimality issues of offline alignment methods and to be more efficient than exact bilevel formulations. In situations with minimal or noisy initial data, SAIL is better suited than standard DPO. Its reduced dependency on large amounts of high-quality data makes it more practical when starting with limited labeled data. Although mitigating issues from limited initial datasets isn't the primary motivation of our framework, this advantage allows SAIL to perform effectively even when the available data is minimal. > Potential Overfitting in SAIL-DP: The paper mentions that SAIL-DP shows signs of overfitting on in-distribution responses, suggesting that the method may benefit from more refined regularization techniques to ensure robust generalization to out-of-distribution samples. SAIL-DP seems to show some overfitting on in-distribution responses. Could you discuss any regularization techniques you considered or plans to mitigate this, particularly to enhance generalization to out-of-distribution data? **Response:** Thank you for this insightful question. SAIL-DP does show signs of overfitting on in-distribution responses, as it significantly improves the Reward Margin but doesn't necessarily enhance metrics like the MT-Bench score. We hypothesize that this is due to the lack of exposure to out-of-distribution responses and offline rewards, which limits the model's ability to generalize. To mitigate this and enhance generalization to out-of-distribution data, we suggest the following strategies: - **Incorporate Out-of-Distribution Data:** Adding offline rewards and out-of-distribution responses to the training data can help the model learn a more generalized policy. This approach is employed in our SAIL-PR and SAIL-PP setups. - **Regularization Techniques:** - Data Augmentation: Augment the offline dataset by rewriting responses using other large language models (LLMs) to introduce more diversity. - Label Smoothing: Apply label smoothing techniques, such as those proposed in cDPO (Mitchell et al., 2023), to reduce overconfidence in the model and mitigate the impact of noisy preference labels. These strategies can help address the overfitting issue in SAIL-DP and improve its generalization to out-of-distribution samples.
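As a concrete illustration of the label-smoothing option mentioned above, here is a minimal sketch of a cDPO-style smoothed objective (illustrative only, not our implementation; `eps` denotes the assumed label-noise rate, and the per-response log-probabilities are assumed to be precomputed):

```python
import torch.nn.functional as F

def smoothed_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                      ref_chosen_logps, ref_rejected_logps,
                      beta=0.1, eps=0.1):
    # cDPO-style smoothing: with probability `eps` the preference label is
    # assumed to be flipped, which tempers over-confident updates on noisy
    # or self-generated preference pairs.
    logits = beta * ((policy_chosen_logps - ref_chosen_logps)
                     - (policy_rejected_logps - ref_rejected_logps))
    loss = -(1.0 - eps) * F.logsigmoid(logits) - eps * F.logsigmoid(-logits)
    return loss.mean()
```

Setting `eps = 0` recovers the standard DPO objective, so this regularizer could be toggled without changing the rest of the training loop.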
02kZwCo0C3
SAIL: Self-improving Efficient Online Alignment of Large Language Models
[]
Reinforcement Learning from Human Feedback (RLHF) is a critical method for aligning large language models (LLMs) with human preferences. However, existing offline alignment approaches, such as DPO, IPO, and SLiC, rely heavily on static datasets of human preferences, often leading to suboptimal performance. Recent efforts in the literature have moved towards online RLHF methods, but they lack a unified framework and suffer from distribution shift issues. In this work, we formalize online LLM alignment as a bilevel optimization problem. By reducing this formulation to a more computationally efficient single-level first-order method, utilizing reward-policy equivalence, we propose SAIL (Self-improving Efficient Online Alignment).SAIL generates new samples and iteratively refines model alignment through online exploration and regulation of preference labels. This enables continuous, self-improving alignment and generalizes prior online RLHF methods as special cases. Compared to state-of-the-art RLHF methods, SAIL delivers significant performance gains, with up to 11.6\% improvement in win rate and a 3.6-point increase in evaluation rewards, while maintaining low computational overhead.
[ "RLHF", "Alignment", "Online Alignment", "Self-Play" ]
https://openreview.net/pdf?id=02kZwCo0C3
https://openreview.net/forum?id=02kZwCo0C3
LptcsYSp94
official_comment
1,732,371,796,553
OQRvgef8Aj
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer ZoUS (3/3) comment: > Could you provide more detail on the computational costs of SAIL, particularly in comparison with other RLHF approaches? How does the single-level optimization approach compare in terms of resource requirements, and what practical considerations should be kept in mind when implementing it? **Response:** Thank you for your question regarding the computational costs of SAIL compared to other RLHF approaches. Here are our insights: 1. **Overhead Comparison with Offline DPO:** SAIL introduces no additional overhead during the model update phase compared to offline DPO. The primary overhead stems from its online nature—specifically, response generation and reward evaluation. 2. **Detailed Overheads of SAIL Variants:** As illustrated in Figure 5 of our paper, the overheads for the three SAIL setups vary: - **SAIL-DP:** This variant incurs minimal overhead, mainly from computing additional gradient terms during backpropagation. - **SAIL-PP:** In addition to the overhead in SAIL-DP, SAIL-PP includes significant overhead from generating online responses. - **SAIL-PR:** Beyond the overheads in SAIL-PP, SAIL-PR also involves overhead from reward evaluation. By comparing the overheads of each setup, one can estimate the contribution of each component to the overall computational cost. 3. **Resource Requirements and Practical Considerations:** Similar to other online RLHF methods, implementing SAIL requires careful management of memory resources due to the extra memory needed for online response generation and reward model evaluation. To optimize training speed, it's preferable to load all necessary models and caches into memory simultaneously to avoid the time overhead associated with frequent loading and unloading. Therefore, systems with larger memory capacity are advantageous for running SAIL efficiently. 4. **Implementation Guidance:** Our code provides an example implementation based on the TRL package's `DPOTrainer`. While it may not represent state-of-the-art optimization, it serves as a practical starting point. Researchers can build upon this and explore additional optimization strategies to further reduce computational costs when applying SAIL to larger models. We hope this clarifies the computational considerations and practical aspects of implementing SAIL compared to other RLHF approaches.
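For readers who wish to reproduce an overhead breakdown like the one in Figure 5, the following is a schematic sketch of how the three cost sources can be timed separately; `generate_responses`, `evaluate_reward`, and `update_step` are placeholder names standing in for the corresponding stages of the training loop, not actual SAIL or TRL APIs:

```python
import time
from collections import defaultdict
from contextlib import contextmanager

timings = defaultdict(float)

@contextmanager
def timed(stage):
    start = time.perf_counter()
    yield
    timings[stage] += time.perf_counter() - start

# Placeholders for the real stages of an online-RLHF training loop.
def generate_responses(batch): ...
def evaluate_reward(batch): ...
def update_step(batch): ...

for batch in range(100):  # stand-in for iterating over the dataloader
    with timed("response_generation"):
        generate_responses(batch)
    with timed("reward_evaluation"):
        evaluate_reward(batch)
    with timed("model_update"):
        update_step(batch)

total = sum(timings.values()) or 1.0
for stage, secondsds in ():
    pass  # (see loop below)
for stage, seconds in timings.items():
    print(f"{stage:>20}: {seconds:8.2f}s ({100.0 * seconds / total:5.1f}%)")
```

Comparing the per-stage totals across SAIL-DP, SAIL-PP, and SAIL-PR makes it easy to see that the extra gradient terms are cheap, while online generation and reward evaluation dominate the added cost.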
02kZwCo0C3
SAIL: Self-improving Efficient Online Alignment of Large Language Models
[]
Reinforcement Learning from Human Feedback (RLHF) is a critical method for aligning large language models (LLMs) with human preferences. However, existing offline alignment approaches, such as DPO, IPO, and SLiC, rely heavily on static datasets of human preferences, often leading to suboptimal performance. Recent efforts in the literature have moved towards online RLHF methods, but they lack a unified framework and suffer from distribution shift issues. In this work, we formalize online LLM alignment as a bilevel optimization problem. By reducing this formulation to a more computationally efficient single-level first-order method, utilizing reward-policy equivalence, we propose SAIL (Self-improving Efficient Online Alignment).SAIL generates new samples and iteratively refines model alignment through online exploration and regulation of preference labels. This enables continuous, self-improving alignment and generalizes prior online RLHF methods as special cases. Compared to state-of-the-art RLHF methods, SAIL delivers significant performance gains, with up to 11.6\% improvement in win rate and a 3.6-point increase in evaluation rewards, while maintaining low computational overhead.
[ "RLHF", "Alignment", "Online Alignment", "Self-Play" ]
https://openreview.net/pdf?id=02kZwCo0C3
https://openreview.net/forum?id=02kZwCo0C3
ID6MmtL62c
official_comment
1,732,634,885,594
Yqbllggrmw
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer 7i95 (2/2) comment: > Why did the author just include MMLU as the downstream task metric? They should incorporate more tasks (e.g., ARC-Challenge) like the similar self-improvement work SPIN (ICML24) to better illustrate their contribution.

**Response:** Thank you for your suggestion. Let us first explain why we did not include many different downstream evaluation datasets in our experiments. One reason is that we already report 5 other metrics, including the widely used MT-Bench scores and AlpacaEval 2.0 length-controlled win-rates. Another reason is that the UltraFeedback fine-tuning dataset we used is primarily designed around 4 aspects, namely instruction-following, truthfulness, honesty, and helpfulness, and therefore may not be very useful for improving the model's capability on reasoning benchmarks such as MMLU and ARC-Challenge. Nevertheless, we agree that adding more evaluation datasets would strengthen our experimental analysis. Following the reviewer's suggestion, we added ARC-Challenge to the evaluation and re-ran the experiments with Llama-3 (8B). We see a similar pattern as on MMLU: the SAIL methods bring larger improvements than the DPO baseline.

| | Instr-Tuned | DPO | SAIL-PR | SAIL-PP | SAIL-DP |
|---------------|-------------|-------|---------|---------|---------|
| ARC-Challenge Accuracy | 82.2% | 82.8% | 84.1% | 83.6% | 83.4% |

The results show that our improvements are larger than those of the DPO baseline, although the baseline improvement is small.

> In the alignment area, it's better to conduct experiments in the Arena-Hard benchmark since it's a common metric to evaluate the alignment ability.

**Response:** Thank you for your suggestion. We agree that the Arena-Hard benchmark has recently become widely used. We use the Arena-Hard-Auto repository and adapt their newly introduced Style Control (SC) method, which follows a recent update of Chatbot Arena. Following the reviewer's suggestion, we added the Arena-Hard benchmark to the evaluation and re-ran the experiments on Llama-3 (8B). The observation is similar to that on MT-Bench: the SAIL methods lead to significantly larger improvements than the DPO baseline. We plan to add Arena-Hard evaluations to the other experiments in the manuscript soon.

| | Instr-Tuned | DPO | SAIL-PR | SAIL-PP | SAIL-DP |
|-------------------------------------|-------------|------|---------|---------|---------|
| Arena-Hard Score (Style Controlled) | 19.8 | 23.8 | 29.4 | 26.8 | 24.9 |
02kZwCo0C3
SAIL: Self-improving Efficient Online Alignment of Large Language Models
[]
Reinforcement Learning from Human Feedback (RLHF) is a critical method for aligning large language models (LLMs) with human preferences. However, existing offline alignment approaches, such as DPO, IPO, and SLiC, rely heavily on static datasets of human preferences, often leading to suboptimal performance. Recent efforts in the literature have moved towards online RLHF methods, but they lack a unified framework and suffer from distribution shift issues. In this work, we formalize online LLM alignment as a bilevel optimization problem. By reducing this formulation to a more computationally efficient single-level first-order method, utilizing reward-policy equivalence, we propose SAIL (Self-improving Efficient Online Alignment).SAIL generates new samples and iteratively refines model alignment through online exploration and regulation of preference labels. This enables continuous, self-improving alignment and generalizes prior online RLHF methods as special cases. Compared to state-of-the-art RLHF methods, SAIL delivers significant performance gains, with up to 11.6\% improvement in win rate and a 3.6-point increase in evaluation rewards, while maintaining low computational overhead.
[ "RLHF", "Alignment", "Online Alignment", "Self-Play" ]
https://openreview.net/pdf?id=02kZwCo0C3
https://openreview.net/forum?id=02kZwCo0C3
FcLVLsBQIy
official_comment
1,732,371,453,551
ZwEcy0FU5x
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer Rdtx (2/2) comment: > Reward margin and offline-reward evaluation is interesting by itself and could provide information of the effectiveness of the method, but I personally think is not as an important measurement as pairwise winrate. Could you elaborate on Section 6.1 why one should consider looking into it? **Response:** Thank you for this thoughtful feedback. While we agree that pairwise win rate represents a critical metric for response quality evaluation, reward margin and offline-reward evaluation contribute significant additional value for the following reasons: - These metrics enable quantitative comparisons between our method and baselines, demonstrating the effectiveness of our RLHF algorithm. Our evaluation utilizes high-quality offline reward models provided by the dataset authors, ensuring consistent evaluation standards. - Although we acknowledge the limitations inherent in using a static reward model, these metrics complement the pairwise win rate and other evaluations such as MT-Bench and MMLU. This multi-faceted approach provides a more comprehensive assessment of model performance. We think this combination of metrics offers a more complete understanding of our method's capabilities and limitations.
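To clarify how reward-margin and eval-reward style metrics can be computed in practice, here is a minimal sketch of scoring a response pair with a static reward model (illustrative only; the checkpoint path is a placeholder, the model is assumed to output a single scalar, and the actual offline reward models may expect a chat-template-formatted input):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

RM_NAME = "path/to/offline-reward-model"  # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(RM_NAME)
reward_model = AutoModelForSequenceClassification.from_pretrained(RM_NAME).eval()

@torch.no_grad()
def reward(prompt, response):
    # Scalar reward assigned by the static (offline) reward model.
    inputs = tokenizer(prompt, response, return_tensors="pt", truncation=True)
    return reward_model(**inputs).logits.squeeze().item()

def reward_margin(prompt, chosen, rejected):
    # Gap between the chosen and rejected responses of a preference pair;
    # eval-reward applies reward() to the model's own generated response.
    return reward(prompt, chosen) - reward(prompt, rejected)
```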
02kZwCo0C3
SAIL: Self-improving Efficient Online Alignment of Large Language Models
[]
Reinforcement Learning from Human Feedback (RLHF) is a critical method for aligning large language models (LLMs) with human preferences. However, existing offline alignment approaches, such as DPO, IPO, and SLiC, rely heavily on static datasets of human preferences, often leading to suboptimal performance. Recent efforts in the literature have moved towards online RLHF methods, but they lack a unified framework and suffer from distribution shift issues. In this work, we formalize online LLM alignment as a bilevel optimization problem. By reducing this formulation to a more computationally efficient single-level first-order method, utilizing reward-policy equivalence, we propose SAIL (Self-improving Efficient Online Alignment).SAIL generates new samples and iteratively refines model alignment through online exploration and regulation of preference labels. This enables continuous, self-improving alignment and generalizes prior online RLHF methods as special cases. Compared to state-of-the-art RLHF methods, SAIL delivers significant performance gains, with up to 11.6\% improvement in win rate and a 3.6-point increase in evaluation rewards, while maintaining low computational overhead.
[ "RLHF", "Alignment", "Online Alignment", "Self-Play" ]
https://openreview.net/pdf?id=02kZwCo0C3
https://openreview.net/forum?id=02kZwCo0C3
F9TjZCBBKB
official_comment
1,733,162,266,100
LptcsYSp94
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Reviewer_ZoUS" ]
ICLR.cc/2025/Conference
2025
title: Response to rebuttal comment: Thank you for your response and addressing my concerns. I have no further questions and will keep my current rating.
02kZwCo0C3
SAIL: Self-improving Efficient Online Alignment of Large Language Models
[]
Reinforcement Learning from Human Feedback (RLHF) is a critical method for aligning large language models (LLMs) with human preferences. However, existing offline alignment approaches, such as DPO, IPO, and SLiC, rely heavily on static datasets of human preferences, often leading to suboptimal performance. Recent efforts in the literature have moved towards online RLHF methods, but they lack a unified framework and suffer from distribution shift issues. In this work, we formalize online LLM alignment as a bilevel optimization problem. By reducing this formulation to a more computationally efficient single-level first-order method, utilizing reward-policy equivalence, we propose SAIL (Self-improving Efficient Online Alignment).SAIL generates new samples and iteratively refines model alignment through online exploration and regulation of preference labels. This enables continuous, self-improving alignment and generalizes prior online RLHF methods as special cases. Compared to state-of-the-art RLHF methods, SAIL delivers significant performance gains, with up to 11.6\% improvement in win rate and a 3.6-point increase in evaluation rewards, while maintaining low computational overhead.
[ "RLHF", "Alignment", "Online Alignment", "Self-Play" ]
https://openreview.net/pdf?id=02kZwCo0C3
https://openreview.net/forum?id=02kZwCo0C3
DHwZxFryth
official_comment
1,732,634,846,687
Yqbllggrmw
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer 7i95 (1/2) comment: > The method does not improve much in the AlpacaEval 2.0 Score. The author should give a detailed explanation. And why not use metrics like length-controlled win rate?

**Response:** Thank you for your careful observation and question. We would like to clarify that we are already using the length-controlled (LC) AlpacaEval 2.0 win-rate metric in our evaluations. We will make this clearer in the table header of Table 3. Regarding the fact that the AlpacaEval 2.0 scores on Llama-3 (8B) do not improve compared to the baselines, we believe this is because our base model, the instruction-finetuned Llama-3 (8B), is already trained to perform exceptionally well in terms of helpfulness, which is the focus of the AlpacaEval benchmark. Additionally, the preference dataset we used, UltraFeedback, may not provide significant further enhancement in the helpfulness aspect. This is supported by the slight decrease observed in the AlpacaEval score for the standard DPO baseline as well (see Table 3, results on Llama-3). Therefore, we think these AlpacaEval 2.0 results on Llama-3 (8B) may not indicate that SAIL is ineffective; they may simply be caused by an ill-suited combination of base model, finetuning dataset, and evaluation benchmark. We further conducted experiments with the Zephyr (7B) model as the backbone, whose initial AlpacaEval 2.0 win-rate is lower. We still train on the UltraFeedback preference dataset, and all other experimental settings are unchanged. In this experiment, we see a larger improvement of the SAIL method over the standard DPO baseline (Zephyr-7B-Beta).

| | AlpacaEval 2.0 (LC) Win-Rate |
|--------------------|------------------------------|
| Base (Zephyr-7B-SFT-Full) | 6.4 % |
| DPO (Zephyr-7B-Beta) | 13.2 % |
| SAIL-PP | 15.9 % |

> Authors should compare more advanced preference optimization algorithms like ORPO and SimPO. And current results are not impressive for the alignment community.

**Response:** Thank you for raising this insightful point. ORPO and SimPO are two recent works that propose objectives different from the standard RLHF formulation and achieve remarkable improvements in alignment performance and efficiency. Our work focuses more on casting standard RLHF as a bilevel optimization problem and proposing an effective and efficient approximate algorithm on top of it. Some new preference optimization methods, including ORPO and SimPO, have one fundamental difference from our approach: they do not explicitly incorporate the KL regularization term. The absence of the KL regularization term allows these methods to optimize more aggressively for the reward function by deviating significantly from the reference model. In contrast, our approach is specifically grounded in standard RLHF, where the KL regularization term ensures that the model remains aligned with the reference distribution while optimizing for the reward function. This distinction makes direct comparisons with ORPO or SimPO less meaningful theoretically, as those methods omit the KL regularization and adopt a fundamentally different optimization objective design. However, we believe our work, although developed within the standard RLHF setup, is compatible with and can be combined with recent advanced preference optimization algorithms despite their differences in optimization setups and objectives: their alignment problems can be reformulated as bilevel optimization, after which the derivation proceeds as in the paper. 
Taking SimPO as an example, we can treat their reward model definition (Equation (4) in the SimPO paper) as the solution of the upper-level optimization (replacing Equation (4) in our manuscript), and adopt their modified Bradley-Terry objective with a reward margin (Equation (5) in the SimPO paper) in place of the standard one (Equation (10) in our manuscript). By applying these changes and re-deriving the extra gradient terms, we can formulate an adaptation of our method to the SimPO objective. We will implement this combined algorithm, which adapts our methodology to the SimPO objective, and compare against SimPO as a baseline. Recently, many different alignment objectives and algorithms have emerged; it is an interesting question how our method can be combined with each of them. We will add more relevant discussions to the appendices, but because compatibility with each design is a non-trivial question, this process requires considerably more work, and we hope the reviewer understands that this effort cannot be fully reflected within the rebuttal period. We will nevertheless continue to expand the discussion, as broad compatibility with other designs also strengthens our contribution to the community. We thank the reviewer for raising this insightful point.
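To illustrate the compatibility argument above, the following is a minimal sketch of the SimPO-style objective that would replace the standard DPO logits in our derivation (illustrative only; it follows Equations (4)-(5) of the SimPO paper, hyperparameter values are placeholders, and this is not an implemented baseline in the current manuscript):

```python
import torch.nn.functional as F

def simpo_style_loss(policy_chosen_logps, policy_rejected_logps,
                     chosen_len, rejected_len, beta=2.0, gamma=0.5):
    # Reference-free, length-normalized implicit reward with a target
    # margin `gamma`, contrasted against the KL-regularized DPO reward.
    r_chosen = beta * policy_chosen_logps / chosen_len
    r_rejected = beta * policy_rejected_logps / rejected_len
    return -F.logsigmoid(r_chosen - r_rejected - gamma).mean()
```

Re-deriving the extra online gradient terms with this reward definition, instead of the KL-regularized one, would yield the SimPO-adapted variant of our method described above.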
02kZwCo0C3
SAIL: Self-improving Efficient Online Alignment of Large Language Models
[]
Reinforcement Learning from Human Feedback (RLHF) is a critical method for aligning large language models (LLMs) with human preferences. However, existing offline alignment approaches, such as DPO, IPO, and SLiC, rely heavily on static datasets of human preferences, often leading to suboptimal performance. Recent efforts in the literature have moved towards online RLHF methods, but they lack a unified framework and suffer from distribution shift issues. In this work, we formalize online LLM alignment as a bilevel optimization problem. By reducing this formulation to a more computationally efficient single-level first-order method, utilizing reward-policy equivalence, we propose SAIL (Self-improving Efficient Online Alignment).SAIL generates new samples and iteratively refines model alignment through online exploration and regulation of preference labels. This enables continuous, self-improving alignment and generalizes prior online RLHF methods as special cases. Compared to state-of-the-art RLHF methods, SAIL delivers significant performance gains, with up to 11.6\% improvement in win rate and a 3.6-point increase in evaluation rewards, while maintaining low computational overhead.
[ "RLHF", "Alignment", "Online Alignment", "Self-Play" ]
https://openreview.net/pdf?id=02kZwCo0C3
https://openreview.net/forum?id=02kZwCo0C3
BU6la6v4Ci
official_review
1,730,696,090,534
02kZwCo0C3
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Reviewer_Rdtx" ]
ICLR.cc/2025/Conference
2025
summary: Compared to offline RLHF methods, online RLHF methods empirically show stronger performance, yet is computationally expensive, vulnerable to distribution shifts and lacks a unified framework. The authors ablate different online RLHF methods based on all possible combinations (namely, SAIL-PR, SAIL-PP, SAIL-DP) which could be useful for future work exploring online RLHF methods. Personally, it was surprising that SAIL-PP generally works on par or slightly better than SAIL-PR, which open up further research questions on what would be the optimal way to obtain preference dataset. soundness: 3 presentation: 2 contribution: 4 strengths: * The authors test of two LLM-as-a-Judge benchmarks as well as on a well-established classification benchmark, and their results are consistent. * The authors provide a theoretical explanation of why their method works effectively. * Showing all possible combinations at Figure 2 helped understanding what kind of online RLHF methods one should consider * The results are consistent across smaller models (0.5B) up to widely used scale models (8B). weaknesses: * As a practitioner, at least the presentation/writing wasn't clear enough to agree that SAIL provides a unified framework for those who might want to consider using online RLHF in future works. I would personally suggest adding a section explains about how one could use SAIL instead of iterative DPO methods, as well as a huge emphasis on how the provided code could be used. * There is a huge emphasis on trying to improve reward models (on RewardBench) to mitigated reward model overoptimization & train better LMs. I am curious if given a fixed budget/time limit, whether one should try to employ online RLHF methods or try to enhance reward models in general. * I would suggest adding an explanation of what is the limitation of online RLHF methods that the paper could not address. For example, it is still unclear on what is the best practice to "whether to discard instances from a preference dataset that have a subtle difference on the preference strength" or "would it be beneficial to employ more models when gathering responses when consisting a preference dataset". questions: * Reward margin and offline-reward evaluation is interesting by itself and could provide information of the effectiveness of the method, but I personally think is not as an important measurement as pairwise winrate. Could you elaborate on Section 6.1 why one should consider looking into it? * Please check the questions in weaknesses as well! flag_for_ethics_review: ['No ethics review needed.'] rating: 6 confidence: 4 code_of_conduct: Yes
02kZwCo0C3
SAIL: Self-improving Efficient Online Alignment of Large Language Models
[]
Reinforcement Learning from Human Feedback (RLHF) is a critical method for aligning large language models (LLMs) with human preferences. However, existing offline alignment approaches, such as DPO, IPO, and SLiC, rely heavily on static datasets of human preferences, often leading to suboptimal performance. Recent efforts in the literature have moved towards online RLHF methods, but they lack a unified framework and suffer from distribution shift issues. In this work, we formalize online LLM alignment as a bilevel optimization problem. By reducing this formulation to a more computationally efficient single-level first-order method, utilizing reward-policy equivalence, we propose SAIL (Self-improving Efficient Online Alignment).SAIL generates new samples and iteratively refines model alignment through online exploration and regulation of preference labels. This enables continuous, self-improving alignment and generalizes prior online RLHF methods as special cases. Compared to state-of-the-art RLHF methods, SAIL delivers significant performance gains, with up to 11.6\% improvement in win rate and a 3.6-point increase in evaluation rewards, while maintaining low computational overhead.
[ "RLHF", "Alignment", "Online Alignment", "Self-Play" ]
https://openreview.net/pdf?id=02kZwCo0C3
https://openreview.net/forum?id=02kZwCo0C3
25dqYHH6wI
official_review
1,730,646,081,261
02kZwCo0C3
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Reviewer_urgR" ]
ICLR.cc/2025/Conference
2025
summary: The authors identify three significant challenges in online RLHF algorithms: Challenge 1: the interdependence between models and data in implicit reward learning; Challenge 2: the computational complexity of bi-level optimization; and Challenge 3: the reliance on preference oracles. They propose SAIL to address these challenges. The main contributions of the paper can be summarized as follows: 1. **Unified LLM Alignment Mathematical Framework**: The authors have designed a principled online RLHF framework that provides concrete guidance for generating new responses, assuming the existence of a preference oracle. 2. **Adaptive Direct Preference Optimization**: By introducing a DPO-style analysis, the authors present an efficient single-level solution capable of effectively addressing distribution shifts and providing a scalable online preference optimization method. 3. **Introduction of a Self-Improvement Mechanism**: This mechanism reduces the reliance on preference oracles. 4. **Extensive Experimental Evaluation**: The experiments conducted demonstrate that SAIL significantly outperforms baseline methods. soundness: 3 presentation: 4 contribution: 3 strengths: 1. Introducing Bi-level Preference Optimization: The process of bi-level preference optimization is integrated into the modeling of online RLHF. By leveraging the unique correspondence between the reward function and the LLM policy, this approach innovatively transforms the process into an equivalent single-level form that is easier to solve. 2. Extensive Experiments on SAIL: Comprehensive and rich experiments were conducted to address the three significant challenges in online RLHF and to demonstrate the relevant applications of SAIL. weaknesses: Regarding the three variants of the SAIL method, Table 3 shows that in the Eval-Reward and MT-bench columns, the SAIL method performs worse than the baseline DPO. Please clarify whether these experimental results undermine the assertion that the SAIL method is superior to the baseline DPO. questions: There is a large amount of blank space below Section 6.1. Is there any missing content in this part of the paper? flag_for_ethics_review: ['No ethics review needed.'] rating: 6 confidence: 3 code_of_conduct: Yes
02kZwCo0C3
SAIL: Self-improving Efficient Online Alignment of Large Language Models
[]
Reinforcement Learning from Human Feedback (RLHF) is a critical method for aligning large language models (LLMs) with human preferences. However, existing offline alignment approaches, such as DPO, IPO, and SLiC, rely heavily on static datasets of human preferences, often leading to suboptimal performance. Recent efforts in the literature have moved towards online RLHF methods, but they lack a unified framework and suffer from distribution shift issues. In this work, we formalize online LLM alignment as a bilevel optimization problem. By reducing this formulation to a more computationally efficient single-level first-order method, utilizing reward-policy equivalence, we propose SAIL (Self-improving Efficient Online Alignment).SAIL generates new samples and iteratively refines model alignment through online exploration and regulation of preference labels. This enables continuous, self-improving alignment and generalizes prior online RLHF methods as special cases. Compared to state-of-the-art RLHF methods, SAIL delivers significant performance gains, with up to 11.6\% improvement in win rate and a 3.6-point increase in evaluation rewards, while maintaining low computational overhead.
[ "RLHF", "Alignment", "Online Alignment", "Self-Play" ]
https://openreview.net/pdf?id=02kZwCo0C3
https://openreview.net/forum?id=02kZwCo0C3
1QaKMNvqWa
official_comment
1,732,371,584,480
25dqYHH6wI
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer urgR (1/1) comment: > Regarding the three variants of the SAIL method, Table 3 shows that in the Eval-Reward and MT-bench columns, the SAIL method performs worse than the baseline DPO. Please clarify whether these experimental results undermine the assertion that the SAIL method is superior to the baseline DPO. **Response:** Thank you for your thorough analysis of our experimental results. In Table 3, we observe that among our variants, only SAIL-DP demonstrates marginally lower performance than the baseline DPO in Eval-Reward and MT-Bench metrics. However, this observation does not affect our broader conclusions regarding the effectiveness of our two primary SAIL implementations: SAIL-PR and SAIL-PP. Let us clarify the key points: - SAIL-DP employs a distinct methodology, utilizing responses from the offline dataset with self-generated preference labels. This contrasts with SAIL-PR and SAIL-PP, which generate responses online. Additionally, SAIL-DP operates with a reduced number of preference labels compared to standard DPO. - While SAIL-DP shows slightly decreased performance in Eval-Reward and MT-Bench metrics, it achieves notable improvements in Reward Margin. This is particularly significant given its reduced preference label requirements and minimal computational overhead. These findings support our overall conclusion regarding SAIL methods' superiority over baseline DPO. We will enhance the manuscript to better articulate the distinct characteristics and trade-offs of each SAIL variant. > There is a large amount of blank space below Section 6.1. Is there any missing content in this part of the paper? **Response:** Thank you for pointing this out. The blank space below Section 6.1 is not due to missing content; it is a LaTeX formatting problem. We will address this in the updated manuscript.
02kZwCo0C3
SAIL: Self-improving Efficient Online Alignment of Large Language Models
[]
Reinforcement Learning from Human Feedback (RLHF) is a critical method for aligning large language models (LLMs) with human preferences. However, existing offline alignment approaches, such as DPO, IPO, and SLiC, rely heavily on static datasets of human preferences, often leading to suboptimal performance. Recent efforts in the literature have moved towards online RLHF methods, but they lack a unified framework and suffer from distribution shift issues. In this work, we formalize online LLM alignment as a bilevel optimization problem. By reducing this formulation to a more computationally efficient single-level first-order method, utilizing reward-policy equivalence, we propose SAIL (Self-improving Efficient Online Alignment).SAIL generates new samples and iteratively refines model alignment through online exploration and regulation of preference labels. This enables continuous, self-improving alignment and generalizes prior online RLHF methods as special cases. Compared to state-of-the-art RLHF methods, SAIL delivers significant performance gains, with up to 11.6\% improvement in win rate and a 3.6-point increase in evaluation rewards, while maintaining low computational overhead.
[ "RLHF", "Alignment", "Online Alignment", "Self-Play" ]
https://openreview.net/pdf?id=02kZwCo0C3
https://openreview.net/forum?id=02kZwCo0C3
0WYdN2f4Gf
official_comment
1,732,556,952,554
FcLVLsBQIy
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Reviewer_Rdtx" ]
ICLR.cc/2025/Conference
2025
comment: Thank you for the insightful responses. I will keep the current positive score as it is!
02kZwCo0C3
SAIL: Self-improving Efficient Online Alignment of Large Language Models
[]
Reinforcement Learning from Human Feedback (RLHF) is a critical method for aligning large language models (LLMs) with human preferences. However, existing offline alignment approaches, such as DPO, IPO, and SLiC, rely heavily on static datasets of human preferences, often leading to suboptimal performance. Recent efforts in the literature have moved towards online RLHF methods, but they lack a unified framework and suffer from distribution shift issues. In this work, we formalize online LLM alignment as a bilevel optimization problem. By reducing this formulation to a more computationally efficient single-level first-order method, utilizing reward-policy equivalence, we propose SAIL (Self-improving Efficient Online Alignment).SAIL generates new samples and iteratively refines model alignment through online exploration and regulation of preference labels. This enables continuous, self-improving alignment and generalizes prior online RLHF methods as special cases. Compared to state-of-the-art RLHF methods, SAIL delivers significant performance gains, with up to 11.6\% improvement in win rate and a 3.6-point increase in evaluation rewards, while maintaining low computational overhead.
[ "RLHF", "Alignment", "Online Alignment", "Self-Play" ]
https://openreview.net/pdf?id=02kZwCo0C3
https://openreview.net/forum?id=02kZwCo0C3
01R8mdOaXU
official_comment
1,731,715,465,328
02kZwCo0C3
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12435/Authors" ]
ICLR.cc/2025/Conference
2025
title: Status Update on Additional Experiments comment: Thank you for your detailed feedback and suggestions for additional experiments. We have carefully reviewed all comments and experimental requests, and are actively conducting the requested evaluations. We will provide a comprehensive response with results soon. We greatly appreciate your constructive feedback and patience as we work to strengthen our work.
02haSpO453
VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation
[]
VILA-U is a Unified foundation model that integrates Video, Image, Language understanding and generation. Traditional visual language models (VLMs) use separate modules for understanding and generating visual content, which can lead to misalignment and increased complexity. In contrast, VILA-U employs a single autoregressive next-token prediction framework for both tasks, eliminating the need for additional components like diffusion models. This approach not only simplifies the model but also achieves near state-of-the-art performance in visual language understanding and generation. The success of VILA-U is attributed to two main factors: the unified vision tower, which aligns discrete visual tokens with textual inputs during pretraining and thereby enhances visual perception, and the fact that autoregressive image generation can achieve quality similar to diffusion models when trained on a high-quality dataset. This allows VILA-U to perform comparably to more complex models using a fully token-based autoregressive framework.
[ "Unified Visual Language Model", "Autoregressive Model" ]
https://openreview.net/pdf?id=02haSpO453
https://openreview.net/forum?id=02haSpO453
uMPyFz62XX
official_review
1,730,093,669,491
02haSpO453
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission2027/Reviewer_n4tc" ]
ICLR.cc/2025/Conference
2025
summary: Summary: VILA-U is a foundation model that unifies video, image, and language understanding and generation. Unlike traditional models that use separate components for different tasks, VILA-U simplifies this by employing a single autoregressive framework. This reduces misalignment and maintains near state-of-the-art performance in both understanding and generating visual language content. Key factors for its success include a unified vision tower that aligns visual and textual inputs, enhancing perception, and the ability to achieve high-quality image generation similar to diffusion models. Contributions: 1. VILA-U strives for an end-to-end autoregressive model that handles both visual and textual inputs through a unified next-token prediction approach. This approach eliminates the need for external components like diffusion models, simplifying the infrastructure. 2. VILA-U is tested across a range of tasks, including image-language and video-language understanding, as well as image and video generation. It demonstrates notable improvements, particularly in narrowing the gap between autoregressive and continuous-token models in visual understanding, while also offering robust visual generation capabilities. soundness: 3 presentation: 3 contribution: 3 strengths: 1. The idea of VILA-U is very straightforward, and the experiments are solid. It significantly enhances the capabilities of end-to-end autoregressive multimodal models in visual-language tasks, bridging the gap between autoregressive multimodal models and the LLaVA series, while also excelling in image generation. 2. The structure of the VILA-U paper is simple and easy to read, and the model implementation is very easy to follow. weaknesses: 1. Regarding the missing in-context learning assessments: VILA-U has undergone extensive training on image-text sequences and can accept any interleaved layout of images and text. Therefore, it should possess excellent in-context learning abilities. This work could be enhanced by conducting tests of its in-context learning capabilities. 2. The description of the data curation process is not sufficiently clear, making it uncertain whether the data was meticulously selected or randomly chosen. If it is the former, I suspect that most of the improvements stem from high-quality data engineering rather than advancements in model architecture. questions: 1. The solid experimental results of VILA-U have largely reignited my confidence in the autoregressive image-text unified modeling direction. However, why is there no comparison with other text-image unified modeling models such as **MM-Interleaved, SEED, and DEEM** on image understanding tasks? Ignoring the contributions of pioneers is not advisable. 2. The video generation experiments are insufficient. Why not compare with methods like **OpenSora** and **CogVideoX** on **VBench**? 3. The article is unclear in its expression; are the visual token features directly discretized by the visual encoder, or are they encoded by a large language model? I suspect it is the former. 4. VILA-U claims to have lower computational complexity and to avoid misalignment. While I recognize the importance of addressing misalignment, the claim of lower complexity requires experimental support. Specifically, compared to unified autoregressive image-text modeling models, using separate models (e.g., fine-tuning Stable Diffusion) can also yield end-to-end autoregressive image-text modeling that is more efficient to train and performs better.
Moreover, utilizing existing mature acceleration schemes offers fast speeds. VILA-U should place more emphasis on data cleansing quality and misalignment. 5. Lastly, and most critically, I hypothesize that the structural improvements of the model provide minimal benefits compared to previous autoregressive unified models, with the majority of improvements stemming from the engineered data cleansing. For instance, MMC4-Core contains 22.4M samples while MMC4 has 375M, yet some research indicates that training with these two datasets yields similar outcomes. Large-scale datasets like MMC4 are of very low quality. However, using just 6M samples to achieve excellent results suggests that your data is meticulously filtered, yet the paper lacks any detail on the core contributions of data construction. Conducting experiments on the same data with other model structures like **DreamLLM** is necessary to demonstrate the efficiency of **VILA-U**. I will improve my rating score if my concerns are addressed. flag_for_ethics_review: ['No ethics review needed.'] details_of_ethics_concerns: All datasets used are public, no ethics review needed. rating: 6 confidence: 5 code_of_conduct: Yes
02haSpO453
VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation
[]
VILA-U is a Unified foundation model that integrates Video, Image, Language understanding and generation. Traditional visual language models (VLMs) use separate modules for understanding and generating visual content, which can lead to misalignment and increased complexity. In contrast, VILA-U employs a single autoregressive next-token prediction framework for both tasks, eliminating the need for additional components like diffusion models. This approach not only simplifies the model but also achieves near state-of-the-art performance in visual language understanding and generation. The success of VILA-U is attributed to two main factors: the unified vision tower, which aligns discrete visual tokens with textual inputs during pretraining and thereby enhances visual perception, and the fact that autoregressive image generation can achieve quality similar to diffusion models when trained on a high-quality dataset. This allows VILA-U to perform comparably to more complex models using a fully token-based autoregressive framework.
[ "Unified Visual Language Model", "Autoregressive Model" ]
https://openreview.net/pdf?id=02haSpO453
https://openreview.net/forum?id=02haSpO453
cGas6kZlaM
official_review
1,730,681,406,236
02haSpO453
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission2027/Reviewer_7Smq" ]
ICLR.cc/2025/Conference
2025
summary: - The paper presents VILA-U, a unified model for language, image and video understanding + generation - The model is trained with an autoregressive next-token prediction loss for all tasks - The paper explores vision encoder choices to ensure understanding and generation performance soundness: 4 presentation: 3 contribution: 3 strengths: - The paper's most interesting contribution is the unified vision tower exploration to unify generation and understanding, and the appropriate ways to train such an encoder - The approach is quite straightforward and the application of RQ-VAE allows for token efficiency while preserving more information - VILA-U is close to SOTA on visual understanding tasks (image and video) with comparable models - The model also fares well on image generation tasks and comes close to diffusion models weaknesses: - The method chooses RQ-VAE for efficiency, but there is no discussion or results around this choice. How would the results look if the vision tower didn't use RQ-VAE? How important is the RQ-VAE? - The generated images are relatively low-resolution (256 or 384px), especially since the RQ-VAE allows for increased efficiency in tokens - The paper doesn't really discuss video implementation details. Video understanding and generation have a mismatch in the FPS / durations they usually support; what does VILA-U support? There isn't a discussion around this. - The paper claims to support video generation, but there are no quantitative results around this. The two qualitative examples are also very simplistic in Figure 7. questions: - Please share the missing details as mentioned in the weaknesses - What is the number of image and video tokens going into the LLM? How many tokens are processed by the RQ-transformer and what is its size (the RQ-VAE paper has multiple different settings)? - It would be interesting to see if the vision tower training results hold for a general VAE setup instead of an RQ-VAE, since that would make the results even more broadly applicable flag_for_ethics_review: ['No ethics review needed.'] rating: 8 confidence: 4 code_of_conduct: Yes
02haSpO453
VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation
[]
VILA-U is a Unified foundation model that integrates Video, Image, Language understanding and generation. Traditional visual language models (VLMs) use separate modules for understanding and generating visual content, which can lead to misalignment and increased complexity. In contrast, VILA-U employs a single autoregressive next-token prediction framework for both tasks, eliminating the need for additional components like diffusion models. This approach not only simplifies the model but also achieves near state-of-the-art performance in visual language understanding and generation. The success of VILA-U is attributed to two main factors: the unified vision tower, which aligns discrete visual tokens with textual inputs during pretraining and thereby enhances visual perception, and the fact that autoregressive image generation can achieve quality similar to diffusion models when trained on a high-quality dataset. This allows VILA-U to perform comparably to more complex models using a fully token-based autoregressive framework.
[ "Unified Visual Language Model", "Autoregressive Model" ]
https://openreview.net/pdf?id=02haSpO453
https://openreview.net/forum?id=02haSpO453
OxnQkdPwss
official_comment
1,732,517,466,307
uMPyFz62XX
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission2027/Reviewer_n4tc" ]
ICLR.cc/2025/Conference
2025
comment: Sorry for the late reply, and thanks to the authors for the detailed rebuttals. My main concerns have been addressed, so I increase my score to 6. I look forward to your open-sourced codebase and models. Please add the missing references to DEEM, SEED, and MM-Interleaved.
02haSpO453
VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation
[]
VILA-U is a Unified foundation model that integrates Video, Image, Language understanding and generation. Traditional visual language models (VLMs) use separate modules for understanding and generating visual content, which can lead to misalignment and increased complexity. In contrast, VILA-U employs a single autoregressive next-token prediction framework for both tasks, eliminating the need for additional components like diffusion models. This approach not only simplifies the model but also achieves near state-of-the-art performance in visual language understanding and generation. The success of VILA-U is attributed to two main factors: the unified vision tower, which aligns discrete visual tokens with textual inputs during pretraining and thereby enhances visual perception, and the fact that autoregressive image generation can achieve quality similar to diffusion models when trained on a high-quality dataset. This allows VILA-U to perform comparably to more complex models using a fully token-based autoregressive framework.
[ "Unified Visual Language Model", "Autoregressive Model" ]
https://openreview.net/pdf?id=02haSpO453
https://openreview.net/forum?id=02haSpO453
L9rXkxDShj
official_review
1,730,291,386,237
02haSpO453
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission2027/Reviewer_X72f" ]
ICLR.cc/2025/Conference
2025
summary: The paper, VILA-U, presents a unified framework for autoregressive multimodal generation and understanding. It achieves this by first training a vision encoder (discretized via an RQ codebook) for text-conditioned image tokens (initialized from CLIP) and then training on image+text data using autoregressive modeling. It presents a complete training recipe for creating autoregressive multimodal models, and the resulting model is benchmarked against a wide range of existing models across tasks (generation and understanding). soundness: 2 presentation: 3 contribution: 3 strengths: 1. The unification of multiple modalities in the same architecture (with the same training objective) is a very important topic. The paper is a valuable contribution to this overall research program. In the current work, the choice of quantized image tokens for image representation makes the autoregressive modeling task more natural, as the image modality is tokenized into discrete tokens much like language. This helps minimize the amount of code development required for adapting existing LLM code bases to their multimodal counterparts. 2. The paper performed fairly complete evaluations (image-text, video-text, text-image) and ablation studies that include model backbone and training objective. weaknesses: 1. It is not clear to me how to position the work in its novelty or effectiveness, and this may be addressable with some rewriting. I see 3 potential angles: 1. Training effectiveness by leveraging pretrained networks. The authors motivate the work by emphasizing that existing methods that attempt to unify multimodal generation and understanding either require significant architectural modifications to their uni-modal counterparts or training from scratch. However, this comparison seems not to play a central role in the subsequent discussions. If the effectiveness of the proposed method is reflected in ease of training, then readers would expect to see a comparison of training time/compute for comparable performance. 2. Effective token representation of the image modality as discrete tokens: VILA-U differs from prior work in its adoption of RQ-VAE embedding for images. However, if this is the main innovation, the choice of RQ, its superiority over alternative methods, and the importance of discontinuous embeddings of images (as compared to, for example, continuous embeddings as in LaViT) will need to be elevated. 3. State-of-the-art performance: If the main contribution is instead just the sheer effectiveness of the method, then it should be demonstrated quantitatively in the paper. Unfortunately, the comparison tables don't seem to suggest that the VILA-U model is the state of the art in most benchmarks. Perhaps it achieves a Pareto frontier between understanding and generation tasks? Or outperforms other models for the same training compute/time? Either way, it is not clear to me what the main advantage of the current work is over others. 2. The discussion around the training recipe is very important and useful for practitioners. However, it lacks both quantitative and qualitative (with examples) comparisons of the different training recipes. The conclusion seems to be to use an aligned CLIP model for image encoder initialization, which doesn't seem to be a novel finding. I would recommend either supporting the discussion with more evaluation (quantitative or qualitative, ideally both) or moving the discussion to the appendix. 3. The paper suffers from unsubstantiated claims (neither references nor experimental support).
I've highlighted a few statements that are very important for the message of the paper below: - "replacing continuous tokens with VQ tokens in VLMs usually results in a severe performance drop" - "A straightforward combination of contrastive and reconstruction loss cannot converge" - "both the rFID and Top-1 accuracy of the vision tower only serves as a medium indicator instead of directly linearly correlated to the final performance of our whole multi-modal framework." questions: My biggest suggestion/question is related to the number 1 weakness described above. If the authors could highlight the main contribution of the work, that would make its positioning much easier. One positioning that was left out in the weakness section above is to position the work as the "first" in some regards. However, while autoregressive modeling of language + vision is a burgeoning field, VILA-U is not the first model that performs autoregressive modeling of multiple modalities. flag_for_ethics_review: ['No ethics review needed.'] rating: 6 confidence: 4 code_of_conduct: Yes
02haSpO453
VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation
[]
VILA-U is a Unified foundation model that integrates Video, Image, Language understanding and generation. Traditional visual language models (VLMs) use separate modules for understanding and generating visual content, which can lead to misalignment and increased complexity. In contrast, VILA-U employs a single autoregressive next-token prediction framework for both tasks, eliminating the need for additional components like diffusion models. This approach not only simplifies the model but also achieves near state-of-the-art performance in visual language understanding and generation. The success of VILA-U is attributed to two main factors: the unified vision tower, which aligns discrete visual tokens with textual inputs during pretraining and thereby enhances visual perception, and the fact that autoregressive image generation can achieve quality similar to diffusion models when trained on a high-quality dataset. This allows VILA-U to perform comparably to more complex models using a fully token-based autoregressive framework.
[ "Unified Visual Language Model", "Autoregressive Model" ]
https://openreview.net/pdf?id=02haSpO453
https://openreview.net/forum?id=02haSpO453
ERPUllpxWY
official_review
1,730,534,797,789
02haSpO453
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission2027/Reviewer_ma7u" ]
ICLR.cc/2025/Conference
2025
summary: The paper presents VILA-U, a unified foundation model for visual understanding and generation that integrates image and language processing into a single autoregressive next-token prediction framework. Unlike traditional visual language models that rely on separate modules or diffusion models for generation, VILA-U employs a unified vision tower to discretize visual inputs, aligning them with textual tokens through contrastive learning. From the experiments, the authors show that VILA-U can achieve state-of-the-art performance in both image generation and comprehension. soundness: 3 presentation: 3 contribution: 3 strengths: 1. VILA-U introduces a unified framework that handles both visual understanding and generation in a single autoregressive next-token prediction model. 2. The model leverages a unified vision tower that uses contrastive learning to align discrete visual tokens with textual inputs, which enhances the model's visual perception and text-visual alignment capabilities. 3. The experiments indicate the state-of-the-art performance of VILA-U in both image generation and understanding. weaknesses: 1. The clarification of the difference between VILA-U and other tokenization-based multimodal models, like AnyGPT [1] and SEED-LLaMa [2], is missing. Those models also used visual tokenizers to discretize the images and were trained with a causal language modeling loss. I noticed the authors cite SEED-LLaMa in line 102, but the claim of “In this work, we design our framework based on the autoregressive next-token prediction method for visual generation and make our VLM learn to generate visual content effectively.” does not clarify the main difference between VILA-U and SEED-LLaMa. 2. One of the claimed contributions of this paper is the proposed training strategy for the unified foundation vision tower. However, the training strategy seems similar to SEED [3], which also used a contrastive loss between image embeddings and text embeddings. Can the authors clarify the difference between the unified foundation vision tower and SEED? 3. Comparisons with other tokenization-based multimodal models [1,2] and Emu2 [4] are missing. 4. The limitation section, which is required, is missing. [1] Zhan, Jun, et al. "Anygpt: Unified multimodal llm with discrete sequence modeling." arXiv preprint arXiv:2402.12226 (2024). [2] Ge, Yuying, et al. "Making llama see and draw with seed tokenizer." arXiv preprint arXiv:2310.01218 (2023). [3] Ge, Yuying, et al. "Planting a seed of vision in large language model." arXiv preprint arXiv:2307.08041 (2023). [4] Sun, Quan, et al. "Generative multimodal models are in-context learners." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. questions: Please refer to the weaknesses section. flag_for_ethics_review: ['No ethics review needed.'] rating: 6 confidence: 4 code_of_conduct: Yes
02Od16GFRW
Ensembles provably learn equivariance through data augmentation
[]
Recently, it was proved that group equivariance emerges in ensembles of neural networks as the result of full augmentation in the limit of infinitely wide neural networks (neural tangent kernel limit). In this paper, we extend this result significantly. We provide a proof that this emergence does not depend on the neural tangent kernel limit at all. We also consider stochastic settings, and furthermore general architectures. For the latter, we provide a simple sufficient condition on the relation between the architecture and the action of the group for our results to hold. We validate our findings through simple numeric experiments.
[ "equivariance", "invariance", "ensemble models", "data augmentation", "SGD" ]
https://openreview.net/pdf?id=02Od16GFRW
https://openreview.net/forum?id=02Od16GFRW
zqMFzNLabE
official_comment
1,732,535,249,280
QtZIIxQssu
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission4689/Reviewer_nmuK" ]
ICLR.cc/2025/Conference
2025
title: Response to rebuttal comment: I would like to thank the authors for their response. I have gone through the reviews and the authors' responses. I believe my main concerns regarding novelty and empirical evaluation remain, and so I will keep my initial score.
02Od16GFRW
Ensembles provably learn equivariance through data augmentation
[]
Recently, it was proved that group equivariance emerges in ensembles of neural networks as the result of full augmentation in the limit of infinitely wide neural networks (neural tangent kernel limit). In this paper, we extend this result significantly. We provide a proof that this emergence does not depend on the neural tangent kernel limit at all. We also consider stochastic settings, and furthermore general architectures. For the latter, we provide a simple sufficient condition on the relation between the architecture and the action of the group for our results to hold. We validate our findings through simple numeric experiments.
[ "equivariance", "invariance", "ensemble models", "data augmentation", "SGD" ]
https://openreview.net/pdf?id=02Od16GFRW
https://openreview.net/forum?id=02Od16GFRW
p57cKHF38N
official_review
1,730,497,417,502
02Od16GFRW
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission4689/Reviewer_nmuK" ]
ICLR.cc/2025/Conference
2025
summary: The paper presents a theoretical analysis showing that data augmentation can lead to equivariance in deep ensembles. The paper's main result is that under several assumptions (e.g. on initialization, architecture, etc.), deep ensembles trained with data augmentation are equivariant in the mean, even when individual models are generally not. A similar result was previously presented, but the paper extends these previous results, which were primarily focused on infinitely wide NNs trained with gradient descent under full augmentation, to ensembles of finite-width networks trained with SGD and random augmentation. The paper is mainly theoretical and validates the theoretical results through limited and small-scale empirical experiments. soundness: 3 presentation: 3 contribution: 3 strengths: 1. The paper is well-structured and easy to follow. 1. The paper extends previous results to more reasonable and applicable settings. This is a significant extension. weaknesses: I like the paper and believe it has a sufficient contribution and interesting results. However, there are several limitations stated below: 1. While the assumptions for the theoretical analysis are more applicable compared to previous works, they still hold only for infinite-size ensembles. Any analysis (including empirical) of the error bounds for finite ensembles would be beneficial. 1. While the results are important, the novelty is somewhat moderate in the sense that the emergent equivariance property of ensembles was previously proposed and that the theoretical analysis heavily relies on previous work [1]. 1. From the empirical evidence, it is unclear if some of the assumptions (like symmetric initialization) are indeed necessary. The authors discuss this, but I believe it can be extended further. 1. Empirical evaluation is limited. It would be beneficial to extend it to more settings, even by small modifications like considering cyclic groups C_k of different orders (k), different architectures, model sizes, etc. 1. It would be beneficial to see the impact of ensemble size on the metrics in Table 1, like adding a line plot for ensemble size vs. OSP. The authors show results for different sizes, but summarizing them in one clear view would make it easier to follow. 1. The paper could benefit from a clearer and more explicit discussion of the limitations of the results. 1. Minor: - Line 37: “... a definitive question to the question…”. Reference [1] Flinth & Ohlsson, Optimization Dynamics of Equivariant and Augmented Neural Networks, 2023. questions: 1. Why does the OSP not increase at initialization when the ensemble size increases? 1. From the figures, it seems like the results could improve with more epochs (also for baselines). Could you please provide results with a larger number of epochs? flag_for_ethics_review: ['No ethics review needed.'] rating: 6 confidence: 3 code_of_conduct: Yes
02Od16GFRW
Ensembles provably learn equivariance through data augmentation
[]
Recently, it was proved that group equivariance emerges in ensembles of neural networks as the result of full augmentation in the limit of infinitely wide neural networks (neural tangent kernel limit). In this paper, we extend this result significantly. We provide a proof that this emergence does not depend on the neural tangent kernel limit at all. We also consider stochastic settings, and furthermore general architectures. For the latter, we provide a simple sufficient condition on the relation between the architecture and the action of the group for our results to hold. We validate our findings through simple numeric experiments.
[ "equivariance", "invariance", "ensemble models", "data augmentation", "SGD" ]
https://openreview.net/pdf?id=02Od16GFRW
https://openreview.net/forum?id=02Od16GFRW
iJ0f5MQ97z
official_comment
1,732,031,597,237
5gHcJIEzFj
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission4689/Authors" ]
ICLR.cc/2025/Conference
2025
comment: We thank the reviewer for their constructive criticism. We are happy to hear that the reviewer finds our results to be of interest to the research community. ### Presentation of results in main body of text We agree with the reviewer that the main result which is proved in the main text is not as interesting as the result which is proved in Appendix B (the result in the appendix is stronger). Note however that both results are entirely novel, as far as we could tell. Our reasoning for laying out the text as we do is that since the two proofs follow essentially the same outline, presenting the simpler result in the main text is more pedagogical. That is, the version of our main theorem which could be proved by using the results on equivariant flows from Köhler et al. is presented in the main text precisely *because* it is simpler - less energy is put on the technical details and more on the conceptual ones. ### On the notation of the affine space $\mathcal{L}$ The point here is simply that $\mathcal{L}$ is an affine space (linear manifold) and not a linear space (vector space), that is, it can be described as a base point + the tangent space, which in this case is the parallel space going through the origin (a short formal sketch of this is given at the end of this comment). Hopefully this clarifies the notation. We could of course choose another terminology for $\mathrm{T}\mathcal{L}$, such as 'parallel space', or the like, but we think that 'tangent space' is the clearest one. The reason we consider this as the space of linear layers is simply to include more potential architectures in our analysis. ### The results in Table 1 The results in Table 1 are in line with the theory we have developed. Since the space of convolutions with asymmetric filters (the asymmetric case) is not invariant under the action of the group, our results no longer guarantee the emergence of equivariance, even though the filters are invariantly distributed at initialization. It should be noted that the results for the asymmetric model are also quite close to equivariant, which naturally leads to the question of whether the sufficient condition in our theorem is also a necessary one. In the paper we hypothesize that it may have to do with the fact that the energy of the asymmetric part of the filters is small, so that the asymmetric filters are approximately symmetric in some sense. In Appendix E, we compare what happens in the case of $5\times 5$ filters and we see that the gap between the symmetric and asymmetric models indeed grows when the energy of the asymmetric part is increased.
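For completeness, here is the short formal sketch of the affine-space notation referred to above; the symbol $L_0$ for the base point is chosen only for this illustration.

$$\mathcal{L} = L_0 + \mathrm{T}\mathcal{L} = \{\, L_0 + V : V \in \mathrm{T}\mathcal{L} \,\}, \qquad L_0 \in \mathcal{L},$$

where $\mathrm{T}\mathcal{L}$ is the linear space parallel to $\mathcal{L}$ that passes through the origin; this is the space we refer to as the 'tangent space' of $\mathcal{L}$.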
02Od16GFRW
Ensembles provably learn equivariance through data augmentation
[]
Recently, it was proved that group equivariance emerges in ensembles of neural networks as the result of full augmentation in the limit of infinitely wide neural networks (neural tangent kernel limit). In this paper, we extend this result significantly. We provide a proof that this emergence does not depend on the neural tangent kernel limit at all. We also consider stochastic settings, and furthermore general architectures. For the latter, we provide a simple sufficient condition on the relation between the architecture and the action of the group for our results to hold. We validate our findings through simple numeric experiments.
[ "equivariance", "invariance", "ensemble models", "data augmentation", "SGD" ]
https://openreview.net/pdf?id=02Od16GFRW
https://openreview.net/forum?id=02Od16GFRW
gRu2UBNEQL
official_comment
1,732,737,691,728
02Od16GFRW
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission4689/Authors" ]
ICLR.cc/2025/Conference
2025
title: Updated version of the pdf comment: We have now uploaded an updated version of the pdf. All changes are marked in blue. In short, the added material is more or less what was posted in our last comment. We have also made some other minor changes, such as changing the terminology "tangent space" and correcting typos. In a final version of the paper, we will redo the failed experiments for C16 with BILINEAR interpolation, and run an experiment with standard CNNs for the NEAREST interpolation. We apologize for not being able to do so before the deadline for pdf updates. We would like to thank the reviewers again for their suggested changes. We think that they have improved the paper.
02Od16GFRW
Ensembles provably learn equivariance through data augmentation
[]
Recently, it was proved that group equivariance emerges in ensembles of neural networks as the result of full augmentation in the limit of infinitely wide neural networks (neural tangent kernel limit). In this paper, we extend this result significantly. We provide a proof that this emergence does not depend on the neural tangent kernel limit at all. We also consider stochastic settings, and furthermore general architectures. For the latter, we provide a simple sufficient condition on the relation between the architecture and the action of the group for our results to hold. We validate our findings through simple numeric experiments.
[ "equivariance", "invariance", "ensemble models", "data augmentation", "SGD" ]
https://openreview.net/pdf?id=02Od16GFRW
https://openreview.net/forum?id=02Od16GFRW
g0WeyMOGGc
official_comment
1,732,031,576,659
02Od16GFRW
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission4689/Authors" ]
ICLR.cc/2025/Conference
2025
title: Planned updates comment: We would like to thank all the reviewers for their work. Their reviews are all insightful, and contain many valuable suggestions. We have responded to their questions and comments in individual posts. Let us here only advertise the two big updates we will make to the manuscript before the end of the discussion period. * We will perform a new set of experiments for the C16 group. * We will make a more serious evaluation of our models also for smaller ensemble sizes, providing some empirical results in this direction. Here is already a plot formed by measuring the metrics at epoch 10 for the different models (shown is the mean over 10 bootstrapped ensemble samples for each size and model): [Plot](https://anonymous.4open.science/api/repo/ensemble_experiment-1B83/file/graphics/complete_plot.png?v=5df1ae3f) -- the general trend is that the difference in equivariance is detectable already for moderate ensemble sizes. In the updated version of the paper, we will provide data for 30 bootstraps, and perform some statistical tests. See also the comment to reviewer nmuK. We will try to make the updates as soon as possible.
02Od16GFRW
Ensembles provably learn equivariance through data augmentation
[]
Recently, it was proved that group equivariance emerges in ensembles of neural networks as the result of full augmentation in the limit of infinitely wide neural networks (neural tangent kernel limit). In this paper, we extend this result significantly. We provide a proof that this emergence does not depend on the neural tangent kernel limit at all. We also consider stochastic settings, and furthermore general architectures. For the latter, we provide a simple sufficient condition on the relation between the architecture and the action of the group for our results to hold. We validate our findings through simple numeric experiments.
[ "equivariance", "invariance", "ensemble models", "data augmentation", "SGD" ]
https://openreview.net/pdf?id=02Od16GFRW
https://openreview.net/forum?id=02Od16GFRW
e27HEbI0AF
official_comment
1,732,470,978,606
5M1yC2n4Sl
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission4689/Reviewer_YfbU" ]
ICLR.cc/2025/Conference
2025
comment: I thank the authors for their response and the additional plots. I have gone through the authors' response as well as the other reviews. Unfortunately, my concerns about the usefulness (theoretical/empirical) and non-triviality of the theory remain. Moreover, the experiments are not convincing enough to make a case for the theory (e.g., the symmetry component, which is important in the theory, seems to have minimal empirical impact). I look forward to the additional experiments the authors have promised in their global response. If there is a way to connect the theory and experiments better or provide more use cases (theoretical/empirical), I would be happy to increase my score. But currently, unfortunately, I am unable to do so.
02Od16GFRW
Ensembles provably learn equivariance through data augmentation
[]
Recently, it was proved that group equivariance emerges in ensembles of neural networks as the result of full augmentation in the limit of infinitely wide neural networks (neural tangent kernel limit). In this paper, we extend this result significantly. We provide a proof that this emergence does not depend on the neural tangent kernel limit at all. We also consider stochastic settings, and furthermore general architectures. For the latter, we provide a simple sufficient condition on the relation between the architecture and the action of the group for our results to hold. We validate our findings through simple numeric experiments.
[ "equivariance", "invariance", "ensemble models", "data augmentation", "SGD" ]
https://openreview.net/pdf?id=02Od16GFRW
https://openreview.net/forum?id=02Od16GFRW
aFqJ817RpX
official_comment
1,732,736,818,668
TlcGqugBRP
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission4689/Authors" ]
ICLR.cc/2025/Conference
2025
comment: Thank you again. We completely agree that the readability of papers is very important. We have changed the term 'tangent space' to 'direction', as used, for example, on Wikipedia and PlanetMath, and in Geometric Methods and Applications for Computer Science and Engineering, J. Gallier, Springer, 2011. As for the disposition of the text, we completely understand and respect the reviewer's opinion. It would be possible to write the paper concentrating only on the more technically involved version of Theorem 4.2. However, we feel the need to point out that we do not present any theorems in the appendix which are not at least clearly advertised in the main text. Appendix B only contains lemmas used in the proof of one of the versions of Theorem 4.2, and the proof of that version. Note that the theorem in the main text mentions the case of training with SGD using random augmentation. We agree that the theorem formulated and proven in Appendix C is only mentioned in passing in the main text. However, the statement of the theorem is still there, albeit not in a theorem environment. Note that precisely stating the result is quite involved, and the result is not needed to prove our main result.
02Od16GFRW
Ensembles provably learn equivariance through data augmentation
[]
Recently, it was proved that group equivariance emerges in ensembles of neural networks as the result of full augmentation in the limit of infinitely wide neural networks (neural tangent kernel limit). In this paper, we extend this result significantly. We provide a proof that this emergence does not depend on the neural tangent kernel limit at all. We also consider stochastic settings, and furthermore general architectures. For the latter, we provide a simple sufficient condition on the relation between the architecture and the action of the group for our results to hold. We validate our findings through simple numeric experiments.
[ "equivariance", "invariance", "ensemble models", "data augmentation", "SGD" ]
https://openreview.net/pdf?id=02Od16GFRW
https://openreview.net/forum?id=02Od16GFRW
VVQSMcZxgP
official_comment
1,732,549,199,100
02Od16GFRW
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission4689/Authors" ]
ICLR.cc/2025/Conference
2025
title: Results of updated experiments (I) comment: Dear reviewers, we have had some technical issues, but have now finally managed to run our updated experiments. The results are interesting, and not as clear-cut as one could have wished for. Still, we think that they support the relevance of our theory rather than speak against it. First, we have, as advertised, re-evaluated our previous experiments (i.e., for $C_4$) with 30 bootstraps instead of 10 bootstraps per sample size. There are no surprises here: the symmetric architecture still outperforms the asymmetric ones, and does so with statistical significance ($p<.001$) from $250$ ensemble members onwards (with respect to the divergence metric, even from $75$ ensemble members). Here is an updated plot: [plot_C4](https://anonymous.4open.science/api/repo/ensemble_experiment-1B83/file/graphics/C4_nearest.png?v=b54e4155) We have also run the same experiment for the bigger group $C_{16}$. Let us first note that when using this group, we stray from the setting in the paper. The group is no longer acting directly on the support of the images, due to interpolation effects. Hence, the lifted representation $\rho$ on the linear layers $A_i$ no longer perfectly corresponds to rotation of the filters $\varphi_i$ (Example 3.2 is no longer valid). In fact, again due to interpolation, Assumptions 1 and 3 are also not satisfied and the spaces $\mathcal{L}$ are no longer invariant. With this said, here is the plot for our experiments: [plot_C16](https://anonymous.4open.science/api/repo/ensemble_experiment-1B83/file/graphics/C16_nearest.png?v=d3772c9f). We see that while the symmetric filters still produce more equivariant ensembles than the asymmetric ones on the in-distribution MNIST test data, they are actually not better, and are even worse with respect to the divergence metric, on the CIFAR10 data. The most striking difference from the $C_4$ experiments is, however, that all of the models are significantly less equivariant on the CIFAR10 data. This was not what we expected. One realizes that this might have to do with the way we have performed our augmentation: we have used the default 'nearest' interpolation option in torchvision to perform the augmentation, and have also made sure that the background of the images is uniform. These transformations are not a representation of the group $C_{16}$ -- if we in particular think about filters of size $3\times 3$, the small rotations in fact do nothing.
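For concreteness, here is a minimal sketch of the kind of augmentation call we have in mind; the fill value, the tensor layout and the exact place in the training pipeline are illustrative assumptions and not an exact transcript of our training code.

```python
import torch
from torchvision.transforms import InterpolationMode
from torchvision.transforms.functional import rotate

def rotate_c16(img: torch.Tensor, k: int,
               mode: InterpolationMode = InterpolationMode.NEAREST) -> torch.Tensor:
    """Rotate an image tensor by k * 22.5 degrees (an element of C16).

    NEAREST is torchvision's default interpolation for rotate and corresponds
    to this first set of experiments; passing InterpolationMode.BILINEAR gives
    the second set described in our follow-up comment. fill=0.0 assumes a
    uniform (zero) background.
    """
    return rotate(img, angle=k * 360.0 / 16, interpolation=mode, fill=0.0)
```

The point of the sketch is only to make explicit where the interpolation choice enters the augmentation.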
02Od16GFRW
Ensembles provably learn equivariance through data augmentation
[]
Recently, it was proved that group equivariance emerges in ensembles of neural networks as the result of full augmentation in the limit of infinitely wide neural networks (neural tangent kernel limit). In this paper, we extend this result significantly. We provide a proof that this emergence does not depend on the neural tangent kernel limit at all. We also consider stochastic settings, and furthermore general architectures. For the latter, we provide a simple sufficient condition on the relation between the architecture and the action of the group for our results to hold. We validate our findings through simple numeric experiments.
[ "equivariance", "invariance", "ensemble models", "data augmentation", "SGD" ]
https://openreview.net/pdf?id=02Od16GFRW
https://openreview.net/forum?id=02Od16GFRW
TlcGqugBRP
official_comment
1,732,693,338,590
iJ0f5MQ97z
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission4689/Reviewer_Ljyp" ]
ICLR.cc/2025/Conference
2025
title: Response comment: Thank you for your comment. I think the paper has scientific merit, which is why I gave it a score above the acceptance threshold. However, the way the paper is written is important and affects its score. As it is, the main results that make this paper worthwhile are in the supplementary material, not just the proofs but the theorems themselves. This means that the average reader won't even know they exist. Note that, for a reviewer, "It is not necessary to read supplementary material", which makes a clear distinction between the main paper and the supplementary material. Second, while 'tangent space' might be clearer to some, it requires prior knowledge of differential manifolds. This isn't always the case in the general ML community, as this isn't part of the standard mathematical tools used. As such, adding this when a simple linear algebra term would suffice is something that I think is problematic, as it makes the paper less accessible for no valid reason.
02Od16GFRW
Ensembles provably learn equivariance through data augmentation
[]
Recently, it was proved that group equivariance emerges in ensembles of neural networks as the result of full augmentation in the limit of infinitely wide neural networks (neural tangent kernel limit). In this paper, we extend this result significantly. We provide a proof that this emergence does not depend on the neural tangent kernel limit at all. We also consider stochastic settings, and furthermore general architectures. For the latter, we provide a simple sufficient condition on the relation between the architecture and the action of the group for our results to hold. We validate our findings through simple numeric experiments.
[ "equivariance", "invariance", "ensemble models", "data augmentation", "SGD" ]
https://openreview.net/pdf?id=02Od16GFRW
https://openreview.net/forum?id=02Od16GFRW
QtZIIxQssu
official_comment
1,732,031,711,186
p57cKHF38N
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission4689/Authors" ]
ICLR.cc/2025/Conference
2025
title: Answers to questions comment: ### OSP at initialization Let us first state that we do not think that this goes against our theory. Instead, we think that this is essentially what is going on: before training, the predictions of the networks should be more or less random -- that is, the predictions are independent of the data, and differ only due to different draws of the parameters at initialization. Thus, the infinite-member ensembles should more or less, for each datum $x$, give one of the 10 classes completely at random. Note that the latter will be almost true also for finite-size ensembles. Each rotated version hence has a one-in-ten chance of receiving the same prediction as the non-rotated example, and the expected value of the OSP is $1.3$ (a short computation is given at the end of this comment), which indeed seems to be approximately the OSP of the big ensembles at initialization. A shorter answer is that this is due to the $\mathrm{argmax}$ function, which is used to determine the predictions, being discontinuous. Note in particular that the KL-divergence metric gets smaller when we compare it at 10, 100 and 1000 ensemble members (see the appendix), so that the ensembles get more and more equivariant at initialization with growing ensemble sizes. ### Longer training We agree that it seems that longer training definitely could lead to more equivariant ensembles. We will however not run additional experiments for this, and instead prioritize the C16 experiments. A continuing trend of more and more equivariant ensembles would, as we see it, not say *that* much in this context - the fact that the symmetric ensembles converge faster will still provide the same support to our theory as before. We deem the question of whether the trends continue for another group, where the assumptions are not met in the same clean manner as for C4, more interesting, and will therefore prioritize those experiments. We hope the reviewer understands this decision.
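For reference, here is the short computation behind the value $1.3$ referred to above. It assumes that the OSP counts the reference prediction plus the number of the three non-trivially rotated copies that receive the same label, and that at initialization each of the 10 class labels is assigned uniformly at random (both are simplifying assumptions of this sketch):

$$\mathbb{E}[\mathrm{OSP}] = 1 + \sum_{k=1}^{3} \Pr\big[\hat{y}(g_k x) = \hat{y}(x)\big] = 1 + 3\cdot\tfrac{1}{10} = 1.3,$$

where $g_1, g_2, g_3$ denote the three non-trivial rotations in $C_4$ and $\hat{y}$ the ensemble's predicted class.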
02Od16GFRW
Ensembles provably learn equivariance through data augmentation
[]
Recently, it was proved that group equivariance emerges in ensembles of neural networks as the result of full augmentation in the limit of infinitely wide neural networks (neural tangent kernel limit). In this paper, we extend this result significantly. We provide a proof that this emergence does not depend on the neural tangent kernel limit at all. We also consider stochastic settings, and furthermore general architectures. For the latter, we provide a simple sufficient condition on the relation between the architecture and the action of the group for our results to hold. We validate our findings through simple numeric experiments.
[ "equivariance", "invariance", "ensemble models", "data augmentation", "SGD" ]
https://openreview.net/pdf?id=02Od16GFRW
https://openreview.net/forum?id=02Od16GFRW
LeCUJGZHEZ
official_comment
1,732,549,293,504
VVQSMcZxgP
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission4689/Authors" ]
ICLR.cc/2025/Conference
2025
title: Results of updated experiments (II) comment: We therefore repeated our experiments with the 'bilinear' interpolation option. This is also not a representation of the group in a formal manner, but is at least closer to one -- the action of the small rotations is no longer trivial on small filters, for instance. Here is the plot for those results: [plot_C16_bilinear](https://anonymous.4open.science/api/repo/ensemble_experiment-1B83/file/graphics/C16_bilinear.png?v=23256f1b) We see that our models now become less invariant on the MNIST data, but more invariant on the CIFAR data. The former can be explained by the fact that bilinear interpolation produces images that are blurrier, and also results in a non-uniform background -- the dataset hence becomes more diverse, and it becomes a harder problem to learn. The models can hence not rely on simply learning to perform well on the dataset to become equivariant, as seemed to be enough in the case of the 'nearest' interpolation. The still-poor performance of the symmetric and asymmetric models is in fact explained by our main result! When rotating an $\mathcal{L}^{\mathrm{sym}}$-filter by $\pi/4$, we approximately end up with a filter with non-zero elements only on the corners (a small numerical illustration is given at the end of this comment). This is very far from being an $\mathcal{L}^{\mathrm{sym}}$ filter -- the invariance condition is hence far from being satisfied. The asymmetric support for some reason performs slightly better -- we could speculate on why, but ultimately, those models also perform badly, as would be predicted by the fact that the space $\mathcal{L}^{\mathrm{asym}}$ is asymmetric. When repeating the experiments with standard $3\times 3$-filters, something different happens though -- as can be seen in the plot, they vastly outperform the non-standard filters. The corresponding subspace $\mathcal{L}^{\mathrm{cnn}}$ is still not perfectly invariant to non-$\pi/2$-rotations -- and also does not yield perfectly equivariant ensembles -- but it is definitely 'more' invariant than both of the non-standard filter supports considered: a rotated $3\times 3$-filter will 'bleed' somewhat, but not as extremely as a $C_4$-symmetric filter. For full disclosure, we should mention that the bootstraps in the final plot for the non-standard filters are only over approximately 900 total ensemble members -- due to technical difficulties, not all 1000 members finished their training. This will be fixed in a final version. We should of course also repeat the CNN experiments for the nearest interpolation -- we will do so in the final version, but already want to report the results we have now for the reviewers to consider. All in all, we believe that this new set of experiments speaks *in favour* of the practical importance of our theory. Our experiments indicate that in situations where the compatibility condition is not satisfied, the augmentation will *not* lead to equivariant ensembles by itself! One can also note that this aspect of the theory is not at all present in Gerken and Kessel, and only somewhat tangentially in Nordenfors, Ohlsson and Flinth. In this spirit, we thank the reviewers very much for suggesting these experiments. We understand that we are very close to the end of the discussion period and that the reviewers have already put a lot of effort into reviewing our work, but we still hope that they can take the time to consider these last-minute developments.
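As a small numerical illustration of the 'bleeding' of rotated filters discussed above, here is a toy computation; the filter values and the cross-shaped support are made up for this illustration only and do not reproduce the exact parametrization of $\mathcal{L}^{\mathrm{sym}}$ used in our experiments.

```python
import numpy as np
from scipy.ndimage import rotate

# A toy 3x3 filter supported on a C4-symmetric cross (zero corners).
f_sym = np.array([[0., 1., 0.],
                  [1., 2., 1.],
                  [0., 1., 0.]])

# Rotating by 45 degrees (pi/4) with bilinear interpolation (order=1) on a
# fixed output grid moves part of the filter energy onto the corners, i.e.
# outside the original cross-shaped support.
f_rot = rotate(f_sym, angle=45, reshape=False, order=1)
print(np.round(f_rot, 2))
```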
02Od16GFRW
Ensembles provably learn equivariance through data augmentation
[]
Recently, it was proved that group equivariance emerges in ensembles of neural networks as the result of full augmentation in the limit of infinitely wide neural networks (neural tangent kernel limit). In this paper, we extend this result significantly. We provide a proof that this emergence does not depend on the neural tangent kernel limit at all. We also consider stochastic settings, and furthermore general architectures. For the latter, we provide a simple sufficient condition on the relation between the architecture and the action of the group for our results to hold. We validate our findings through simple numeric experiments.
[ "equivariance", "invariance", "ensemble models", "data augmentation", "SGD" ]
https://openreview.net/pdf?id=02Od16GFRW
https://openreview.net/forum?id=02Od16GFRW
JiDmgDtNHX
official_comment
1,732,031,644,247
p57cKHF38N
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission4689/Authors" ]
ICLR.cc/2025/Conference
2025
title: Comments on weaknesses comment: We thank the reviewer for the constructive review. We are happy to hear that the reviewer thinks that our paper is easy to follow, and that our extension makes the results applicable in more reasonable settings compared to previous results. All of the points the reviewer makes are valid, as are the suggestions. Let us in the following comment on each of the weaknesses and the questions. ### Infinite vs finite size ensembles It is a reasonable suggestion to include more results about ensembles of finite size. It should be noted that we already have some plots related to the importance of ensemble size in the appendix. We agree that these are somewhat hard to interpret. We have therefore chosen to redo the evaluations, to include more sizes. At the time of writing this rebuttal, we have built new sub-ensembles from our trained models for more ensemble sizes, and measured each of our metrics for the resulting models at epoch 10. Using a simple t-test on 10 (bootstrapped) samples per size and model (a schematic sketch of this procedure is given at the end of this comment), we can confirm with statistical significance (p<.005) that with respect to the KL-divergence, * $\mathcal{L}^{\mathrm{sym}}$-ensembles are more equivariant than the $\mathcal{L}^{\mathrm{asym}}$-ensembles with symmetric initialization for ensemble sizes greater than or equal to 75 on MNIST, and greater than 100 on CIFAR. * $\mathcal{L}^{\mathrm{sym}}$-ensembles are more equivariant than the $\mathcal{L}^{\mathrm{asym}}$-ensembles with asymmetric initialization for ensemble sizes greater than or equal to 25 on MNIST, and greater than 25 on CIFAR. See also the following plot (also showcasing OSP): [Plot](https://anonymous.4open.science/api/repo/ensemble_experiment-1B83/file/graphics/complete_plot.png?v=5df1ae3f) In the updated version of the paper, we will present data for 30 bootstrapped examples (a setting in which a t-test makes more sense) on all metrics. We can already now conclude that the difference in performance between the different versions is present already for moderate ensemble sizes. ### Novelty We understand and respect the reviewer's point, but hope that they can also agree that the results from the different papers have been put together in a non-trivial way to produce new, meaningful results. ### Necessary vs. sufficient conditions We have indeed only proven sufficient conditions, and we agree that this can be made clearer in the text. We however genuinely believe that proving more than we have already done goes beyond the scope of this work - significantly new ideas need to be applied to obtain a result about convergence towards, rather than invariance of, the symmetric models. ### More groups The reason for only testing the C4 group is that it gives a clean example where our results apply. When going over to rotation groups of higher order, one starts to need to interpolate, and the invariance condition will not be as clear-cut as before. We however agree that it is beneficial to also perform experiments in a setting that is more 'dirty' as far as our theory is concerned, since this will provide more information about its practical relevance. We will run one more round of experiments, for C16. This will take some time to set up and evaluate, whence we cannot report results now - we will do this as soon as possible. ### Limitations It is a reasonable suggestion to include a compilation of the limitations in order to increase the readability of the paper. We will do so in an updated version of the paper.
As we see it, our main limitations are * Our condition is sufficient rather than necessary * Our guarantee is only about the infinite-member limit of ensembles. We can also remark that the following are things we *speculate* on, but *haven't* proven: * The extent to which $\Pi_{\mathcal{L}}$ and $\Pi_G$ commute is indicative of emergent equivariance - the smaller it is, the more equivariant the ensembles should be. * The set of symmetric models may be an attractor of the dynamics, and not only stable.
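As referenced above, here is a schematic sketch of the bootstrap-and-test procedure behind the significance statements; the function and variable names are placeholders, and details such as drawing sub-ensembles without replacement from the pool of trained models are assumptions of this sketch rather than an exact transcript of our evaluation code.

```python
import numpy as np
from scipy.stats import ttest_ind

def bootstrap_metric(model_pool, ensemble_size, metric, n_boot=10, seed=0):
    """Draw n_boot random sub-ensembles of `ensemble_size` distinct members
    from `model_pool` and evaluate `metric` on each sub-ensemble."""
    rng = np.random.default_rng(seed)
    values = []
    for _ in range(n_boot):
        idx = rng.choice(len(model_pool), size=ensemble_size, replace=False)
        values.append(metric([model_pool[i] for i in idx]))
    return np.asarray(values)

# Hypothetical usage: compare symmetric vs. asymmetric architectures at one size.
# kl_sym  = bootstrap_metric(sym_models,  100, kl_divergence_metric)
# kl_asym = bootstrap_metric(asym_models, 100, kl_divergence_metric)
# t_stat, p_value = ttest_ind(kl_sym, kl_asym)  # two-sample t-test on the means
```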
02Od16GFRW
Ensembles provably learn equivariance through data augmentation
[]
Recently, it was proved that group equivariance emerges in ensembles of neural networks as the result of full augmentation in the limit of infinitely wide neural networks (neural tangent kernel limit). In this paper, we extend this result significantly. We provide a proof that this emergence does not depend on the neural tangent kernel limit at all. We also consider stochastic settings, and furthermore general architectures. For the latter, we provide a simple sufficient condition on the relation between the architecture and the action of the group for our results to hold. We validate our findings through simple numeric experiments.
[ "equivariance", "invariance", "ensemble models", "data augmentation", "SGD" ]
https://openreview.net/pdf?id=02Od16GFRW
https://openreview.net/forum?id=02Od16GFRW
HKJJNQ1JKw
official_review
1,730,600,353,980
02Od16GFRW
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission4689/Reviewer_YfbU" ]
ICLR.cc/2025/Conference
2025
summary: This paper shows that an ensemble of models trained with data augmentation naturally exhibits emergent equivariance properties. The results generalize past known results based on NTKs. The theory makes some basic assumptions on the architecture and shows that, when the initialization of the weights in an architecture has some symmetry, the expected architecture of the ensemble is equivariant. Experimental results with various ensembles validate the results for the C4 group of symmetries.
soundness: 3
presentation: 3
contribution: 2
strengths:
- The work shows the emergence of equivariance in ensemble models
- The work generalizes previous works where the proof relied on NTKs
- Experiments with large ensembles of models show the emergence of equivariance
weaknesses: I have several concerns over the usefulness of the theory and the experimental results.
Usefulness of theory:
- What is the use of the theory in model design or practical use cases? Equivariant models seem to give perfect equivariance and data augmentation techniques give approximate equivariance, so I am wondering what the use of the ensemble technique for symmetries is, especially given that we need over 1000 models to get good equivariance results.
- What are the advantages of the proposed technique compared to existing symmetrization and canonicalization methods [1-4] that can convert non-equivariant models into equivariant ones using techniques somewhat similar to ensemble methods but with additional transformations that look similar to augmentation?
Experimental results:
- Although the experiments do show that the architecture with symmetric support gives invariant output, even the asymmetric architecture seems to give invariant output, which calls the usefulness of the theory into question. The paper also discusses the possibility that the symmetric states are attractors, but this still leaves the current theory of limited use.
- Experiments are only shown for C4 symmetries.

[1] Basu, Sourya, et al. "Equi-tuning: Group equivariant fine-tuning of pretrained models." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 6. 2023.
[2] Mondal, Arnab Kumar, et al. "Equivariant adaptation of large pretrained models." Advances in Neural Information Processing Systems 36 (2023): 50293-50309.
[3] Basu, Sourya, et al. "Efficient equivariant transfer learning from pretrained models." Advances in Neural Information Processing Systems 36 (2024).
[4] Kaba, Sékou-Oumar, et al. "Equivariance with learned canonicalization functions." International Conference on Machine Learning. PMLR, 2023.
questions: Please see the weaknesses.
flag_for_ethics_review: ['No ethics review needed.']
rating: 3
confidence: 4
code_of_conduct: Yes
02Od16GFRW
Ensembles provably learn equivariance through data augmentation
[]
Recently, it was proved that group equivariance emerges in ensembles of neural networks as the result of full augmentation in the limit of infinitely wide neural networks (neural tangent kernel limit). In this paper, we extend this result significantly. We provide a proof that this emergence does not depend on the neural tangent kernel limit at all. We also consider stochastic settings, and furthermore general architectures. For the latter, we provide a simple sufficient condition on the relation between the architecture and the action of the group for our results to hold. We validate our findings through simple numeric experiments.
[ "equivariance", "invariance", "ensemble models", "data augmentation", "SGD" ]
https://openreview.net/pdf?id=02Od16GFRW
https://openreview.net/forum?id=02Od16GFRW
5gHcJIEzFj
official_review
1,730,565,560,048
02Od16GFRW
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission4689/Reviewer_Ljyp" ]
ICLR.cc/2025/Conference
2025
summary: The paper expands the results of Gerken & Kessel, which show using the NTK that data augmentation produces equivariant ensembles of models, by looking at finite network sizes. The authors then show empirically that their theoretical results indeed hold in practice (up to sampling errors).
soundness: 3
presentation: 2
contribution: 2
strengths:
- It generalizes the results in Gerken & Kessel
- The topic of invariance/equivariance is important, so these results would be of interest to people in that community
weaknesses: My main issue is with the writing:
- The results presented in the main text are quite trivial: if you start with an invariant distribution and use an invariant flow, you end up with an invariant distribution. The more interesting results are in the appendix (Appendices B and C).
- Writing $\mathcal{L} = A_\mathcal{L} + T\mathcal{L}$ with $T\mathcal{L}$ the tangent space is very confusing, as the tangent space is defined for a manifold and we are talking about a linear space. It needlessly complicates things, as there is no need to involve differential geometry when we are working on linear spaces.
questions: The results in Table 1 aren't that clear to me. In the asymmetric case where you have a symmetric initialization, shouldn't you get results that are similar to the symmetric case? Yet there is a large gap.
flag_for_ethics_review: ['No ethics review needed.']
rating: 6
confidence: 3
code_of_conduct: Yes
02Od16GFRW
Ensembles provably learn equivariance through data augmentation
[]
Recently, it was proved that group equivariance emerges in ensembles of neural networks as the result of full augmentation in the limit of infinitely wide neural networks (neural tangent kernel limit). In this paper, we extend this result significantly. We provide a proof that this emergence does not depend on the neural tangent kernel limit at all. We also consider stochastic settings, and furthermore general architectures. For the latter, we provide a simple sufficient condition on the relation between the architecture and the action of the group for our results to hold. We validate our findings through simple numeric experiments.
[ "equivariance", "invariance", "ensemble models", "data augmentation", "SGD" ]
https://openreview.net/pdf?id=02Od16GFRW
https://openreview.net/forum?id=02Od16GFRW
5M1yC2n4Sl
official_comment
1,732,031,587,812
HKJJNQ1JKw
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission4689/Authors" ]
ICLR.cc/2025/Conference
2025
comment: We thank the reviewer for their constructive criticism. We also understand the reviewer's concerns regarding the applicability of the theoretical developments to model design. However, we hope that we can convince the reviewer of the importance of the theoretical developments regardless of their immediate applicability. The general question that motivated this paper is: "Does data augmentation lead to equivariance?" The technique of data augmentation has long been used to align models with various operations, that is, to make them more robust. There is, however, little in the way of theoretical guarantees of this observed property of data augmentation. In our case we restrict ourselves to studying alignment with symmetries, that is, to emergent group equivariance from data augmentation. In this context, our theoretical results can be viewed as a partial answer to the general question that motivates our research.

### Usefulness of theory
In our paper, the objective is to show that when training ensembles of networks from scratch under data augmentation, there is an emergent equivariance coming from the optimization process itself. On the other hand, in the papers [1-4], the goal is to develop methods for making a pre-trained model equivariant. In papers [1,3], this is done by averaging the model over the orbit under the group action. This differs from ensembling as considered in our paper, since we average over initializations and random draws of group elements during training. In papers [2,4], it is done by precomposing the model with an equivariant canonicalization map. Although the authors of paper [2] note an augmentation effect of the not-yet-aligned canonicalization map during training, this is not the cause of the equivariance in this case, and the augmentation effect goes down over time. The results in papers [1-4] are very interesting, and we are not suggesting that people should favor our methods over the ones found in these papers. In fact, it is hard to see how our results would apply in the context of *finetuning* foundation models, which is the main focus in at least [1,3]. (They are in principle applicable when the models are trained from scratch.)

### Experimental results
As the reviewer notes, our experimental results seem to indicate that even the models with asymmetric filters become equivariant. This suggests that the sufficient condition in our theorem is not a necessary one. We do not think that this weakens our theory; it merely suggests that further developments are possible. Furthermore, in the paper we hypothesize that this might have to do with the fact that the asymmetric filters are approximately symmetric, in the sense that the energy in the asymmetric part is quite small. In Appendix E we provide details on the same experiment performed with $5\times 5$ filters instead of $3\times 3$ filters, and we see that the gap between the symmetric and the asymmetric model indeed grows when the energy of the asymmetric part is increased.

### Experiments beyond C4
The reason for only testing the C4 group is that it provides a clean example where our results apply. When going over to rotation groups of higher order, one starts to need to interpolate, and the invariance condition will not be as clear-cut as before. We however agree that it is beneficial to also perform experiments in a setting that is more 'dirty' as far as our theory is concerned, since this will provide more information about its practical relevance.
We will run one further round of experiments, for C16. This will take some time to set up and evaluate, so we cannot report results now - we will do this as soon as possible.
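To make the contrast drawn in the rebuttal concrete, here is a rough sketch of the two kinds of averaging for an invariance (classification) task under C4. The toy networks are placeholders and, in the ensemble case, each member would in practice be trained from scratch under rotation augmentation; this is an illustration of the distinction, not code from either line of work.

```python
import torch
import torch.nn as nn

def orbit_average(model, x):
    """Symmetrization: average one pre-trained model over the C4 orbit of the
    input, giving an exactly invariant output for a classification head."""
    outs = [model(torch.rot90(x, k=k, dims=(2, 3))) for k in range(4)]
    return torch.stack(outs).mean(dim=0)

def ensemble_average(models, x):
    """Ensembling: average independently initialized models (each of which
    would be trained from scratch under rotation augmentation); any invariance
    of the mean is emergent rather than enforced at inference time."""
    return torch.stack([m(x) for m in models]).mean(dim=0)

# Toy usage with untrained placeholder networks on 8x8 single-channel inputs.
make_net = lambda: nn.Sequential(
    nn.Conv2d(1, 4, 3, padding=1), nn.Flatten(), nn.Linear(4 * 8 * 8, 10)
)
x = torch.randn(2, 1, 8, 8)
print(orbit_average(make_net(), x).shape)                     # torch.Size([2, 10])
print(ensemble_average([make_net() for _ in range(5)], x).shape)
```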
02DCEU6vSU
Gen-LRA: Towards a Principled Membership Inference Attack for Generative Models
[ "Joshua Ward", "Chi-Hua Wang", "Guang Cheng" ]
Evaluating the potential privacy leakage of synthetic data is an important but unresolved problem. Most existing adversarial auditing frameworks for synthetic data rely on heuristics and unreasonable assumptions to attack the failure modes of generative models, exhibiting limited capability to describe and detect the privacy exposure of training data. In this paper, we study designing Membership Inference Attacks (MIAs) that specifically exploit the observation that generative models tend to memorize certain data points in their training sets, leading to significant local overfitting. Here, we propose Generative Likelihood Ratio Attack (Gen-LRA), a novel, computationally efficient shadow-box MIA that, with no assumption of model knowledge or access, attacks the generated synthetic dataset by conducting a hypothesis test that it is locally overfit to potential training data. Assessed over a comprehensive benchmark spanning diverse datasets, model architectures, and attack parameters, we find that Gen-LRA consistently dominates other MIAs for generative models across multiple performance metrics. These results underscore Gen-LRA's effectiveness as an interpretable and robust privacy auditing tool, highlighting the significant privacy risks posed by generative model overfitting in real-world applications
[ "Privacy", "Membership Inference Attacks", "Generative Models" ]
https://openreview.net/pdf?id=02DCEU6vSU
https://openreview.net/forum?id=02DCEU6vSU
v2kiJYWXcq
official_review
1,730,638,197,975
02DCEU6vSU
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12851/Reviewer_cCQH" ]
ICLR.cc/2025/Conference
2025
summary: This paper introduces the Generative Likelihood Ratio Attack (Gen-LRA), a novel membership inference attack specifically aimed at detecting privacy leakage due to overfitting in generative models. Unlike prior methods, Gen-LRA employs a likelihood ratio-based hypothesis testing approach to infer membership without requiring extensive knowledge of the model structure or parameters. By leveraging density estimation techniques, the authors assess whether synthetic data generated by a model is overfitting to specific training data points, particularly in regions with outliers. The authors demonstrate that Gen-LRA significantly outperforms existing MIA methods across various generative architectures and datasets, with particular success in scenarios with low false positive rates, highlighting the nuanced privacy risks associated with generative models. soundness: 2 presentation: 3 contribution: 2 strengths: This paper introduces the Generative Likelihood Ratio Attack (Gen-LRA), a novel membership inference attack specifically aimed at detecting privacy leakage due to overfitting in generative models. Unlike prior methods, Gen-LRA employs a likelihood ratio-based hypothesis testing approach to infer membership without requiring extensive knowledge of the model structure or parameters. By leveraging density estimation techniques, the authors assess whether synthetic data generated by a model is overfitting to specific training data points, particularly in regions with outliers. The authors demonstrate that Gen-LRA significantly outperforms existing MIA methods across various generative architectures and datasets, with particular success in scenarios with low false positive rates, highlighting the nuanced privacy risks associated with generative models. weaknesses: 1. The effectiveness of Gen-LRA depends heavily on accurate density estimation, which can be challenging in high-dimensional data settings. The use of kernel density estimation (KDE) or principal component analysis (PCA) for dimensionality reduction may limit applicability and accuracy. This limitation is critical because the success of the Gen-LRA method hinges on reliable density estimation, which becomes less accurate in high-dimensional spaces without significant computational expense. Inaccuracies here can undermine the method's robustness, making this the most pressing limitation. 2. Although Gen-LRA performs well at low false positive rates, its reliance on outlier detection may lead to elevated false positives in datasets with inherently high variability or complex distributions. False positives can impair the practical applicability of Gen-LRA in privacy-sensitive contexts, as overly cautious results may lead to unnecessary restrictions on data release. 3. Gen-LRA presumes that privacy leakage primarily stems from overfitting, potentially overlooking other forms of leakage that may not manifest as local overfitting. This could lead to incomplete privacy assessments, as the Gen-LRA approach might miss privacy vulnerabilities that do not align with the overfitting model. Expanding Gen-LRA’s scope to address other leakage types could enhance its overall utility. questions: 1.The manuscript lacks a clear explanation of the practical utility of applying MIA to synthetic data. It remains unclear why synthetic data was chosen as the focus, rather than real-world or other benchmark datasets. 
The authors are encouraged to provide references in the Related Work section to strengthen the justification for studying synthetic data specifically. Expounding on the unique relevance of synthetic data to MIA would better demonstrate the necessity and contributions of this study.
2. Several typographical errors and repeated references are present in the reference section, such as on Line 527 and Line 729. A thorough review of the references is recommended to ensure accuracy and consistency across all citations.
flag_for_ethics_review: ['No ethics review needed.']
rating: 3
confidence: 4
code_of_conduct: Yes
02DCEU6vSU
Gen-LRA: Towards a Principled Membership Inference Attack for Generative Models
[ "Joshua Ward", "Chi-Hua Wang", "Guang Cheng" ]
Evaluating the potential privacy leakage of synthetic data is an important but unresolved problem. Most existing adversarial auditing frameworks for synthetic data rely on heuristics and unreasonable assumptions to attack the failure modes of generative models, exhibiting limited capability to describe and detect the privacy exposure of training data. In this paper, we study designing Membership Inference Attacks (MIAs) that specifically exploit the observation that generative models tend to memorize certain data points in their training sets, leading to significant local overfitting. Here, we propose Generative Likelihood Ratio Attack (Gen-LRA), a novel, computationally efficient shadow-box MIA that, with no assumption of model knowledge or access, attacks the generated synthetic dataset by conducting a hypothesis test that it is locally overfit to potential training data. Assessed over a comprehensive benchmark spanning diverse datasets, model architectures, and attack parameters, we find that Gen-LRA consistently dominates other MIAs for generative models across multiple performance metrics. These results underscore Gen-LRA's effectiveness as an interpretable and robust privacy auditing tool, highlighting the significant privacy risks posed by generative model overfitting in real-world applications
[ "Privacy", "Membership Inference Attacks", "Generative Models" ]
https://openreview.net/pdf?id=02DCEU6vSU
https://openreview.net/forum?id=02DCEU6vSU
RCzL0WikF4
official_review
1,730,623,641,021
02DCEU6vSU
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12851/Reviewer_Yeu8" ]
ICLR.cc/2025/Conference
2025
summary: The paper proposes a new approach to membership inference attacks on tabular data generative models. The approach first estimates the distributions of (1) the reference samples plus the target sample and (2) the reference samples with kernel density estimation, and then computes the density ratio of synthetic samples over these two distributions. The intuition is that, if the target sample were used in training, the density of synthetic samples on distribution (1) would be higher. Results across various datasets and models show that the proposed approach yields better AUC-ROC and TPR at low FPRs.
soundness: 3
presentation: 3
contribution: 2
strengths:
* The proposed method is simple and effective.
* In general, the writing of the paper is clear.
* The paper has demonstrated results on many datasets and models.
weaknesses:
* The assumption that the reference data is available to the attacker is too strong.
* The title and the abstract do not reflect the scope and constraints of the method sufficiently.
questions: First, I would like to point out that I am not fully up-to-date on the literature regarding membership inference attacks, especially those involving tabular data. As a result, I may be unable to assess the novelty of this work and might not be familiar with the common settings examined in recent literature.
1. The paper assumes the reference data is available to the attacker. This does not seem to be very realistic to me. Section 1 discusses that a common scenario for synthetic data release is that the data owner wants to release data for open research. This implies that such data is not available to the public before that (if such data is already available, then there is no motivation or value for the data owner to release an additional dataset). That means that the attacker does not have access to the reference data either. The prior work I know of often considers attacks that do not make such assumptions (e.g., https://arxiv.org/pdf/1705.07663 and https://arxiv.org/pdf/1909.03935). The paper claims that this setting is realistic in Section 2: "We assume this in practice because this represents a plausible scenario for the owner of S as an attacker may be able to find comparable data in the real world..." Unfortunately, I do not fully understand this example. It would be great if the authors could explain it in more detail in the rebuttal.
2. Continuing on the above point, the paper needs to make it clearer what assumptions each of the baseline methods in Section 5 makes. Which of them also assume that reference data is available to the attacker? This would clarify whether the claimed improvement comes from the relaxation of the assumptions or from fundamental advances of the algorithm itself.
3. The paper only evaluates the proposed algorithm on tabular data. But this is not reflected in the title and abstract. By reading only the title and the abstract, readers might be misled into thinking that the paper proposes and evaluates the attack on diverse data types. I think it is important to clarify this, as the proposed approach relies on kernel density estimation, which (as discussed in the paper) does not scale well with the data dimension. (The proposed approach relies on dimension-reduction techniques to tackle the issue.) Therefore, it is unclear if such a pipeline can work well on other more high-dimensional and complicated data such as images and text.
4. How do you determine the kernel size and the type of the kernel in the experiments?
Is the algorithm sensitive to that? 5. Section 5 mentioned that "For Gen-LRA, we found that the choice of k can have a small impact on the performance of the attack (See Appendix A.3), we therefore use the results of the best k choice for each run as the goal for an MIA is to characterize the maximal empirical privacy risk." I understand that choosing the best k could help "characterize the maximal empirical privacy risk". However, this table is mainly for comparing between different baselines. The comparison would be unfair if you chose the best hyper-parameter for your own approach while not doing that for the baseline methods. 7. The discussion in Section 6.2 is nice, but it would be more self-contained if the paper could describe how DCR works in the main text. Other minor questions: 1. Section 1: "We demonstrate that Gen-LRA identifies a different source of privacy leakage relative to other commonly used MIAs." It would be better to clarify what "the different source" means here. I could only understand it after reading Section 5. 2. Line 116 and 117: what are M and D? These notations do not seem consistent with what was used before. 3. Line 127: typo on the left quotation mark 4. Line 266: missing a ) flag_for_ethics_review: ['Yes, Privacy, security and safety', 'Yes, Potentially harmful insights, methodologies and applications'] details_of_ethics_concerns: The paper focuses on membership inference attacks, which could be leveraged by adversaries to launch privacy attacks. rating: 5 confidence: 3 code_of_conduct: Yes
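For readers unfamiliar with the attack summarized in this review, a compact sketch is given below. The PCA step, the Gaussian-kernel KDEs, the bandwidth, and the fact that the ratio is evaluated over all synthetic rows (rather than only the k nearest neighbours of the target, as the paper's k hyperparameter suggests) are simplifications chosen for illustration, not the authors' exact implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KernelDensity

def gen_lra_score(synthetic, reference, target, n_components=8, bandwidth=0.5):
    """Return a membership score for `target`: the log-likelihood of the
    synthetic data under KDE(reference + target) minus that under
    KDE(reference). Larger values suggest local overfitting to `target`."""
    pca = PCA(n_components=n_components).fit(reference)
    S = pca.transform(synthetic)
    R = pca.transform(reference)
    x = pca.transform(target[None, :])
    kde_with = KernelDensity(bandwidth=bandwidth).fit(np.vstack([R, x]))
    kde_without = KernelDensity(bandwidth=bandwidth).fit(R)
    return kde_with.score(S) - kde_without.score(S)   # log-likelihood ratio over S

# Toy usage with random stand-in data (20-dimensional "tabular" rows).
rng = np.random.default_rng(0)
reference = rng.normal(size=(500, 20))
synthetic = rng.normal(size=(500, 20))
target = rng.normal(size=20)
print(gen_lra_score(synthetic, reference, target))
```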
02DCEU6vSU
Gen-LRA: Towards a Principled Membership Inference Attack for Generative Models
[ "Joshua Ward", "Chi-Hua Wang", "Guang Cheng" ]
Evaluating the potential privacy leakage of synthetic data is an important but unresolved problem. Most existing adversarial auditing frameworks for synthetic data rely on heuristics and unreasonable assumptions to attack the failure modes of generative models, exhibiting limited capability to describe and detect the privacy exposure of training data. In this paper, we study designing Membership Inference Attacks (MIAs) that specifically exploit the observation that generative models tend to memorize certain data points in their training sets, leading to significant local overfitting. Here, we propose Generative Likelihood Ratio Attack (Gen-LRA), a novel, computationally efficient shadow-box MIA that, with no assumption of model knowledge or access, attacks the generated synthetic dataset by conducting a hypothesis test that it is locally overfit to potential training data. Assessed over a comprehensive benchmark spanning diverse datasets, model architectures, and attack parameters, we find that Gen-LRA consistently dominates other MIAs for generative models across multiple performance metrics. These results underscore Gen-LRA's effectiveness as an interpretable and robust privacy auditing tool, highlighting the significant privacy risks posed by generative model overfitting in real-world applications
[ "Privacy", "Membership Inference Attacks", "Generative Models" ]
https://openreview.net/pdf?id=02DCEU6vSU
https://openreview.net/forum?id=02DCEU6vSU
IDT940ZREW
official_review
1,730,286,651,189
02DCEU6vSU
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12851/Reviewer_6TYk" ]
ICLR.cc/2025/Conference
2025
summary: This paper introduces Gen-LRA, a novel membership inference attack (MIA) methodology for evaluating privacy risks in synthetic tabular data. The authors propose a hypothesis testing framework that computes a likelihood ratio specifically targeted at identifying any local overfitting to the target record. The method requires minimal assumptions: just access to the released synthetic dataset and a reference dataset. They find their method to outperform baselines from the literature across 15 datasets. They further find their method to be particularly successful against outliers, in contrast with other MIAs from the literature.
soundness: 2
presentation: 3
contribution: 2
strengths:
- A technically novel and interesting way to compute the membership inference signal from synthetic data. The method is theoretically grounded, computationally efficient, and relies on limited assumptions for the attacker.
- They show the method to outperform a range of MIAs from the literature.
- Comprehensive evaluation of the attack across 15 datasets.
- The authors include intuitive examples (e.g., Fig. 1 and Sec. 6.2) that are well explained and help the understanding of the paper.
weaknesses: (For more details, see the questions.)
- My main concern comes down to a lack of related work being discussed. A range of important works have studied MIAs against synthetic tabular data using shadow modeling [1,2,3]. While I understand that these works are computationally more expensive and additionally rely on the attacker's knowledge of the training algorithm, I find these works to be very relevant to position this paper and its findings.
- The secondary insights have limited experimental depth. For instance, the evidence in Section 5.3 for the claim that the method works better for outliers (especially compared to other methods) is mostly anecdotal.

[1] Stadler, T., Oprisanu, B., & Troncoso, C. (2022). Synthetic data–anonymisation groundhog day. In 31st USENIX Security Symposium (USENIX Security 22) (pp. 1451-1468).
[2] Houssiau, F., Jordon, J., Cohen, S. N., Daniel, O., Elliott, A., Geddes, J., ... & Szpruch, L. TAPAS: a Toolbox for Adversarial Privacy Auditing of Synthetic Data. In NeurIPS 2022 Workshop on Synthetic Data for Empowering ML Research.
[3] Meeus, M., Guepin, F., Creţu, A. M., & de Montjoye, Y. A. (2023, September). Achilles’ heels: vulnerable record identification in synthetic data publishing. In European Symposium on Research in Computer Security (pp. 380-399). Cham: Springer Nature Switzerland.
questions:
- Can you expand the related work to also include the shadow-modeling based MIAs?
- To truly understand the contribution, could you implement the shadow-modeling based MIAs [1,2,3] as well and report their results? Right now, the Gen-LRA method seems to be better than the prior work you consider, and does so with limited assumptions for the attacker and with limited computational cost. How does this change when the attacker (i) has knowledge of the training algorithm and (ii) has the computational resources to train shadow models? Could the authors implement these shadow-model MIAs and report the results alongside Gen-LRA? This would help to position the method and its results in the literature, giving a clear understanding of the impact of certain assumptions and computational cost on the MIA results.
- Similarly, the work on shadow modeling MIAs also discusses disparate vulnerability of outliers [1,2,3].
Stadler et al. [1] find outliers to be more vulnerable than randomly selected records, while Meeus et al. [3] propose a method to identify more vulnerable records. Could the authors provide more elaborate results for the outlier discussion (e.g., show MIA results for outliers vs. random points across datasets) and relate these findings to prior work? While the fact that Gen-LRA focuses on outliers distinguishes it from distance-based methods, these findings might not be very different from the ones in shadow-modeling based MIAs.
flag_for_ethics_review: ['No ethics review needed.']
rating: 3
confidence: 4
code_of_conduct: Yes
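As a point of reference for the shadow-modelling attacks cited in this review ([1-3]), a toy sketch of the general recipe is given below: fit generators on subsets that include or exclude the target and compare how "target-like" the resulting synthetic data is. The Gaussian stand-in generator and the nearest-neighbour statistic are placeholders; the actual attacks in [1-3] use trained generative models and richer feature sets.

```python
import numpy as np

rng = np.random.default_rng(0)

class GaussianGenerator:
    """Stand-in generative model: fits a diagonal Gaussian to its training data."""
    def fit(self, X):
        self.mu, self.sigma = X.mean(0), X.std(0) + 1e-6
        return self
    def sample(self, n):
        return rng.normal(self.mu, self.sigma, size=(n, len(self.mu)))

def closeness(synth, target):
    # Summary statistic: negative distance from the target to its nearest synthetic row.
    return -np.min(np.linalg.norm(synth - target, axis=1))

def shadow_mia_score(target, population, n_shadow=20, n_train=500):
    """Shadow-modelling signal: train generators on subsets with and without the
    target and compare how close the synthetic data gets to the target."""
    with_t, without_t = [], []
    for _ in range(n_shadow):
        subset = population[rng.choice(len(population), n_train, replace=False)]
        without_t.append(closeness(GaussianGenerator().fit(subset).sample(n_train), target))
        subset_in = np.vstack([subset[:-1], target])   # swap one row for the target
        with_t.append(closeness(GaussianGenerator().fit(subset_in).sample(n_train), target))
    return np.mean(with_t) - np.mean(without_t)        # larger => more evidence of membership

population = rng.normal(size=(5000, 10))
print(shadow_mia_score(population[0], population))
```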
02DCEU6vSU
Gen-LRA: Towards a Principled Membership Inference Attack for Generative Models
[ "Joshua Ward", "Chi-Hua Wang", "Guang Cheng" ]
Evaluating the potential privacy leakage of synthetic data is an important but unresolved problem. Most existing adversarial auditing frameworks for synthetic data rely on heuristics and unreasonable assumptions to attack the failure modes of generative models, exhibiting limited capability to describe and detect the privacy exposure of training data. In this paper, we study designing Membership Inference Attacks (MIAs) that specifically exploit the observation that generative models tend to memorize certain data points in their training sets, leading to significant local overfitting. Here, we propose Generative Likelihood Ratio Attack (Gen-LRA), a novel, computationally efficient shadow-box MIA that, with no assumption of model knowledge or access, attacks the generated synthetic dataset by conducting a hypothesis test that it is locally overfit to potential training data. Assessed over a comprehensive benchmark spanning diverse datasets, model architectures, and attack parameters, we find that Gen-LRA consistently dominates other MIAs for generative models across multiple performance metrics. These results underscore Gen-LRA's effectiveness as an interpretable and robust privacy auditing tool, highlighting the significant privacy risks posed by generative model overfitting in real-world applications
[ "Privacy", "Membership Inference Attacks", "Generative Models" ]
https://openreview.net/pdf?id=02DCEU6vSU
https://openreview.net/forum?id=02DCEU6vSU
FSjb9PJzIo
official_review
1,730,152,041,539
02DCEU6vSU
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12851/Reviewer_Phn8" ]
ICLR.cc/2025/Conference
2025
summary: The paper proposes a novel membership inference attack on synthetic data generators called Gen-LRA, based on estimating a likelihood ratio for the synthetic data coming from a reference distribution vs. from the reference distribution with a target point included. Gen-LRA is benchmarked against several competing attacks on a variety of datasets, where Gen-LRA generally outperforms the competition.
soundness: 2
presentation: 3
contribution: 3
strengths: The likelihood ratio that Gen-LRA estimates is novel to my knowledge, and seems to be closer to the likelihood ratio that would be theoretically optimal than what previous work has looked at. The paper is easy to understand, and the writing is generally polished. Looking at TPR @ low FPR is good practice, and too often neglected in the MIA literature. The paper could even highlight these results further: most of the AUC-ROC scores for all methods are close to random guessing, but Gen-LRA is much more accurate than random guessing at FPR = 0.001.
weaknesses: Using the PCA+KDE density estimator for DOMIAS is not fully fair, since the DOMIAS paper used a more sophisticated density estimator which was found to perform better than the KDE. Of course, the same estimator could also improve the results of Gen-LRA, and PCA+KDE could be computationally cheaper, but these should be checked empirically.
Using PCA may limit the applicability of outlier overfitting detection for outliers with rare categorical values. For example, consider the detection of overfitting on datapoints of French people on the Adult dataset. PCA weights the input dimensions based on how much variance they have, so the indicator for being French would have a very low weight (<1% of the data is French). As a result, the PCA outputs would be very similar between French and non-French people, and Gen-LRA would not be able to detect overfitting affecting French people. Unless I'm completely mistaken about this phenomenon, this should be mentioned as a limitation.
For a similar reason, you should check if datapoints with high DCR score have similarities. It could be that they do, but UMAP does not consider them important. This could change the interpretation of Figure 2 that DCR does not target specific outlier regions. You should also discuss the fact that Ward et al. (2024) report a very similar finding to your Figure 2 with their MIA. As a part of this, it would be interesting to see analogues of Figure 2 for the other MIAs used as baselines.
Please include separate results from each dataset in addition to the mean results across datasets. The datasets could have significant performance differences that the aggregation hides. I'm also not sure if the standard deviations of performance across different datasets are meaningful in any way.
Minor points:
- The paper should make the differences between DOMIAS and Gen-LRA clearer, since the methods are fairly similar.
- It is not clear what $\mathbb{P}\cup \{x^*\}$ precisely is, which makes the motivation leading to Equation 4 seem a bit handwavy.
- Contribution 1: this sentence is a bit unclear, making it seem like the null and alternative hypotheses are the same.
- Line 172: capitalise "equation 4".
- Line 266: missing parenthesis.
- Line 346: "scale" is ambiguous, I would suggest "normalise" if that is what you are doing.
- Several references are missing the publication forum, for example Durkan et al. (2019), Ganev and De Cristofaro (2023).
questions: No further questions.
flag_for_ethics_review: ['No ethics review needed.'] rating: 5 confidence: 4 code_of_conduct: Yes
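The variance-weighting concern about unscaled PCA raised in this review can be checked numerically on synthetic data; the feature names and proportions below are of course illustrative, not taken from the Adult dataset.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 10_000
age = rng.normal(40, 12, n)                         # high-variance continuous feature
hours = rng.normal(38, 10, n)
is_french = (rng.random(n) < 0.01).astype(float)    # rare binary indicator, variance ~ 0.0099

X = np.column_stack([age, hours, is_french])
pca = PCA(n_components=2).fit(X)
print(np.round(np.abs(pca.components_), 4))
# The loadings on the rare indicator are ~0, so rows differing only in that
# column are nearly indistinguishable after projection.
```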
02DCEU6vSU
Gen-LRA: Towards a Principled Membership Inference Attack for Generative Models
[ "Joshua Ward", "Chi-Hua Wang", "Guang Cheng" ]
Evaluating the potential privacy leakage of synthetic data is an important but unresolved problem. Most existing adversarial auditing frameworks for synthetic data rely on heuristics and unreasonable assumptions to attack the failure modes of generative models, exhibiting limited capability to describe and detect the privacy exposure of training data. In this paper, we study designing Membership Inference Attacks (MIAs) that specifically exploit the observation that generative models tend to memorize certain data points in their training sets, leading to significant local overfitting. Here, we propose Generative Likelihood Ratio Attack (Gen-LRA), a novel, computationally efficient shadow-box MIA that, with no assumption of model knowledge or access, attacks the generated synthetic dataset by conducting a hypothesis test that it is locally overfit to potential training data. Assessed over a comprehensive benchmark spanning diverse datasets, model architectures, and attack parameters, we find that Gen-LRA consistently dominates other MIAs for generative models across multiple performance metrics. These results underscore Gen-LRA's effectiveness as an interpretable and robust privacy auditing tool, highlighting the significant privacy risks posed by generative model overfitting in real-world applications
[ "Privacy", "Membership Inference Attacks", "Generative Models" ]
https://openreview.net/pdf?id=02DCEU6vSU
https://openreview.net/forum?id=02DCEU6vSU
DmNAPjR8Wk
comment
1,732,725,402,355
02DCEU6vSU
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12851/Authors" ]
ICLR.cc/2025/Conference
2025
withdrawal_confirmation: I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors. comment: We thank the reviewers for their high quality and helpful reviews. Given this feedback, we believe that this work would be better presented with additional experiments and writing revisions that are outside of the scope of this rebuttal window. For these reasons, we are withdrawing our submission.
02DCEU6vSU
Gen-LRA: Towards a Principled Membership Inference Attack for Generative Models
[ "Joshua Ward", "Chi-Hua Wang", "Guang Cheng" ]
Evaluating the potential privacy leakage of synthetic data is an important but unresolved problem. Most existing adversarial auditing frameworks for synthetic data rely on heuristics and unreasonable assumptions to attack the failure modes of generative models, exhibiting limited capability to describe and detect the privacy exposure of training data. In this paper, we study designing Membership Inference Attacks (MIAs) that specifically exploit the observation that generative models tend to memorize certain data points in their training sets, leading to significant local overfitting. Here, we propose Generative Likelihood Ratio Attack (Gen-LRA), a novel, computationally efficient shadow-box MIA that, with no assumption of model knowledge or access, attacks the generated synthetic dataset by conducting a hypothesis test that it is locally overfit to potential training data. Assessed over a comprehensive benchmark spanning diverse datasets, model architectures, and attack parameters, we find that Gen-LRA consistently dominates other MIAs for generative models across multiple performance metrics. These results underscore Gen-LRA's effectiveness as an interpretable and robust privacy auditing tool, highlighting the significant privacy risks posed by generative model overfitting in real-world applications
[ "Privacy", "Membership Inference Attacks", "Generative Models" ]
https://openreview.net/pdf?id=02DCEU6vSU
https://openreview.net/forum?id=02DCEU6vSU
1jXn1ww1AV
official_review
1,730,745,668,671
02DCEU6vSU
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission12851/Reviewer_tZB4" ]
ICLR.cc/2025/Conference
2025
summary: The paper describes a membership inference attack on generative models. It requires a set of examples generated by the model, S, and a set of reference examples, R, presumably from the same distribution as the data the model was trained on. Then to guess whether some new point x* was part of the training data, it estimates the likelihood ratio of S between a model trained on R vs. a model trained on $R \cup \{x*\}$ using two kernel density estimators. It then thresholds on the likelihood ratio. Experimental results demonstrate impressive improvements compared to baseline models, particularly when evaluated with the critical "true positive rate at low false positive rate" metric. soundness: 3 presentation: 3 contribution: 3 strengths: The idea of performing MIA on a generative model by using likelihood ratio of generated data between models with and without the targeted example is very natural and efficient. I'm not surprised that it is very effective, as demonstrated in the experiments. The paper is mostly well-written and well-motivated, and to my knowledge original. weaknesses: I'm afraid the specific approach of using kernel density estimators will limit the method's applicability to low-dimensional tabular datasets. I would love to see this idea generalized to higher-dimensional data, probably using something that will scale better than KDEs. questions: 1. Although I could follow the gist of the idea, some of the notation is not precisely defined. $p_{\mathbb{P} \cup x*}$. It might be clearer to skip Eq.s 3/4 and jump to Eq 5. 1. Do you have any ideas for how to generalize this to forms of data that are not amenable to KDEs (even after applying PCA)? 1. Section 5.3 is not clear to me. What exactly is the experiment here, and what is it supposed to demonstrate? flag_for_ethics_review: ['No ethics review needed.'] rating: 8 confidence: 4 code_of_conduct: Yes
029hDSVoXK
Dynamic Neural Fortresses: An Adaptive Shield for Model Extraction Defense
[]
Model extraction aims to acquire a pre-trained black-box model concealed behind a black-box API. Existing defense strategies against model extraction primarily concentrate on preventing the unauthorized extraction of API functionality. However, two significant challenges still need to be solved: (i) Neural network architecture of the API constitutes a form of intellectual property that also requires protection; (ii) The current practice of allocating the same network architecture to both attack and benign queries results in substantial resource wastage. To address these challenges, we propose a novel \textit{Dynamic Neural Fortresses} (DNF) defense method, employing a dynamic Early-Exit neural network, deviating from the conventional fixed architecture. Firstly, we facilitate the random exit of attack queries from the network at earlier layers. This strategic exit point selection significantly reduces the computational cost for attack queries. Furthermore, the random exit of attack queries from earlier layers introduces increased uncertainty for attackers attempting to discern the exact architecture, thereby enhancing architectural protection. On the contrary, we aim to facilitate benign queries to exit at later layers, preserving model utility, as these layers typically yield meaningful information. Extensive experiments on defending against various model extraction scenarios and datasets demonstrate the effectiveness of DNF, achieving a notable 2$\times$ improvement in efficiency and an impressive reduction of up to 12\% in clone model accuracy compared to SOTA defense methods. Additionally, DNF provides strong protection against neural architecture theft, effectively safeguarding network architecture from being stolen.
[ "Model Extraction Defense" ]
https://openreview.net/pdf?id=029hDSVoXK
https://openreview.net/forum?id=029hDSVoXK
eZyr33wMG6
official_review
1,730,697,380,551
029hDSVoXK
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission5212/Reviewer_FTna" ]
ICLR.cc/2025/Conference
2025
summary: Model extraction is a type of attack where an attacker tries to replicate a victim model to either:
1. Estimate the model’s parameters to emulate the model’s performance.
2. Copy the model’s architecture, to recreate the model as-is.
3. Get protected knowledge of the training data of the victim model, to better understand the data distribution it was trained on, so that other types of adversarial attacks can be carried out.
Existing defense strategies are costly – they do not differentiate between benign and malicious queries from an attacker, and this form of defense allocates the same computational power to both. This paper provides a novel way to tackle model extraction attacks – Dynamic Neural Fortresses. They propose an early-exit strategy wherein the victim model has built-in early-exit routes that the model can take and provide outputs that are OOD from its expected input-output combination. If an input query meets an early exit’s threshold, the model inference exits with the output at that stage.
soundness: 3
presentation: 2
contribution: 3
strengths:
1. The paper presents an interesting defensive method to counter model extraction attacks. The paper’s novelty lies in the core idea of using a dynamic exit strategy based on the input query. While early exit strategies have been explored in the context of neural networks, their application to defensive methods is novel.
2. The paper is well written, and the core idea is simple to understand. The language is lucid, but see weaknesses 2 and 3.
3. The paper is well organized with a clear progression between sections. Figure 1 greatly aids clarity in trying to understand the pipeline; however, see weakness 2.
4. The experimental evaluation is robust and does seem to support the authors’ claims that DNF achieves a substantial reduction in successful model cloning.
5. This paper addresses a growing concern in the space of AI/ML model deployment – protecting against model cloning and safeguarding privacy and intellectual property rights. This work does have the potential to help drive forward work in the defense space for these attack types.
weaknesses:
1. Despite strength 5, this method can be adapted widely only after these weaknesses are addressed and questions explored.
2. The paper should make better use of visual elements – probably, at least in the appendix, add an example of what an attack query would look like, why the victim system would classify the query as an attack, what the victim model’s behaviour would be on it, and how early it would exit.
3. Math is useful and helps to aid the reader’s understanding but at times also hampers readability. Especially in textual sections it breaks the flow for readers. Something that may help is to condense the math and limit it to equations that can be repeatedly referenced, or to have a table of symbol notations that readers can refer to.
4. Some sections could use clearer explanations - the OOD Data Learning Objective, and the underlying theory for Entropy and IB regularization. Maybe providing examples around mutual information or ER could help.
5. The paper does provide some explanation of Entropy and IB regularization but could expand a little more on how mutual information reduction leads to lower predictability and can be leveraged for distinguishing between benign and malicious queries.
6. Maybe a comparison with other information-theory based approaches such as standard adversarial training would help drive home the advantages of DNF.
Another set of comparisons that could strengthen the paper’s results is against other dynamic architectures (for example, ‘BranchyNet’).
7. The paper uses ER to determine optimal exits from the model’s inference. However, the choice of thresholds is only briefly discussed. Maybe an ablation study of various hyperparameters, exit thresholds and entropy weights could help explain the choice of a certain threshold or explain the assumptions that the authors may have made.
questions:
1. Concepts related to entropy and IB regularization are presented with some mathematical rigor, and learning objectives for both ID and OOD data are presented with entropy and IB regularization constraints; however, some additional insight into potential limitations is necessary – how would the strategy perform under adaptive attacks with a more varied and increasingly sophisticated OOD spectrum? And how would it impact models that aim for domain generalizability and seek to incorporate that OOD spectrum into their capabilities?
2. How does this defensive method translate to multi-modal architectures like VLMs? Or multi-pipeline methods where each branch operates on different modalities? Or ML methods where different models are trained for different modalities and their outputs are combined (via some aggregation)?
flag_for_ethics_review: ['No ethics review needed.']
rating: 6
confidence: 3
code_of_conduct: Yes
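For readers unfamiliar with early-exit networks, the exit rule discussed in this review can be sketched roughly as follows. The MLP backbone, the per-block exit heads, the entropy threshold and the batch-level exit decision are generic placeholders for illustration, not the DNF architecture or its training objective.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """A backbone with an auxiliary classifier after each block; inference
    stops at the first exit whose prediction entropy falls below a threshold
    (for simplicity the whole batch exits together here)."""
    def __init__(self, width=32, n_classes=10, n_blocks=4, threshold=0.5):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(width, width), nn.ReLU()) for _ in range(n_blocks)]
        )
        self.exits = nn.ModuleList([nn.Linear(width, n_classes) for _ in range(n_blocks)])
        self.threshold = threshold

    def forward(self, x):
        for depth, (block, head) in enumerate(zip(self.blocks, self.exits), start=1):
            x = block(x)
            probs = head(x).softmax(dim=-1)
            entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1).mean()
            if entropy < self.threshold or depth == len(self.blocks):
                return probs, depth   # confident (or final) exit

net = EarlyExitNet()
probs, depth = net(torch.randn(8, 32))
print(depth, probs.shape)
```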
029hDSVoXK
Dynamic Neural Fortresses: An Adaptive Shield for Model Extraction Defense
[]
Model extraction aims to acquire a pre-trained black-box model concealed behind a black-box API. Existing defense strategies against model extraction primarily concentrate on preventing the unauthorized extraction of API functionality. However, two significant challenges still need to be solved: (i) Neural network architecture of the API constitutes a form of intellectual property that also requires protection; (ii) The current practice of allocating the same network architecture to both attack and benign queries results in substantial resource wastage. To address these challenges, we propose a novel \textit{Dynamic Neural Fortresses} (DNF) defense method, employing a dynamic Early-Exit neural network, deviating from the conventional fixed architecture. Firstly, we facilitate the random exit of attack queries from the network at earlier layers. This strategic exit point selection significantly reduces the computational cost for attack queries. Furthermore, the random exit of attack queries from earlier layers introduces increased uncertainty for attackers attempting to discern the exact architecture, thereby enhancing architectural protection. On the contrary, we aim to facilitate benign queries to exit at later layers, preserving model utility, as these layers typically yield meaningful information. Extensive experiments on defending against various model extraction scenarios and datasets demonstrate the effectiveness of DNF, achieving a notable 2$\times$ improvement in efficiency and an impressive reduction of up to 12\% in clone model accuracy compared to SOTA defense methods. Additionally, DNF provides strong protection against neural architecture theft, effectively safeguarding network architecture from being stolen.
[ "Model Extraction Defense" ]
https://openreview.net/pdf?id=029hDSVoXK
https://openreview.net/forum?id=029hDSVoXK
dHvqGcF4MN
official_review
1,730,718,075,531
029hDSVoXK
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission5212/Reviewer_dSR3" ]
ICLR.cc/2025/Conference
2025
summary: The paper presents the “Dynamic Neural Fortress” (DNF) framework as a defense against Model Extraction Attacks. These attacks allow an adversary to create a copy of a pre-trained model accessible via black-box APIs, posing risks to proprietary models. The authors identify two main challenges in current defenses: (1) neural network architecture protection, which previously proposed attacks take for granted by using the same model architecture for victim and clone models, and (2) optimizing computational resources by avoiding allocation of equal resources to both benign and attack queries. The authors implement an Early-Exit neural network wrapper (EENN) on top of a trained model. This wrapper facilitates random exits at earlier layers for attack queries while preserving model utility by making benign queries exit at later layers. The authors assume the usage of out-of-distribution (OOD) data by attackers in most cases, but there are some experiments conducted for in-distribution (ID) data as well. Using concepts from deep information bottleneck theory, the authors optimize mutual information between input data, latent features, and output labels for training the EENN model. The proposed method has been evaluated via testing on various architectures and datasets, and compared against other state-of-the-art defenses.
soundness: 2
presentation: 2
contribution: 2
strengths:
- The proposed idea of implementing early exits as a defense against model extraction is novel and sound.
- The method is easily adaptable to different architectures like ResNets and ViTs.
- The use of entropy and information bottleneck theory is sound and well-suited to the goal of reducing extractable information for the attacker.
- The experiments conducted cover various scenarios, models and datasets, validating the method's generalizability. The performance comparisons with state-of-the-art defenses further strengthen its credibility.
- The ablation study is thorough and captures various scenarios that highlight the effectiveness of the proposed method and its components.
weaknesses: The paper presents a technically sound idea, but the presentation is poor and needs major revisions. I am listing the weaknesses section by section.
### Related work:
- The related work is not organized properly, and some works are not cited in their appropriate sections, although they are cited later in the paper. For example, ActiveThief by Pal et al. (2020) [1] should be present under functionality stealing.
- When a model extraction attack is data-based, the data might be natural or synthetic. For example, I can generate a dataset of 10,000 images from a pretrained generative network and use that for model extraction. This would still fall under the category of data-based model extraction. Data-free model extraction means that the data used for stealing is generated based on some information received from the victim.
- Therefore, restructuring the related work section is necessary here.
### Methodology:
- The steps to convert a pre-trained victim model into an EENN are not easy to follow. A network is trained on the ID data first. Then exit classifiers are added on top of it. Then, an OOD generator is used to generate OOD data, which is then passed through the original network without the exit networks for inference. The steps followed after this are not written in a coherent manner. One has to go through Algorithm 1 to get a clear picture of the training process.
- The term “specific” is overused: it starts two consecutive paragraphs (lines 224-235 and 236-241) and appears even inside the paragraphs, although the sentences contained in both paragraphs are not specific at all.
### Experimentation:
- The authors should differentiate between the DFME and DBME settings in more detail. In line 387, it is assumed that the reader will know that they are talking about the DFME setting instead of the soft-label setting. This also invites confusion regarding the budget difference between the soft- and hard-label settings, where the budget should be the same for valid comparison.
- For the DFME setting, one clone model architecture should be the same as the victim model for valid comparison (ResNet-34 in this case). Previous methods, like the prediction poisoning [2] method used by the authors for comparison, have conducted experiments that keep the victim architecture for the stolen model. Moreover, the proposed method is not better than MeCo for the CIFAR-10 dataset. This should be analyzed and discussed.
- For the DBME setting, using the random strategy for sampling images is not ideal. It has been shown in the ActiveThief [1] paper that using an uncertainty-based sampling method is more effective.
- To showcase the effectiveness of the in-distribution defense, using JBDA as the attack strategy is fairly obsolete, and the paper cited needs to be corrected. The paper that proposed the attack is [3]. The authors should use either the ActiveThief or the Knockoff Nets attack for evaluation, as they are more recent and utilize intelligent sampling-based strategies for attack. If an actual attacker has access to in-distribution data, they will try to use the best strategy possible.
- To demonstrate the defense’s effectiveness against model architecture stealing, the authors pick the latest attack by Carlini et al. but fail to show effectiveness against previously cited work, specifically “Towards reverse-engineering black-box neural networks” (International Conference on Learning Representations, 2018), which performs the attack on ImageNet models. Considering that this was one of the major claims made by the authors, they should evaluate this aspect thoroughly.
### Grammar:
The paper has incoherent paragraphs, spelling mistakes, and redundant sentences. Some of them are listed below:
- Line 225, it should be “convert” instead of “covert.”
- In Table 1 and Table 2, the spelling of “label” is incorrect.
- Appendix D, Lines 778-779, same line repeated twice.
Citations:
- [1] Pal, Soham, et al. “Activethief: Model extraction using active learning and unannotated public data.” Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 01. 2020.
- [2] Orekondy, Tribhuvanesh, Bernt Schiele, and Mario Fritz. “Prediction poisoning: Towards defenses against dnn model stealing attacks.” arXiv preprint arXiv:1906.10908 (2019).
- [3] Papernot, Nicolas, et al. “Practical black-box attacks against machine learning.” Proceedings of the 2017 ACM on Asia conference on computer and communications security. 2017.
questions:
- The authors claim their approach falls under the model extraction prevention defense category. Still, it works like a detection approach where the OOD detector is built into the model itself and thus relies heavily on the OOD data used for classification. The results the authors share to argue otherwise are insufficient. I would ask the authors to include more experiments for this argument.
- If the model is trained to early exit in the case of OOD samples, but the labels used are from the original neural network (essentially the last possible exit), what is the accuracy of the model on OOD data used for training the model? I suspect that the early exit model misclassifies OOD data with high confidence. If it were learning the original network’s output labels for OOD data, then the defense would not work for the hard-label setting as the attacker would still receive a large portion of the original network’s labels as output with some erroneous ones. - Regarding the exit point evaluation ablation study, I would like to know the accuracy at each exit and the exact number of ID and OOD samples passing through each exit instead of terms such as “over half,” etc. flag_for_ethics_review: ['No ethics review needed.'] rating: 8 confidence: 5 code_of_conduct: Yes
029hDSVoXK
Dynamic Neural Fortresses: An Adaptive Shield for Model Extraction Defense
[]
Model extraction aims to acquire a pre-trained black-box model concealed behind a black-box API. Existing defense strategies against model extraction primarily concentrate on preventing the unauthorized extraction of API functionality. However, two significant challenges still need to be solved: (i) Neural network architecture of the API constitutes a form of intellectual property that also requires protection; (ii) The current practice of allocating the same network architecture to both attack and benign queries results in substantial resource wastage. To address these challenges, we propose a novel \textit{Dynamic Neural Fortresses} (DNF) defense method, employing a dynamic Early-Exit neural network, deviating from the conventional fixed architecture. Firstly, we facilitate the random exit of attack queries from the network at earlier layers. This strategic exit point selection significantly reduces the computational cost for attack queries. Furthermore, the random exit of attack queries from earlier layers introduces increased uncertainty for attackers attempting to discern the exact architecture, thereby enhancing architectural protection. On the contrary, we aim to facilitate benign queries to exit at later layers, preserving model utility, as these layers typically yield meaningful information. Extensive experiments on defending against various model extraction scenarios and datasets demonstrate the effectiveness of DNF, achieving a notable 2$\times$ improvement in efficiency and an impressive reduction of up to 12\% in clone model accuracy compared to SOTA defense methods. Additionally, DNF provides strong protection against neural architecture theft, effectively safeguarding network architecture from being stolen.
[ "Model Extraction Defense" ]
https://openreview.net/pdf?id=029hDSVoXK
https://openreview.net/forum?id=029hDSVoXK
CXqIEtrkoQ
official_review
1,730,700,974,929
029hDSVoXK
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission5212/Reviewer_zh4c" ]
ICLR.cc/2025/Conference
2025
summary: The Dynamic Neural Fortresses (DNF) defense method introduced in this paper employs a dynamic early-exit neural network to defend against model extraction attacks. This approach simultaneously protects model functionality and network architecture while improving defense efficiency against these threats. Extensive experiments demonstrate that the proposed defense method outperforms SOTA model extraction defenses in terms of both effectiveness and efficiency. soundness: 3 presentation: 2 contribution: 2 strengths: * The first defense framework that simultaneously offers three key protective benefits: protecting the functionality and the model architecture while improving inference efficiency. * An innovative design of the loss function is achieved by incorporating Information Bottleneck (IB) theory. * The experimental design is well-structured and covers various scenarios, effectively validating the method's effectiveness. weaknesses: * The claims regarding the protection of model architecture are overstated. Early Exit (EE) mechanisms indeed prevent attackers from executing the entire DNN pipeline, thereby protecting the full architecture information from being leaked. However, the authors fail to explain how attackers might exploit this vulnerability to steal the model architecture when executing the entire network. Furthermore, EE mechanisms typically occur in the last few layers of DNNs; therefore, while the proposed approach may protect certain layers, it only protects those that are unexecuted, leaving the majority of the neural network vulnerable (if there are effective attacks that can steal the model architecture). The authors should consider discussing these limitations in a dedicated section titled "Limitations." * The definitions of out-of-distribution (OOD) and in-distribution (ID) data lack clarity. It is unclear why the authors consider OOD data to be "illegal" while ID data is deemed "legal," and the rationale behind the corresponding loss term needs further explanation. Additionally, the authors aim to minimize the mutual information between $X_{id}$ and $Z_{id}$ in Eq. (3). However, this approach could potentially compromise the overall performance of deep neural networks (DNNs). The authors should provide additional clarification on why a reduced mutual information between $X_{id}$ and $Z_{id}$ does not impact the prediction accuracy (a generic variational sketch of such an objective is given after this review). * Table 12 indicates that over 90% of the queries drawn from the ID dataset exit at Exit2, while only about 75% of the OOD queries exit at the same stage. This discrepancy seems inconsistent with the motivation behind the two loss terms in Eq. (3) and Eq. (4). The authors should explain this discrepancy and discuss how it impacts the effectiveness of the proposed defense mechanism. I would like to suggest the authors provide a more detailed analysis of the exit patterns for ID vs. OOD data. * The explanation for choosing a specific mutual information optimization method to achieve the defense objectives lacks a deeper theoretical explanation and intuitive justification, making it challenging to fully follow the principles behind the proposed method. * The experiments conducted to protect the model architecture appear limited, which does not sufficiently demonstrate the contribution related to model architecture protection mentioned in the paper. Consider adding additional experiments and evaluation metrics specifically designed to assess the robustness of the model architecture against potential theft. 
* It would be advantageous to include experiments that investigate the correlation between accuracy and exit points, providing a clearer visualization of the early-exit mechanism's impact. I would suggest a graph showing accuracy vs. exit points for both ID and OOD data, or a statistical analysis of this relationship. * It seems that all datasets utilized are classification datasets, which makes it difficult to validate the effectiveness of the proposed method in other tasks and domains. * Some notation in this article is reused, e.g., $r$. questions: * Can the proposed defense be easily extended to other tasks and domains, such as object detection and NLP applications? * Does the number of exit points impact the performance of the proposed defense? * According to the design, earlier blocks are intended to reduce the model's predictive capability. However, it is notable that the ID dataset maintains high accuracy even after exiting at Exit2. This raises questions about the effectiveness of the defense mechanism. Moreover, 35% of the OOD data still passes through to the last two blocks. What is the observed defense effect in this case? flag_for_ethics_review: ['No ethics review needed.'] rating: 6 confidence: 3 code_of_conduct: Yes
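For reference, the information-bottleneck terms questioned in the review above are most commonly instantiated with the variational IB bound of Alemi et al. (2017), in which a KL divergence to a standard normal prior upper-bounds $I(X;Z)$ while a cross-entropy term keeps the bottleneck predictive for labels. The sketch below shows only that generic construction; the module and variable names are hypothetical, and it is not the paper's actual Eq. (3)/(4).

```python
# Generic variational information bottleneck (VIB) head in PyTorch.
# KL( N(mu, sigma^2) || N(0, I) ) upper-bounds I(X; Z); cross-entropy keeps Z predictive.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBHead(nn.Module):
    def __init__(self, feat_dim: int, bottleneck_dim: int, num_classes: int):
        super().__init__()
        self.mu = nn.Linear(feat_dim, bottleneck_dim)
        self.logvar = nn.Linear(feat_dim, bottleneck_dim)
        self.classifier = nn.Linear(bottleneck_dim, num_classes)

    def forward(self, h: torch.Tensor):
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.classifier(z), mu, logvar

def vib_loss(logits, labels, mu, logvar, beta: float = 1e-2):
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()  # bound on I(X; Z)
    ce = F.cross_entropy(logits, labels)
    return ce + beta * kl
```

In an early-exit defense, one could in principle attach such a head at each exit and weight the KL term differently for ID and OOD batches; how the paper actually balances its two loss terms is exactly what the review asks to be clarified.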
029hDSVoXK
Dynamic Neural Fortresses: An Adaptive Shield for Model Extraction Defense
[]
Model extraction aims to acquire a pre-trained black-box model concealed behind a black-box API. Existing defense strategies against model extraction primarily concentrate on preventing the unauthorized extraction of API functionality. However, two significant challenges still need to be solved: (i) Neural network architecture of the API constitutes a form of intellectual property that also requires protection; (ii) The current practice of allocating the same network architecture to both attack and benign queries results in substantial resource wastage. To address these challenges, we propose a novel \textit{Dynamic Neural Fortresses} (DNF) defense method, employing a dynamic Early-Exit neural network, deviating from the conventional fixed architecture. Firstly, we facilitate the random exit of attack queries from the network at earlier layers. This strategic exit point selection significantly reduces the computational cost for attack queries. Furthermore, the random exit of attack queries from earlier layers introduces increased uncertainty for attackers attempting to discern the exact architecture, thereby enhancing architectural protection. On the contrary, we aim to facilitate benign queries to exit at later layers, preserving model utility, as these layers typically yield meaningful information. Extensive experiments on defending against various model extraction scenarios and datasets demonstrate the effectiveness of DNF, achieving a notable 2$\times$ improvement in efficiency and an impressive reduction of up to 12\% in clone model accuracy compared to SOTA defense methods. Additionally, DNF provides strong protection against neural architecture theft, effectively safeguarding network architecture from being stolen.
[ "Model Extraction Defense" ]
https://openreview.net/pdf?id=029hDSVoXK
https://openreview.net/forum?id=029hDSVoXK
5SBOcCjypX
official_review
1,730,714,970,912
029hDSVoXK
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission5212/Reviewer_xham" ]
ICLR.cc/2025/Conference
2025
summary: This paper introduces a new defense against model extraction attacks that protects both the model architecture and model utility. The key idea is to use a multi-exit neural network architecture and its random exit mechanism to protect the network's architecture while ensuring efficiency. For benign queries, the authors train the early-exit model to distinguish OOD data (attack queries) from in-distribution data so as to preserve model utility. Finally, the authors show that DNF outperforms previous defenses and evaluate it against adaptive attacks. soundness: 3 presentation: 3 contribution: 3 strengths: + Good motivation. The authors adopt a multi-exit architecture to defend against architecture extraction attacks, which is a well-motivated and interesting idea. + Extensive evaluation. The authors evaluate not only the defense effectiveness but also adaptive attacks. weaknesses: - The assumption that attack data are OOD data, although widely adopted in prior work, should be more carefully justified. Meanwhile, as the model's training data are unknown to the user, benign queries may also be OOD data. DNF might decrease the model utility in this case. - The main part of the paper (Section 4) is somewhat hard to follow. I would suggest the authors simplify the notation or subscripts. I also suggest the authors provide an overview figure to replace some of the descriptions. - Although the authors investigate adaptive attacks, the adversary can still design more powerful attacks by exploiting the multi-exit model. Please discuss the potential vulnerabilities of multi-exit architectures in more detail and compare with prior attacks on multi-exit networks. [1] Auditing Membership Leakages of Multi-Exit Networks. ACM CCS 2022. [2] Model Stealing Attack against Multi-Exit Networks. arXiv:2305.13584. [3] Mind your heart: Stealthy backdoor attack on dynamic deep neural network in edge computing. IEEE INFOCOM 2023. [4] Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks. USENIX Security 2023. [5] Prediction Privacy in Distributed Multi-Exit Neural Networks: Vulnerabilities and Solutions. ACM CCS 2023. questions: Can you provide a formal definition or description of in-distribution and out-of-distribution data in this paper's setting? How do you distinguish normal user data that is OOD from attack data that is OOD? flag_for_ethics_review: ['No ethics review needed.'] rating: 6 confidence: 3 code_of_conduct: Yes
029hDSVoXK
Dynamic Neural Fortresses: An Adaptive Shield for Model Extraction Defense
[]
Model extraction aims to acquire a pre-trained black-box model concealed behind a black-box API. Existing defense strategies against model extraction primarily concentrate on preventing the unauthorized extraction of API functionality. However, two significant challenges still need to be solved: (i) Neural network architecture of the API constitutes a form of intellectual property that also requires protection; (ii) The current practice of allocating the same network architecture to both attack and benign queries results in substantial resource wastage. To address these challenges, we propose a novel \textit{Dynamic Neural Fortresses} (DNF) defense method, employing a dynamic Early-Exit neural network, deviating from the conventional fixed architecture. Firstly, we facilitate the random exit of attack queries from the network at earlier layers. This strategic exit point selection significantly reduces the computational cost for attack queries. Furthermore, the random exit of attack queries from earlier layers introduces increased uncertainty for attackers attempting to discern the exact architecture, thereby enhancing architectural protection. On the contrary, we aim to facilitate benign queries to exit at later layers, preserving model utility, as these layers typically yield meaningful information. Extensive experiments on defending against various model extraction scenarios and datasets demonstrate the effectiveness of DNF, achieving a notable 2$\times$ improvement in efficiency and an impressive reduction of up to 12\% in clone model accuracy compared to SOTA defense methods. Additionally, DNF provides strong protection against neural architecture theft, effectively safeguarding network architecture from being stolen.
[ "Model Extraction Defense" ]
https://openreview.net/pdf?id=029hDSVoXK
https://openreview.net/forum?id=029hDSVoXK
1VP9nuyC4G
official_review
1,730,301,204,098
029hDSVoXK
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission5212/Reviewer_CXR5" ]
ICLR.cc/2025/Conference
2025
summary: In this paper, a defense against model stealing attacks (targeting either the model architecture or its functionality) based on a multi-exit neural network is proposed. The main idea is to output accurate prediction scores for ID data from the later network exits, as well as uninformative scores for OOD data from the earlier exits. To do so, for each network exit, a thresholded classifier is trained on the respective intermediate layer representation with a specifically designed loss, which maximizes the aforementioned objective using concepts from information theory. During deployment, an exit is chosen for a sample when the maximum score of an exit classifier exceeds the respective threshold. soundness: 2 presentation: 3 contribution: 4 strengths: - The paper presents a clearly novel idea to address a very relevant issue. Indeed, to the best of my knowledge, this is the first application of a multi-exit neural network to defend against model extraction attacks. - The proposed network architecture can also reduce the inference time during deployment. - The approach is very intuitive and well-justified. - The reported results are promising. weaknesses: - 90% of IID samples exit in the first 3 exits. Although this can be viewed as a benefit (it reduces the inference time), on the other hand, the defense mechanism will produce less informative outputs for those samples. The impacts of these effects should be clearly understood. - I appreciate the fact that the authors consider different types of attacks and try to implement adaptive ones. However, a best practice when dealing with security is to simulate a worst-case scenario against the strongest attack. This helps understand the limitations of the defense and estimate lower bounds of robustness in these settings - even if, in practice, they are unlikely to occur. In this case, the adaptive attacks should be implemented using model extraction techniques that rely on some knowledge about the training data distribution. This assumption is not too unrealistic, as it might happen that the attacker (who knows the domain on which the model is applied) is able to gather in-distribution data from public domains - for instance, if the model is a malware detector, it should be very easy to collect samples and also very likely to have some overlap between them and the training data used by the victim. In other cases, the attacker might possess a subset of or all the training data, and she could easily train her own model, but she is rather interested in reproducing the exact model functionality and its decision boundaries to build a surrogate model and use it for other attacks (like evasion ones, aka adversarial examples). questions: - Could you please estimate the impact of early exiting for IID samples? For instance, you might compute the misalignment in model outputs for IID samples when they exit early with respect to being forwarded through the entire network. - Could you please evaluate the defense against a worst-case attacker, enhancing the already implemented adaptive attacks with (partial) knowledge of the training data distribution? flag_for_ethics_review: ['No ethics review needed.'] rating: 8 confidence: 4 code_of_conduct: Yes
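To make the exit rule summarized above concrete, the following is a minimal sketch of threshold-based early-exit inference: each exit head produces class scores, and a sample leaves the network at the first exit whose maximum softmax score exceeds that exit's threshold (the last exit always returns). The class and method names are hypothetical, it assumes a batch of size one for clarity, and it is not the authors' actual implementation.

```python
# Minimal threshold-based early-exit inference sketch (PyTorch, batch size 1).
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, blocks: nn.ModuleList, exit_heads: nn.ModuleList, thresholds: list):
        super().__init__()
        assert len(blocks) == len(exit_heads) == len(thresholds)
        self.blocks, self.exit_heads, self.thresholds = blocks, exit_heads, thresholds

    @torch.no_grad()
    def forward(self, x: torch.Tensor):
        h = x
        last = len(self.blocks) - 1
        for i, (block, head, tau) in enumerate(zip(self.blocks, self.exit_heads, self.thresholds)):
            h = block(h)
            probs = head(h.flatten(1)).softmax(dim=-1)
            # Leave as soon as this exit's confidence threshold is met; the last exit always returns.
            if probs.max().item() >= tau or i == last:
                return probs, i  # prediction and the exit index actually used
```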
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability.In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings.In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
qTzq3ayPJ3
official_comment
1,732,598,924,451
lnU1bacqhQ
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Reviewer_GGqR" ]
ICLR.cc/2025/Conference
2025
comment: I appreciate the authors' responses, which partially address my concerns. However, I believe the writing quality of this paper does not meet the standards expected for this conference. I encourage the authors to review some of the recent papers they have cited, such as: - Jin, M., Wang, S., Ma, L., Chu, Z., Zhang, J. Y., Shi, X., ... & Wen, Q. (2023). Time-llm: Time series forecasting by reprogramming large language models. arXiv preprint arXiv:2310.01728. - Liu, Y., Hu, T., Zhang, H., Wu, H., Wang, S., Ma, L., & Long, M. (2023). itransformer: Inverted transformers are effective for time series forecasting. arXiv preprint arXiv:2310.06625. These papers exemplify the level of clarity and structure that is expected. I recommend the authors consider these examples to improve the organization and presentation of their work. Consequently, I have decided to maintain my original score.
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability.In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings.In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
qGHr3fuaxB
official_comment
1,731,931,218,677
HNantkZwp3
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response To Reviewer GGqR- result table comment: For convenience, we also provide table 5 in the paper about the test results. ### Table: Comparison of Compliance Rate (CR) and MSE for TITSP, Time-LLM, Qwen4MTS, UniTime, and Llama-3.1-8B across various instructed actions with highlighted best (in **bold**) and second-best (in _underlined_) results. | **Instruction** | **TITSP (CR)** | **TITSP (MSE)** | **Time-LLM (CR)** | **Time-LLM (MSE)** | **Qwen4MTS (CR)** | **Qwen4MTS (MSE)** | **UniTime (Qwen) (CR)** | **UniTime (Qwen) (MSE)** | **Llama-3.1-8B (CR)** | **Llama-3.1-8B (MSE)** | |---------------------------------|----------------|-----------------|-------------------|--------------------|-------------------|--------------------|-------------------------|-------------------------|------------------------|------------------------| | Linear Growth and Linear Decay | **0.83** | **1.15** | 0.38 | 3.45 | _0.69_ | _1.90_ | 0.54 | 2.73 | 0.32 | 4.95 | | Linear Growth and Linear Decay | **0.79** | **1.17** | 0.49 | 2.85 | **0.79** | _1.34_ | 0.57 | 2.28 | 0.41 | 2.80 | | Linear Trend Up | _0.90_ | **1.03** | 0.63 | 1.71 | 0.76 | _1.08_ | 0.63 | 1.65 | **0.91** | 1.15 | | Linear Trend Down | **0.87** | **0.88** | 0.64 | 1.55 | 0.71 | 1.36 | 0.51 | 1.59 | _0.85_ | _0.92_ | | Exponential Growth | **0.89** | **1.33** | 0.58 | 2.59 | _0.63_ | _2.07_ | 0.60 | 2.38 | 0.58 | 2.35 | | Exponential Decay | **0.84** | **1.25** | 0.56 | 2.26 | 0.67 | 2.10 | _0.69_ | _2.05_ | 0.46 | 2.39 | | Keep Stable | **0.98** | _0.35_ | 0.76 | 0.76 | 0.93 | 0.48 | 0.83 | 0.62 | _0.95_ | **0.33** | | Decrease Amplitude | **0.90** | _0.91_ | 0.85 | 1.04 | **0.90** | **0.84** | 0.79 | 1.09 | 0.52 | 1.89 | | Increase Amplitude | **0.94** | **0.94** | 0.79 | 1.20 | _0.89_ | _0.96_ | 0.81 | 1.03 | 0.75 | 1.35 | | Logarithmic Growth | _0.77_ | _1.65_ | 0.49 | 2.31 | **0.79** | **1.55** | 0.60 | 1.73 | 0.55 | 1.94 | | Logarithmic Decay | **0.83** | **1.68** | 0.48 | 2.19 | _0.81_ | _1.69_ | 0.67 | 2.04 | 0.63 | 2.60 |
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability.In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings.In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
pipXklFh72
official_comment
1,732,174,198,821
ZzrTyCsj7t
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response To Reviewer mT1k comment: Dear reviewer: we would like to ask whether our responses have addressed your concerns. Thank you!
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability.In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings.In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
lnU1bacqhQ
official_comment
1,732,174,269,565
HNantkZwp3
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer GGqR comment: Dear reviewer: we would like to ask whether our responses have addressed your concerns. Thank you!
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability.In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings.In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
gU5YYFB4qu
official_review
1,730,199,891,836
01wMplF8TL
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Reviewer_We4d" ]
ICLR.cc/2025/Conference
2025
summary: The paper proposes a novel two-stage framework for multimodal forecasting that combines historical data with textual cues useful for LLM-based forecasters. The multimodal framework is evaluated on numerous multimodal forecasting tasks. The paper provides a setup to include expert opinions in a forecasting problem. soundness: 2 presentation: 3 contribution: 2 strengths: The strengths include the relevance of the problem of text-aided forecasting and the novelty of the prompting method. The methodology section is comprehensive and well-described, and the techniques and experiments have been explained in detail and are easy to follow. The figures convey the overall idea and highlight the improvements over the no-instruction setup. weaknesses: The primary weaknesses of the paper are as follows: 1. **Incomplete Literature Coverage**: Section 2.2 does not fully address relevant multimodal forecasting models, omitting key references such as UniTime ([https://dl.acm.org/doi/10.1145/3589334.3645434](https://dl.acm.org/doi/10.1145/3589334.3645434)). 2. **Limited Comparative Analysis**: The results lack sufficient comparison with other multimodal forecasting models, reducing insight into how the proposed method performs relative to similar approaches. 3. **Insufficient Dataset Description**: Essential dataset details, including sample counts, history length, and forecasting horizon, are not provided. Additionally, the impact of the forecasting horizon on prediction quality remains underexplored. 4. **Simplistic Experimental Instructions**: The experimental instructions are overly simplistic, failing to reflect realistic scenarios. The limited set of training instructions may also suggest that simpler alternatives for instruction embedding could have been more effective. 5. **Circular Evaluation**: The evaluation datasets have been tailored from existing datasets based on the training instructions intended for evaluation, which creates a circular reasoning issue that undermines the reliability of the evaluation setup. A similar statement about the order compliance rate metric can also be made. **Minor Issues:** 1. The paper inconsistently uses closing quotes (") instead of opening quotes (``) in multiple locations, including but not limited to lines 197, 203, and 213. 2. Textual citations, rather than parenthetical citations, would be more suitable for the references in lines 117 to 128, enhancing the readability and flow of the text. 3. Appropriate citations are not provided for the original dataset sources. questions: Questions: 1. The choice of order compliance rate as an evaluation metric is intriguing. This metric appears specifically tailored to the instructions outlined in the paper, which may limit its applicability to real-world scenarios. Could you clarify the advantages this metric offers over existing metrics for evaluating forecasting performance? Suggestions: - Benchmark results against a broader selection of existing multimodal forecasting models to enhance comparative insights. - Include a detailed discussion of the dataset, covering aspects such as sample size, history length, and forecasting horizon. - If feasible, incorporate more complex textual cues in the experiments to better reflect real-world forecasting challenges. flag_for_ethics_review: ['No ethics review needed.'] rating: 5 confidence: 4 code_of_conduct: Yes
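On the order compliance rate questioned above: its exact definition is not reproduced in this thread, but one plausible formalization for trend-type instructions is to fit a least-squares slope to each forecast, check whether its sign matches the instructed direction, and report the fraction of forecasts that comply. The sketch below encodes only that assumed formalization; the function names and the tolerance are hypothetical.

```python
# Assumed (not the paper's official) compliance-rate computation for trend instructions.
import numpy as np

def complies_with_trend(forecast: np.ndarray, instruction: str, tol: float = 0.0) -> bool:
    t = np.arange(len(forecast))
    slope = np.polyfit(t, forecast, deg=1)[0]  # least-squares slope of the forecast window
    if instruction == "linear trend up":
        return slope > tol
    if instruction == "linear trend down":
        return slope < -tol
    raise ValueError(f"unhandled instruction: {instruction}")

def compliance_rate(forecasts, instructions) -> float:
    flags = [complies_with_trend(f, ins) for f, ins in zip(forecasts, instructions)]
    return float(np.mean(flags))
```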
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability.In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings.In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
eadJpVwh8t
official_comment
1,731,924,541,688
ZzrTyCsj7t
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response To Reviewer mT1k comment: ### Thank you for your precious comments! The following are our responses to your concerns. --- ### Comment 1: *There seems to be a mismatch between the described technique used to apply the modification (equation 3), and the examples shown (figure 3). According to the equation, the data in the forecast window should be a pure affine function, without any of the noise shown in figure 3.* **Response:** We thank the reviewer for highlighting this point. Equation (3) indeed describes a pure affine function; however, to ensure an increasing trend in certain time series, we allowed the slope \(A\) to vary within the forecast window. This deliberate choice introduces some noise, as shown in Figure 3, but it demonstrates the model’s ability to adapt to evolving trends. For clearer examples without slope variation, please refer to Figure 10 in the Appendix (page 20). We have clarified this in the revised manuscript on page 4, where a comment is added to clarify this important point. --- ### Comment 2: *While the model is tested against other multimodal text+timeseries models, it should also be tested against pure LLM approaches: just plugging the text and the history in a prompt for GPT-4 or Llama 3, and looking at the generated output. While such an approach won't scale to long series, recent work has shown it to be surprisingly decent at forecasting under textual instructions. See: LLM Processes by Requiema 2024 for a slightly more complex approach, but there may be more appropriate references for the more direct one.* **Response:** We thank the reviewer for this suggestion. Although LLMs have shown some capability with time series data, they are fundamentally designed for language tasks and often struggle with numerical accuracy, as highlighted by several studies. This limitation motivated our dual-channel approach, where time series and text are processed in specialized frameworks, leveraging an expert model for each modality. In *Table 2 (page 9)*, we conduct an experiment by directly prompting Llama-3.1-8B-Instruct to perform these tasks. The results show good understanding of simple instructions but significant failures in most tasks. This approach also demonstrates instability, as the output may be challenging to directly utilize due to the mixture of numerical values and textual content. The designed prompt is shown in *Appendix I*. In the Appendix (see *Table 5*), we present an experiment comparing our dual-channel method with GPT4TS—a purely LLM-based model for time series (for descriptive text instead of instructions). Despite GPT’s strong backbone (compared to Qwen used for our approach), our method outperforms it, confirming that dual-channel designs are more effective for multimodal tasks. Additionally, as the reviewer suggested, we conducted a new experiment focused on instruction-based tasks, which is the main focus of the paper. Here, our model also demonstrated superior compliance rates compared to Qwen4TS, underscoring the advantages of dual-channel methods for instruction-based text. These results are now included in the revised manuscript in *Table 2 (page 9)*. --- ### Comment 3: *Hyperparameters and training curriculum for the timeseries portion of the model are missing.* **Response:** We thank the reviewer for pointing this out. The missing experimental details regarding the hyperparameters and training curriculum for the time series feature extractor are now included in the updated version of the manuscript. 
These details are provided in *Section I of the Appendix*, where we outline the specific settings and the training procedure used for this part of the model, as well as the experimental setup for training UniTime and Qwen4MTS in the newly added experiments.
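As a rough illustration of the clarification in Comment 1 above (an affine modification whose slope is allowed to drift within the forecast window, so the series trends upward without being a perfectly straight line), a minimal sketch is given below. The function and parameter names are hypothetical, and this is not the authors' exact Eq. (3)-(5).

```python
# Hypothetical "linear growth" modification with a drifting slope (NumPy).
import numpy as np

def apply_linear_growth(window, base_slope=0.5, slope_jitter=0.1, rng=None):
    # Integrate a slowly varying, mostly positive slope to build an increasing trend,
    # then add that trend to the original forecast window.
    rng = rng or np.random.default_rng(0)
    slopes = base_slope + slope_jitter * rng.standard_normal(len(window))
    trend = np.cumsum(slopes)
    return np.asarray(window, dtype=float) + trend
```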
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability.In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings.In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
ZzrTyCsj7t
official_review
1,730,670,396,894
01wMplF8TL
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Reviewer_mT1k" ]
ICLR.cc/2025/Conference
2025
summary: The article describes a new model to incorporate textual information into a more traditional timeseries forecasting model. It does so by combining an embedding computed from the historical numerical data with an embedding computed from the textual information. The combined embedding is then used to generate the forecast. The model is tested both on real-world data, where it shows competitive results, and on generated data, where it is shown to follow the instructions included in the textual information. soundness: 3 presentation: 2 contribution: 3 strengths: 1. It is good that zero-shot examples of descriptions which have not been provided in the training set have been tested. Without those, the narrow set of possible descriptions could have made it impossible to check whether the result quality came from the model overfitting on these descriptions or not. 2. Training the model using generated data and computing how well the model follows the instructions is a relatively clean way to do a proof of concept of the idea, which is appropriate currently, as the field of using LLMs and timeseries models together is still in its infancy. weaknesses: 1. There seems to be a mismatch between the described technique used to apply the modification (equation 3), and the examples shown (figure 3). According to the equation, the data in the forecast window should be a pure affine function, without any of the noise shown in figure 3. 2. While the model is tested against other multimodal text+timeseries models, it should also be tested against pure LLM approaches: just plugging the text and the history into a prompt for GPT-4 or Llama 3, and looking at the generated output. While such an approach won't scale to long series, recent work has shown it to be surprisingly decent at forecasting under textual instructions. See: LLM Processes by Requiema 2024 for a slightly more complex approach, but there may be more appropriate references for the more direct one. 3. Hyperparameters and training curriculum for the timeseries portion of the model are missing. questions: 1. For Table 4, can you provide the same results, but for your model instead of only for TimeLLM? It would make it more obvious whether your model succeeds on those tasks with incorrect textual information. 2. For the real-world datasets, was the textual information always constant (as shown in Section B.3) for each dataset? This would allow a finetuned model to fully ignore it, since it could bake said information into its weights anyway. flag_for_ethics_review: ['No ethics review needed.'] rating: 5 confidence: 3 code_of_conduct: Yes
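The summary above describes combining an embedding of the numeric history with an embedding of the textual information before decoding the forecast. A schematic sketch of that kind of two-stage fusion is shown below; the module names, dimensions, and the GRU/MLP choices are assumptions made for illustration, not the authors' actual architecture.

```python
# Schematic text-informed forecaster: encode history, fuse with a text embedding, decode horizon.
import torch
import torch.nn as nn

class TextInformedForecaster(nn.Module):
    def __init__(self, horizon: int, d_model: int = 128, text_dim: int = 768):
        super().__init__()
        self.ts_encoder = nn.GRU(input_size=1, hidden_size=d_model, batch_first=True)
        self.text_proj = nn.Linear(text_dim, d_model)
        self.decoder = nn.Sequential(
            nn.Linear(2 * d_model, d_model), nn.ReLU(), nn.Linear(d_model, horizon)
        )

    def forward(self, history: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # history: (batch, history_len, 1); text_emb: (batch, text_dim), e.g. from a frozen LLM.
        _, h_n = self.ts_encoder(history)   # h_n: (1, batch, d_model)
        ts_emb = h_n.squeeze(0)
        fused = torch.cat([ts_emb, self.text_proj(text_emb)], dim=-1)
        return self.decoder(fused)          # (batch, horizon)
```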
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability.In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings.In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
ZL3Zc8835C
official_comment
1,732,173,571,215
gU5YYFB4qu
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer We4d comment: Dear reviewer: we would like to ask whether our responses have addressed your concerns. Thank you!
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability.In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings.In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
XgyvdYE1ln
official_comment
1,732,334,942,657
ZL3Zc8835C
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Reviewer_We4d" ]
ICLR.cc/2025/Conference
2025
comment: Thank you for addressing my comments and revising the paper. While most of my concerns have been addressed, I still have some questions regarding the core contribution of this work. In Section 2.2, the authors claim to present a _novel framework_ for forecasting that leverages textual instructions and demonstrate its superior performance over existing frameworks in Table 2. However, the claimed novelty of this framework compared to existing methodologies remains unclear. I request that the authors further **elaborate on the framework's uniqueness as compared to the existing methods** and include the **parameter counts for both their model and the benchmarks** to confirm that the improvements are not merely due to higher computational resources. Furthermore, despite the authors acknowledging the typographical issues raised, such issues persist in the revised sections, with some of them listed below: 1. Incorrect quotations - line 113 2. Incorrect parenthetical citations - line 127 3. Spelling errors - line 107 (_success_ - _sucess_) Such typos, though minor, are numerous enough to raise concerns about the overall credibility of the paper. For now, I will maintain my current score.
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability.In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings.In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
VDCHyFBDlM
official_comment
1,731,931,010,638
HNantkZwp3
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer GGqR - question part comment: ### Question 1: *How would the proposed model perform without access to textual inputs or under noisy conditions? If textual instructions are incomplete, inconsistent, or contain noise, how would the model's performance be affected? This scenario is particularly relevant in high-stakes areas like finance, where decision-making often involves dealing with imperfect information. What measures have been taken to ensure robustness against these issues, which are common in real-world data?* **Response:** We appreciate the reviewer’s question on robustness under imperfect or noisy textual inputs. In “what-if” scenarios, even if expert instructions are incomplete or contain intentional inaccuracies, the model’s primary goal is to follow these inputs accurately. This is often precisely what an expert wants—simply to test various hypothetical scenarios by observing how the model behaves under different instructions, rather than to ensure perfectly accurate instructions. Our model’s high compliance rate shows that it reliably adheres to these inputs, enabling experts to evaluate potential outcomes and behaviors without needing to formalize each scenario mathematically within the framework. Although handling incomplete or inconsistent text is not the main focus of our work, we recognize its relevance. In the Appendix (_Table 5, Page 18_), we include experiments comparing our model's performance with and without textual inputs. These results show that even in cases without explicit instructional text, our method outperforms purely time-series models, highlighting the value of informative text for forecasting accuracy. This additional evaluation demonstrates the model’s ability to handle various text input scenarios, further affirming its robustness and versatility. --- ### Question 2: *How does the proposed framework address interpretability in practice? The paper claims that incorporating textual instructions enhances interpretability, but there are no concrete demonstrations of how this contributes to meaningful insights for domain experts. Could you provide explicit examples or user studies that validate this claim? Without such evidence, how can the claim of improved interpretability be substantiated?* **Response:** We thank the reviewer for raising this question on interpretability. The interpretability of our framework primarily comes from its capacity to directly link predictions to textual instructions. For example, if a linear growth is predicted, it can be traced back to specific input instructions, providing clear insight into why a particular behavior was forecasted. Additionally, attention map visualizations (see _Figure 21, Page 27_) reveal that the model highlights relevant keywords from the instructions, further demonstrating its focus on critical components of the input. This not only makes the reasoning process transparent but also allows experts to verify that the model is attending to meaningful terms. Our framework’s generalization ability also contributes to interpretability, as it shows the model’s capacity to apply learned associations to new contexts, indicating it understands the core instruction beyond specific examples. While we acknowledge the value of explicit user studies, these elements collectively provide substantial interpretability by aligning the model's outputs directly with expert input and highlighting the key instructions that guide predictions.
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability.In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings.In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
Udk2U7lYYw
official_comment
1,731,925,115,254
ZzrTyCsj7t
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response To Reviewer mT1k - question parts comment: ### Question 1: *For Table 4, can you provide the same results, but for your model instead of only for TimeLLM? It would make it more obvious whether your model succeeds on those tasks with incorrect textual information.* **Response:** We thank the reviewer for this insightful suggestion. As part of our ongoing experiments, we aim to address this by evaluating our model under the same conditions. Specifically, for the same base time series (same context length), we provide multiple different instructions and observe that our model achieves a high compliance rate, demonstrating its ability to follow instructions accurately. In contrast, TimeLLM exhibits lower compliance, highlighting the importance of the instructions. We appreciate the reviewer’s input, and we have now included these results in the updated manuscript for even more models (Qwen4TS and UniTime) (see *Table 2, page 9*). --- ### Question 2: *For the real-world dataset, was the textual information always constant (as shown in Section B.3) for each dataset? This would allow a fine-tuned model to fully ignore it, since it could bake said information in its weights anyway.* **Response:** We thank the reviewer for raising this important point. In our experiments, the format of the textual prompts varied across datasets, ensuring that the model was exposed to different types of instructions and did not simply memorize a single format. However, within each dataset, the prompt format remained consistent to ensure a fair evaluation of the model's ability to handle the specific instructions. This approach prevents the model from "baking" the textual information into its weights and ensures it adapts to diverse instructions. We have clarified this in *Section B.3, page 19* of the updated manuscript, where we add another prompt format for an additional dataset (Traffic).
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability.In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings.In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
RgQzZOxFAX
official_comment
1,731,894,943,979
01wMplF8TL
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Authors" ]
ICLR.cc/2025/Conference
2025
comment: # Dear Reviewers We would like to express our sincere gratitude for your time and valuable feedback on our paper. Below, we provide a detailed response to each of the reviewers' comments and outline the revisions made to address their concerns. We have followed your suggestions and believe the manuscript has significantly improved as a result. ## Summary of Changes In response to the reviewers' comments, we have made the following key changes to the manuscript: - Clarification of the dataset generation to handle the question on the apparent mismatch between Equation 3 and Figure 3 on **page 4** (**Reviewer mT1k**). - Additional experiments by adding two baselines (UniTime and Qwen pure LLM) for text instruction in Table 2 on **page 9** (**Reviewers mT1k, GGqR, YdJR, and We4d**). - Incorporation of additional state-of-the-art methods, including other multimodal papers such as UniTime, on **page 3** (**Reviewers We4d and GGqR**). - Detailed architecture design (number of layers, architecture, hyperparameters) for the time series portion, as well as a detailed description of the datasets in **Section I of the appendix** (on **page 30**) (**Reviewers mT1k and GGqR**). - Clarification about AutoPrompter on **page 5** (**Reviewer GGqR**). - **Added more related work** on traditional time-series prediction models in **Section 2.1**. The detailed responses to each reviewer’s comments are provided below.
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability.In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings.In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
KnW1mGxVFx
official_comment
1,732,220,511,230
pipXklFh72
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Reviewer_mT1k" ]
ICLR.cc/2025/Conference
2025
comment: Thanks for answering my comments and questions. W1: Figure 10 seems to indicate that "Linear Growth" is obtained by adding an affine function to the original data. This may be compatible with equations (4) and (5) (which are not very clear), but it is definitely still not compatible with equation (3) and Figure 3 (which is not compatible with such a transformation). Please make sure that the method you used to modify your data is accurately documented in your paper to allow other researchers to reproduce your work. W2: Thanks for adding the extra experiments. Larger-scale LLMs would have performed better, but would have been more costly. W3: Thanks for adding the extra details. Q1: While Table 8 does show the impact of changing the way the textual information is phrased (and shows that it has an impact on the model), it doesn't outright give incorrect information (as in Table 4). I would still be curious to see the result of such an experiment for your model. Q2: Is the model trained with all the datasets at once, or is one version trained for each dataset? (This may already be mentioned in the paper.) While varied prompts for each dataset would help in the former case, they wouldn't have an impact in the latter case. Overall, I still need to think about whether to increase your paper score or keep it as is. I will take some time to reread my fellow reviewers' comments and reread the paper before doing so.
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability. In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings. In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through an LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting.
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
IICO2K0V4D
official_comment
1,731,932,283,644
gU5YYFB4qu
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer We4d comment: ### Comment 1: *Major issues* **Response:** We thank the reviewer for their insightful comments. We address each of the points raised as follows: - **Incomplete Literature Coverage:** We appreciate the reviewer pointing out the omission of key references such as UniTime. We have incorporated this important work, along with other relevant multimodal forecasting models, into the updated version of the paper. The related work section has been expanded to provide a more comprehensive overview of the field and to better position our contribution in relation to existing approaches. - **Limited Comparative Analysis:** We appreciate the reviewer’s insightful feedback regarding the need for broader comparisons with other multimodal forecasting models. To address this concern, we have expanded our comparisons to include additional multimodal models, particularly in scenarios where descriptive text is provided alongside time series data. Notably, we have included comparisons with GPT-4TS, TimeLLM, and purely time-series-based models (_Table 5_), and our method outperforms these models in the tasks considered, demonstrating its high performance even outside the scope of instruction-based tasks. Additionally, we present a detailed evaluation of our method against UniTime and Qwen4MTS in _Table 2 (Page 9)_ to further address the reviewer’s concerns. - **Insufficient Dataset Description:** We apologize for the lack of detail regarding the datasets. We have taken care to include all relevant dataset details—such as sample counts, history length, and forecasting horizon—in the updated manuscript. A detailed analysis is presented in Section I of the Appendix. - **Simplistic Experimental Instructions:** While the instructions considered in our work may seem simple, they represent an essential first step toward more complex scenarios where complete document instructions can be provided. Our research serves as a foundational effort, demonstrating that even with simple cases, several challenges must be addressed using existing algorithms like TimeLLM. We show that these challenges can be effectively managed with a specifically designed architecture. Although there is room for improvement in handling instructions, our paper already considers scenarios where the test text instructions differ from those used during training. These instructions are somewhat complex, combining several base instructions, and our methods demonstrate good generalization capabilities (_Table 3_). This success suggests promising perspectives toward the ultimate goal of tackling any instruction, regardless of complexity. - **Circular Evaluation:** We appreciate the reviewer’s feedback regarding the evaluation datasets and the potential for circular reasoning. To address this concern, we highlight that in _Table 3_, we provide an assessment of the generalization capabilities of our model. In this evaluation, the training and test instructions differ significantly, which we believe is a fair and robust way to evaluate the model's ability to handle instructions adequately. This approach ensures that our model is tested on scenarios not explicitly covered during training, providing a more reliable measure of its performance in real-world applications. We are confident that this evaluation demonstrates the model's generalization capabilities and addresses the reviewer’s concerns about the reliability of our evaluation setup. 
--- ### Comment 2: *Minor issues* **Response:** Thank you for your detailed feedback on our paper. We appreciate your time and effort in providing these valuable comments. We will take each of your suggestions into account in the updated version of the paper. - Regarding the inconsistent use of closing quotes (`"`) instead of opening quotes (`“`) in multiple locations, including but not limited to lines 197, 203, and 213, we will ensure that the correct quotation marks are used throughout the manuscript. - We agree with your suggestion to use textual citations rather than parenthetical citations for the references in lines 117 to 128. This will enhance the readability and flow of the text. - Additionally, we will provide appropriate citations for the original dataset sources. Thank you once again for your constructive feedback. We look forward to addressing these points in the revised version of the paper. ---
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability. In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings. In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through an LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting.
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
HNantkZwp3
official_review
1,730,302,004,340
01wMplF8TL
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Reviewer_GGqR" ]
ICLR.cc/2025/Conference
2025
summary: The paper presents Text-Informed Time Series Prediction (TITSP), a multimodal framework that integrates textual context with time series data using Large Language Models (LLMs). The approach involves two stages: AutoPrompter, which aligns time series data with text embeddings, and a refinement stage that incorporates task-specific textual instructions to enhance prediction accuracy and interpretability. TITSP proves particularly effective for context-rich forecasting tasks, demonstrating improved performance over some other methods under specific settings. soundness: 2 presentation: 1 contribution: 2 strengths: - A novel two-stage framework for integrating temporal and textual data. - A data generation workflow for instruction-based forecasting, compatible with LLMs. - Comprehensive ablation studies and comparative evaluations demonstrating the effectiveness of TITSP. weaknesses: - **Technical Contributions are Incremental** The proposed approach lacks significant technical innovation. Integrating LLMs with time series is an incremental step rather than a groundbreaking contribution. The use of cross-attention and VQ-VAE offers no substantial improvement beyond established techniques. - **Poor Structure and Clarity** The paper is poorly organized, with unclear explanations and an incoherent flow. The motivation and rationale for the proposed method are inadequately communicated, and critical components like AutoPrompter are explained in a convoluted manner, hindering comprehension. - **Inadequate Experiments** Experimental validation is weak, relying heavily on synthetic datasets that limit the assessment of practical applicability. Comparisons to related state-of-the-art methods are lacking, and statistical significance testing is absent, making it difficult to validate the performance claims. - **Superficial Related Work** The related work section lacks depth and fails to properly differentiate the contribution from prior research. Key works are missing or insufficiently discussed, weakening the justification for originality. - **Numerous Typos and Lack of Polish** Frequent typos (e.g., citation mistakes in lines 54-55), poorly formatted figures (Fig. 6), and poorly constructed tables suggest a lack of careful proofreading, which detracts from the overall quality and credibility of the paper. - **Insufficient Practical Insights** The claimed interpretability through textual integration lacks demonstration. There are no real-world examples showing how domain experts would benefit from these insights, making the practical value of TITSP unclear. questions: - **How would the proposed model perform without access to textual inputs or under noisy conditions?** If textual instructions are incomplete, inconsistent, or contain noise, how would the model's performance be affected? This scenario is particularly relevant in high-stakes areas like finance, where decision-making often involves dealing with imperfect information. What measures have been taken to ensure robustness against these issues, which are common in real-world data? - **How does the proposed framework address interpretability in practice?** The paper claims that incorporating textual instructions enhances interpretability, but there are no concrete demonstrations of how this contributes to meaningful insights for domain experts. Could you provide explicit examples or user studies that validate this claim? Without such evidence, how can the claim of improved interpretability be substantiated?
flag_for_ethics_review: ['No ethics review needed.'] rating: 3 confidence: 3 code_of_conduct: Yes
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability. In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings. In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through an LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting.
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
GSmXJdj7HA
official_comment
1,732,597,574,189
3qv3dT4N3R
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Reviewer_YdJR" ]
ICLR.cc/2025/Conference
2025
comment: I appreciate the authors' response. However, the issue of data leakage and the resulting concerns regarding practical applicability remain unresolved. I understand the authors' claim that their model can effectively capture textual instructions about future time series, outperforming previous models. Nonetheless, in real-world scenarios, it is highly improbable that we would have access to highly accurate future textual data. This implies that the textual information representing future trends in practical applications is likely to be significantly inaccurate, resulting in a substantial difference between the training and testing datasets of the framework and real-world conditions. Even if we hypothetically assume that we could reliably obtain highly accurate textual instructions about the future, would we then only require manual intervention based on these precise descriptions to make predictions? In summary, I am concerned that there is a substantial disconnect between the future information used for training and testing and the future textual descriptions that will be available in practical applications, which raises questions about the actual efficacy of the proposed framework.
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability. In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings. In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through an LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting.
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
C7yO82j6FM
official_comment
1,731,932,869,712
IICO2K0V4D
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer We4d comment: ### Comment 3: *The choice of order compliance rate as an evaluation metric is intriguing. This metric appears specifically tailored to the instructions outlined in the paper, which may limit its applicability to real-world scenarios. Could you clarify the advantages this metric offers over existing metrics for evaluating forecasting performance?* **Response:** We appreciate the reviewer’s thoughtful question. The order compliance rate was specifically chosen as an evaluation metric because the primary goal of our work is to assess how well the model adheres to the given textual instructions. Since we are the first to propose text instruction-based forecasting, existing metrics for traditional forecasting tasks may not fully capture the performance of a model that must follow complex, hypothetical instructions. The compliance rate, therefore, offers a tailored and effective way to measure this adherence. While it may seem specific to our setting, we believe it is a novel and valuable metric for this new approach. Its design enables us to quantify how well the model aligns with textual instructions, which is central to the novelty of our framework (an illustrative sketch of such a check is given at the end of this response). We hope this clarifies why this metric is appropriate and meaningful in the context of our work. ### Suggestions of the reviewer: *Benchmark results against a broader selection of existing multimodal forecasting models to enhance comparative insights. Include a detailed discussion of the dataset, covering aspects such as sample size, history length, and forecasting horizon. If feasible, incorporate more complex textual cues in the experiments to better reflect real-world forecasting challenges.* **Response:** We thank the reviewer for their valuable suggestions. In the updated version of the paper (_Table 2, page 9_), we have included an exhaustive comparison with other methods tailored for text-instruction-based forecasting, including Qwen4MTS and UniTime, as also requested by Reviewer mT1k. Additionally, we extend the evaluation to descriptive tasks, where text serves as a description rather than an instruction. In the Appendix (_Table 5_), we compare our method against several benchmarks (including GPT4TS, TimeLLM, and time series-based forecasters) and show that, even without textual instructions (with descriptive texts about the task), our approach outperforms other models, demonstrating its broader applicability. We have also added more details on the datasets used, including sample size, history length, and forecasting horizon, in Section I of the Appendix. Furthermore, our method can handle long-sequence inputs, as shown in Appendix G, and the resulting attention maps indicate that it extracts keywords relevant to the user instruction, which supports its use in real-world settings.
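To make the compliance-rate discussion in Comment 3 above concrete, here is a minimal illustrative sketch of how such a check could be computed. The slope-sign criterion, the handful of instruction strings, and the function names are assumptions made purely for illustration; they are not the exact rules used in the paper.

```python
import numpy as np

def complies_with_instruction(forecast: np.ndarray, instruction: str) -> bool:
    """Return True if the forecast satisfies a simple trend instruction.

    The slope-based rule below is an illustrative assumption, not the
    exact criterion used in the paper.
    """
    t = np.arange(len(forecast))
    slope = np.polyfit(t, forecast, deg=1)[0]  # least-squares linear trend
    if instruction == "linear trend up":
        return bool(slope > 0)
    if instruction == "linear trend down":
        return bool(slope < 0)
    if instruction == "keep stable":
        return bool(abs(slope) <= 0.01 * (np.std(forecast) + 1e-8))
    raise ValueError(f"unhandled instruction: {instruction}")

def compliance_rate(forecasts, instructions) -> float:
    """Fraction of test samples whose forecast follows its instruction."""
    checks = [complies_with_instruction(f, i) for f, i in zip(forecasts, instructions)]
    return float(np.mean(checks))
```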
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability. In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings. In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through an LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting.
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
85x40s7jSQ
official_comment
1,732,603,619,049
GSmXJdj7HA
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Authors" ]
ICLR.cc/2025/Conference
2025
comment: Thank you for your insightful response. We truly appreciate your feedback, which encourages us to further elaborate on our approach. TITSP is designed to revolutionize time-series prediction by making it interactive. In contrast to traditional deep learning methods, which often fail in real-world applications due to their reliance solely on historical data, TITSP empowers users to actively participate in the prediction process. Deep learning models tend to learn only from past patterns, requiring users to repetitively re-engineer features and retrain models to have better performance. In our framework, we enable users to engage directly with the prediction process, not by assuming perfect textual descriptions, but by allowing them to inject their expert judgment and professional knowledge. This integration of human insight into time-series forecasting marks a significant departure from conventional methods, creating a more dynamic and adaptive approach to prediction.
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability. In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings. In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through an LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting.
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
6wfNfd2oup
official_comment
1,731,925,218,784
ZzrTyCsj7t
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response To Reviewer mT1k - result table comment: For convenience, we also provide table 5 in the paper about test results ### Table: Comparison of Compliance Rate (CR) and MSE for TITSP, Time-LLM, Qwen4MTS, UniTime, and Llama-3.1-8B across various instructed actions with highlighted best (in **bold**) and second-best (in _underlined_) results. | **Instruction** | **TITSP (CR)** | **TITSP (MSE)** | **Time-LLM (CR)** | **Time-LLM (MSE)** | **Qwen4MTS (CR)** | **Qwen4MTS (MSE)** | **UniTime (Qwen) (CR)** | **UniTime (Qwen) (MSE)** | **Llama-3.1-8B (CR)** | **Llama-3.1-8B (MSE)** | |---------------------------------|----------------|-----------------|-------------------|--------------------|-------------------|--------------------|-------------------------|-------------------------|------------------------|------------------------| | Linear Growth and Linear Decay | **0.83** | **1.15** | 0.38 | 3.45 | _0.69_ | _1.90_ | 0.54 | 2.73 | 0.32 | 4.95 | | Linear Growth and Linear Decay | **0.79** | **1.17** | 0.49 | 2.85 | **0.79** | _1.34_ | 0.57 | 2.28 | 0.41 | 2.80 | | Linear Trend Up | _0.90_ | **1.03** | 0.63 | 1.71 | 0.76 | _1.08_ | 0.63 | 1.65 | **0.91** | 1.15 | | Linear Trend Down | **0.87** | **0.88** | 0.64 | 1.55 | 0.71 | 1.36 | 0.51 | 1.59 | _0.85_ | _0.92_ | | Exponential Growth | **0.89** | **1.33** | 0.58 | 2.59 | _0.63_ | _2.07_ | 0.60 | 2.38 | 0.58 | 2.35 | | Exponential Decay | **0.84** | **1.25** | 0.56 | 2.26 | 0.67 | 2.10 | _0.69_ | _2.05_ | 0.46 | 2.39 | | Keep Stable | **0.98** | _0.35_ | 0.76 | 0.76 | 0.93 | 0.48 | 0.83 | 0.62 | _0.95_ | **0.33** | | Decrease Amplitude | **0.90** | _0.91_ | 0.85 | 1.04 | **0.90** | **0.84** | 0.79 | 1.09 | 0.52 | 1.89 | | Increase Amplitude | **0.94** | **0.94** | 0.79 | 1.20 | _0.89_ | _0.96_ | 0.81 | 1.03 | 0.75 | 1.35 | | Logarithmic Growth | _0.77_ | _1.65_ | 0.49 | 2.31 | **0.79** | **1.55** | 0.60 | 1.73 | 0.55 | 1.94 | | Logarithmic Decay | **0.83** | **1.68** | 0.48 | 2.19 | _0.81_ | _1.69_ | 0.67 | 2.04 | 0.63 | 2.60 |
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability. In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings. In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through an LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting.
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
6rIeTQhMDr
official_comment
1,731,931,927,581
0lF97M7CMQ
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer YdJR - result table comment: For convenience, we also provide table 5 in the paper, which is about the test results. We add more experiments to show the effectiveness of our algorithm. ### Table: Comparison of Compliance Rate (CR) and MSE for TITSP, Time-LLM, Qwen4MTS, UniTime, and Llama-3.1-8B across various instructed actions with highlighted best (in **bold**) and second-best (in _underlined_) results. | **Instruction** | **TITSP (CR)** | **TITSP (MSE)** | **Time-LLM (CR)** | **Time-LLM (MSE)** | **Qwen4MTS (CR)** | **Qwen4MTS (MSE)** | **UniTime (Qwen) (CR)** | **UniTime (Qwen) (MSE)** | **Llama-3.1-8B (CR)** | **Llama-3.1-8B (MSE)** | |---------------------------------|----------------|-----------------|-------------------|--------------------|-------------------|--------------------|-------------------------|-------------------------|------------------------|------------------------| | Linear Growth and Linear Decay | **0.83** | **1.15** | 0.38 | 3.45 | _0.69_ | _1.90_ | 0.54 | 2.73 | 0.32 | 4.95 | | Linear Growth and Linear Decay | **0.79** | **1.17** | 0.49 | 2.85 | **0.79** | _1.34_ | 0.57 | 2.28 | 0.41 | 2.80 | | Linear Trend Up | _0.90_ | **1.03** | 0.63 | 1.71 | 0.76 | _1.08_ | 0.63 | 1.65 | **0.91** | 1.15 | | Linear Trend Down | **0.87** | **0.88** | 0.64 | 1.55 | 0.71 | 1.36 | 0.51 | 1.59 | _0.85_ | _0.92_ | | Exponential Growth | **0.89** | **1.33** | 0.58 | 2.59 | _0.63_ | _2.07_ | 0.60 | 2.38 | 0.58 | 2.35 | | Exponential Decay | **0.84** | **1.25** | 0.56 | 2.26 | 0.67 | 2.10 | _0.69_ | _2.05_ | 0.46 | 2.39 | | Keep Stable | **0.98** | _0.35_ | 0.76 | 0.76 | 0.93 | 0.48 | 0.83 | 0.62 | _0.95_ | **0.33** | | Decrease Amplitude | **0.90** | _0.91_ | 0.85 | 1.04 | **0.90** | **0.84** | 0.79 | 1.09 | 0.52 | 1.89 | | Increase Amplitude | **0.94** | **0.94** | 0.79 | 1.20 | _0.89_ | _0.96_ | 0.81 | 1.03 | 0.75 | 1.35 | | Logarithmic Growth | _0.77_ | _1.65_ | 0.49 | 2.31 | **0.79** | **1.55** | 0.60 | 1.73 | 0.55 | 1.94 | | Logarithmic Decay | **0.83** | **1.68** | 0.48 | 2.19 | _0.81_ | _1.69_ | 0.67 | 2.04 | 0.63 | 2.60 |
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability. In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings. In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through an LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting.
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
3qv3dT4N3R
official_comment
1,732,174,335,714
2QUpfkxDtD
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer YdJR comment: Dear Reviewer, we would like to check whether our response addresses your concerns. Thank you!
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability. In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings. In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through an LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting.
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
2QUpfkxDtD
official_review
1,730,296,779,771
01wMplF8TL
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Reviewer_YdJR" ]
ICLR.cc/2025/Conference
2025
summary: The paper introduces Text-Informed Time Series Prediction (TITSP), a novel two-stage framework that enhances time series forecasting by integrating domain-specific textual information. The paper demonstrates that TITSP significantly outperforms traditional and existing multimodal approaches, improving both predictive accuracy and interpretability. soundness: 2 presentation: 3 contribution: 3 strengths: 1. The paper presents a novel approach to time series forecasting by integrating textual instructions, which is a creative extension of existing multimodal time series models. The introduction of a two-stage framework and the focus on instruction-based forecasting address a significant gap in the field. 2. The paper is well-written and logically organized. The figures and tables are clear and effectively support the text. The problem formulation and the description of the methodology are easy to follow. weaknesses: 1. Given the synthetic data generation process, how can the authors ensure that there is no data leakage between the text data and forecasting targets? Could the authors provide a detailed explanation of the data generation process to address this concern? 2. How practical is the proposed approach in real-world scenarios where textual instructions may not always be available or may be ambiguous? Could the authors discuss the potential limitations and challenges in deploying TITSP in practical applications? 3. Has the model been tested on any other multimodal time series analysis tasks beyond forecasting? If not, what are the potential challenges in applying TITSP to other tasks? questions: Please see the weaknesses. flag_for_ethics_review: ['No ethics review needed.'] details_of_ethics_concerns: The paper does not raise any significant ethical concerns. rating: 5 confidence: 3 code_of_conduct: Yes
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability. In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings. In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through an LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting.
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
1qSqW4mfU6
official_comment
1,731,930,751,362
HNantkZwp3
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer GGqR comment: Thank you for your precious comments! ### Comment 1: *Technical Contributions are Incremental* _The proposed approach lacks significant technical innovation. Integrating LLMs with time series is an incremental step rather than a groundbreaking contribution. The use of cross-attention and VQ-VAE offers no substantial improvement beyond established techniques._ **Response:** We appreciate the reviewer’s feedback. While it is true that certain components of our architecture, such as cross-attention and VQ-VAE, are established techniques, our contribution lies in the development of a novel methodological framework. This framework includes a tailored data pipeline, innovative architecture design, and a comprehensive evaluation approach, all specifically geared towards integrating text-based instructions with time series forecasting. We believe this is a significant contribution because it provides a structured approach to applying language models in the context of time series, which is a growing area of interest with wide applicability in fields like supply and demand forecasting. The ability to define and manipulate hypothetical scenarios through textual instructions opens new avenues for adaptable and context-sensitive forecasting models. We are confident that this framework will be valuable to the community, as it sets a foundation for future work in this space. --- ### Comment 2: *Poor Structure and Clarity* _The paper is poorly organized, with unclear explanations and an incoherent flow. The motivation and rationale for the proposed method are inadequately communicated, and critical components like AutoPrompter are explained in a convoluted manner, hindering comprehension._ **Response:** We are sorry to hear that some aspects of the paper were unclear. We greatly value the reviewer’s feedback and would be very interested in a constructive discussion to better understand which specific parts of the motivation and rationale were difficult to follow. In the related work, we have added a paragraph to highlight the main motivation and differences of our work compared to state-of-the-art methods, especially those targeting instruction-based forecasting. Regarding the reviewer’s concern with the explanation of AutoPrompter, we would like to clarify its purpose: AutoPrompter serves as a bridge that translates the time series data into the text embedding space. By quantizing the time series space, we map it into a compressed semantic space, which may have contributed to some of the complexity in the explanation. We have added additional clarifications in the updated version of the paper (_Page 5_) to ensure this concept is more accessible and the overall flow is clearer. We appreciate the reviewer’s insights and hope the revised manuscript will address these concerns effectively. --- ### Comment 3: *Inadequate Experiments, Superficial Related Work, Numerous Typos and Lack of Polish, Insufficient Practical Insights* **Response:** **Inadequate Experiments:** We acknowledge the reviewer’s concern about the reliance on synthetic datasets. While we agree that real-world data is crucial for evaluating practical applicability, synthetic datasets were used primarily to demonstrate the model’s capacity to handle controlled scenarios where the impact of specific factors can be isolated. We have now included benchmarks against state-of-the-art approaches (Llama-3.1-8B-instruct, Qwen4MTS and Unitime in Table 2). 
The results are highly stable and reproducible, with substantial performance margins over competitors, which reduces the need for statistical significance testing. **Superficial Related Work:** We have expanded the related work section to better differentiate our approach from prior research, particularly in the integration of text and time series. References such as UniTime have been added to strengthen the justification for our originality. **Insufficient Practical Insights:** The interpretability of our framework lies in facilitating interaction between expert users and the model through hypothetical scenarios. For example, the model generates forecasting scenarios based on textual instructions about supply and demand conditions, enabling experts to evaluate potential outcomes. This is particularly useful in fields like supply chain management, where generating and testing "what-if" scenarios through textual inputs offers clear practical benefits. ---
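To complement the clarification of AutoPrompter given in Comment 2 above, here is a minimal, purely illustrative sketch of the kind of vector-quantization bridge described there, in which time-series patch embeddings are snapped to a codebook living in the text-embedding space. The module name, dimensions, nearest-neighbour rule, and straight-through trick are assumptions made for illustration only, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class QuantizeToTextSpace(nn.Module):
    """Illustrative VQ-style bridge: each time-series patch embedding is
    replaced by its nearest entry in a codebook that shares the
    text-embedding dimension, so an LLM can attend over the result."""

    def __init__(self, d_model: int, codebook_size: int = 512):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, d_model)

    def forward(self, ts_emb: torch.Tensor) -> torch.Tensor:
        # ts_emb: (batch, num_patches, d_model) output of a time-series encoder
        flat = ts_emb.reshape(-1, ts_emb.size(-1))              # (B*P, D)
        dists = torch.cdist(flat, self.codebook.weight)          # (B*P, K)
        idx = dists.argmin(dim=-1).reshape(ts_emb.shape[:-1])    # (B, P)
        quantized = self.codebook(idx)                           # (B, P, D)
        # straight-through estimator: forward pass uses the codes,
        # gradients still flow back to the time-series encoder
        return ts_emb + (quantized - ts_emb).detach()

# hypothetical usage with made-up shapes
bridge = QuantizeToTextSpace(d_model=768)
ts_emb = torch.randn(4, 96, 768)   # pretend encoder output for 4 series
aligned = bridge(ts_emb)           # same shape, now in the shared semantic space
```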
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability. In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings. In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through an LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting.
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
0lF97M7CMQ
official_comment
1,731,931,481,026
2QUpfkxDtD
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission10424/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer YdJR comment: ### Comment 1: *Given the synthetic data generation process, how can the authors ensure that there is no data leakage between the text data and forecasting targets? Could the authors provide a detailed explanation of the data generation process to address this concern.* **Response:** We thank the reviewer for raising this important question. While the concern about data leakage is valid in many contexts, it is not a central issue in our case. The primary goal of our work is to assess the model's adherence to specific textual instructions rather than predict the target based solely on the time series data. To clarify, consider three samples with identical context length: a deterministic machine learning model would typically produce the same forecast for these samples. However, by adding textual instructions that specify a particular scenario or condition, we introduce a new layer of information that the model must adhere to. This is not a data leakage problem but rather a way of interacting with the model through different hypothetical scenarios. The compliance rate is explicitly defined to measure how well the model follows these instructions while preserving the underlying time series structure. Thus, the model’s ability to follow instructions is the focus, rather than predicting targets based solely on historical data. This being said, while the focus of the paper is on text-instructed problems, we also perform experiments in the Appendix on other types of data where the text describes the task (e.g., domain, forecasting type, features). In these cases, the dataset contains no leakage, and our proposed algorithm outperforms the state-of-the-art. --- ### Comment 2: *How practical is the proposed approach in real-world scenarios where textual instructions may not always be available or may be ambiguous? Could the authors discuss the potential limitations and challenges in deploying TITSP in practical applications?* **Response:** We appreciate the reviewer’s thoughtful question. As discussed in our response to the first comment, the primary purpose of our approach is to evaluate how well the model adheres to specific textual instructions in controlled scenarios. While textual instructions are central to this evaluation, we acknowledge that in real-world applications, such instructions may not always be available or could be ambiguous. One potential limitation is the reliance on clear, actionable instructions, which may not always be feasible in dynamic or unstructured environments. Additionally, the model’s performance may be affected by the quality and specificity of the textual input. However, our framework is designed to handle a wide range of instruction formats and adapt to different hypothetical scenarios, making it flexible for practical deployment. We also envision that the model could be augmented with supplementary mechanisms (e.g., user feedback loops or clarification prompts) to address ambiguity in real-world use cases. --- ### Comment 3: *Has the model been tested on any other multimodal time series analysis tasks beyond forecasting? If not, what are the potential challenges in applying TITSP to other tasks?* **Response:** We appreciate the reviewer’s question. While our model has primarily been tested for forecasting tasks, extending it to other multimodal time series analysis tasks such as classification presents some challenges. 
In classification, the output is often constrained to predefined labels, which limits the flexibility needed to explore different hypothetical scenarios through textual instructions. This makes it difficult to leverage the full potential of our approach in such tasks. However, as demonstrated in the Appendix, our framework can be extended to scenarios where instead of instructions, we incorporate other types of additional information related to the task at hand. In these cases, our model still outperforms existing methods, suggesting that the framework has potential beyond forecasting, even for tasks with more constrained output spaces. Furthermore, we believe that imputation tasks could be a natural extension of our framework, as it can easily accommodate missing data by conditioning on other available information, showing that our approach is adaptable to different problem settings.
00ezkB2iZf
MedFuzz: Exploring the Robustness of Large Language Models in Medical Question Answering
[]
Large language models (LLMs) have achieved impressive performance on medical question-answering benchmarks. However, high benchmark accuracy does not imply robust performance in real-world clinical settings. Medical question-answering benchmarks rely on assumptions that are consistent with quantifying LLM performance but may not hold in the open world of the clinic. Yet LLMs learn broad knowledge that could help them perform in practical conditions regardless of unrealistic assumptions in celebrated benchmarks. We seek to quantify how robust LLM medical question-answering benchmark performance is to violations of unrealistic benchmark assumptions. Specifically, we present an adversarial method that we call MedFuzz (for medical fuzzing). MedFuzz attempts to modify benchmark questions in ways aimed at confounding the LLM. We demonstrate the approach by targeting unrealistic assumptions about patient characteristics presented in the MedQA benchmark. Successful "attacks" modify a benchmark item in ways that would be unlikely to fool a medical expert but nonetheless "trick" the LLM into changing from a correct to an incorrect answer. Further, we present a non-parametric test for calculating the statistical significance of a successful attack. We show how to calculate "MedFuzzed" performance on a medical QA benchmark, as well as how to find individual cases of statistically significant successful attacks. The methods show promise at providing insights into the ability of an LLM to operate robustly in more realistic settings.
[ "large language model", "adversarial machine learning", "automatic red teaming" ]
https://openreview.net/pdf?id=00ezkB2iZf
https://openreview.net/forum?id=00ezkB2iZf
v9MPHvKB79
official_review
1,730,166,452,694
00ezkB2iZf
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11424/Reviewer_6sJS" ]
ICLR.cc/2025/Conference
2025
summary: This paper proposes MedFuzz, a novel approach designed to evaluate the robustness of large language models (LLMs) in medical question-answering contexts. MedFuzz introduces controlled perturbations in input text by adding patient characteristics (PC) and social bias information to simulate real-world variability and challenges encountered in clinical settings. The authors highlight the limitations of traditional medical benchmarks that often simplify clinical scenarios and position MedFuzz as an advancement towards “beyond-the-benchmark” evaluations. Specifically, the paper presents experiments assessing LLMs' responses to MedFuzz perturbations and evaluates the consistency of chain-of-thought (CoT) explanations under these conditions. The study offers a new perspective on testing LLM robustness by addressing potential risks in clinical decision-making when assumptions of canonical benchmarks do not hold. soundness: 3 presentation: 3 contribution: 4 strengths: 1. This paper introduces MedFuzz, a novel approach for testing the robustness of large language models (LLMs) in clinical contexts, which addresses the simplifications found in traditional benchmarks. MedFuzz is distinct in its approach by adding specific patient characteristics and social bias information to simulate the complexity of real-world clinical scenarios. This innovative framework offers a new direction for assessing LLM robustness by examining potential vulnerabilities in medical question-answering settings. 2. The paper clearly explains the concept of MedFuzz and its application, particularly in using patient characteristics and bias elements to test model robustness. The experimental procedures and components are consistently described, making the study's objectives and methodology easy for readers to follow. 3. MedFuzz presents a significant contribution as it provides a framework to evaluate how LLMs may perform in real clinical settings, beyond simplified benchmarks. This work has high practical relevance for the safe implementation of LLMs in healthcare by strengthening robustness assessment and reducing potential errors. It contributes an essential tool for enhancing LLM applicability in clinical practice, highlighting the importance of robustness in medical AI. weaknesses: 1. The authors clarified the distinction between robustness and generalization in their response, emphasizing that robustness in this study is tied to resilience against violations of benchmark assumptions. This clarification addresses the original concern, though ensuring this explanation is explicitly included in the revised manuscript remains important. 2. The authors clarified that MedFuzz is designed to surface biases already present in the target model and does not introduce confusion into clinical decision-making itself. While this explanation addresses the primary concern, ensuring that the revised manuscript provides sufficient justification for the use of specific patient characteristics as perturbations will remain critical. 3. The authors acknowledged that the scale of perturbations could be further refined and suggested this as future work. Including a brief discussion in the revised manuscript about the implications of perturbation scale would strengthen this point. 4. The authors agreed to expand the analysis of CoT fidelity to include unsuccessful attacks in addition to successful ones. This addition should provide a more comprehensive baseline for evaluating the vulnerabilities identified by MedFuzz. 
Ensuring this analysis is effectively implemented in the revised manuscript will be crucial. questions: 1. It would be helpful to have specific examples illustrating the risks posed by the simplified assumptions in traditional benchmarks within clinical settings. For instance, if omitting certain patient characteristics or clinical contexts could lead to diagnostic errors, providing these examples would clarify the importance of this study for readers and highlight its relevance. 2. I am curious whether the patient characteristics (e.g., age, gender) and social bias information added as perturbations in MedFuzz genuinely act as confusion factors within actual clinical environments. These details often serve as crucial data points in clinical decision-making, so further explanation on how these elements were deemed appropriate as confusion-inducing factors would enhance the clinical validity of this study. 3. A clear explanation regarding the rationale for setting the perturbation iteration count to K=5 would be beneficial. For instance, do you have experimental results comparing the initial attack (K=1) with subsequent attacks (K=5) to illustrate how the LLM maintains performance with increasing perturbation levels? Such a comparison could provide a more reliable basis for evaluating the impact of iteration count on robustness in this study. flag_for_ethics_review: ['No ethics review needed.'] details_of_ethics_concerns: In the MedFuzz study, patient characteristics (PC) such as age, gender, race, and socioeconomic factors are added as perturbations to induce confusion in LLMs. One specific example presented by the authors is the use of “excessive hospital service usage by low-income patients.” This type of information could inadvertently reinforce social biases or perpetuate negative perceptions about certain demographic groups, rather than reflect clinical validity or fairness. When such characteristics are introduced as confusion-inducing factors, there is a risk that essential background information—critical for accurate diagnosis and treatment—could lead to biased outcomes. Therefore, further clarification and evaluation are needed to ensure that MedFuzz’s inclusion of such data as perturbations aligns with clinical relevance and fairness, and to mitigate any potential reinforcement of harmful social biases in the model. No further questions rating: 5 confidence: 5 code_of_conduct: Yes
00ezkB2iZf
MedFuzz: Exploring the Robustness of Large Language Models in Medical Question Answering
[]
Large language models (LLMs) have achieved impressive performance on medical question-answering benchmarks. However, high benchmark accuracy does not imply robust performance in real-world clinical settings. Medical question-answering benchmarks rely on assumptions that are consistent with quantifying LLM performance but may not hold in the open world of the clinic. Yet LLMs learn broad knowledge that could help them perform in practical conditions regardless of unrealistic assumptions in celebrated benchmarks. We seek to quantify how robust LLM medical question-answering benchmark performance is to violations of unrealistic benchmark assumptions. Specifically, we present an adversarial method that we call MedFuzz (for medical fuzzing). MedFuzz attempts to modify benchmark questions in ways aimed at confounding the LLM. We demonstrate the approach by targeting unrealistic assumptions about patient characteristics presented in the MedQA benchmark. Successful "attacks" modify a benchmark item in ways that would be unlikely to fool a medical expert but nonetheless "trick" the LLM into changing from a correct to an incorrect answer. Further, we present a non-parametric test for calculating the statistical significance of a successful attack. We show how to calculate "MedFuzzed" performance on a medical QA benchmark, as well as how to find individual cases of statistically significant successful attacks. The methods show promise at providing insights into the ability of an LLM to operate robustly in more realistic settings.
[ "large language model", "adversarial machine learning", "automatic red teaming" ]
https://openreview.net/pdf?id=00ezkB2iZf
https://openreview.net/forum?id=00ezkB2iZf
lm5Z9TT5lJ
official_comment
1,732,591,625,014
v9MPHvKB79
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11424/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer 6sJS's feedback comment: We appreciate the reviewer’s detailed evaluation and thoughtful feedback on our manuscript. Below, we address each of the concerns and questions raised. ### **1. Definition of Robustness vs. Generalization** Robustness in the context of MedFuzz refers to the resilience of a model’s performance statistic (e.g., accuracy) when assumptions underlying the benchmark are violated in real-world settings. This includes maintaining performance when diagnostically irrelevant details are introduced. By contrast, generalization in statistics refers to the ability of a model to perform well on unseen data sampled from the same distribution as the training data. We will revise the manuscript to clarify this distinction and emphasize that robustness here is specifically tied to the benchmark’s assumptions and the model’s ability to handle clinically irrelevant or misleading details. ### **2. Patient Characteristics and Bias** We regret that we were not clearer about how the use of patient characteristics (PC) in MedFuzz does not introduce or reinforce bias. Rather, it aims to surface biases already implicit in the target model. MedFuzz is a diagnostic tool to evaluate LLMs before they are deployed in clinical decision-making scenarios. Importantly, MedFuzz itself does not serve answers to questions in clinical settings—it evaluates the robustness of models that do. In that evaluation, it does not change or modify the target model. This distinction will be made clearer in the revised manuscript. ### **3. Scale of Perturbations** We did not constrain the proportion of added text during perturbations because, in our experience, the length of added text was still well within the length of the context windows for the target LLMs. We agree with the reviewer that analyzing how varying amounts of irrelevant information impact target model performance would be valuable. We will include this as a suggestion for future work. ### **4. Chain-of-Thought Fidelity** The CoT analysis focused on successful attacks to demonstrate that inspecting CoT explanations alone is insufficient to reveal the vulnerabilities surfaced by MedFuzz. We will expand the analysis to include unsuccessful attacks as well. ### **5. Examples of Benchmark Assumption Errors** The manuscript cites examples of errors that are not caught by traditional benchmark evaluation due to simplifying assumptions in those benchmarks. For example, we cite references showing GPT-3 demonstrating biases toward certain patient groups. We will expand on these examples in the revised manuscript to better illustrate the risks posed by such assumptions. ### **6. Ethical Concerns Regarding Bias** We address the ethical concerns raised by clarifying that MedFuzz is designed to surface biases in the target model, not to introduce or reinforce them. MedFuzz operates as an evaluation tool, diagnosing vulnerabilities in LLMs that may be deployed in clinical settings. We explicitly state that failure to surface such biases does not imply their absence. Furthermore, MedFuzz is not intended to answer medical questions but rather to assess the robustness of models that do. We will revise the manuscript to better highlight these points and allay concerns about bias reinforcement. ### **7. Perturbation Iteration Count \(K\)** The results for different values of \(K\) are shown in Figure 2.
We demonstrate how performance changes as the number of perturbation iterations increases, providing empirical support for the choice of \(K=5\) as a practical balance between computational cost and perturbation effectiveness. We will ensure that this explanation is clearly referenced in the manuscript. ### **Revisions to the Manuscript** To address the reviewer’s feedback, we will: 1. Clarify the distinction between robustness and generalization, explicitly tying robustness to real-world violations of benchmark assumptions. 2. Emphasize that MedFuzz evaluates models to surface implicit biases, rather than introducing or reinforcing them. 3. Expand on examples of errors caused by traditional benchmark assumptions to strengthen the motivation for MedFuzz. 4. Expand the analysis of CoT fidelity to cover questions where attacks were unsuccessful, to establish a baseline for the analysis. 5. Ensure the ethical role of MedFuzz as an evaluation tool is clearly communicated. 6. Expand upon the discussion of \(K\) and iteration counts and how to select ideal values of \(K\). We appreciate the reviewer’s constructive feedback, which has helped us identify areas to strengthen the manuscript and address concerns. These revisions will further clarify MedFuzz’s methodology, ethical considerations, and contributions to LLM robustness evaluation.
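As a supplement to point 7 above, the following is a minimal sketch of how post-fuzz accuracy can be tabulated as a function of the iteration budget \(K\) from per-question attack logs. The record layout (a `flip_iteration` field giving the first iteration, if any, at which the target LLM's answer flipped) is an illustrative assumption, not the format used in our released code.

```python
# Sketch: "MedFuzzed" accuracy as a function of the iteration budget K.
# Assumes hypothetical per-question records for questions the target LLM
# originally answered correctly; flip_iteration is the first (1-based)
# iteration at which the attack flipped the answer, or None if it never did.
from typing import Dict, List, Optional


def accuracy_at_k(records: List[Dict[str, Optional[int]]], k: int) -> float:
    """Fraction of questions still answered correctly after at most k fuzzing iterations."""
    still_correct = sum(
        1 for r in records
        if r["flip_iteration"] is None or r["flip_iteration"] > k
    )
    return still_correct / len(records)


if __name__ == "__main__":
    # Toy data: three questions never flip, two flip at iterations 2 and 4.
    toy = [{"flip_iteration": None}] * 3 + [{"flip_iteration": 2}, {"flip_iteration": 4}]
    for k in range(6):
        print(f"K={k}: accuracy = {accuracy_at_k(toy, k):.2f}")
```

Sweeping \(k\) in this way yields the kind of accuracy-versus-iterations curve used to justify \(K=5\) as a cost/effectiveness trade-off.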
00ezkB2iZf
MedFuzz: Exploring the Robustness of Large Language Models in Medical Question Answering
[]
Large language models (LLMs) have achieved impressive performance on medical question-answering benchmarks. However, high benchmark accuracy does not imply robust performance in real-world clinical settings. Medical question-answering benchmarks rely on assumptions consistent with quantifying LLM performance but that may not hold in the open world of the clinic. Yet LLMs learn broad knowledge that could help the LLM perform in practical conditions regardless of unrealistic assumptions in celebrated benchmarks. We seek to quantify how robust LLM medical question-answering benchmark performance is to violations of unrealistic benchmark assumptions. Specifically, we present an adversarial method that we call MedFuzz (for medical fuzzing). MedFuzz attempts to modify benchmark questions in ways aimed at confounding the LLM. We demonstrate the approach by targeting unrealistic assumptions about patient characteristics presented in the MedQA benchmark. Successful "attacks" modify a benchmark item in ways that would be unlikely to fool a medical expert but nonetheless "trick" the LLM into changing from a correct to an incorrect answer. Further, we present a non-parametric test for calculating the statistical significance of a successful attack. We show how to calculate "MedFuzzed" performance on a medical QA benchmark, as well as how to find individual cases of statistically significant successful attacks. The methods show promise at providing insights into the ability of an LLM to operate robustly in more realistic settings.
[ "large language model", "adversarial machine learning", "automatic red teaming" ]
https://openreview.net/pdf?id=00ezkB2iZf
https://openreview.net/forum?id=00ezkB2iZf
XeqVl4YWA6
official_comment
1,732,578,540,365
TeO25XUwES
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11424/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer GdQb: Questions about P-value Distribution, Trends in Successful Attacks, and Human Evaluation comment: We thank the reviewer for their thorough evaluation and constructive feedback on our manuscript. Below, we address each point raised: ### **Faithfulness of Reformulated Questions** We acknowledge the concern about the reliance on the attacker LLM (GPT-4) to maintain the medically correct answer while generating fuzzed questions. In our approach, the attacker LLM is explicitly prompted to preserve the correct answer, which is provided during the fuzzing process. This ensures that the fuzzes remain anchored to the original question's intent. Furthermore, we rely on the attacker LLM’s demonstrated human-level accuracy on the benchmark as an assumption for generating high-quality fuzzes at scale. Given the large scale of MedFuzz experiments, manual quality assurance for every fuzzed question is infeasible. However, our workflow incorporates user inspection for particularly interesting or insightful cases, with the final judgment of whether an attack is “fair” being left to human reviewers. While this assumption introduces some dependence on the attacker LLM’s capabilities, we believe it is reasonable for achieving scalability. ### **Distribution of P-Values** To address the request for an impression of the p-value distribution, we ran an analysis on a run where GPT-4 was the target model, resulting in 85 successful attacks. Below are summary statistics of the p-values: - **Min:** 0.0, **5%:** 0.0, **25%:** 0.10, **Median:** 0.40, **Mean:** 0.408, **75%:** 0.63, **Max:** 1.0 To explore trends, we categorized successful attacks into two groups: (1) significant attacks (\( p < 0.01 \)) and (0) insignificant attacks. For each topic mentioned in the fuzzed questions, we then calculated an odds ratio for falling into group 1 versus group 0. Analysis revealed that topics like “rash,” “substance abuse,” and “ultrasound” were more than twice as likely to fall into the significant group, while others like “HIV,” “breastfeeding,” and “chronic kidney disease” were also overrepresented. However, we recognize that calculating p-values for these odds ratios would stack inference on top of p-values already used as thresholds, resulting in unsound statistical inference (i.e., p-hacking). A more robust approach, which we plan to explore in future work, would involve estimating the success probability for specific topics using repeated attacks on individual questions. A small computational sketch of this grouping and odds-ratio analysis is included at the end of this response. ### **Evaluation of Chain of Thought (CoT) Faithfulness** The reviewer highlights an important point about the assessment of CoT faithfulness. In our study, we manually evaluated CoTs from successful attacks, focusing on whether the added fuzz content was explicitly referenced. This process was conducted by inspecting each CoT explanation and verifying its alignment with the fuzzed information that caused the incorrect response. ### **Human Performance Comparison and Quality Control** We recognize the value of including human medical experts to evaluate the quality of fuzzed questions. However, due to resource constraints, this was not feasible in the current study. We plan to include human evaluation in future work to provide an additional layer of validation for the attacker LLM’s performance and the robustness of fuzzed questions. Regarding quality control, we rely on the attacker LLM’s prompt-engineered constraints to ensure that generated fuzzes are medically plausible and consistent with the original question’s correct answer. 
This reliance on a high-performing LLM is a tradeoff we make to scale the MedFuzz method across a large dataset like MedQA. ### **Analysis of Errors and Robustness to Specific Problem Types** The reviewer’s request for a more granular analysis of errors and model vulnerabilities is well-taken. Our exploratory analysis of attack outcomes highlighted certain topics (e.g., “rash,” “substance abuse,” “ultrasound”) that appear more susceptible to significant attacks. However, as noted above, a more rigorous approach to topic-level success probability estimation is necessary for conclusive insights. We plan to develop a framework for repeated attacks on specific topics, allowing us to model robustness conditional on problem type. ### **Future Directions** The reviewer’s suggestions align with our broader vision for improving MedFuzz. Specifically, we aim to: 1. Incorporate human evaluation for assessing question quality and attack outcomes. 2. Develop Monte Carlo-based topic-specific estimates of probability of attack success to give insight into which topics are vulnerable. We believe these improvements will address the limitations of the current study and enhance the utility of MedFuzz for evaluating medical LLMs. **Summary** We appreciate the reviewer’s insightful feedback and have outlined both responses to the identified weaknesses and concrete steps for future work. While some limitations remain, we are confident that MedFuzz provides valuable insights into LLM robustness and look forward to building on this foundation.
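As referenced above, here is a small computational sketch of the p-value summary and per-topic odds-ratio grouping. The record format (a set of topic tags plus a p-value per successful attack), the 0.01 threshold, and the smoothing constant are illustrative assumptions rather than our exact analysis pipeline.

```python
# Sketch: summary statistics of attack p-values and per-topic odds ratios
# for falling into the "significant" group (p < alpha).
import statistics
from typing import Dict, List, Set, Tuple


def percentile(values: List[float], q: float) -> float:
    """Nearest-rank percentile (q in [0, 1])."""
    s = sorted(values)
    idx = min(int(q * len(s)), len(s) - 1)
    return s[idx]


def pvalue_summary(pvals: List[float]) -> Dict[str, float]:
    return {
        "min": min(pvals), "5%": percentile(pvals, 0.05), "25%": percentile(pvals, 0.25),
        "median": statistics.median(pvals), "mean": statistics.mean(pvals),
        "75%": percentile(pvals, 0.75), "max": max(pvals),
    }


def topic_odds_ratio(records: List[Tuple[Set[str], float]], topic: str,
                     alpha: float = 0.01, smoothing: float = 0.5) -> float:
    """Odds of a significant attack when `topic` is present vs. absent.
    Haldane-Anscombe smoothing (0.5) avoids division by zero in sparse tables."""
    a = b = c = d = smoothing
    for topics, p in records:
        sig = p < alpha
        if topic in topics:
            a, b = a + sig, b + (not sig)
        else:
            c, d = c + sig, d + (not sig)
    return (a / b) / (c / d)


# Toy usage with hypothetical records: ({topic tags}, p-value).
records = [({"rash"}, 0.0), ({"rash", "hiv"}, 0.005), ({"ultrasound"}, 0.40),
           ({"substance abuse"}, 0.003), ({"chronic kidney disease"}, 0.70)]
print(pvalue_summary([p for _, p in records]))
print("rash odds ratio:", round(topic_odds_ratio(records, "rash"), 2))
```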
00ezkB2iZf
MedFuzz: Exploring the Robustness of Large Language Models in Medical Question Answering
[]
Large language models (LLMs) have achieved impressive performance on medical question-answering benchmarks. However, high benchmark accuracy does not imply robust performance in real-world clinical settings. Medical question-answering benchmarks rely on assumptions consistent with quantifying LLM performance but that may not hold in the open world of the clinic. Yet LLMs learn broad knowledge that could help the LLM perform in practical conditions regardless of unrealistic assumptions in celebrated benchmarks. We seek to quantify how robust LLM medical question-answering benchmark performance is to violations of unrealistic benchmark assumptions. Specifically, we present an adversarial method that we call MedFuzz (for medical fuzzing). MedFuzz attempts to modify benchmark questions in ways aimed at confounding the LLM. We demonstrate the approach by targeting unrealistic assumptions about patient characteristics presented in the MedQA benchmark. Successful "attacks" modify a benchmark item in ways that would be unlikely to fool a medical expert but nonetheless "trick" the LLM into changing from a correct to an incorrect answer. Further, we present a non-parametric test for calculating the statistical significance of a successful attack. We show how to calculate "MedFuzzed" performance on a medical QA benchmark, as well as how to find individual cases of statistically significant successful attacks. The methods show promise at providing insights into the ability of an LLM to operate robustly in more realistic settings.
[ "large language model", "adversarial machine learning", "automatic red teaming" ]
https://openreview.net/pdf?id=00ezkB2iZf
https://openreview.net/forum?id=00ezkB2iZf
TeO25XUwES
official_review
1,730,704,092,447
00ezkB2iZf
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11424/Reviewer_GdQb" ]
ICLR.cc/2025/Conference
2025
summary: This paper investigates the robustness of large language models in handling medical QA tasks by introducing a new evaluation method, MedFuzz. For each multiple-choice question in the original benchmarks, MedFuzz uses an LLM (referred to as the attacker LLM) to reformulate questions by adding patient characteristics that may introduce social bias without affecting the clinical decision-making process. If the target LLM answers correctly, the attacker LLM is prompted to generate additional distracting questions based on the target LLM’s feedback. Additionally, a non-parametric statistical significance test was developed by prompting the attacker LLM to create questions with patient characteristics that avoid social bias. Using this evaluation method, the authors tested seven LLMs and found a significant performance drop across all models. Moreover, they observed that when current LLMs answer incorrectly, they tend not to reference the added biased information, indicating inconsistency in faithfully adhering to the clinical decision-making process. soundness: 2 presentation: 2 contribution: 2 strengths: + This paper examines the robustness of LLMs in the clinical decision-making process, a critical aspect of their application in the medical domain. + The evaluation results demonstrate that current LLMs lack robustness in the clinical decision-making process, offering valuable insights for the development of medical LLMs. weaknesses: + A major weakness of this paper is the faithfulness of the reformulated questions. The proposed MedFuzz method relies solely on prompt engineering with the attacker LLM (GPT-4) to modify original MedQA questions, making the attack process difficult to control. The attacker LLM may potentially alter critical information in the original questions, resulting in less reliable reformulated questions. The example in Section 3.1 also demonstrates that the attacker LLM added extensive information about the patient’s family medical history, consultation history, and medication history. These details are highly relevant in real clinical diagnosis and can significantly influence a doctor’s assessment of the patient’s condition. + Moreover, although the authors propose a non-parametric statistical significance test, they do not provide the full distribution of p-values across the MedQA benchmark. In line 485, they note that for the successful attacks they selected, the p-values are <1/30, 0.1, 0.16, 0.5, and 0.63. Here, the p-value represents the probability that a control fuzz is more challenging than the original fuzz. Therefore, cases with p-values of 0.5 and 0.63 suggest that the performance decline in the target LLM is due to the perturbations themselves, rather than social bias. + For the study of target LLM's faithfulness, it is important to also study the proportion of CoT that mentions the critical information in the original MedQA benchmark for comparison with the results provided in Figure 2B. Additionally, the authors should provide more information to help readers understand the specific process of this study. For example, how many cases were analyzed? Was the determination of whether fuzzed information was included made manually, or was an automated algorithm used? questions: 1. 
The authors need to provide further experiments and analyses to demonstrate the reliability of the questions generated by this method, such as incorporating the performance of human experts or introducing relevant methods for quality control of the questions in the methods section. 2. Also, more analysis of the evaluation results should be included. For example, what are the main types of errors introduced by attacks across different turns? Which specific diseases or problem types is the target LLM less robust against? By supplementing these analyses, further insights can be provided for the development of medical LLMs. flag_for_ethics_review: ['No ethics review needed.'] rating: 3 confidence: 4 code_of_conduct: Yes
00ezkB2iZf
MedFuzz: Exploring the Robustness of Large Language Models in Medical Question Answering
[]
Large language models (LLMs) have achieved impressive performance on medical question-answering benchmarks. However, high benchmark accuracy does not imply robust performance in real-world clinical settings. Medical question-answering benchmarks rely on assumptions consistent with quantifying LLM performance but that may not hold in the open world of the clinic. Yet LLMs learn broad knowledge that could help the LLM perform in practical conditions regardless of unrealistic assumptions in celebrated benchmarks. We seek to quantify how robust LLM medical question-answering benchmark performance is to violations of unrealistic benchmark assumptions. Specifically, we present an adversarial method that we call MedFuzz (for medical fuzzing). MedFuzz attempts to modify benchmark questions in ways aimed at confounding the LLM. We demonstrate the approach by targeting unrealistic assumptions about patient characteristics presented in the MedQA benchmark. Successful "attacks" modify a benchmark item in ways that would be unlikely to fool a medical expert but nonetheless "trick" the LLM into changing from a correct to an incorrect answer. Further, we present a non-parametric test for calculating the statistical significance of a successful attack. We show how to calculate "MedFuzzed" performance on a medical QA benchmark, as well as how to find individual cases of statistically significant successful attacks. The methods show promise at providing insights into the ability of an LLM to operate robustly in more realistic settings.
[ "large language model", "adversarial machine learning", "automatic red teaming" ]
https://openreview.net/pdf?id=00ezkB2iZf
https://openreview.net/forum?id=00ezkB2iZf
RoHfL53eaw
official_comment
1,732,688,098,051
lm5Z9TT5lJ
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11424/Reviewer_6sJS" ]
ICLR.cc/2025/Conference
2025
title: MedFuzz: Exploring the Robustness of Large Language Models in Medical Question Answering comment: Thank you for your detailed and thoughtful responses to my feedback. I look forward to reviewing your revised manuscript, which I trust will sufficiently address the concerns raised, particularly regarding the distinction between robustness and generalization, the role of patient characteristics in MedFuzz, and the ethical considerations surrounding bias.
00ezkB2iZf
MedFuzz: Exploring the Robustness of Large Language Models in Medical Question Answering
[]
Large language models (LLMs) have achieved impressive performance on medical question-answering benchmarks. However, high benchmark accuracy does not imply robust performance in real-world clinical settings. Medical question-answering benchmarks rely on assumptions consistent with quantifying LLM performance but that may not hold in the open world of the clinic. Yet LLMs learn broad knowledge that could help the LLM perform in practical conditions regardless of unrealistic assumptions in celebrated benchmarks. We seek to quantify how robust LLM medical question-answering benchmark performance is to violations of unrealistic benchmark assumptions. Specifically, we present an adversarial method that we call MedFuzz (for medical fuzzing). MedFuzz attempts to modify benchmark questions in ways aimed at confounding the LLM. We demonstrate the approach by targeting unrealistic assumptions about patient characteristics presented in the MedQA benchmark. Successful "attacks" modify a benchmark item in ways that would be unlikely to fool a medical expert but nonetheless "trick" the LLM into changing from a correct to an incorrect answer. Further, we present a non-parametric test for calculating the statistical significance of a successful attack. We show how to calculate "MedFuzzed" performance on a medical QA benchmark, as well as how to find individual cases of statistically significant successful attacks. The methods show promise at providing insights into the ability of an LLM to operate robustly in more realistic settings.
[ "large language model", "adversarial machine learning", "automatic red teaming" ]
https://openreview.net/pdf?id=00ezkB2iZf
https://openreview.net/forum?id=00ezkB2iZf
NvznhEBuAw
official_review
1,730,619,456,036
00ezkB2iZf
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11424/Reviewer_Dsnm" ]
ICLR.cc/2025/Conference
2025
summary: The paper proposes an automated red teaming approach to attack LLMs. They attempt this in the medical context by modifying medical Q&A datasets (specifically on MedQA), by violating assumptions that do not hold in real-life settings. The goal of MedFuzz is to make LLMs provide the wrong answer while ensuring that clinicians can still provide the right answer. The authors have identified a crucial problem with the evaluations of LLMs in the medical domain and provided a way to generate a more realistic dataset to aid subsequent LLM evaluation. The novelty lies in the proposed dataset from MedFuzz and the statistical evaluation used to check if the attack was successful. soundness: 3 presentation: 3 contribution: 3 strengths: • Clarity: The paper is well written and easy to follow along. The authors have given adequate and clear examples at appropriate locations in the draft to aid readability. Good use of illustrations after consultation with domain experts (clinical collaborators in this case). The authors have also acknowledged the limitation of using contaminated training data. • Originality: The idea to use social biases is a clever way to incorporate real-life information into the MedQA dataset. • Quality: The evaluation involves the use of proprietary vs open source and general purpose vs domain specific models. The experiment settings for reproducibility like temperature have been provided. The approach should be easy enough to reproduce. • Significance: The authors have tackled a relevant problem that needs to be addressed, given the rapid pace of the domain. weaknesses: • In the case of the MedQA dataset, the authors have identified a social bias which may be present in real-life situations but which is removed in the original benchmark. It is unclear how easy it is to identify and exploit such peculiarities in other medical benchmarking datasets like MedMCQA[1], PubMedQA[2] etc. • The authors create the adversarial questions by an iterative multi-turn approach. Although the authors allude to the target LLM forgetting about previous Q&A attempts, would the approach be better validated if the evaluation is done in a single-turn manner? • The authors, in step 4, only validate the statistical significance of 4 individual interesting cases. How would this change if considered for all successful cases? [1] Pal A, Umapathi LK, Sankarasubbu M. Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering. In Conference on Health, Inference, and Learning, 2022 Apr 6 (pp. 248-260). PMLR. [2] Jin Q, Dhingra B, Liu Z, Cohen WW, Lu X. Pubmedqa: A dataset for biomedical research question answering. arXiv preprint arXiv:1909.06146. 2019 Sep 13. questions: • The authors can clarify how their approach to adversarial attacks differs from the misinformation approach in [1]. • The authors can clarify why unfaithfulness of generated responses is a crucial dimension to consider. • Section 2.2 Lines 104: The authors mention “two ways” in which MedFuzz differs from other adversarial ML approaches, though only one distinction is clear in the draft. I’m assuming the second way is the use of semantically coherent changes to the text. These few lines can probably be rephrased to add clarity. • The authors have conducted their experiments on the MedQA dataset and taken advantage of a constraint imposed in the curation of this dataset. The authors could potentially add broad guidelines to expand on the fuzzing idea for other medical datasets. 
• How can the authors ensure that the GPT-4 generated attack retains the same answer as the original QA pair being perturbed? Is there a possibility to evaluate this with the help of domain experts? • How is the value of K set in Algorithm 1? This can be elaborated on in the Appendix section. • Does the finding that LLM CoT does not mention the fuzzed information provide a way forward to identify adversarial inputs? • Another interesting avenue would be to examine how different kinds of LLMs perform when used as the attacking/ target LLM. For example, can a smaller model generate adversarial inputs faster than a larger model like GPT-4? • Minor Comment: Is line 10 a duplicate of line 11 in Algorithm 1? [1] Han T, Nebelung S, Khader F, Wang T, Müller-Franzes G, Kuhl C, Försch S, Kleesiek J, Haarburger C, Bressem KK, Kather JN. Medical large language models are susceptible to targeted misinformation attacks. npj Digital Medicine. 2024 Oct 23;7(1):288. flag_for_ethics_review: ['No ethics review needed.'] details_of_ethics_concerns: NA. Authors have provided an ethics statement in the draft as well. rating: 6 confidence: 3 code_of_conduct: Yes
00ezkB2iZf
MedFuzz: Exploring the Robustness of Large Language Models in Medical Question Answering
[]
Large language models (LLMs) have achieved impressive performance on medical question-answering benchmarks. However, high benchmark accuracy does not imply robust performance in real-world clinical settings. Medical question-answering benchmarks rely on assumptions consistent with quantifying LLM performance but that may not hold in the open world of the clinic. Yet LLMs learn broad knowledge that could help the LLM perform in practical conditions regardless of unrealistic assumptions in celebrated benchmarks. We seek to quantify how robust LLM medical question-answering benchmark performance is to violations of unrealistic benchmark assumptions. Specifically, we present an adversarial method that we call MedFuzz (for medical fuzzing). MedFuzz attempts to modify benchmark questions in ways aimed at confounding the LLM. We demonstrate the approach by targeting unrealistic assumptions about patient characteristics presented in the MedQA benchmark. Successful "attacks" modify a benchmark item in ways that would be unlikely to fool a medical expert but nonetheless "trick" the LLM into changing from a correct to an incorrect answer. Further, we present a non-parametric test for calculating the statistical significance of a successful attack. We show how to calculate "MedFuzzed" performance on a medical QA benchmark, as well as how to find individual cases of statistically significant successful attacks. The methods show promise at providing insights into the ability of an LLM to operate robustly in more realistic settings.
[ "large language model", "adversarial machine learning", "automatic red teaming" ]
https://openreview.net/pdf?id=00ezkB2iZf
https://openreview.net/forum?id=00ezkB2iZf
M9K6lklgnS
official_review
1,730,387,849,286
00ezkB2iZf
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11424/Reviewer_EcvC" ]
ICLR.cc/2025/Conference
2025
summary: The paper proposes an adversarial method for evaluating LLM performance on medical question-answering benchmarks to assess their robustness in real-world clinical settings. The idea is to automatically generate new question-answer pairs from the existing benchmark such that they still represent realistic scenarios (e.g., including additional patient information) but the answers remain the same. The experiment results demonstrate that various baseline LLMs can be tricked into providing incorrect answers. soundness: 3 presentation: 3 contribution: 2 strengths: * The idea of the paper is interesting -- existing medical QA datasets are fairly simplified and may not appropriately represent real-world clinical settings. Thus, there is a need to understand how safe LLM usage is for the medical domain via robustness analysis. * The intuition for the adversarial biasing comes from medical domain understanding of the benchmark constructions. * Authors benchmark 3 closed LLMs and 4 open-source, medically fine-tuned LLMs. weaknesses: * One of the major claims of the method is that it will generate new questions that are semantically coherent and will not fool clinicians. However, there is no empirical proof that this is the case other than the analysis of a handful of case studies (one is presented in the main text). The prompt contains instructions that the attacker LLM should not change the default answer, but GPT-4 is not always guaranteed to follow the instructions or to have all the appropriate medical knowledge. * Is there a reason why general domain adversarial prompting wasn't shown to be sufficient? A few studies are listed in 2.2 (first sentence) but no preliminary studies or experimental studies are shown to support this. * GPT-4 is chosen as the attacker LLM, but the question is why aren't other open-source models explored? In looking at OpenBIOLLM-70B performance, this also looks like a reasonable comparison to try and might even generate harder cases with less computational cost. * One of the comments in the introduction was that existing benchmarks are not challenging enough, including reducing real-life clinical situations to canonical multiple-choice questions. Is there a reason why only one dataset was included and it was a multiple-choice one? * The statistical test is proposed to identify the significance of a successful attack using control fuzzes and to select the case studies, but what about the general distribution for the MedQA dataset? How stable is it broadly in identifying how significant a successful attack is? I understand this can be computationally intensive and costly but that also raises questions regarding the applicability of the method if it can't be done at scale. * The presentation could have been improved to provide some intuition at the beginning with potentially a simpler case study where less was added to make the LLM response change. Similarly, some of the text is written in a less digestible format. For example, the introduction of the test statistic could be improved by introducing notation first and then how you might compute it to understand what the statistic is looking to capture. * The citation format is incorrect; please use \citep instead of \cite as it detracts from readability. questions: * Why was MedQA the only dataset used? There are a few other multiple-choice medical QA ones like MedMCQA, PubMedQA, and MMLU Clinical topics. Why MedQA? * Why was only GPT-4 used as the attacker LLM? 
Seemingly there are other open-source ones that have just as much medical knowledge, especially looking at the fine-tuned example. * The workflow for Step 2 involves quite a few iterative turns. Are they all necessary to generate grounded ones? Is this workflow generalizable to other LLMs? Or is it GPT-4 specific? flag_for_ethics_review: ['No ethics review needed.'] rating: 3 confidence: 4 code_of_conduct: Yes
00ezkB2iZf
MedFuzz: Exploring the Robustness of Large Language Models in Medical Question Answering
[]
Large language models (LLMs) have achieved impressive performance on medical question-answering benchmarks. However, high benchmark accuracy does not imply robust performance in real-world clinical settings. Medical question-answering benchmarks rely on assumptions consistent with quantifying LLM performance but that may not hold in the open world of the clinic. Yet LLMs learn broad knowledge that could help the LLM perform in practical conditions regardless of unrealistic assumptions in celebrated benchmarks. We seek to quantify how robust LLM medical question-answering benchmark performance is to violations of unrealistic benchmark assumptions. Specifically, we present an adversarial method that we call MedFuzz (for medical fuzzing). MedFuzz attempts to modify benchmark questions in ways aimed at confounding the LLM. We demonstrate the approach by targeting unrealistic assumptions about patient characteristics presented in the MedQA benchmark. Successful "attacks" modify a benchmark item in ways that would be unlikely to fool a medical expert but nonetheless "trick" the LLM into changing from a correct to an incorrect answer. Further, we present a non-parametric test for calculating the statistical significance of a successful attack. We show how to calculate "MedFuzzed" performance on a medical QA benchmark, as well as how to find individual cases of statistically significant successful attacks. The methods show promise at providing insights into the ability of an LLM to operate robustly in more realistic settings.
[ "large language model", "adversarial machine learning", "automatic red teaming" ]
https://openreview.net/pdf?id=00ezkB2iZf
https://openreview.net/forum?id=00ezkB2iZf
EU8ZZoZ56t
official_comment
1,732,587,300,700
M9K6lklgnS
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11424/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer EcvC comment: We thank the reviewer for their detailed and thoughtful feedback on our manuscript. Below, we address each of the points raised and clarify our methodological choices. Firstly, we will correct the `\citep` citation format throughout the manuscript. ### **1. Empirical Validation of Semantically Coherent Fuzzes** Our qualitative evaluation of the fuzzed questions relies on feedback from medical expert users who review successful attacks and assess their plausibility. While this approach is effective in surfacing interesting cases, we recognize the need for more systematic and quantitative validation to empirically verify that clinicians would consistently provide correct answers to fuzzed questions. This limitation will be addressed in future work as part of broader medical expert evaluation efforts. ### **2. Use in Other Domains** The approach used in MedFuzz would apply in other domains. The approach relies on a domain expert who designs the attacks and evaluates the results. The domain should also face serious robustness challenges when models are deployed in real-world settings. We chose medicine because of our experience with challenges in this domain. ### **3. Choice of GPT-4 as the Attacker LLM** We selected GPT-4 as the attacker LLM due to its exceptional performance on MedQA. The attacker LLM must perform at least at a human level on the benchmark to effectively generate attacks that preserve the correct answer while introducing subtle, diagnostically irrelevant distractors. GPT-4 has also performed well on theory-of-mind tests (Strachan et al., 2024), suggesting it would be good at generating ways to "trick" a test taker (the target LLM). We recognize the potential value of exploring fine-tuned open-source models like OpenBioLLM-70B as attackers. However, current fine-tuned models lack both strong performance on this benchmark and demonstrated generalist reasoning abilities in other settings. In future work, we aim to investigate whether fine-tuning open-source models can achieve similar attacker capabilities at a lower computational cost. ### **4. Use of MedQA Dataset** We selected MedQA because it remained a challenging benchmark for state-of-the-art language models until GPT-4 and its direct competitors achieved near-human performance. To demonstrate MedFuzz’s value, the target LLM needed to perform well enough on the benchmark to reveal meaningful vulnerabilities beyond just not understanding the questions. Expanding MedFuzz to other datasets like MedMCQA, PubMedQA, or MMLU Clinical Topics is an exciting direction for future work. The challenge with these datasets is that their variety in answer format and topic makes it harder to identify assumptions to violate that do not hold in clinical settings. Relative to MedQA, they do not align as closely with our specific focus on robustness to real-world assumptions. ### **5. Scalability of the Statistical Test** The computational expense of the statistical test arises primarily from generating control fuzzes. For multiple-choice benchmarks, we recommend generating at least 30 control fuzzes per attack to ensure granularity in p-values that align with conventional significance thresholds. In future work, we plan to extend this methodology to open-ended answers by embedding generated responses and deriving p-values from the embeddings. 
We leave this to future work because it will require theoretical treatment as well as a much larger number of control fuzzes, but it will improve applicability to a wider range of benchmarks. A minimal sketch of the control-fuzz significance computation is included at the end of this response. ### **6. Iterative Workflow** The iterative workflow is not specific to GPT-4 and can be applied to other high-performing models like Claude Sonnet. Iterative turns are necessary to refine fuzzes, leveraging feedback from the target LLM to ensure attacks are semantically coherent and effective. While single-shot attacks are simpler, they often fail to exploit the nuanced vulnerabilities in advanced LLMs, as demonstrated by our initial experiments with single-turn methods (these negative results will be added to the appendix for transparency). ### **7. Presentation and Intuition** We appreciate the reviewer’s suggestion to improve readability. In the revised manuscript, we will: - Add a simpler case study early in the text to illustrate the method. - Reorganize the introduction of the test statistic to introduce notation first, followed by an explanation of how it captures the significance of successful attacks. ### **Summary** We are grateful for the reviewer’s feedback and have outlined revisions to enhance the clarity, scalability, and rigor of our work. These include: 1. Adding negative results from single-shot attacks to the appendix. 2. Revising sections for improved readability and presentation. 3. Expanding the discussion of iterative workflows and generalizability to other datasets and models. We thank the reviewer for their thoughtful suggestions and are confident that these updates will strengthen the manuscript.
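As referenced in point 5, here is a minimal sketch of one way the control-fuzz p-value can be computed. Treating a control fuzz as "at least as challenging" whenever it also makes the target LLM answer incorrectly is an illustrative simplification; the criterion and reporting conventions in the paper may differ.

```python
# Sketch: non-parametric p-value for a successful attack, estimated as the
# fraction of control fuzzes that are at least as challenging as the attack
# fuzz. Here "at least as challenging" is simplified to "also caused an
# incorrect answer from the target LLM".
from typing import List


def attack_p_value(control_flipped: List[bool]) -> float:
    """control_flipped[i] is True if the i-th control fuzz also flipped the
    target LLM's answer from correct to incorrect."""
    return sum(control_flipped) / len(control_flipped)


# Example: 2 of 30 control fuzzes also flip the answer -> p ~= 0.067.
# With 30 controls the resolution is 1/30, so a count of zero is best
# reported as p < 1/30 rather than p = 0.
outcomes = [True, True] + [False] * 28
print(f"p = {attack_p_value(outcomes):.3f}")
```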
00ezkB2iZf
MedFuzz: Exploring the Robustness of Large Language Models in Medical Question Answering
[]
Large language models (LLMs) have achieved impressive performance on medical question-answering benchmarks. However, high benchmark accuracy does not imply robust performance in real-world clinical settings. Medical question-answering benchmarks rely on assumptions consistent with quantifying LLM performance but that may not hold in the open world of the clinic. Yet LLMs learn broad knowledge that could help the LLM perform in practical conditions regardless of unrealistic assumptions in celebrated benchmarks. We seek to quantify how robust LLM medical question-answering benchmark performance is to violations of unrealistic benchmark assumptions. Specifically, we present an adversarial method that we call MedFuzz (for medical fuzzing). MedFuzz attempts to modify benchmark questions in ways aimed at confounding the LLM. We demonstrate the approach by targeting unrealistic assumptions about patient characteristics presented in the MedQA benchmark. Successful "attacks" modify a benchmark item in ways that would be unlikely to fool a medical expert but nonetheless "trick" the LLM into changing from a correct to an incorrect answer. Further, we present a non-parametric test for calculating the statistical significance of a successful attack. We show how to calculate "MedFuzzed" performance on a medical QA benchmark, as well as how to find individual cases of statistically significant successful attacks. The methods show promise at providing insights into the ability of an LLM to operate robustly in more realistic settings.
[ "large language model", "adversarial machine learning", "automatic red teaming" ]
https://openreview.net/pdf?id=00ezkB2iZf
https://openreview.net/forum?id=00ezkB2iZf
Al5ULDSosk
official_comment
1,732,583,176,623
NvznhEBuAw
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11424/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer Dsnm comment: We appreciate the reviewer’s thoughtful and constructive feedback on our manuscript. Below, we address each of the points raised and clarify aspects of our approach. ### **1. Single-Turn Attacks** We initially explored single-shot attacks as a baseline approach. For example, with GPT-4 achieving 88.5% accuracy on the MedQA benchmark, we created several modified datasets that added diagnostically irrelevant patient characteristics. These datasets included patients characterized by varying socioeconomic statuses (e.g., affluent or low-income) and different racial or ethnic groups (Asian, Black, Hispanic, Native American, White), while excluding questions where race was clinically relevant. Across these datasets, no statistically significant change in accuracy was observed, indicating that such single-turn perturbations were too “easy” for advanced models like GPT-4. These findings inspired MedFuzz's multi-turn approach. We can include these negative results in the appendix to demonstrate the progression of our method. ### **2. Expanding Statistical Validation** We strongly adhere to the conventional statistical approach of having the end user evaluate interesting results and then using significance tests to validate that these findings contain signal. Expanding significance testing to a broader set of results would necessitate multiple-comparisons corrections, which we leave for future work. Furthermore, ranking results based on p-values invites the risk of p-hacking, which we aim to avoid. ### **3. Faithfulness of Responses** The faithfulness analysis is not intended as a core contribution but rather as a supplementary finding to highlight that vulnerabilities revealed by MedFuzz cannot simply be detected by inspecting CoT explanations. We agree this distinction can be emphasized more clearly in the manuscript. ### **4. Comparison to Misinformation Attacks in Han et al. (2024)** MedFuzz fundamentally differs from the approach in Han et al. (2024). The attacks described in that work aim to poison target LLMs by injecting falsehoods during model updates, requiring access to gradients and training data. In contrast, MedFuzz does not poison; it *detects* “poison,” and does so without access to gradients or the data used for model updates. We will clarify this distinction in the manuscript to address the reviewer’s concern. ### **5. Guidelines for Expanding MedFuzz** MedFuzz’s approach is generalizable to any domain where benchmarks rely on performance statistics (e.g., accuracy) that are contingent on assumptions not robust in real-world settings. While our study focuses on medical datasets requiring clinical expertise, domain experts in other fields can evaluate MedFuzz outputs for their respective use cases. For medical benchmarks like MedMCQA and PubMedQA, the key challenge is identifying assumptions analogous to those violated in MedQA. We can provide broad guidelines for extending MedFuzz, such as focusing on domain-specific biases, assumptions, or oversights that simplify real-world complexity. ### **6. Ensuring Correct Answers in Fuzzed Questions** We rely on the attacker LLM’s high performance on MedQA to generate effective attacks while preserving the correct answer. Medically experienced users validate successful attacks by inspecting outputs and running significance tests. When the attacker fails to “fuzz” the question well, this is discovered during that human evaluation step. 
We plan for extensive human medical expert evaluation in future work. ### **7. Value of \(K\) in Algorithm 1** The ideal value of \(K\) (number of iterations) depends on the target model’s capabilities on a given benchmark. We will update the manuscript to suggest tuning \(K\) on a pilot subset of the data, increasing it incrementally until the marginal gains from additional iterations are no longer worth the computational expense (a small sketch of this tuning loop is included at the end of this response). ### **8. Smaller Models as Attackers** We believe that the attacker model must have reached human-level performance on the benchmark to identify effective attacks. This ensures that the attacker LLM can leverage its understanding of the benchmark and the correct answer to generate meaningful perturbations. Smaller models' effectiveness would be limited by their lower performance on the benchmark. Exploring this tradeoff is a promising direction for future work. ### **9. Redundancy in Algorithm 1** Thank you for pointing out the apparent duplication in Algorithm 1; we will clarify this in the revised manuscript. ### **Summary** We appreciate the reviewer’s positive assessment of the manuscript’s clarity, originality, quality, and significance. We have provided clarifications and plan to incorporate additional results and updates in the revised manuscript, including: 1. Negative results from single-shot attacks in the appendix. 2. Broader guidelines for applying MedFuzz to other domains. 3. Refinements to Algorithm 1 and expanded discussion on parameter tuning. We thank the reviewer for their valuable feedback.
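As noted in point 7, here is a small sketch of the pilot-based tuning loop for \(K\). The `run_attacks` callable (returning the attack success rate on the pilot subset at a given iteration budget) and the stopping threshold are hypothetical placeholders, not the interface of our released code.

```python
# Sketch: choose K on a pilot subset by increasing the iteration budget until
# the marginal gain in attack success rate no longer justifies the extra cost.
from typing import Callable, List


def tune_k(pilot_items: List[dict],
           run_attacks: Callable[[List[dict], int], float],
           k_max: int = 10,
           min_gain: float = 0.01) -> int:
    prev_rate = run_attacks(pilot_items, 1)
    for k in range(2, k_max + 1):
        rate = run_attacks(pilot_items, k)
        if rate - prev_rate < min_gain:
            return k - 1  # marginal gain too small; keep the previous budget
        prev_rate = rate
    return k_max


# Toy usage: a made-up success-rate curve that saturates around K=5.
toy_curve = {1: 0.10, 2: 0.18, 3: 0.24, 4: 0.27, 5: 0.285, 6: 0.286,
             7: 0.286, 8: 0.286, 9: 0.286, 10: 0.286}
print(tune_k([], lambda items, k: toy_curve[k]))  # -> 5
```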
00ezkB2iZf
MedFuzz: Exploring the Robustness of Large Language Models in Medical Question Answering
[]
Large language models (LLMs) have achieved impressive performance on medical question-answering benchmarks. However, high benchmark accuracy does not imply robust performance in real-world clinical settings. Medical question-answering benchmarks rely on assumptions consistent with quantifying LLM performance but that may not hold in the open world of the clinic. Yet LLMs learn broad knowledge that could help the LLM perform in practical conditions regardless of unrealistic assumptions in celebrated benchmarks. We seek to quantify how robust LLM medical question-answering benchmark performance is to violations of unrealistic benchmark assumptions. Specifically, we present an adversarial method that we call MedFuzz (for medical fuzzing). MedFuzz attempts to modify benchmark questions in ways aimed at confounding the LLM. We demonstrate the approach by targeting unrealistic assumptions about patient characteristics presented in the MedQA benchmark. Successful "attacks" modify a benchmark item in ways that would be unlikely to fool a medical expert but nonetheless "trick" the LLM into changing from a correct to an incorrect answer. Further, we present a non-parametric test for calculating the statistical significance of a successful attack. We show how to calculate "MedFuzzed" performance on a medical QA benchmark, as well as how to find individual cases of statistically significant successful attacks. The methods show promise at providing insights into the ability of an LLM to operate robustly in more realistic settings.
[ "large language model", "adversarial machine learning", "automatic red teaming" ]
https://openreview.net/pdf?id=00ezkB2iZf
https://openreview.net/forum?id=00ezkB2iZf
2XGhQ3OS4y
official_comment
1,732,633,177,998
Al5ULDSosk
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11424/Reviewer_Dsnm" ]
ICLR.cc/2025/Conference
2025
comment: Thank you for your detailed response! I shall wait for the revised paper to look at the edits made.
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
wqTNtVDwef
official_comment
1,732,591,950,077
9OQJoesINr
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Authors" ]
ICLR.cc/2025/Conference
2025
comment: Thank you, reviewer VQ9Y! We truly appreciate your kind words and your effort in reviewing our work.
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
wnsiUkDh00
official_comment
1,732,728,515,406
NEsxOTkkIV
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Authors" ]
ICLR.cc/2025/Conference
2025
comment: Thank you Reviewer c5nB! We're glad that you felt our additional experiments were well-designed + sound. We truly appreciate your effort in reviewing the paper and are grateful for your thoughtfulness in increasing your score.