| Column | Type | Range / Classes |
|---|---|---|
| forum_id | stringlengths | 8–20 |
| forum_title | stringlengths | 1–899 |
| forum_authors | sequencelengths | 0–174 |
| forum_abstract | stringlengths | 0–4.69k |
| forum_keywords | sequencelengths | 0–35 |
| forum_pdf_url | stringlengths | 38–50 |
| forum_url | stringlengths | 40–52 |
| note_id | stringlengths | 8–20 |
| note_type | stringclasses | 6 values |
| note_created | int64 | 1,360B–1,737B |
| note_replyto | stringlengths | 4–20 |
| note_readers | sequencelengths | 1–8 |
| note_signatures | sequencelengths | 1–2 |
| venue | stringclasses | 349 values |
| year | stringclasses | 12 values |
| note_text | stringlengths | 10–56.5k |
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
r8ZflFk3T7
official_review
1,730,852,147,119
00SnKBGTsz
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Reviewer_VQ9Y" ]
ICLR.cc/2025/Conference
2025
summary: This paper introduces Gym environments for data synthesis, framing the problem as sequential decision-making. In these environments, actions correspond to data-generation plans, and states represent the performance summary of a student model. The paper implements environments for three tasks: visual question answering (VQA), math, and code generation. Each environment offers three state representations: open-ended, skill-list, and skill-tree. Additionally, it proposes an LLM-based policy for data generation. Experimental results demonstrate that the LLM can make strategically effective choices based on environment-state information.
soundness: 4
presentation: 3
contribution: 4
strengths:
- Tackles a timely and interesting problem.
- Provides the necessary infrastructure for the community to study the problem, opening up opportunities for future contributions.
- Considers various data generation strategies.
- Well-designed experiments which demonstrate the effectiveness of the proposed approaches and conduct insightful analyses.
weaknesses:
* The paper is currently dense and difficult to follow. The introduction includes excessive implementation details, which detract from providing a simple, high-level intuition. Using a specific task example to guide readers through the core concepts would make the paper more accessible.
* The paper focuses solely on the data generation plan rather than a full, end-to-end data generation process. It relies on a fixed, off-the-shelf data-generation engine that cannot be modified. The authors should acknowledge this limitation and discuss potential strategies for overcoming it.
* The quality of the data-generation engine can impact both student performance and the data-generation plan itself. Current approaches do not take the data-generation engine's capabilities into account in the design of the policy or the evaluation of the student. For instance, poor student performance might result from the engine producing low-quality data on a specific skill, which could prompt the policy to avoid querying the engine for that skill.
* The learning procedure can be resource-intensive. The authors should report the time, cost, and computing resources used for the experiments.
questions:
- Is it possible to implement a random-policy baseline where you randomly choose a set of (naturally collected) datapoints from a data pool? The no-state baseline has the flavor of this baseline, but LLM-informed decisions could be biased.
- Is it possible to compare this approach with active learning, in which instead of doing data generation, you do data *selection* and ask models to generate only synthetic labels, but not synthetic inputs?
flag_for_ethics_review: ['No ethics review needed.']
rating: 8
confidence: 4
code_of_conduct: Yes
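For orientation, the abstract and the review above both describe DataEnvGym's closed loop: a data generation policy plans data from student-state feedback, a data generation engine materializes the plan, and the student is retrained and re-evaluated. The following is only a rough, hypothetical illustration of that loop; every name (`policy`, `engine`, `train_fn`, `evaluate_fn`) is a placeholder and not the released DataEnvGym API.

```python
# Hedged sketch of the iterative teacher-student loop described in the abstract.
# Every name here is a hypothetical placeholder, not the released DataEnvGym API.

def teaching_loop(policy, engine, train_fn, evaluate_fn, student, num_iterations):
    """Iteratively generate data, retrain the student, and report its weaknesses back."""
    feedback = evaluate_fn(student)              # state: errors or weak skills
    for _ in range(num_iterations):
        plan = policy.propose(feedback)          # action: a data generation plan
        new_data = engine.generate(plan)         # engine turns the plan into training examples
        student = train_fn(student, new_data)    # student is retrained on generated data
        feedback = evaluate_fn(student)          # fresh feedback for the next iteration
    return student, feedback
```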
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
pOR42YNLtU
official_comment
1,732,143,304,296
h1qvpjhRP3
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer VQ9Y (Part 2/2)
comment: **Q1: We implement a random data selection baseline.** Data selection is not possible in general as many domains lack a data source from which to easily sample data (e.g., LiveCodeBench). Therefore, we implement it for MATH, as a standard training set is available. The random selection baseline cannot improve a student when sampling an equivalent amount of data as the data generation baseline. The results are shown below. We hypothesize that the random natural data selection baseline cannot improve a student like Gemma2-2B because easily accessible data pools (e.g., the training set for MATH) have already been exhausted by extensive LLM post-training [B §4.2, C §4] and so do not add new information.

| | Before Training | Random Data Selection | Data Generation (Without State) | Data Generation (With State) |
|---|---|---|---|---|
| MATH Accuracy | 15.78 | 15.26 | 19.78 | **23.44** |

**Q2: We implement a data selection agent.** We implement data selection using prototypicality scores [A], which are standard for active learning. Similar to the random selection baseline, it is hard to improve a well-post-trained LLM like Llama3 or Gemma2 by using readily available data pools — it is much easier to improve them using generated data. Even using the full training dataset cannot improve the student. This motivates our choice to tackle data generation rather than data selection. The training of open-source frontier models (Llama3, for example) includes significant post-training that subsumes publicly available data sources [B §4.2, C §4], making it hard to improve them with any amount of already existing data.

| | Before Training | Data Selection (Prototypicality) | Full Training Dataset | Data Generation (Open-Ended) |
|---|---|---|---|---|
| MATH Accuracy | 15.78 | 16.01 | 15.18 | **23.44** |

[A] Sorscher et al., Beyond neural scaling laws: beating power law scaling via data pruning, NeurIPS 2022 Outstanding Paper Award
[B] Llama Team, AI @ Meta, The Llama 3 Herd of Models, arXiv 2024
[C] Gemma Team, Google DeepMind, Gemma 2: Improving Open Language Models at a Practical Size, arXiv 2024
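For readers unfamiliar with prototypicality-based selection [A], the usual recipe (as in Sorscher et al.) clusters example embeddings and ranks each example by its distance to its cluster centroid. The snippet below is only a rough sketch under that assumption; it presumes precomputed embeddings and is not the authors' exact implementation, and `select_prototypical` and its parameters are hypothetical names.

```python
# Hedged sketch of prototypicality-scored data selection, in the spirit of Sorscher et al. [A].
# Assumes precomputed example embeddings; not the authors' exact implementation.
import numpy as np
from sklearn.cluster import KMeans

def select_prototypical(embeddings: np.ndarray, budget: int, n_clusters: int = 10, seed: int = 0):
    """Return indices of the `budget` most prototypical examples (closest to their cluster centroid)."""
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(embeddings)
    centroids = km.cluster_centers_[km.labels_]            # centroid assigned to each example
    dist = np.linalg.norm(embeddings - centroids, axis=1)  # lower distance = more prototypical
    return np.argsort(dist)[:budget]
```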
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
m1iUqPHpwk
official_comment
1,732,143,509,313
i3QgWgrJff
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer rVo8
comment: Thank you for stating that we make a good contribution to automated data generation and quality feedback!

**W1: We clarify that our focus is on synthetic data generation for training purposes.** We have added and highlighted text to the introduction in L050-051 in the revised PDF that clarifies our focus is on data generation for training purposes.

**W2: Related works.** Thanks for providing the additional related works that fit into our section focused on simulations/games with a fixed set of actions and skills. We have cited them and discussed them in Section 4 under the paragraph “Training Environment Generation” (L503-506 in the revised PDF).

**W3: We truncate experiments when performance decreases.** This is not a typo — we truncate when performance begins to saturate. This is a choice we made to speed up experiments, but it is certainly possible to run environments for longer.

**W4: We add repeated runs of experiments to characterize variance.** We repeated the open-ended experiments 3x for each domain. The open-ended environment is the least constrained, so we expect the highest variance here. The overall improvement is higher than the variance in each case.

| | Multimodal (GQA) | MATH | LiveCodeBench |
|---|---|---|---|
| Before Teaching | 44.18 | 15.78 | 16.50 |
| Open-Ended (3 runs) | 53.25 $\pm$ 1.97 | 21.55 $\pm$ 1.42 | 18.55 $\pm$ 0.27 |

**Q1: How does the performance of the data generation agents change over longer interactions?** It differs by environment. In the MATH and LiveCodeBench environments, the performance saturates with increased training. In the GQA environment, the performance seems to continue increasing up to 56%, but becomes more unstable (fluctuations up and down).

**Q2: Is the total training data fixed in each allocation, or does it vary dynamically?** We set a maximum budget for an experiment and terminate the experiment when the budget is exhausted or the student saturates, whichever happens earlier. It is up to the policy to decide how it wants to allocate the budget across skills and iterations. In the baseline policies, we leave this decision up to the LLM, except for the skill-tree environment, where we allocate data uniformly across skills and subskills because it is a reasonable baseline.
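To make the Q2 answer concrete, here is a minimal, hypothetical sketch of the uniform allocation described for the skill-tree baseline: an even split of the per-iteration budget over skills, with any remainder spread over the first few. The function name and signature are assumptions for illustration, not the DataEnvGym implementation; the learned policies in the paper leave this decision to the LLM.

```python
# Hedged sketch of uniform budget allocation across skills (the skill-tree baseline in Q2).
# Hypothetical helper, not the DataEnvGym implementation.

def uniform_allocation(skills: list[str], budget: int) -> dict[str, int]:
    """Split an integer data budget evenly over skills; give any remainder to the first skills."""
    base, remainder = divmod(budget, len(skills))
    return {skill: base + (1 if i < remainder else 0) for i, skill in enumerate(skills)}

# Example: uniform_allocation(["algebra", "geometry", "counting"], 100)
# -> {"algebra": 34, "geometry": 33, "counting": 33}
```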
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
la5jPwJU4g
official_comment
1,732,475,842,141
H2h2K6a8x5
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Authors" ]
ICLR.cc/2025/Conference
2025
title: Followup to Reviewer rVo8
comment: Thanks for the great suggestions and continued engagement! We've redone Fig. 5 on training dynamics in the style you've suggested:
1. We've run extended experiments instead of truncating them.
2. We now show entire training curves to illustrate what the longer training progression would look like and add visual elements to show where the truncation occurred.
3. We've added error bands based on our re-runs to characterize variance.
4. We've rephrased Lines 460-465 as per your suggestions.
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
i3QgWgrJff
official_review
1,730,687,830,645
00SnKBGTsz
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Reviewer_rVo8" ]
ICLR.cc/2025/Conference
2025
summary: The paper presents DataEnvGym, a framework designed to simulate environments for data generation agents. These agents iteratively generate synthetic data to address weaknesses in student models, aiming to improve model performance across tasks like mathematics, programming, and visual question answering. DataEnvGym provides various structured environments (Open-Ended, Skill-List, and Skill-Tree) where data generation agents create targeted training examples based on feedback from the student model, offering a dynamic approach to automated model improvement.
soundness: 2
presentation: 3
contribution: 3
strengths:
- Good contribution to automated data generation for model improvement.
- Clearly written with structured sections explaining each environment type and experimental results.
weaknesses:
- The paper should clarify early on that the focus is on synthetic data generation for training purposes, as this underpins the motivation for the approach.
- Important related works on algorithms using feedback from training to generate the next training environments are missing [1, 2, 3, 4].
- Lines 460-465: I believe there is a typo whereby it says that “each experiment is truncated once the performance consistently decreases for multiple iterations”. Should it be “increases”?
- Repeated runs of experiments with confidence intervals would be valuable, especially since the variance of performance seems to be very high.

[1] Sudhakaran, S., González-Duque, M., Freiberger, M., Glanois, C., Najarro, E., & Risi, S. (2024). MarioGPT: Open-ended text2level generation through large language models. Advances in Neural Information Processing Systems, 36.
[2] Todd, G., Earle, S., Nasir, M. U., Green, M. C., & Togelius, J. (2023, April). Level generation through large language models. In Proceedings of the 18th International Conference on the Foundations of Digital Games (pp. 1-8).
[3] Zhang, J., Lehman, J., Stanley, K., & Clune, J. (2023). OMNI: Open-endedness via models of human notions of interestingness. arXiv preprint arXiv:2306.01711.
[4] Faldor, M., Zhang, J., Cully, A., & Clune, J. (2024). OMNI-EPIC: Open-endedness via Models of human Notions of Interestingness with Environments Programmed in Code. arXiv preprint arXiv:2405.15568.
questions:
- How does the performance of the data generation agents change over longer iterations? The paper truncates experiments when performance increases, but it would be insightful to explore whether performance plateaus or continuously increases over extended training.
- Is the total training data allocation fixed in each environment, or does it vary dynamically? The methodology mentions rebalancing but lacks clarity on how these allocations adjust adaptively based on feedback.
flag_for_ethics_review: ['No ethics review needed.']
rating: 6
confidence: 4
code_of_conduct: Yes
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
hWat8aFBRw
official_comment
1,732,707,243,013
66buacQmRe
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Reviewer_wuGW" ]
ICLR.cc/2025/Conference
2025
comment: Thank you so much for looking into my feedback and working on it. I am in the process of reviewing the updated manuscript and will let you know, but so far you have pretty much addressed my concerns. Cheers!
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
h1qvpjhRP3
official_comment
1,732,143,067,624
r8ZflFk3T7
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer VQ9Y (Part 1/2)
comment: Thank you for the quality feedback and for noticing our contributions to open-source infrastructure!

**W1: We have added a new Figure 6 in Appendix B (L864-884), guiding the reader through a concrete task example.** The figure walks a reader through a round of data generation for the multimodal task using GQA as an example.

**W2-1: The data generation engine is swappable and not required for all domains.** The data generation engine is only fixed for the multimodal setting, where it relies on an off-the-shelf T2I model to generate images. For the code generation and math settings, the data generation policy directly produces the data in an end-to-end manner.

**W2-2: What strategies exist for modifying the data generation engine?** Our framework easily allows updating the data generation agent (policy + engine) when using an open-source LLM. For example, we could update the parameters of the data generation policy and data generation engine using experiences from multiple rounds of data generation through reinforcement learning.

**W3: How can the teacher take into account the weaknesses of the data generation engine or itself?** We have designed DataEnvGym as an RL-style setting so the policy can learn over subsequent iterations what the data generation engine's capabilities are. Our position is that the capabilities of the data generation engine should be discovered by the policy through a process of experimentation. The Skill-Tree environment explicitly provides a mechanism for this. Our framework supports policy learning of what the teaching capabilities of the agent are. For example, after allocating data for a skill and observing a lack of improvement, the policy can infer that the data generation engine has trouble generating data for the skill and avoid data generation for that skill in subsequent iterations.

**W4: Experiments can be run for less than $1 and in under 10 hours on a single GPU.** On a single A6000, the total training time for our most computationally expensive setting (multimodal) is 6 hours, or about 1.5 hours/iteration. Most other settings are much faster. Environments are fully parallelizable using Ray and can be scaled up to multiple GPUs and even multiple nodes. We've added a table showing the token and time costs (Appendix B.4, Table 4, L918-931), which we summarize below. Additionally, we conduct experiments with a cheaper teacher, GPT-4o-mini, showing that it can be used as a cheaper alternative to GPT-4o. We've added these results in Appendix B.4 (Table 5, L972-986) of the revised PDF.

| Domain | Environment | Num Tokens | $ Cost (GPT-4o-mini) | $ Cost (GPT-4o) | GPU Minutes / Iteration |
|---|---|---|---|---|---|
| Math | Open-Ended | 173234 | 0.10 | 1.73 | 24 |
| Math | Skill-List | 318528 | 0.19 | 3.19 | 24 |
| Math | Skill-Tree | 355033 | 0.21 | 3.55 | 16 |
| Coding | Open-Ended | 279304 | 0.17 | 2.79 | 16 |
| Coding | Skill-List | 497787 | 0.30 | 4.98 | 16 |
| Coding | Skill-Tree | 967610 | 0.58 | 9.68 | 16 |
| Multimodal | Open-Ended | 25073 | 0.02 | 0.25 | 37 |
| Multimodal | Skill-List | 82419 | 0.05 | 0.82 | 134 |
| Multimodal | Skill-Tree | 33991 | 0.02 | 0.34 | 78 |
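As a purely hypothetical illustration of the W3 mechanism (a policy discounting skills where generated data did not help), one could track per-skill accuracy deltas across iterations and drop skills whose improvement stays below a threshold. This is only a sketch of the idea; `skills_worth_teaching` and `min_gain` are invented names, not the policy implemented in the paper.

```python
# Hedged sketch of the W3 idea: deprioritize skills where generated data yielded no improvement.
# Hypothetical illustration only, not the policy implemented in the paper.

def skills_worth_teaching(prev_acc: dict[str, float], curr_acc: dict[str, float],
                          min_gain: float = 0.5) -> list[str]:
    """Keep skills whose accuracy improved by at least `min_gain` points since the last iteration."""
    return [skill for skill, acc in curr_acc.items()
            if acc - prev_acc.get(skill, 0.0) >= min_gain]

# Example: if "counting" went 40.0 -> 40.1 while "spatial reasoning" went 38.0 -> 43.0,
# only "spatial reasoning" would be kept for the next round of data generation.
```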
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
ZqwAYtcmhv
official_comment
1,732,563,943,240
DjVKsUoFN2
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Authors" ]
ICLR.cc/2025/Conference
2025
comment: Thank you, reviewer rVo8! We sincerely appreciate your thoughtfulness and engagement.
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
NEsxOTkkIV
official_comment
1,732,651,200,753
Aq2tBtB0lt
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Reviewer_c5nB" ]
ICLR.cc/2025/Conference
2025
comment: Dear authors,

Thank you for conducting the additional experiments and incorporating the results and findings. This addresses three of my major concerns:

W1: Figure 12 demonstrates that students' performance improves when evaluated on test sets that incorporate the newly generated data points. Although the evaluation was only conducted on the multimodal and MATH environments, and not on the coding environment due to technical difficulties, I believe this set of experiments is well-designed and sound.

Q1: Appendix E (Table 6) shows that the student performance increase is due to the added data rather than to initially insufficient training. The result is valid and sound.

Q2: The added figures (Figure 10), combined with the existing ones, provide good coverage of both qualitative and quantitative results for skill discovery.

Taking these into account, I have raised the score to 8.
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
H2h2K6a8x5
official_comment
1,732,356,298,896
i3QgWgrJff
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Reviewer_rVo8" ]
ICLR.cc/2025/Conference
2025
comment: I thank the authors for the new experiments and clarifications.

> This is not a typo — we truncate when performance begins to saturate. This is a choice we made to speed up experiments, but it is certainly possible to run environments for longer.

That does not mean that the performance decreases. "Decreases" means that the accuracy is dropping. Also, it is not clear in Figure 5 whether the performance increase has saturated.

> We repeated the open-ended experiments 3x for each domain. The open-ended environment is the least constrained, so we expect the highest variance here. The overall improvement is higher than the variance in each case.

Why not include it in Figure 5?

> It differs by environment. In the MATH and LiveCodeBench environments, the performance saturates with increased training. In the GQA environment, the performance seems to continue increasing up to 56%, but becomes more unstable (fluctuations up and down).

Given this, why not include the full training progression in Figure 5 instead of truncating it? Providing more clarification on the decision to truncate would be helpful. Alternatively, adding an indicator on the figure to show where the truncation occurred and illustrating what the longer training progression would look like could address this.
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
GMsjHLXdOx
official_review
1,730,714,354,937
00SnKBGTsz
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Reviewer_c5nB" ]
ICLR.cc/2025/Conference
2025
summary: This paper presents a modular system for automated data generation, designed to minimize the need for human annotations. The proposed approach employs a reinforcement learning-inspired methodology, decomposing the process into a sequence of action predictions (data generation policy) based on state information (feedback from model errors) in an iterative manner. The effectiveness of this approach is demonstrated through three diverse tasks, encompassing text, image, and code generation across different modalities.
soundness: 4
presentation: 3
contribution: 3
strengths:
This paper presents a novel and insightful perspective on the autonomous data generation problem, leveraging principles from reinforcement learning to conceptualize it as a sequential decision-making process. The authors provide a thorough explanation of this approach, the motivations behind it, and the underlying mechanics.
This paper proposes a modular framework/testbed that can be easily adapted to various tasks, showcasing its versatility and potential for widespread applicability.
The authors demonstrate the effectiveness of their approach through experiments on 3 tasks of multiple modalities, including text, image, and code generation, yielding promising early results.
weaknesses: The experiment part should be conducted more thoroughly: specifically, creating a test set that incorporates newly generated data points from the data generation agent and reporting evaluation results for each retrained model over successive iterations would provide more comprehensive insights into the system's performance.
questions: In the Experiments section, the authors mention that the baseline student model should not have been heavily post-trained so that there is room for further improvement. However, it would be beneficial to provide additional evidence and details to support the claim that the student's performance is improved due to the added data points rather than insufficient training. For instance, the training protocol involved a fixed 10-epoch training period; it remains unclear whether the model had reached convergence within this timeframe or if the introduction of new data points accelerated convergence. Further clarification on this aspect would enhance the overall validity of the results. Also, the results would be more sound if more quantitative and qualitative results for skill discovery were reported in this paper.
flag_for_ethics_review: ['No ethics review needed.']
details_of_ethics_concerns: N/A
rating: 8
confidence: 3
code_of_conduct: Yes
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
DjVKsUoFN2
official_comment
1,732,507,218,015
la5jPwJU4g
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Reviewer_rVo8" ]
ICLR.cc/2025/Conference
2025
comment: Thank you for the additional experiments and explanation. I have updated my score accordingly.
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
C3MhCuKhTf
official_comment
1,732,563,878,904
Aq2tBtB0lt
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Authors" ]
ICLR.cc/2025/Conference
2025
title: Follow up to reviewer c5nB
comment: Given that there is only one day remaining in the rebuttal period, **we wanted to gently check in on whether our rebuttal addressed all your questions or whether there are any remaining questions we can address.** We've added experiments to address your questions about (a) test sets that incorporate generated data and (b) whether added data or training is responsible for performance increases, and we also add more qualitative results on skill discovery. We hope that these additional results and answers will allow you to revisit your score — otherwise, we are happy to engage further!
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
Bgr7Ol90m7
official_comment
1,732,317,066,420
i3QgWgrJff
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Authors" ]
ICLR.cc/2025/Conference
2025
comment: Thank you once again for your valuable feedback! We hope our response has addressed all of your questions and will allow you to revisit your score. We would be happy to engage further and address any additional questions you might have in the remaining few days of the discussion period.
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
Aq2tBtB0lt
official_comment
1,732,143,438,536
GMsjHLXdOx
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer c5nB comment: We’re glad you find DataEnvGym novel and insightful! **W1: The student improves on generated test sets over successive iterations.** Following your suggestion, we conducted experiments with generated test sets. We summarize the results/findings below and have added them to Appendix D (L1173-1182) and Figure 12 (L1188,1199) in our revised PDF. For each setting, we show the performance of the student on test sets that incorporate newly generated data points over successive iterations. Concretely, we evaluate the performance of a student from iteration n on a test set created from data generated in iteration n+1 (unseen training data). This is only easily possible in the multimodal and MATH environments, since accuracy in the coding environment is determined by unit tests, which we do not currently generate. In all cases, the student improves on the generated test sets over successive iterations, and accuracy on the generated test set is higher in the last iteration than in the first.

| Iteration | Accuracy (Generated Math Data) | Accuracy (Generated Multimodal Data) |
|---|---|---|
| 0 | 29.25 | 45.52 |
| 1 | 21.18 | 54.71 |
| 2 | 29.41 | 53.85 |
| 3 | 41.56 | 60.09 |
| 4 | 41.03 | 57.66 |
| 5 | 57.53 | N/A |
| 6 | 46.15 | N/A |
| 7 | 50 | N/A |
| 8 | 65.22 | N/A |
| 9 | 67.06 | N/A |

Note that the multimodal environments were only run for half the iterations of the mathematics environments. **Q1: The students' performance increases are due to the added data points, not merely more training.** To substantiate the claim that student performance increases because of the added data points rather than simply more training, we take a subset of the data and increase the number of epochs such that the student receives a fraction of the added data but an equivalent total amount of training as on the full data. For example, if a student is normally trained for 10 epochs with 1000 generated training examples, we take the data from the first data generation iteration (say it contains 200 training examples) and train an alternative student for $\frac{1000}{200}\times 10 = 50$ epochs to isolate the effect of the generated training data versus the added training epochs. In each case, training with less data but for more epochs produces significantly smaller improvements than training with more data for fewer epochs, showing that *data* is responsible for the increased performance rather than more training. In fact, extending training without additional data typically hurts performance; fresh data is essential. This highlights the importance of studying data generation as we do in our paper, as data generation is one of the few ways to get fresh data. We have added these results in Appendix E (Table 6) in L1184-1223 in the revised PDF.
| | Data | Epochs | Accuracy (GQA) |
|---|---|---|---|
| Before Teaching | - | - | 44.18 |
| Less Data / Longer Training | 20% | 15 | 42.79 |
| More Data / Standard Training | 100% | 3 | **47.9** |

| | Data | Epochs | Accuracy (MATH) |
|---|---|---|---|
| Before Teaching | - | - | 15.78 |
| Less Data / Longer Training | 10% | 30 | 13.98 |
| More Data / Standard Training | 100% | 3 | **23.44** |

| | Data | Epochs | Accuracy (LiveCodeBench) |
|---|---|---|---|
| Before Teaching | - | - | 16.5 |
| Less Data / Longer Training | 20% | 15 | 15 |
| More Data / Standard Training | 100% | 3 | **18.91** |

**Q2: We add more qualitative results for skill discovery.** Following your suggestion, we have added an additional figure showing a full list (Figure 10, L1115-1132 in the revised PDF) of discovered skills for MATH, GQA, and LiveCodeBench in the SKILL-LIST environments in Appendix C. We have also added another figure showing qualitative examples of skill-errors that were fixed by training on synthetic data in Appendix C, highlighting the utility of skills in our framework. In summary, we now have 5 figures showing qualitative examples of skill discovery and one quantitative analysis of skill discovery in Appendix C.
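As a concrete illustration of the epoch-equalization used in the Q1 ablation above, here is a minimal sketch (the function and variable names are illustrative, not taken from our code release) of how the compensating epoch count is computed so that the reduced-data student receives the same total training exposure as the full-data student:

```python
# Minimal sketch of the epoch-equalization in the Q1 ablation above.
# Names are illustrative assumptions, not the released implementation.

def compensating_epochs(full_data_size: int, subset_size: int, standard_epochs: int) -> int:
    """Epochs for a data subset so that total exposure matches full-data training."""
    return round(full_data_size / subset_size * standard_epochs)

# Example from the rebuttal: 1000 generated examples trained for 10 epochs,
# versus a 200-example subset from the first iteration -> 50 epochs.
assert compensating_epochs(1000, 200, 10) == 50
```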
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
9OQJoesINr
official_comment
1,732,565,271,093
pOR42YNLtU
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Reviewer_VQ9Y" ]
ICLR.cc/2025/Conference
2025
title: Thank you comment: Thanks for the tremendous effort put into the response. It addressed most of my concerns, so I raise the score to 8.
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
7XT4kLWV2f
official_review
1,730,472,742,428
00SnKBGTsz
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Reviewer_wuGW" ]
ICLR.cc/2025/Conference
2025
summary: This paper introduces DataEnvGym, a novel testbed of teacher environments for developing data generation agents that iteratively improve student models by generating targeted training data. DataEnvGym frames data generation as a sequential decision-making task where an agent, comprising a data generation policy and engine, interacts with an environment that provides feedback from a student model. The agent's goal is to improve student model performance by generating training data based on student feedback (errors or weak skills). DataEnvGym offers multiple instantiations of teacher environments across three levels of structure: open-ended, skill-list, and skill-tree, each with varying levels of scaffolding support. Experiments across text and image-based tasks (mathematics, programming, and visual question answering) demonstrate that example agents within DataEnvGym can iteratively improve student model performance. Furthermore, the authors analyze the impact of state information, environment structure, and skill discovery quality on agent performance and student learning. The paper concludes that DataEnvGym, with its modular design and support for diverse tasks and student models, provides a valuable platform for developing and evaluating data generation agents, engines, and feedback mechanisms for automated model improvement. The code and leaderboard will be publicly released. soundness: 3 presentation: 2 contribution: 3 strengths: Novel Problem: Automating data generation to improve models is a significant challenge with practical applications. This work directly addresses this problem with a novel approach. Well-Defined Framework: DataEnvGym is presented as a well-defined framework with clear components (trainer, evaluator, data generation policy, data generation engine) and different levels of structure (open-ended, skill-list, skill-tree). This structure makes the problem tractable and facilitates modular development and testing. Multiple Tasks and Domains: The inclusion of experiments across diverse tasks (mathematics, programming, visual question answering) and with different student models demonstrates the generalizability of the framework. Promising Results: The initial results showing improved student model performance across tasks and environments are encouraging and suggest the potential of this approach. The analysis of difficulty/rarity and training dynamics adds value. Open-Source Release: The commitment to publicly releasing the code and leaderboard promotes reproducibility and encourages further research in this area. weaknesses: Limited Evaluation of Agent Architectures: The focus is primarily on the environment itself, with less emphasis on the architecture and training of the data generation agents. While baseline agents are provided, more sophisticated agent designs (e.g., reinforcement learning agents, agents leveraging larger language models) and their systematic evaluation would significantly strengthen the paper. How do different agent architectures compare in terms of effectiveness and efficiency? Are there specific architectural choices that are particularly well-suited for this task? Over-Reliance on LLMs for Data Generation: While using LLMs for data generation is a reasonable starting point, it raises concerns about the quality and diversity of the generated data. 
Exploring alternative data generation methods (e.g., data augmentation techniques, programmatic data generation) and comparing their effectiveness with LLM-based generation would be valuable. How robust is the framework to the quality of the generated data? Limited Analysis of Skill Discovery Quality: While the paper briefly touches upon the impact of skill discovery quality, a more thorough investigation is needed. How does the quality of the discovered skills affect the performance of the data generation agents and the student models? What are the limitations of the current skill discovery method, and how can it be improved? Quantitative analysis of skill quality (e.g., measuring coherence, coverage, and relevance) would strengthen the paper. Lack of Comparison with Existing Methods: While related work on knowledge distillation and model weakness discovery is discussed, there's no direct comparison with existing methods for model improvement. How does DataEnvGym compare to techniques like curriculum learning or active learning in terms of effectiveness and efficiency? Including such comparisons would better contextualize the contributions and highlight the advantages of the proposed approach. Limited Discussion of Scalability: The experiments are conducted with relatively small datasets and models. How does DataEnvGym scale to larger datasets and more complex models? What are the computational challenges associated with training data generation agents in more realistic settings? Addressing these scalability concerns is crucial for practical applications. questions: Limited Evaluation of Agent Architectures: The paper primarily focuses on introducing the DataEnvGym environment, but the evaluation of data generation agents is limited to relatively simple baseline policies. Exploring more sophisticated agent architectures, such as reinforcement learning agents (e.g., using policy gradient methods, Q-learning) or agents incorporating larger language models for planning and decision-making (similar to the approaches used in Shimabucoro et al. (2024)), would substantially strengthen the paper. A systematic comparison of different agent architectures in terms of their effectiveness in improving student models, their sample efficiency, and their computational cost would provide valuable insights and contribute to a better understanding of the challenges and opportunities in automated data generation. Limited Analysis of Skill Discovery Quality: The paper briefly discusses the impact of oracle skills on student performance but doesn't delve deeply into the quality of the skills discovered by the proposed LLM-based method. A more thorough analysis is needed to understand the strengths and limitations of the skill discovery module. This could involve quantitative measures of skill quality, such as measuring their coherence, coverage, and relevance to the target task, or qualitative analysis by human experts. Investigating how the quality of the discovered skills affects the performance of the data generation agents and the resulting student models would strengthen the paper's contribution. Exploring alternative skill discovery methods (e.g., clustering-based approaches, topic modeling) and comparing their effectiveness with the proposed method would further enhance the analysis.
Lack of Comparison with Existing Methods: The paper positions DataEnvGym as a novel approach for model improvement, but it lacks a direct comparison with existing methods like curriculum learning (Bengio et al., 2009) or active learning (Settles, 2009). Evaluating how DataEnvGym compares to these established techniques in terms of student model performance, data efficiency, and computational cost would provide valuable context and highlight the advantages of the proposed framework. This would also clarify the specific niche and contribution of DataEnvGym within the broader landscape of model improvement techniques. Limited Discussion of Scalability: The experiments in the paper are conducted with relatively small datasets and models. It's essential to address the scalability of DataEnvGym to more realistic scenarios involving larger datasets, more complex models, and a broader range of skills. Discussing the computational challenges and potential optimizations for scaling the framework to more demanding settings would strengthen the paper's practical relevance. For instance, how can the computational cost of LLM-based data generation be reduced while maintaining data quality? How can the skill discovery and agent training processes be optimized for larger datasets? Addressing these questions would provide valuable insights for future research and practical applications. flag_for_ethics_review: ['Yes, Discrimination / bias / fairness concerns', 'Yes, Potentially harmful insights, methodologies and applications'] rating: 8 confidence: 4 code_of_conduct: Yes
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
66buacQmRe
official_comment
1,732,143,741,634
7XT4kLWV2f
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Authors" ]
ICLR.cc/2025/Conference
2025
title: Response to Reviewer wuGW comment: Thank you for recognizing the value of DataEnvGym and pointing out the “_novel problem_” we address as well as our “_well-defined framework_” and “_promising results_”. **W1/Q1: We include multiple agent architectures and experiment with two additional teacher LLMs**. Each environment requires a different agent architecture, so we have 3 in total. We also experiment with different teacher LLMs: GPT-4o (Table 2, L378-393) and GPT-4o-mini (Table 5, L972-986 in the revised PDF). **W2: Data is generated by several components working together.** In addition to generating data via LLM, we experiment with multimodal grounding datasets. In these cases, the data is generated by a text-to-image model. In all cases, the LLM is only one component of a pipeline that involves many modules, such as skill discovery and a data generation engine. For example, in the SKILL-TREE environment, the policies making decisions about which skills to generate data for are not LLMs and can be classical controllers. **W3/Q2: We have added additional analysis of the learned skills.** Following your suggestion, we have added an additional figure showing a full list (Figure 10, L1115-1132 in the revised PDF) of discovered skills for MATH, GQA, and LiveCodeBench in the SKILL-LIST environments in Appendix C. We have also added another figure showing qualitative examples of skill-errors that were fixed by training on synthetic data in Appendix C, highlighting the utility of skills in our framework. In summary, we now have 5 figures showing qualitative examples of skill discovery and one quantitative analysis of skill discovery in Appendix C. **W4/Q3: We compare with active learning.** We implement data selection using prototypicality scores [A], which are standard for active learning. Similar to the random selection baseline, it is hard to improve a well-post-trained LLM like Llama3 or Gemma2 using readily available data pools; it is much easier to improve them using generated data. Even using the full training dataset cannot improve the student. This motivates our choice to tackle data generation rather than data selection. The training of open-source frontier models (Llama3, for example) includes significant post-training that subsumes publicly available data sources [B, §4.2; C, §4], making it hard to improve them with any amount of already existing data.

| | Before Training | Data Selection (Prototypicality) | Full Training Dataset | Data Generation (Open-Ended) |
|---|---|---|---|---|
| MATH Accuracy | 15.78 | 16.01 | 15.18 | **23.44** |

**W5/Q4: DataEnvGym has been designed for scalability.** On a single A6000, the total training time for our most computationally expensive setting (multimodal) is 6h, or about 1.5h/iteration. Most other settings are much faster. Environments are fully parallelizable using Ray and can be scaled up to multiple GPUs and even multiple nodes. We’ve added a full accounting of token and GPU costs in Table 4, L918-931 of the revised PDF.

[A] Sorscher et al., Beyond neural scaling laws: beating power law scaling via data pruning, NeurIPS 2022 Outstanding Paper Award

[B] Llama Team, AI @ Meta, The Llama 3 Herd of Models, arXiv 2024

[C] Gemma Team, Google Deepmind, Gemma 2: Improving Open Language Models at a Practical Size, arXiv 2024
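As a brief illustration of the prototypicality-based selection [A] mentioned in W4/Q3, here is a rough sketch of one common variant (rank pool examples by distance to their assigned k-means centroid in an embedding space); the embedding model, cluster count, selection budget, and selection direction shown here are illustrative assumptions rather than the exact setup from our experiments:

```python
# Rough sketch of prototypicality-based data selection in the spirit of [A].
# Embeddings, cluster count, budget, and selection direction are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def prototypicality_scores(embeddings: np.ndarray, n_clusters: int = 10) -> np.ndarray:
    """Distance to the assigned k-means centroid; smaller = more prototypical."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)
    assigned_centroids = km.cluster_centers_[km.labels_]
    return np.linalg.norm(embeddings - assigned_centroids, axis=1)

# Keep the most prototypical examples within a fixed selection budget.
pool_embeddings = np.random.randn(1000, 64)  # placeholder embeddings for a data pool
scores = prototypicality_scores(pool_embeddings)
budget = 200
selected_indices = np.argsort(scores)[:budget]
```

In practice, the ranking direction (most vs. least prototypical) and the choice of embedding space matter considerably; see [A] for a discussion of these trade-offs.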
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
4CnQpVCYkF
official_comment
1,732,142,922,643
00SnKBGTsz
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Authors" ]
ICLR.cc/2025/Conference
2025
title: General Response comment: Reviewers believe we tackle “*a timely and interesting problem*” (VQ9Y) with a “*novel and insightful perspective on the autonomous data generation problem*” (c5nB), making a “*good contribution to automated data generation for model improvement*” (rVo8). The potential impact of our work in making a challenging problem accessible is noted by several reviewers: it provides “*necessary infrastructure for the community to study the problem*” (VQ9Y), its “*structure makes the problem tractable*” (wuGW), and it has “*potential for widespread applicability*” (c5nB). We show that experiments can be run in half a day with limited compute resources (1x A6000) for under $1 of OpenAI API credits, making it an accessible testbed for developing data generation agents. **We thank all reviewers for their valuable feedback and suggestions**. We have provided responses to all reviewer questions in the individual responses below and in the revised PDF (updated text is in blue).
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard.
[ "iterative data generation", "llm agent", "lifelong learning" ]
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
13mj0Rtn5W
official_comment
1,732,728,465,105
hWat8aFBRw
[ "everyone" ]
[ "ICLR.cc/2025/Conference/Submission11063/Authors" ]
ICLR.cc/2025/Conference
2025
comment: Thank you Reviewer wuGW for your feedback/engagement and positive appraisal of our work! We're glad our rebuttal was able to address your questions.
ziTWHPiYij
Blessing or a Curse. Discussing Security Concerns of Diagnostic Models in Radiological Assessment
[]
Radiology is increasingly adopting AI-based workflows, which provide promise but also introduce new security concerns. The goal of this research is to enhance the security of these workflows by evaluating the risks of data poisoning attacks using the Fast Gradient Sign Method (FGSM) and Carlini-Wagner (C\&W) techniques. The dataset utilized is from the 2017 RSNA Pediatric Bone Age Challenge. Detection methods commonly employed in financial fraud are evaluated to assess their effectiveness in this context. Knowledge distillation will also be explored as a defense mechanism against data poisoning, offering a potential mitigation strategy. By conducting these evaluations and proposing defenses, this research aims to contribute to the more robust deployment of AI systems in real-world radiology applications.
[ "Deep Learning", "computer vision", "radiology" ]
https://openreview.net/pdf?id=ziTWHPiYij
https://openreview.net/forum?id=ziTWHPiYij
qexymGgwPT
official_review
1,728,561,385,763
ziTWHPiYij
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission10/Reviewer_484T" ]
NLDL.org/2025/Conference
2025
title: Artificial Intelligence: A Blessing or a Curse. Discussing Security Concerns of Diagnostic Models in Radiological Assessment summary: This paper looks at the effect of poisoning the continuous training of an AI with data samples generated with two different techniques. strengths: The problem deserves analysis and attention. weaknesses: - The results are anecdotal and no firm conclusions can be drawn. - Continuous training of radiological AIs is far from reality, as such models will not receive regulatory clearance. - The paper is not written authoritatively. - The two attack models are simple. - Only one dataset has been used. confidence: 4 justification: See weaknesses
ziTWHPiYij
Blessing or a Curse. Discussing Security Concerns of Diagnostic Models in Radiological Assessment
[]
Radiology is increasingly adopting AI-based workflows, which provide promise but also introduce new security concerns. The goal of this research is to enhance the security of these workflows by evaluating the risks of data poisoning attacks using the Fast Gradient Sign Method (FGSM) and Carlini-Wagner (C\&W) techniques. The dataset utilized is from the 2017 RSNA Pediatric Bone Age Challenge. Detection methods commonly employed in financial fraud are evaluated to assess their effectiveness in this context. Knowledge distillation will also be explored as a defense mechanism against data poisoning, offering a potential mitigation strategy. By conducting these evaluations and proposing defenses, this research aims to contribute to the more robust deployment of AI systems in real-world radiology applications.
[ "Deep Learning", "computer vision", "radiology" ]
https://openreview.net/pdf?id=ziTWHPiYij
https://openreview.net/forum?id=ziTWHPiYij
nkoGWa32cf
official_review
1,728,515,451,038
ziTWHPiYij
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission10/Reviewer_CVvV" ]
NLDL.org/2025/Conference
2025
title: Discussion on vulnerability of YOLO model to data perturbations in radiology summary: The paper addresses the vulnerability of radiology AI to malicious data perturbations. Overall, the paper is more of a discussion than a study, but it does present a case study with two different artificially generated perturbations of radiology images and their impact on YOLO detection performance, as well as a method for detecting the data fraud. While the study addresses a valuable research question, it falls short of providing a thorough analysis or being useful for practical applications. strengths: The topic is timely and worth addressing: robustness to changes in the data domain, caused either by intrinsic shift, changes in sample characteristics, or malicious manipulation, is critical, as such changes threaten the trustworthiness of AI. weaknesses: The current study reads more like a discussion than a thorough analysis of the topic. The presented case study with a YOLO detector and two kinds of manipulations is not very convincing. The experimental part should be significantly strengthened. The article is not very well organized. It appears more useful as a discussion around the theme, but as such it is not well suited for this forum. confidence: 4 justification: Based on the experimental results, the paper does not provide very valuable information for practical use. As a discussion, it is somewhat valuable, but strengthening the experimental part is recommended.
ziTWHPiYij
Blessing or a Curse. Discussing Security Concerns of Diagnostic Models in Radiological Assessment
[]
Radiology is increasingly adopting AI-based workflows, which provide promise but also introduce new security concerns. The goal of this research is to enhance the security of these workflows by evaluating the risks of data poisoning attacks using the Fast Gradient Sign Method (FGSM) and Carlini-Wagner (C\&W) techniques. The dataset utilized is from the 2017 RSNA Pediatric Bone Age Challenge. Detection methods commonly employed in financial fraud are evaluated to assess their effectiveness in this context. Knowledge distillation will also be explored as a defense mechanism against data poisoning, offering a potential mitigation strategy. By conducting these evaluations and proposing defenses, this research aims to contribute to the more robust deployment of AI systems in real-world radiology applications.
[ "Deep Learning", "computer vision", "radiology" ]
https://openreview.net/pdf?id=ziTWHPiYij
https://openreview.net/forum?id=ziTWHPiYij
jNkfJB7PCC
official_review
1,728,500,180,292
ziTWHPiYij
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission10/Reviewer_5fNc" ]
NLDL.org/2025/Conference
2025
title: Review of Paper 10 summary: This paper analyses the security risks of AI in radiological assessment. Specifically, the authors evaluate the risks of data poisoning attacks using the Fast Gradient Sign Method (FGSM) and the Carlini-Wagner (C&W) method on a YOLO model. Further, attack detection using Benford's Law and mitigation via defensive distillation are also analysed. strengths: * The paper targets an important field of research, i.e., analysing the risks of AI in healthcare diagnostics. * The experimentation for attack detection as well as defense is sufficient to understand the impact of both for the specific model (YOLO) and the dataset (RSNA Pediatric Bone Age Challenge) used in the work. weaknesses: * The structure and writing of the paper are ambiguous and difficult to follow. For example, the background section contains the methodology used in the work: Section 2.1 is basically a reference to the YOLO model, which is a method used in this work, rather than background, and Sections 2.2-2.5 likewise describe the methods used in this work but fail to review related work. * The motivation behind using models such as YOLO, as well as FGSM and C&W, is not described in the paper. * There are several errors in the text, for example: 1) "the" written as "he" in line 22, 2) defining abbreviations multiple times, such as FGSM in lines 311 and 367, 3) redefining the C&W abbreviation as CW, 4) not referring to the figures when discussing them in the text (e.g., line 253). * Why are 500 and 1000 the numbers of adversarial examples generated for C&W and FGSM, respectively? * As pointed out by the authors, the YOLO model has only been trained for 10 epochs, which might not be sufficient to evaluate the model against these attacks and defenses. * All experiments are based on only one dataset and one model. More experimentation on additional datasets and models would lead to more conclusive results. confidence: 4 justification: The idea and methods used in the work are not well motivated by the authors, and the experiments are limited to a single dataset and model. The work also needs restructuring and rewriting.
ziTWHPiYij
Blessing or a Curse. Discussing Security Concerns of Diagnostic Models in Radiological Assessment
[]
Radiology is increasingly adopting AI-based workflows, which provide promise but also introduce new security concerns. The goal of this research is to enhance the security of these workflows by evaluating the risks of data poisoning attacks using the Fast Gradient Sign Method (FGSM) and Carlini-Wagner (C\&W) techniques. The dataset utilized is from the 2017 RSNA Pediatric Bone Age Challenge. Detection methods commonly employed in financial fraud are evaluated to assess their effectiveness in this context. Knowledge distillation will also be explored as a defense mechanism against data poisoning, offering a potential mitigation strategy. By conducting these evaluations and proposing defenses, this research aims to contribute to the more robust deployment of AI systems in real-world radiology applications.
[ "Deep Learning", "computer vision", "radiology" ]
https://openreview.net/pdf?id=ziTWHPiYij
https://openreview.net/forum?id=ziTWHPiYij
DY0Qpo02B5
meta_review
1,730,802,504,696
ziTWHPiYij
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission10/Area_Chair_ZZWh" ]
NLDL.org/2025/Conference
2025
metareview: The submitted paper proposes to study several mitigation methods for data poisoning in the context of medical imaging. The authors make the following claims in their introduction:
- investigate adversarial attacks, focusing on FGSM and C&W attacks in the context of x-ray image classification and GI image segmentation
- better understand the applicability of Benford’s Law to medical imaging
- make use of distillation and study its impact on adversarial attacks

As pointed out by the reviewers, these claims are not backed by the experiments and analyses performed in the paper. The results regarding Benford's law are incomplete: the original digit distribution is missing, rendering conclusions about the usability of this method questionable. The results regarding distillation are inconclusive at best, probably due to the very short training. The choice of the methods studied is also questionable according to the reviewers, as more sophisticated attacks exist. In addition, the bibliography is missing from all revisions of the paper. While the subject of the paper is of interest, there are too many structural and experimental issues for the paper to be accepted; I therefore recommend its rejection. recommendation: Reject suggested_changes_to_the_recommendation: 2: I'm certain of the recommendation. It should not be changed confidence: 5: The area chair is absolutely certain
ziTWHPiYij
Blessing or a Curse. Discussing Security Concerns of Diagnostic Models in Radiological Assessment
[]
Radiology is increasingly adopting AI-based workflows, which provide promise but also introduce new security concerns. The goal of this research is to enhance the security of these workflows by evaluating the risks of data poisoning attacks using the Fast Gradient Sign Method (FGSM) and Carlini-Wagner (C\&W) techniques. The dataset utilized is from the 2017 RSNA Pediatric Bone Age Challenge. Detection methods commonly employed in financial fraud are evaluated to assess their effectiveness in this context. Knowledge distillation will also be explored as a defense mechanism against data poisoning, offering a potential mitigation strategy. By conducting these evaluations and proposing defenses, this research aims to contribute to the more robust deployment of AI systems in real-world radiology applications.
[ "Deep Learning", "computer vision", "radiology" ]
https://openreview.net/pdf?id=ziTWHPiYij
https://openreview.net/forum?id=ziTWHPiYij
2Rkdq6cz7Q
decision
1,730,901,554,555
ziTWHPiYij
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Reject
x5e9iP8K8q
Human Aligned Reward Modeling for Automated Transfer Function Generation of 3D Rendering of Medical Image Data
[]
In recent years, the quality of medical image data, such as computed tomography or magnetic resonance tomography, has continued to improve and the resolution and detection of the smallest structures has become increasingly accurate. Along with these developments, new techniques for three-dimensional visualization using volume rendering techniques are emerging, enabling extremely realistic visualization of medical images. This helps to improve patient communication, diagnosis, and treatment planning. An extremely critical step in the development of a realistic rendering is the design of a suitable transfer function. However, this requires a high level of experience and manual fine-tuning to the given image data. To automatize this process, we propose to train a reinforcement learning agent that extracts a two-dimensional transfer function from the given joint histograms of the image data. The focus of this study is primarily on the development of a suitable reward model, which is critical for the reinforcement learning framework, incorporating human feedback.
[ "Direct Volume Rendering", "Transfer Function", "Reinforcement Learning from Human Feedback" ]
https://openreview.net/pdf?id=x5e9iP8K8q
https://openreview.net/forum?id=x5e9iP8K8q
vJD8oG4BvN
decision
1,730,901,554,866
x5e9iP8K8q
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Reject
x5e9iP8K8q
Human Aligned Reward Modeling for Automated Transfer Function Generation of 3D Rendering of Medical Image Data
[]
In recent years, the quality of medical image data, such as computed tomography or magnetic resonance tomography, has continued to improve and the resolution and detection of the smallest structures has become increasingly accurate. Along with these developments, new techniques for three-dimensional visualization using volume rendering techniques are emerging, enabling extremely realistic visualization of medical images. This helps to improve patient communication, diagnosis, and treatment planning. An extremely critical step in the development of a realistic rendering is the design of a suitable transfer function. However, this requires a high level of experience and manual fine-tuning to the given image data. To automatize this process, we propose to train a reinforcement learning agent that extracts a two-dimensional transfer function from the given joint histograms of the image data. The focus of this study is primarily on the development of a suitable reward model, which is critical for the reinforcement learning framework, incorporating human feedback.
[ "Direct Volume Rendering", "Transfer Function", "Reinforcement Learning from Human Feedback" ]
https://openreview.net/pdf?id=x5e9iP8K8q
https://openreview.net/forum?id=x5e9iP8K8q
rpWIVVAGIQ
official_review
1,728,488,375,823
x5e9iP8K8q
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission15/Reviewer_7RXT" ]
NLDL.org/2025/Conference
2025
title: The paper discusses an innovative approach to automate the design of transfer functions (TF) for direct volume rendering (DVR) of medical images like CT scans. summary: In this paper, the authors focus on carrying out RLHF in the traditional sense for medical image data. They learn a reward model from human feedback to address the complexities of manually designing transfer functions for direct volume rendering of medical images, such as CT scans. This learned reward model is then used to train a reinforcement learning agent, which is capable of automatically generating 2D transfer functions that meet visual expectations and requirements. strengths: 1) The paper is an interesting use case for applied research, i.e., the use of RLHF for the automated generation of TFs, which is novel. This approach can potentially reduce the manual effort involved in TF design. 2) The authors detail the entire pipeline, such as the reward model training and preference collection, which gives a clear description of the methods carried out. Figure 1 is also a great visual to show how the method integrates into the standard RLHF paradigm. 3) The paper also compares how the reward prediction is affected by different loss functions, i.e., cross-entropy vs. NLL loss, which shows the reliability of the results. weaknesses: 1) It is unclear whether there are existing baselines that already tackle the TF generation process. The paper could be strengthened by including comparisons to such methods if they exist in the literature. 2) Have any experiments been carried out on datasets other than the CBCT ones, to see whether such results generalize to other CT scans (beyond head scans)? This would be interesting since it is an applied project and would show the extent of the practicality of the approach. 3) The authors seem to have a plan for future work, i.e., training the reward model on a broader range of scenes and actions, and potentially incorporating more complex feature extractors like Variational Autoencoders or Vision Transformers to refine the model's ability to generalize across different medical imaging scenarios. These additions would make the work more robust (their absence is one of the minor weaknesses of the current draft). confidence: 4 justification: Overall, this is interesting work applying RLHF to learn reward functions from preferences on medical imaging data such as CT scans, and it is a valuable addition of applied work in the domain. final_rebuttal_confidence: 4 final_rebuttal_justification: Would have liked the authors to respond to the comments and provide more justification for the other review comments as well (these remain open questions for me)
x5e9iP8K8q
Human Aligned Reward Modeling for Automated Transfer Function Generation of 3D Rendering of Medical Image Data
[]
In recent years, the quality of medical image data, such as computed tomography or magnetic resonance tomography, has continued to improve and the resolution and detection of the smallest structures has become increasingly accurate. Along with these developments, new techniques for three-dimensional visualization using volume rendering techniques are emerging, enabling extremely realistic visualization of medical images. This helps to improve patient communication, diagnosis, and treatment planning. An extremely critical step in the development of a realistic rendering is the design of a suitable transfer function. However, this requires a high level of experience and manual fine-tuning to the given image data. To automatize this process, we propose to train a reinforcement learning agent that extracts a two-dimensional transfer function from the given joint histograms of the image data. The focus of this study is primarily on the development of a suitable reward model, which is critical for the reinforcement learning framework, incorporating human feedback.
[ "Direct Volume Rendering", "Transfer Function", "Reinforcement Learning from Human Feedback" ]
https://openreview.net/pdf?id=x5e9iP8K8q
https://openreview.net/forum?id=x5e9iP8K8q
gWnTyAhOJs
meta_review
1,730,492,341,784
x5e9iP8K8q
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission15/Area_Chair_pJB1" ]
NLDL.org/2025/Conference
2025
metareview: The reviewers agree that the problem of learning an RL-based reward model for a 2D transfer function is relevant. There is, however, also agreement that the paper does not sufficiently relate to the state of the art. The description of related work is missing, there is no comparison to other methods, and the description of how the data labels were created is insufficient. Further, results are shown on one type of data, and therefore it is difficult to tell how the method would generalize. Finally, the concerns raised by the reviewers were not sufficiently addressed in the rebuttal. Since the reviewers agree on their concerns, the recommendation is to reject the paper. recommendation: Reject suggested_changes_to_the_recommendation: 2: I'm certain of the recommendation. It should not be changed confidence: 5: The area chair is absolutely certain
x5e9iP8K8q
Human Aligned Reward Modeling for Automated Transfer Function Generation of 3D Rendering of Medical Image Data
[]
In recent years, the quality of medical image data, such as computed tomography or magnetic resonance tomography, has continued to improve and the resolution and detection of the smallest structures has become increasingly accurate. Along with these developments, new techniques for three-dimensional visualization using volume rendering techniques are emerging, enabling extremely realistic visualization of medical images. This helps to improve patient communication, diagnosis, and treatment planning. An extremely critical step in the development of a realistic rendering is the design of a suitable transfer function. However, this requires a high level of experience and manual fine-tuning to the given image data. To automatize this process, we propose to train a reinforcement learning agent that extracts a two-dimensional transfer function from the given joint histograms of the image data. The focus of this study is primarily on the development of a suitable reward model, which is critical for the reinforcement learning framework, incorporating human feedback.
[ "Direct Volume Rendering", "Transfer Function", "Reinforcement Learning from Human Feedback" ]
https://openreview.net/pdf?id=x5e9iP8K8q
https://openreview.net/forum?id=x5e9iP8K8q
gOSjYCIT3i
official_review
1,727,895,607,384
x5e9iP8K8q
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission15/Reviewer_H1t4" ]
NLDL.org/2025/Conference
2025
title: General Review summary: The work focuses on developing an advanced reward model to leverage RF studies. The concept is applied to 3D medical image rendering, where an automatic transfer function is proposed, reducing the dependence on human agents. The authors tested their method on a proprietary dataset using a simple convolutional architecture and evaluated two loss functions. They visually inspected the rendered images to assess the trade-off between reward level and quality. strengths: The paper suggests a way to automate the creation of transfer functions using reinforcement learning. This method can benefit the RF area and speed up other fields, such as the medical field. Overall, the text is well-written, clear, and straightforward. The images are self-explanatory. The dataset is original. weaknesses: It is unclear how this work compares to others of its kind. In other words, it would be interesting to benchmark it against similar approaches. Furthermore, are there other datasets to which the method could be applied to test its generalization capacity? The novelty should be compared with the state of the art. In addition, the following minor points deserve attention: 1 - I suggest shortening the title to: "Reward Modeling for 3D Medical Image Rendering Transfer Functions" 2 - In the abstract, there are abbreviations without the full terms. 3 - The authors compared different loss functions, but what is the rationale behind choosing the CNN architecture (2D or 3D)? Were others considered, such as Transformers? 4 - How were the hyperparameters adjusted? How was the data split for training, cross-validation, and testing? 5 - Is the dataset proprietary? Is it available? Is there a benchmark for this problem and dataset? How does the work compare to other studies of this type? 6 - The conclusion is vague. What worked and what went wrong? What lessons have been learned? confidence: 4 justification: The authors need to compare the method with the state of the art and test it on other datasets to attest to its generalization capacity. final_rebuttal_confidence: 4 final_rebuttal_justification: Most of my comments have not been addressed. There is room for improvement, and more studies should be done.
x5e9iP8K8q
Human Aligned Reward Modeling for Automated Transfer Function Generation of 3D Rendering of Medical Image Data
[]
In recent years, the quality of medical image data, such as computed tomography or magnetic resonance tomography, has continued to improve and the resolution and detection of the smallest structures has become increasingly accurate. Along with these developments, new techniques for three-dimensional visualization using volume rendering techniques are emerging, enabling extremely realistic visualization of medical images. This helps to improve patient communication, diagnosis, and treatment planning. An extremely critical step in the development of a realistic rendering is the design of a suitable transfer function. However, this requires a high level of experience and manual fine-tuning to the given image data. To automatize this process, we propose to train a reinforcement learning agent that extracts a two-dimensional transfer function from the given joint histograms of the image data. The focus of this study is primarily on the development of a suitable reward model, which is critical for the reinforcement learning framework, incorporating human feedback.
[ "Direct Volume Rendering", "Transfer Function", "Reinforcement Learning from Human Feedback" ]
https://openreview.net/pdf?id=x5e9iP8K8q
https://openreview.net/forum?id=x5e9iP8K8q
eLQfcFcpo2
official_review
1,728,354,807,668
x5e9iP8K8q
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission15/Reviewer_YUYt" ]
NLDL.org/2025/Conference
2025
title: Official Review by Reviewer YUYt summary: This paper presents a new method that utilizes RLHF to support the 3D rendering of medical image data. The key idea is to learn an automatic transfer function generation model that is guided by human preference. Experiments demonstrate the proposed method's promising performance. strengths: - The idea of using RLHF to guide the learning of TF generation is well-motivated and reasonable to me. - The method is quite simple and easy to follow. - The illustrations in the paper are helpful. weaknesses: - There are no details on how the human inspectors produce the preference data. - The evaluation is based on a few data samples, so it is not very convincing that the method generalizes to arbitrary medical image data. - The code is not released. confidence: 2 justification: See my comments in the weaknesses section. final_rebuttal_confidence: 4 final_rebuttal_justification: I still have a major concern about the generalization of the method.
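For context on the reward-modeling step that both reviews of this paper discuss, below is a minimal sketch of the standard Bradley-Terry-style pairwise loss commonly used to turn human preference data into a reward model. It is a generic illustration under the assumption that feedback arrives as (preferred, rejected) rendering pairs; the reward network and all variable names are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_model, preferred, rejected):
    """Bradley-Terry-style pairwise loss: the reward assigned to the
    human-preferred rendering should exceed that of the rejected one."""
    r_pos = reward_model(preferred)            # e.g. shape (batch, 1)
    r_neg = reward_model(rejected)
    return -F.logsigmoid(r_pos - r_neg).mean()
```

In an RLHF setup, the reward model trained with such a loss would then score candidate transfer functions produced by the agent.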
rIQCKH0He8
Toward Learning Distributions of Distributions
[ "Moritz Wohlstein", "Ulf Brefeld" ]
We propose a novel generative deep learning architecture based on generative moment matching networks. The objective of our model is to learn a distribution over distributions and generate new sample distributions following the (possibly complex) distribution of training data. We derive a custom loss function for our model based on the maximum mean discrepancy test. Our model is evaluated on different datasets where we investigate the influence of hyperparameters on performance.
[ "MMD", "distribution embedding", "hypernetwork", "kernel embedding", "GMMN" ]
https://openreview.net/pdf?id=rIQCKH0He8
https://openreview.net/forum?id=rIQCKH0He8
zNYfKuEYlz
meta_review
1,730,538,848,613
rIQCKH0He8
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission39/Area_Chair_kpGC" ]
NLDL.org/2025/Conference
2025
metareview: This paper considers estimating distributions over data objects that are, themselves, distributions. This is an important problem that e.g. lies at the heart of information geometry, and has also been studied in various applications before, e.g. covering: - Distributions of normal distributions, which have appeared in diffusion tensors in diffusion MRI (describing estimated probability distributions over white matter fiber orientations), covariance descriptors in computer vision (describing local feature distributions) and more - Distributions of Gaussian processes describing e.g. uncertain estimates of white matter bundle trajectories or uncertain estimates of yearly temperature curves In the last bullet, the interesting uncertainty is aleatoric, or irreducible -- and these examples are therefore special cases of the problem of incorporating irreducible data uncertainty into the downstream analysis. Distributions over the aleatoric distributions could be a useful tool in this regard. The reviewers largely appreciate the paper as an interesting and clear proof-of-concept study, which is well suited to a conference. The most important highlighted concern is the relevance of the problem. The authors are encouraged to stress the motivation for the problem in their final version, as well as to address any concerns, questions, and useful suggestions made by the reviewers. recommendation: Accept (Poster) suggested_changes_to_the_recommendation: 2: I'm certain of the recommendation. It should not be changed confidence: 5: The area chair is absolutely certain
rIQCKH0He8
Toward Learning Distributions of Distributions
[ "Moritz Wohlstein", "Ulf Brefeld" ]
We propose a novel generative deep learning architecture based on generative moment matching networks. The objective of our model is to learn a distribution over distributions and generate new sample distributions following the (possibly complex) distribution of training data. We derive a custom loss function for our model based on the maximum mean discrepancy test. Our model is evaluated on different datasets where we investigate the influence of hyperparameters on performance.
[ "MMD", "distribution embedding", "hypernetwork", "kernel embedding", "GMMN" ]
https://openreview.net/pdf?id=rIQCKH0He8
https://openreview.net/forum?id=rIQCKH0He8
vb8aPnNi1R
official_review
1,728,450,485,880
rIQCKH0He8
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission39/Reviewer_yoAQ" ]
NLDL.org/2025/Conference
2025
title: Sound approach to generative modeling of distributions summary: This paper considers the generative moment matching network (GMMN), the GAN framework with its discriminator defined by the maximum mean discrepancy (MMD). The authors extend the GMMN to generate distributions of distributions by deriving an MMD between distributions of distributions, essentially correctly. strengths: The proposed method is sound and promising, and its preliminary demonstration is illustrative. weaknesses: The derivation might have a minor error in the discussion around eq. (8). confidence: 3 justification: The proposed approach is sound and is basically correctly derived. - It might be nicer to explain an intuitive meaning of $\lambda$ in eq. (5), that is, what happens to the behavior of the proposed method when $\lambda$ is changed. - Although the two terms in eq. (8) are summed into a single integral, I don’t think this is true, even though each of the two terms does in fact go to zero as $L\rightarrow \infty$. - It might be nicer to discuss the methodological difference between the proposed method and a direct model of distributions of distributions such as the Dirichlet process. minor: p.3, l.230: ``parameterized’’ is misspelled. p.4, l.258: The mean of the Gaussian for $\log (\sigma)$ should be $\mu_{\sigma}$ (not $\sigma_{\mu}$). p.4, l.296: The lower endpoint of the uniform distribution of $z_i^1$ may be $-0.1$.
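For readers who want the quantity behind this discussion in one place, the standard biased empirical estimate of the squared MMD between samples $X=\{x_i\}_{i=1}^m$ and $Y=\{y_j\}_{j=1}^n$ under a kernel $k$ is shown below; judging from the review, the paper applies a variant of this estimator with a kernel defined on embedded distributions rather than on individual points.

```latex
\widehat{\mathrm{MMD}}^2(X, Y)
  = \frac{1}{m^2}\sum_{i=1}^{m}\sum_{i'=1}^{m} k(x_i, x_{i'})
  + \frac{1}{n^2}\sum_{j=1}^{n}\sum_{j'=1}^{n} k(y_j, y_{j'})
  - \frac{2}{m n}\sum_{i=1}^{m}\sum_{j=1}^{n} k(x_i, y_j)
```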
rIQCKH0He8
Toward Learning Distributions of Distributions
[ "Moritz Wohlstein", "Ulf Brefeld" ]
We propose a novel generative deep learning architecture based on generative moment matching networks. The objective of our model is to learn a distribution over distributions and generate new sample distributions following the (possibly complex) distribution of training data. We derive a custom loss function for our model based on the maximum mean discrepancy test. Our model is evaluated on different datasets where we investigate the influence of hyperparameters on performance.
[ "MMD", "distribution embedding", "hypernetwork", "kernel embedding", "GMMN" ]
https://openreview.net/pdf?id=rIQCKH0He8
https://openreview.net/forum?id=rIQCKH0He8
nIQgi4QbzW
decision
1,730,901,556,280
rIQCKH0He8
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Accept (Oral) comment: We recommend an oral and a poster presentation given the AC and reviewers recommendations.
rIQCKH0He8
Toward Learning Distributions of Distributions
[ "Moritz Wohlstein", "Ulf Brefeld" ]
We propose a novel generative deep learning architecture based on generative moment matching networks. The objective of our model is to learn a distribution over distributions and generate new sample distributions following the (possibly complex) distribution of training data. We derive a custom loss function for our model based on the maximum mean discrepancy test. Our model is evaluated on different datasets where we investigate the influence of hyperparameters on performance.
[ "MMD", "distribution embedding", "hypernetwork", "kernel embedding", "GMMN" ]
https://openreview.net/pdf?id=rIQCKH0He8
https://openreview.net/forum?id=rIQCKH0He8
XBJBw9EOMg
official_review
1,728,450,852,993
rIQCKH0He8
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission39/Reviewer_Xwtg" ]
NLDL.org/2025/Conference
2025
title: Promising Architecture for Generating Distributions of Distributions, but Limited Experimental Validation and Contextualization summary: This paper proposes a novel generative model based on the generative moment matching network (GMMN) architecture, designed to generate distributions over distributions. The framework employs a Hypernetwork to generate model parameters, with each parameter set corresponding to a sampled distribution. The paper derives a loss function based on the Maximum Mean Discrepancy (MMD) test, enabling the networks to learn by comparing sets of datasets. The model’s effectiveness is illustrated through experiments on two small-scale toy datasets. strengths: * The use of a Hypernetwork to generate parameters corresponding to distinct distributions is a straightforward approach. This design intuitively aligns with the task of learning a distribution over distributions. * The visual representation of the model architecture is clear and effectively aids in understanding the proposed method, providing valuable insight into the model’s structure and the role of each component. weaknesses: * The experiment section lacks robust evidence to convincingly demonstrate the proposed method’s performance: * The paper only includes experiments on two toy datasets, without any real-world datasets, which limits the generalizability and practical relevance of the results. * In Section 3.2, only one out of 42 tested hyperparameter pairs produces a distribution of distributions similar to the training set, calling into question the method’s robustness. * The paper does not adequately situate the proposed method within the broader context of existing methods that model distributions over distributions. It is unclear where the method’s innovations lie, and the paper lacks a performance comparison with relevant baselines, such as the Dirichlet process. * The paper does not sufficiently motivate the importance of modeling distributions over distributions. The transition in Lines 68-73 feels abrupt, and the lack of real-world examples or applications makes it difficult to appreciate the method’s broader relevance. * Some minor issues: * Line 222, $(P_i, Q_j)$ instead of $(P_i, Q_i)$ * Line 243: it’s unclear which network the parameter $\phi$ belongs to, since we have $\theta$ for the Hypernetwork, and $w_k$’s for the main networks. * Line 258: $\mu_{\sigma}$ for the mean of $\log(\sigma)$ confidence: 3 justification: Although the model architecture design effectively aligns with the goal of modeling distributions over distributions, the paper does not provide sufficient experimental evidence to convincingly demonstrate the efficacy of the proposed approach. The experiments are limited to two toy datasets, with no real-world examples, and the method’s robustness is questionable given that only one out of 42 hyperparameter pairs yielded results similar to the training set. Furthermore, the paper lacks contextualization within the existing landscape of methods for modeling distributions over distributions, offering no comparisons to established techniques like the Dirichlet process. Additionally, the paper does not adequately motivate the importance of the problem or provide real-life applications, which makes it difficult to assess the broader impact and innovation of the approach. final_rebuttal_confidence: 3 final_rebuttal_justification: The paper appears incomplete in its current form: * The study only includes toy examples as "proof of concept" demonstrations.
* There is limited exploration of the effects of hyperparameters on training stability and generalization to the test set, leaving the experimental results unconvincing in demonstrating the proposed method's effectiveness in learning distributions of distributions. * The absence of baseline models limits the contextual understanding of the proposed method's performance. * The lack of real-world applications weakens the motivation for the method and its potential impact. Although the authors have responded to these concerns, I intend to maintain my original rating based on the paper's present state.
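To make the hypernetwork-based design described in the summary above more concrete, here is a small sketch of one plausible reading of it: a hypernetwork maps a latent code to the weights of a small generator, so each code defines one sample distribution. Layer sizes, the latent code, and all names are assumptions for illustration only and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperGenerator(nn.Module):
    """Hypernetwork mapping a latent code z to the weights of a small
    one-hidden-layer generator; each z therefore defines one distribution."""
    def __init__(self, z_dim=8, noise_dim=4, hidden=32, out_dim=2):
        super().__init__()
        self.noise_dim, self.hidden, self.out_dim = noise_dim, hidden, out_dim
        n_params = hidden * noise_dim + hidden + out_dim * hidden + out_dim
        self.hyper = nn.Sequential(
            nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, n_params))

    def forward(self, z, n_samples=256):
        w = self.hyper(z)                                  # flat parameter vector
        i = self.hidden * self.noise_dim
        W1 = w[:i].view(self.hidden, self.noise_dim)
        b1 = w[i:i + self.hidden]
        j = i + self.hidden
        W2 = w[j:j + self.out_dim * self.hidden].view(self.out_dim, self.hidden)
        b2 = w[j + self.out_dim * self.hidden:]
        eps = torch.randn(n_samples, self.noise_dim)       # generator input noise
        h = torch.relu(F.linear(eps, W1, b1))
        return F.linear(h, W2, b2)                         # samples of one generated distribution

# Usage: samples = HyperGenerator()(torch.randn(8))  ->  tensor of shape (256, 2)
```

Training would then compare collections of such generated sample sets against collections of training datasets via an MMD-based loss, as the review describes.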
rIQCKH0He8
Toward Learning Distributions of Distributions
[ "Moritz Wohlstein", "Ulf Brefeld" ]
We propose a novel generative deep learning architecture based on generative moment matching networks. The objective of our model is to learn a distribution over distributions and generate new sample distributions following the (possibly complex) distribution of training data. We derive a custom loss function for our model based on the maximum mean discrepancy test. Our model is evaluated on different datasets where we investigate the influence of hyperparameters on performance.
[ "MMD", "distribution embedding", "hypernetwork", "kernel embedding", "GMMN" ]
https://openreview.net/pdf?id=rIQCKH0He8
https://openreview.net/forum?id=rIQCKH0He8
6dglfb3PKH
official_review
1,727,989,766,961
rIQCKH0He8
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission39/Reviewer_y1z6" ]
NLDL.org/2025/Conference
2025
title: Generating distributions using GANs and MMD summary: The authors tackle the problem of generating and sampling distributions (rather than the more vanilla problem of generating and sampling points from distributions). The generated distributions can then be used as generative models themselves. A GAN-like architecture is used with a discriminator and generator, where the discriminator loss is a squared estimate of an MMD. The MMD operates on distributions, and as such requires an appropriate notion of a kernel over the space of distributions. There are a few relatively minor errors, and some odd descriptions of standard results (described below), however it is possible to see past these errors and broadly understand what the authors are doing. What is more challenging is understanding why the authors are tackling this particular problem. Only synthetic data is considered, and no real life use cases are tried or even discussed. My concern is that on any non-synthetic problem, the method would fail to perform well due to the difficult nature of the problem (in a sample complexity sense). strengths: - The overall pipeline seems to be correct. Estimate a density using a sum of Dirac measures, compute its Fourier transform, and then pass this into an appropriate kernel over the space of probability measures. This kernel is used to estimate a squared MMD, which is used as the loss for the discriminator network. - The method is mostly clear, with some isolated instances of inaccuracies (discussed below). - To the best of my knowledge, the problem tackled is novel. I have not seen other works solve this problem, but this is also not my area of expertise. weaknesses: - I am not sure what the first equality in equation (4) is saying (the second equality looks okay, though). Perhaps $\widetilde{P}(z)$ is missing from the integrand, or perhaps the measure $\mu$ is supposed to contain $\widetilde{P}$. Either way, it is not clear, as I can't easily see how the integrand depends on $y_i$. - Equation (6) is unnecessarily long. As far as I understand, each of the integrals is just a Fourier transform. The text between (6) and line 211, first column, is even more confusing. A long description of the Fourier transform of a Gaussian measure is given, using what appears to be very complicated techniques, but the Fourier transform of a Gaussian measure is a very standard result (and is famously Gaussian itself, as the authors correctly derive). The whole pipeline is quite straightforward and the current set of equations hinder rather than help: Integrate the sum of Dirac masses against the complex exponential to obtain a sum of complex exponentials. Integrate these against a Gaussian measure to obtain a sum of Gaussian measures. These become arguments to a squared exponential kernel (which itself resembles a Gaussian measure). - It is incorrectly stated that equation (10) is "the MMD". Equation (10) is a (biased) empirical estimate of a **squared** MMD between **empirical distributions**. The squared is important, but it is even more important to try and disambiguate the nature of "empirical distributions". Usually we estimate the MMD given samples from a distribution. But here you are first sampling an empirical distribution (which is itself an approximation), and then estimating the squared MMD on top of that using the usual technique. A few sentences would be helpful. - There appears to be an index error in the second sum in equation (10) - $j$ is missing. 
- The equation on line 258 appears to use nonstandard notation, which is confusing. Usually the second parameter of $\mathcal{N}$ is a variance, not a standard deviation. - I don't see an open discussion of the limitations of the work, as described in the reviewer guidelines. - I am not convinced about the experimental rigour. It appears as though for both experiments, one seed of random hyperparameters is tried for each hyperparameter setting. Minor: - wrong punctuation for "fool" on line 045. Should be ``fool''. Same for other quotation marks - "MMD fulfils the properties of a metric [9], that is, (1)". This is correct, but potentially poorly worded. With a universal kernel, MMD fulfills all of the properties of a metric (there are more than one). - Some unusual grammar in lines 158-160, second column. Maybe a full stop should be a comma? - The imaginary unit $i$ clashes with some index notation. Consider using $j$ for such indices. - Figure 1 is too small to read without zooming in a lot. All figure texts and legends are actually too small. Questions: - Figure 2. Did you try running the training process for longer to see if the last two parameters ever get closer to their targets? - Why not use a better estimator for the empirical distribution? Dirac measures are known to be poor estimates - what about a kernel density estimator with a Gaussian measure? This should still admit a closed-form expression, because you would essentially still be computing Fourier transforms of Gaussian densities. confidence: 3 justification: There are issues with correctness, which I would be willing to overlook and which I think the authors could address in a modified manuscript. What is more challenging is understanding why the problem the authors consider is of interest. Right now only synthetic examples are considered, and it is not clear whether this approach would actually work on non-synthetic data. My worry is that the sample complexity would render this approach intractable.
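To make the nested construction that both reviews of this paper discuss concrete, below is a small numerical sketch of one way to estimate a squared MMD between two collections of empirical distributions: a Gaussian kernel is placed on top of a point-level squared MMD to serve as the kernel between distributions. This only illustrates the general idea; the paper itself appears to use Fourier (characteristic-function) embeddings of the empirical measures, which are not reproduced here, and all names are assumptions.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def mmd2_points(X, Y, gamma=1.0):
    """Biased estimate of the squared MMD between two point samples."""
    return rbf(X, X, gamma).mean() + rbf(Y, Y, gamma).mean() - 2.0 * rbf(X, Y, gamma).mean()

def mmd2_distributions(Ps, Qs, gamma_pt=1.0, gamma_dist=1.0):
    """Biased squared MMD between two sets of distributions, each distribution
    given as an array of samples. The kernel between two distributions is a
    Gaussian of their (sample-estimated) point-level squared MMD; since MMD is
    a Hilbertian metric, this is a valid kernel construction."""
    def K(As, Bs):
        return np.array([[np.exp(-gamma_dist * mmd2_points(a, b, gamma_pt))
                          for b in Bs] for a in As])
    return K(Ps, Ps).mean() + K(Qs, Qs).mean() - 2.0 * K(Ps, Qs).mean()

# Toy usage: each "distribution" is 100 samples from a 2-D Gaussian.
rng = np.random.default_rng(0)
Ps = [rng.normal(0.0, 1.0, size=(100, 2)) for _ in range(5)]
Qs = [rng.normal(0.5, 1.0, size=(100, 2)) for _ in range(5)]
print(mmd2_distributions(Ps, Qs))
```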
qTYZ614oF2
Graph convolutional neural networks with uncertainty modelling applied to edge detection in mammograms
[]
This paper addresses the challenge of accurately identifying the border of the pectoral muscle in mammograms, a critical step in the evaluation of image quality in breast cancer screening. The focus is on the medio-lateral oblique (MLO) view, where the pectoral muscle often appears in the top medial part of the image. The variability in muscle visibility across images introduces significant uncertainty, which this work seeks to address. Our main contribution is a novel modification of a deep graph convolutional network (GCN) that not only locates key points along the muscle boundary but also provides uncertainty estimates, which are useful for selecting images that must be evaluated by a human. We introduce a novel approach to estimate both aleatoric and epistemic uncertainties using a GCN framework. Aleatoric uncertainty captures variability in ground truth due to annotator differences, while epistemic uncertainty accounts for the model’s inherent limitations. Our method was tested on in-house annotated mammograms and the external InBreast dataset, demonstrating comparable accuracy to human annotators and robustness in the presence of domain shifts. The uncertainty estimates were found to be highly accurate, confirming their potential for identifying cases that require human review.
[ "Graph convolutional neural networks", "uncertainty modelling", "aleatory uncertainty", "epistemic uncertainty", "mammogram" ]
https://openreview.net/pdf?id=qTYZ614oF2
https://openreview.net/forum?id=qTYZ614oF2
sXNklXUI9O
official_review
1,728,656,796,866
qTYZ614oF2
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission41/Reviewer_6uuY" ]
NLDL.org/2025/Conference
2025
title: Graph convolutional neural networks with uncertainty modelling applied to edge detection in mammograms summary: The paper addresses the problem of delineating the border of the pectoral muscle in mammograms. The authors present a modification of the graph convolutional network that locates points along this boundary and also provides uncertainty estimates. They test the method on one publicly available dataset and one in-house dataset. From the presented results, it is difficult to assess the value of the proposed approach. strengths: The problem is relevant, and the approach using graph convolutional networks is well-motivated. The writing is clear, and the work is well presented. weaknesses: A large part of the paper deals with uncertainty estimation. Almost all text in the Method section is about uncertainty. However, the results of the uncertainty estimation are not convincing. Maybe this is just the way the results are presented, but it is difficult to see the value of the suggested method. To illustrate some of these concerns, here are questions about Figure 4: The red dots are averages of two annotations. It would help us understand the benefits of the method if we could see those two annotations. Why are the dots not in the middle of the confidence areas? Why are all ellipses axis-aligned and either elongated in the x or y direction? Are we not mostly interested in uncertainty in the normal direction, as displacement in the tangential direction only changes the parametrization, but not the resulting curve? In Figure 5, why is the ground truth outside the uncertainty band for the left image? In general, the benefits of the method are not evident. We hear nothing about what the alternatives are. So, despite mentioning some earlier works, the proposed method is not placed in the state of the art. There is no quantification of the method, and no comparison. The discussion and conclusion simply ignore those problems and provide no explanation. They claim to produce accurate results, but it is unclear what accurate means in this context. They also claim that uncertainty estimates can help identify difficult cases, but it is unclear how this would work. confidence: 4 justification: The paper proposes an approach that yields some results. However, there is no evidence for the quality of these results. final_rebuttal_confidence: 4 final_rebuttal_justification: I appreciate the rebuttal, and I also recognize that whether something is convincing, or not convincing, is somewhat subjective. My most important concern was the lack of clarity: How good are the results? What do the results enable? What can I accomplish using this method that I could not accomplish without it? The rebuttal says that this is obvious because histograms and curves fit well. But what does 'fit well' mean in this context? What would "fit badly" be? I understand that you have achieved a fit of a certain quality. But I still do not know how this quality compares to anything else. I do understand that the method is somewhat unique. Still, the authors should be able to produce a naive baseline, or a naive translation of segmentation results into curve-based results.
qTYZ614oF2
Graph convolutional neural networks with uncertainty modelling applied to edge detection in mammograms
[]
This paper addresses the challenge of accurately identifying the border of the pectoral muscle in mammograms, a critical step in the evaluation of image quality in breast cancer screening. The focus is on the medio-lateral oblique (MLO) view, where the pectoral muscle often appears in the top medial part of the image. The variability in muscle visibility across images introduces significant uncertainty, which this work seeks to address. Our main contribution is a novel modification of a deep graph convolutional network (GCN) that not only locates key points along the muscle boundary but also provides uncertainty estimates, which are useful for selecting images that must be evaluated by a human. We introduce a novel approach to estimate both aleatoric and epistemic uncertainties using a GCN framework. Aleatoric uncertainty captures variability in ground truth due to annotator differences, while epistemic uncertainty accounts for the model’s inherent limitations. Our method was tested on in-house annotated mammograms and the external InBreast dataset, demonstrating comparable accuracy to human annotators and robustness in the presence of domain shifts. The uncertainty estimates were found to be highly accurate, confirming their potential for identifying cases that require human review.
[ "Graph convolutional neural networks", "uncertainty modelling", "aleatory uncertainty", "epistemic uncertainty", "mammogram" ]
https://openreview.net/pdf?id=qTYZ614oF2
https://openreview.net/forum?id=qTYZ614oF2
oYlTrp6Y1W
official_review
1,728,499,766,823
qTYZ614oF2
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission41/Reviewer_ZQmc" ]
NLDL.org/2025/Conference
2025
title: Review of paper on GCNs with uncertainty modelling applied to pectoral muscle detection in mammograms summary: This paper presents a method for identification and uncertainty estimation of the pectoral muscle border in X-ray mammogram images using graph convolutional neural networks. Identifying the border of the pectoral muscle on X-ray mammogram images is a crucial step in the identification of potential breast cancer indicators, but due to the variability in muscle visibility across images, significant uncertainty exists, which this paper aims to address. The proposed method was trained on in-house data, and was tested on both the in-house data annotated by two radiographers as well as on external data, and has shown promising initial results. The uncertainty estimations provide a potential method for identification of X-ray mammogram images which require additional human review/intervention. strengths: The paper has the following strengths: - The paper is written in a clear and concise manner and was easy to follow. Aleatoric and epistemic uncertainty estimation (as the main focuses of the paper) were explained in detail, while the background of the problem was simple to understand and did not require medical expertise. Additionally, visualization of images, along with pectoral muscle border predictions and uncertainty estimations, made the paper even more straightforward to understand. - A novel method for identification and uncertainty estimation of the pectoral muscle border in X-ray mammogram images using graph convolutional neural networks was introduced, and has shown promising initial results. - The method's generalizability was tested on external data. weaknesses: The following weaknesses of the paper should be addressed: - $\textbf{Introduction}$. The introduction should contain more detail on clinical applications of pectoral muscle identification: Why is it important (e.g., because it can overlap with fibroglandular tissue and thus needs to be excluded from quantitative analysis of breast parenchyma)? How is it being performed in hospitals nowadays, and what are the drawbacks? What is the radiologists'/radiographers' performance? How does your method address current issues? Additionally, more references for the first paragraph of the introduction are required. - $\textbf{Introduction Cont'd}$. There have been numerous studies (especially over the last five years) on deep learning applied to mammography screening, and particularly to pectoral muscle segmentation (e.g., On Segmentation of Pectoral Muscle in Digital Mammograms by Means of Deep Learning by H. Soleimani et al., Deep learning based pectoral muscle segmentation on Mammographic Image Analysis Society (MIAS) mammograms by Y. J. Kim, et al.), while you reference only two. How does your work differ from the previous methods? - $\textbf{Method}$. A more detailed description of the GCN method from Li et al. is required (e.g., what is the architecture of HR-Net, overview of DAG, etc.). Why do you skip the global step? How can you assume that all components (images and key points) are independent when computing the likelihood of the Laplace distribution? Have you tried using a Normal distribution (a comparison of results between Laplace and Normal distributions of key points would have been a nice addition)? - $\textbf{Data sets}$ Significantly more details are required when describing the internal data set: Where was it acquired, and how? Demographic information, scanner information, annotation protocol?
Why wasn't the training data annotated by experienced radiologists/radiographers? Why didn't you make use of other available X-ray mammography open-source datasets (e.g., the NYU breast cancer screening dataset, CSAW-CC, VinDr-Mammo, etc.) to either enrich your training data (as you are working in a significantly low-data regime), or to perform additional testing of your method? - $\textbf{Experimental setup}$ Additional justification of some hyper-parameters would be beneficial: Why did you select T=3 for the total number of iterations (did you try higher values of T, and perhaps notice no significant performance gains)? Why did you select n=10 for the number of points? - $\textbf{Results}$ You could further describe the meaning of error values on test data (e.g., an error of 0.012 corresponds to the average predicted point coordinate being 512*0.012=6.144 pixels away from the ground truth point coordinate. How can being over 6 pixels away from the pectoral muscle in both the x and y dimensions on average be considered a good performance?). A large drawback of this paper is the missing subsection comparing your method to state-of-the-art methods, especially for the InBreast data, which has been used extensively. Is your method comparable (or non-inferior) to state-of-the-art ones in identification of the pectoral muscle, with the addition of uncertainty estimation? Or is the identification of the pectoral muscle significantly worse? - $\textbf{Conclusion}$ How can this method be used in clinical settings? How would radiologists/radiographers benefit? Standard practice nowadays is for at least one medical expert (and more often, two) to review mammogram images. How does your method affect this? How will you evaluate your method even further? confidence: 4 justification: The paper has introduced a novel method for identification and uncertainty estimation of the pectoral muscle in X-ray mammogram images using graph convolutional neural networks. While the main idea of the paper was clearly described and the method has shown promising initial results, there are a number of drawbacks which need to be addressed prior to publication. While it was a well-thought-out idea and approach, it still requires additional work: - more detail on the clinical application and impact of the method, - more detail on the background of the problem and discussion of the state-of-the-art methods, - a more detailed description of the method, not just a reference to the paper which introduced it, - more details on the in-house data, and additional testing and discussion of the results. If the above-mentioned drawbacks were to be addressed in the future, the paper would be accepted. However, for the time being, it is rejected. final_rebuttal_confidence: 4 final_rebuttal_justification: After the rebuttal period, considering the updated version of the manuscript as well as the other reviewers' and authors' comments, the rating and the main sentiment of the previous review remain. The paper has introduced a novel method for identification and uncertainty estimation of the pectoral muscle in X-ray mammogram images using graph convolutional neural networks. While the main idea of the paper was clearly described and the method has shown somewhat promising initial results on the in-house test data, there are a number of drawbacks which need to be addressed.
While it was a well-thought-out idea and approach, it still requires additional work: - more details and discussions on the clinical impact of the method, - more details and discussions on the state-of-the-art methods, as well as on other approaches, - more details and discussions on the in-house data, as well as additional testing.
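Regarding the question raised in the review above about assuming independent Laplace factors per coordinate, and the suggested comparison with a Normal distribution: the two choices differ only in the per-coordinate negative log-likelihood, sketched below. This is a generic illustration under the assumption that the network predicts a location and a log-scale per landmark coordinate; the names and shapes are illustrative and not taken from the paper.

```python
import math
import torch

def laplace_nll(mu, log_b, y):
    """Factorised Laplace negative log-likelihood per landmark coordinate:
    log(2b) + |y - mu| / b, averaged over all coordinates."""
    b = log_b.exp()
    return (torch.log(2 * b) + (y - mu).abs() / b).mean()

def gaussian_nll(mu, log_sigma, y):
    """Gaussian counterpart for comparison:
    log(sigma) + 0.5*log(2*pi) + (y - mu)^2 / (2*sigma^2)."""
    sigma = log_sigma.exp()
    return (log_sigma + 0.5 * math.log(2 * math.pi)
            + 0.5 * ((y - mu) / sigma) ** 2).mean()

# mu, log_b / log_sigma and y would all have shape (batch, n_points, 2):
# one location and one scale per x/y coordinate of each key point.
```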
qTYZ614oF2
Graph convolutional neural networks with uncertainty modelling applied to edge detection in mammograms
[]
This paper addresses the challenge of accurately identifying the border of the pectoral muscle in mammograms, a critical step in the evaluation of image quality in breast cancer screening. The focus is on the medio-lateral oblique (MLO) view, where the pectoral muscle often appears in the top medial part of the image. The variability in muscle visibility across images introduces significant uncertainty, which this work seeks to address. Our main contribution is a novel modification of a deep graph convolutional network (GCN) that not only locates key points along the muscle boundary but also provides uncertainty estimates, which are useful for selecting images that must be evaluated by a human. We introduce a novel approach to estimate both aleatoric and epistemic uncertainties using a GCN framework. Aleatoric uncertainty captures variability in ground truth due to annotator differences, while epistemic uncertainty accounts for the model’s inherent limitations. Our method was tested on in-house annotated mammograms and the external InBreast dataset, demonstrating comparable accuracy to human annotators and robustness in the presence of domain shifts. The uncertainty estimates were found to be highly accurate, confirming their potential for identifying cases that require human review.
[ "Graph convolutional neural networks", "uncertainty modelling", "aleatory uncertainty", "epistemic uncertainty", "mammogram" ]
https://openreview.net/pdf?id=qTYZ614oF2
https://openreview.net/forum?id=qTYZ614oF2
knhcwInl39
decision
1,730,901,556,403
qTYZ614oF2
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Reject
qTYZ614oF2
Graph convolutional neural networks with uncertainty modelling applied to edge detection in mammograms
[]
This paper addresses the challenge of accurately identifying the border of the pectoral muscle in mammograms, a critical step in the evaluation of image quality in breast cancer screening. The focus is on the medio-lateral oblique (MLO) view, where the pectoral muscle often appears in the top medial part of the image. The variability in muscle visibility across images introduces significant uncertainty, which this work seeks to address. Our main contribution is a novel modification of a deep graph convolutional network (GCN) that not only locates key points along the muscle boundary but also provides uncertainty estimates, which are useful for selecting images that must be evaluated by a human. We introduce a novel approach to estimate both aleatoric and epistemic uncertainties using a GCN framework. Aleatoric uncertainty captures variability in ground truth due to annotator differences, while epistemic uncertainty accounts for the model’s inherent limitations. Our method was tested on in-house annotated mammograms and the external InBreast dataset, demonstrating comparable accuracy to human annotators and robustness in the presence of domain shifts. The uncertainty estimates were found to be highly accurate, confirming their potential for identifying cases that require human review.
[ "Graph convolutional neural networks", "uncertainty modelling", "aleatory uncertainty", "epistemic uncertainty", "mammogram" ]
https://openreview.net/pdf?id=qTYZ614oF2
https://openreview.net/forum?id=qTYZ614oF2
jFZurt1lpd
meta_review
1,730,539,358,973
qTYZ614oF2
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission41/Area_Chair_mNFS" ]
NLDL.org/2025/Conference
2025
metareview: This paper addresses the problem of uncertainty quantification for delineation of the pectoral muscle in mammograms. This is a highly interesting problem, and the reviewers find the paper well written and the problem well motivated, and they appreciate the external datasets used to evaluate the method. However, the concerns voiced by several reviewers regarding the experimental validation and to some degree also the motivation for the choice of method make it hard to accept the paper in its current form. Therefore, unfortunately, I cannot recommend acceptance. However, I highly encourage the authors to further develop the experimental validation, especially of the UQ part. To this end, you might find the recent paper [1] below helpful, which suggests different strategies for validating different types of segmentation uncertainty, which is highly related to the proposed problem. [1] Kahl, Kim-Celine, et al. "ValUES: A Framework for Systematic Validation of Uncertainty Estimation in Semantic Segmentation." The Twelfth International Conference on Learning Representations. recommendation: Reject suggested_changes_to_the_recommendation: 3: I agree that the recommendation could be moved up confidence: 4: The area chair is confident but not absolutely certain
qTYZ614oF2
Graph convolutional neural networks with uncertainty modelling applied to edge detection in mammograms
[]
This paper addresses the challenge of accurately identifying the border of the pectoral muscle in mammograms, a critical step in the evaluation of image quality in breast cancer screening. The focus is on the medio-lateral oblique (MLO) view, where the pectoral muscle often appears in the top medial part of the image. The variability in muscle visibility across images introduces significant uncertainty, which this work seeks to address. Our main contribution is a novel modification of a deep graph convolutional network (GCN) that not only locates key points along the muscle boundary but also provides uncertainty estimates, which are useful for selecting images that must be evaluated by a human. We introduce a novel approach to estimate both aleatoric and epistemic uncertainties using a GCN framework. Aleatoric uncertainty captures variability in ground truth due to annotator differences, while epistemic uncertainty accounts for the model’s inherent limitations. Our method was tested on in-house annotated mammograms and the external InBreast dataset, demonstrating comparable accuracy to human annotators and robustness in the presence of domain shifts. The uncertainty estimates were found to be highly accurate, confirming their potential for identifying cases that require human review.
[ "Graph convolutional neural networks", "uncertainty modelling", "aleatory uncertainty", "epistemic uncertainty", "mammogram" ]
https://openreview.net/pdf?id=qTYZ614oF2
https://openreview.net/forum?id=qTYZ614oF2
EP1UVqPHYb
official_review
1,727,158,207,151
qTYZ614oF2
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission41/Reviewer_jouS" ]
NLDL.org/2025/Conference
2025
title: Well-written paper, clear contributions summary: The paper proposes an uncertainty modeling approach based on graph convolutional networks (GCN) for the application of identifying borders of the pectoral muscle in mammograms (in the medio-lateral oblique (MLO) view). The approach modifies a GCN to not only locate key points but also provide uncertainty estimates. The approach is benchmarked on an in-house dataset including different annotators and a public dataset. strengths: - The paper is well-written and easy to follow - The application is well motivated and the approach addresses a relevant area of research - The method is benchmarked on a publicly available data set weaknesses: - The experimental section is a little confusing, as the central motivation is a reliable estimation of uncertainties - that's why the authors overfit the models. However, the section then also talks about predictive performance and relates it to human performance. It is unclear at certain points which models and/or configurations the authors talk about. Suggestion: make this clear from the beginning and perhaps introduce identifiers for the models that are being used. - The in-house dataset was annotated by a non-expert. I cannot judge (in the light of the contribution of the paper) how critical that actually is. confidence: 3 justification: First of all, I am not an expert in this field. However, I think the paper presents some novelty worth sharing. Questions: - For training, the in-house data set was used - what are the results when you use the public data set? - Lines 216-218: a batch size of 4 seems to be quite low. Can you elaborate more on that design decision? - Lines 241-250: this discussion seems a bit superficial. Is the overfitted model used for this discussion? And are the test results based on the training set? I think it is a bit sketchy to compare the predictive performance based on this experimental setup (moreover: "quite acceptable" - is there some kind of guidance available from medical doctors to justify such a statement?) Some minor points: - Line 55: Yu et al. appears twice - Line 122: point after Li et al[.] - Line 127: missing word between 'takes' and 'input' - Line 129: \times instead of $x$
qTYZ614oF2
Graph convolutional neural networks with uncertainty modelling applied to edge detection in mammograms
[]
This paper addresses the challenge of accurately identifying the border of the pectoral muscle in mammograms, a critical step in the evaluation of image quality in breast cancer screening. The focus is on the medio-lateral oblique (MLO) view, where the pectoral muscle often appears in the top medial part of the image. The variability in muscle visibility across images introduces significant uncertainty, which this work seeks to address. Our main contribution is a novel modification of a deep graph convolutional network (GCN) that not only locates key points along the muscle boundary but also provides uncertainty estimates, which are useful for selecting images that must be evaluated by a human. We introduce a novel approach to estimate both aleatoric and epistemic uncertainties using a GCN framework. Aleatoric uncertainty captures variability in ground truth due to annotator differences, while epistemic uncertainty accounts for the model’s inherent limitations. Our method was tested on in-house annotated mammograms and the external InBreast dataset, demonstrating comparable accuracy to human annotators and robustness in the presence of domain shifts. The uncertainty estimates were found to be highly accurate, confirming their potential for identifying cases that require human review.
[ "Graph convolutional neural networks", "uncertainty modelling", "aleatory uncertainty", "epistemic uncertainty", "mammogram" ]
https://openreview.net/pdf?id=qTYZ614oF2
https://openreview.net/forum?id=qTYZ614oF2
6ANQ1nAKIf
official_review
1,728,420,491,823
qTYZ614oF2
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission41/Reviewer_zV6X" ]
NLDL.org/2025/Conference
2025
title: Interesting paper, needs more elaboration summary: The paper presents a novel approach to uncertainty estimation for the detection of pectoral muscle boundaries in mammograms. The pectoral muscle can fail to contrast strongly with breast tissue in mammograms, leading to uncertainty among medical experts about the ground-truth delineation. This raises the question of how to represent uncertainty for the task. The novelty comes from the application of graph convolutional neural networks not only to landmark detection but also to the representation of aleatoric and epistemic uncertainty for the landmark locations. The paper views landmark data as inherently random (i.e. aleatoric) and thus seeks to model it as such. Two GCNs serve this purpose: one which locates landmark points and another which models aleatoric uncertainty using a product of Laplace distributions, one for each coordinate of each landmark point in each image. They view this product as a likelihood function and optimize the aleatoric GCN model via maximum likelihood estimation on the parameters. Further, the paper uses an ensemble of models, presumably also GCNs, to model the epistemic uncertainty. Experiments were performed on in-house mammogram data and validated on publicly available mammogram data. An ensemble of GCNs was produced via five distinct training runs on a five-fold split of the data. The ensemble produces an average error close to the average error between radiographers on a test set held out from the in-house mammogram data. The average error on the publicly available mammogram data was double that of the error on the in-house test set. Model uncertainties were validated by comparing the empirical distribution of test scores against a one-dimensional Laplace distribution and by visual inspection. strengths: The task is clearly explained, and the motivation of the paper is almost immediately apparent, providing clear context for the reader. The figures give a good illustration of the problem, which is to localise the muscle even when an acquired image does not clearly resolve the muscle. The use of uncertainty ellipses provides, in my opinion, an intuitive representation of uncertainty about the location of landmarks which delineate the pectoral muscle boundary, and I speculate that this would be useful in the clinical setting. The methods section is the nicest part of the paper, with well-chosen references to aid further understanding. weaknesses: The introduction does not adequately motivate the use of graph convolutional neural networks for the problem. Skimming the methods section shows that the motivation appears there. This should be moved to the introduction, or at least hinted at, with further elaboration in the methods section. The paper would certainly benefit from a figure which depicts the model pipeline. There are multiple GCNs which are applied to the task, which is not obvious when skimming the paper. The data section is too short and is unclear. The paper is very short on details about the training data. In particular, it's not clear what the training data are, aside from the fact that they are mammograms. I assume that the 94 images annotated by radiographers comprise the test set. If so, this should be stated explicitly. Moreover, the use of an exclusively private dataset for training poses problems for reproducibility. The experiment section is also unclear and even contradictory in places.
For example, when describing the development of the model ensemble using the five-fold split, one reads “The last one was not used for monitoring the log-likelihood and parameters that gave the highest value on the validation fold was saved.“ Obviously, you can’t save model parameters based on the performance of the validation set unless the loss function is monitored on the validation set. I also would not use the term "cross-validation", since the held-out folds are used for parameter selection to produce the model ensemble. As stated in the results section, landmark coordinates were rescaled to the unit interval, giving their L1 differences an interpretation as percentages of the height and width of the images. This is meaningful only if the relative area occupied by the pectoral muscle has low variance across all images. However, the paper does not address this. Are there images where the foreground is large and other images where the foreground is small? Were the data processed to control for this? Since the paper states that the image pre-processing itself was minimal, this is a concern. I would not assert that the model’s performance is on par with human experts just because the errors are roughly the same as the average between-radiographer error on the internal test set. The numbers on the external test set show clearly that this is not the case, and the assertion is therefore misleading. “… we standardize the model errors by divided them with the predicted standard deviation.” I think the authors meant that they subtract the mean and then divide by the standard deviation. Just dividing by the standard deviation gives numerical values greater than or equal to zero, which is not consistent with the depictions in Figure 2 and Figure 3. The authors compute the Kolmogorov–Smirnov statistic when comparing the empirical cumulative distribution function to the theoretical standard Laplace distribution function. However, they don’t use this statistic in a hypothesis test, namely for the Kolmogorov-Smirnov test. Why not? confidence: 3 justification: This paper seems to make a novel contribution, but it requires revision to communicate this contribution more clearly.
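As a concrete version of the Kolmogorov-Smirnov check suggested at the end of this review: if the coordinate errors really follow the predicted Laplace distributions, then dividing each (location-centred) error by its predicted scale parameter should yield standard Laplace variates, which can be tested directly. The sketch below uses synthetic stand-in data so that it runs on its own; the variable names are assumptions.

```python
import numpy as np
from scipy import stats

# errors: signed coordinate errors (prediction minus ground truth);
# b_pred: the per-coordinate Laplace scales predicted by the model.
rng = np.random.default_rng(0)
b_pred = np.full(1000, 0.05)
errors = rng.laplace(scale=b_pred)        # stand-in for real model errors

z = errors / b_pred                       # should be standard Laplace if calibrated
stat, p_value = stats.kstest(z, "laplace")
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
```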
maUaYMbmYX
Towards Biologically Plausible Learning By Stacking Circular Autoencoders
[]
Training deep neural networks in biological systems is faced with major challenges such as scarce labeled data and obstacles for propagating error signals in the absence of symmetric connections. We introduce Tourbillon, a new architecture that uses circular autoencoders trained with various recirculation algorithms in a self-supervised mode, with an optional top layer for classification or regression. Tourbillon is designed to address biological learning constraints rather than enhance existing engineering applications. Preliminary experiments on small benchmark datasets (MNIST, Fashion MNIST, CIFAR10) show that Tourbillon performs comparably to models trained with backpropagation and may outperform other biologically plausible approaches. The code and models are available at \url{https://anonymous.4open.science/r/Circular-Learning-4E1F}.
[ "biologically plausible architectures", "self-supervised learning", "autoencoders", "recirculation", "local learning", "tourbillon", "feedback alignment", "forward forward", "target", "target propagation" ]
https://openreview.net/pdf?id=maUaYMbmYX
https://openreview.net/forum?id=maUaYMbmYX
XuFvlIE5pX
official_review
1,727,080,674,244
maUaYMbmYX
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission14/Reviewer_pejn" ]
NLDL.org/2025/Conference
2025
title: Towards Biologically Plausible Learning by Stacking Circular Autoencoders summary: The paper introduces Tourbillon, a neural network architecture designed to tackle challenges in biological learning, like limited data and error propagation without symmetric connections. It uses circular autoencoders and recirculation algorithms in a self-supervised manner, with an optional classification layer. Initial tests on small/tiny datasets show that Tourbillon performs comparably to traditional backpropagation models and may outperform other biologically plausible approaches. strengths: The paper introduces Tourbillon, a deep learning architecture specifically designed to address challenges associated with training neural networks in biological systems, such as spiking neural networks. Key issues include limited labeled data and difficulties with error signal propagation due to the absence of symmetric connections. Therefore, the authors designed Tourbillon to use circular autoencoders and different recirculation algorithms in a self-supervised manner. It also includes an optional top layer for classification or regression tasks. The architecture is tailored to meet biological learning constraints, distinguishing it from traditional models that enhance existing engineering applications. Preliminary results from experiments with benchmark datasets like MNIST, Fashion MNIST, and CIFAR10 suggest that Tourbillon performs similarly to models trained with backpropagation and may even outperform other biologically plausible approaches. However, the claimed novelty is not justifiable without a proper comparison. weaknesses: Major: 1- The results are based on preliminary experiments with small benchmark datasets, which may limit the generalizability of the findings to more complex or real-world scenarios. Additionally, there is a concern regarding how the authors adapt continuous-form datasets for use with the biological plausibility concept. Typically, biologically plausible SNN training requires dataset preprocessing or spike-train-based datasets. For example, the N-MNIST dataset is often used for training biologically plausible neural architectures. It would be useful to know how the authors handle this aspect. 2- While the paper claims that Tourbillon performs comparably to models using backpropagation and may outperform other biologically plausible approaches, it lacks detailed comparisons and analyses of these models. This raises concerns about the validity of the authors' claims without thorough comparative evidence. The NLDL conference is reputable, and for a full publication, it is essential that the authors provide detailed results rather than a general explanation of the SNN methodology. Given the limited page constraints, the authors should focus on presenting their own findings in detail to justify their claims. 3- How were the parameters for training the Circular Autoencoders (CAEs) determined? Are there specific reasons why certain parameter choices (e.g., CAE size, number of cycles) were preferred over others? The authors should provide the exact parametric values used in their experiments, as these details are very important for other researchers attempting to replicate or build upon their findings. 4- The paper mentions various training dynamics, including different learning rules and their effects on reconstruction loss. Are there detailed comparisons and justifications for why certain rules performed better?
How do these findings align with the goals of demonstrating biological plausibility? 5- The authors demonstrate in the experiment section that recirculation methods achieved comparable or superior reconstruction errors compared to backpropagation and other methods. Are the comparisons thorough and statistically significant? How do the results hold up across different datasets and architectures? 6- Section 4.2- The section notes that Tourbillon successfully captures crucial information from the input images. Can the authors provide more detail on the quality of these reconstructions? How do the reconstructed images compare to those produced by other methods in terms of fidelity and accuracy? 7- Section 4.3- How does the conversion to a Tourbillon-like version affect the overall architecture and functionality of the original neural networks (e.g., U-Net and feed-forward architecture)? Are there any qualitative differences observed in the performance or behavior of the converted models? 8- On page 5, right column, last paragraph- The authors discussed ImageNet as a real-world dataset that would be important for scaling the Tourbillon architecture in research. Why was ImageNet not used in the current study? What are the specific challenges or limitations that prevented its use, and how do the authors plan to address these issues in future work? Minor Mistakes ● Authors need to cite recent state-of-the-art research studies. confidence: 4 justification: The paper introduces the Tourbillon architecture, which aims to address challenges in training neural networks within biological systems using circular autoencoders and self-supervised learning. While this approach is innovative, it did not show clear promise in the preliminary experiments with benchmark datasets, and the paper has significant limitations that undermine its contributions. Firstly, the results are based on small benchmark datasets, which raises concerns about the generalizability of the findings to more complex or real-world scenarios. Additionally, the adaptation of continuous-form datasets to the concept of biological plausibility, typically requiring spike-train-based datasets, is not adequately addressed. The paper also lacks detailed comparative analyses with existing methods and specific parameter values used in training, which makes it difficult to fully validate the authors' claims. Moreover, the discussion on training dynamics and learning rules is insufficiently detailed, and the quality of reconstructions needs further elaboration. The impact of converting existing architectures to Tourbillon-like versions is not clearly demonstrated, and the absence of real-world datasets like ImageNet, mentioned as important for future work, is unexplained. Given these substantial issues, including insufficient comparative evidence and lack of detail in critical areas, the paper does not meet the standards required for publication at this stage.
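Since several of the questions above concern the recirculation learning rules, a brief sketch of the classic two-pass recirculation update of Hinton and McClelland (1988), which circular autoencoders of this kind build on, may help readers follow the discussion. This is the textbook form of the rule with illustrative names and an arbitrary nonlinearity; the paper's exact variants may differ.

```python
import numpy as np

def recirculation_step(x, W_vh, W_hv, lam=0.75, lr=0.1, f=np.tanh):
    """One recirculation update for a single circular autoencoder loop.
    x: visible input vector; W_vh: visible-to-hidden weights; W_hv: hidden-to-visible."""
    v0 = x                                      # visible activity, first pass
    h1 = f(W_vh @ v0)                           # hidden activity, first pass
    v2 = lam * v0 + (1 - lam) * f(W_hv @ h1)    # reconstruction, regressed toward the input
    h3 = f(W_vh @ v2)                           # hidden activity, second pass

    # Local updates: presynaptic activity times the change in postsynaptic
    # activity between the two passes (no transported error signal).
    W_hv += lr * np.outer(v0 - v2, h1)
    W_vh += lr * np.outer(h1 - h3, v2)
    return W_vh, W_hv
```

The appeal of the rule is that each weight change uses only locally available pre- and postsynaptic activity from two successive passes, with no weight transport.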
maUaYMbmYX
Towards Biologically Plausible Learning By Stacking Circular Autoencoders
[]
Training deep neural networks in biological systems is faced with major challenges such as scarce labeled data and obstacles for propagating error signals in the absence of symmetric connections. We introduce Tourbillon, a new architecture that uses circular autoencoders trained with various recirculation algorithms in a self-supervised mode, with an optional top layer for classification or regression. Tourbillon is designed to address biological learning constraints rather than enhance existing engineering applications. Preliminary experiments on small benchmark datasets (MNIST, Fashion MNIST, CIFAR10) show that Tourbillon performs comparably to models trained with backpropagation and may outperform other biologically plausible approaches. The code and models are available at \url{https://anonymous.4open.science/r/Circular-Learning-4E1F}.
[ "biologically plausible architectures", "self-supervised learning", "autoencoders", "recirculation", "local learning", "tourbillon", "feedback alignment", "forward forward", "target", "target propagation" ]
https://openreview.net/pdf?id=maUaYMbmYX
https://openreview.net/forum?id=maUaYMbmYX
S93o2UysE1
official_review
1,728,495,275,886
maUaYMbmYX
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission14/Reviewer_sdnL" ]
NLDL.org/2025/Conference
2025
title: Interesting proposal but with serious issues summary: In this manuscript, the authors introduce a self-supervised architecture consisting of hierarchically stacked autoencoders trained with a recirculation algorithm, a model which they argue can address a large number of the incompatibilities of (approximate) backpropagation with biological constraints. The manuscript first surveys the issues in the implementation of backprop in biological networks, then presents the Tourbillon architecture as a combination of the recirculation idea with a modular hierarchy. Experiments on small-scale models are performed to explore variations of the learning algorithm and model composition, and to compare against backprop and feedback alignment. strengths: **1) Good summary of backprop vs. biological architecture.** The identification of the 8 reasons in sect. 1 is useful and goes beyond the most often cited issues, by including such points as clocked computation and developmental modularity. **2) Interesting architecture proposed** The Tourbillon architecture as a stack of circular autoencoders (each of which can consist of multiple layers) is to my knowledge novel and differs from existing proposals of bio-plausible models. It draws heavily on the recirculation idea of Hinton & McClelland (1988), who already speculated about hierarchical versions of their model; however, here a concrete and trainable form is given to this idea. What I found interesting about the model is that, unlike deep belief networks, which at first sight seem similar apart from the learning rule, the Tourbillon model does not seem to require recurrent equilibration of the network state across the hierarchy, making learning modular. Nonetheless, I had serious questions about the viability of the architecture, see weaknesses. **3) Generally good and concise summary of other bio-plausible learning rules in the appendix** Appendix sect. A.1 was instructive and good to read (except for A.1.2, see below). weaknesses: ## Major issues **1) Text and figures do not seem to be consistent in several places, and some figures seem incorrect** - line 258-261: Contrary to the text, in Table 2 more than 1 cycle seems to be (slightly) better. - Fig 3 and Fig A.3: Is it plausible that CAEs are better than BP? This seems surprising. Especially in Fig. A.3, why are the BP (train red) lines so unstable while CAE and feedback alignment are nicely smooth? This points to a problem with the learning rate or possibly an error with the colors in these plots (note also that it seems that red and green may have been interchanged in the test and train panels?) - line 290-291: How do I see this in Table 3? This only seems to be the case for the CIFAR-10 model. - Do tables 2 and 3 show the train or the test reconstruction loss? - lines 301-303: This is not what Fig 4 shows, here Tourbillon seems clearly less accurate, as would be expected since during self-supervised training the goal is reconstruction, not to preserve features which would be useful for classification. - Table 3: The numbers highlighted in bold are not the lowest numbers. Instead, it seems that using fewer CAEs in the stack is better - this would be consistent with issue 3) below. Also, the caption says "CAEs with different depths" while likely this refers to depth in the sense of the number of CAEs in the stack? 
**2) The model and learning rule are not clearly defined** The Tourbillon model is the main contribution of the manuscript, but section 3 does not give a clear and full definition of model and algorithm. Things which are not described, neither in the main text nor the appendix: - What the concrete weight update in the presence of multiple cycles of recirculation is. - What the procedure of reconstruction is at test time (Is it that the test sample is propagated forward to the top of the stack, then propagated down to the output? This would be quite different from the procedure at training time of the lower CAEs, and require discussion). - The choice and comparison of sequential and asynchronous training, and of learning rules b) and c) in addition to a) are not motivated. - What is the motivation of the authors to propose a doubly-deep model, where each of the stacked CAEs can itself be a deep network? My subjective recommendation would be to give a clear and full definition in the main text and move some of the empirical discussion of sect.4 to the appendix instead. **3) The upper CAEs may be detrimental for reconstruction if there is no noise** My interpretation is that at test time, the uncorrupted input sample is propagated forward to the top of the model via the encoder channel, then propagated backward along the separate decoder channel to yield the reconstructed sample. If this is the case, the lowest level CAE would decode the image from a hidden image representation which is not its own encoder representation, but instead a representation which only approximates its encoder representation (this is what the CAE above was trained to do) - therefore introducing an additional error. It is then to be expected that the test loss using this procedure is always worse than the test loss using only the lowest CAE in isolation, where the encoded image is handed directly to the decoder as during training. This would explain why in Table 3 the smaller stack depths have better loss than deeper stacks, and this should also be true for depth 1. This would likely not mean that stacks of more than one CAE are useless, since if the input image is corrupted by noise the more compressed representation of the upper layers could help in denoising and yield a better reconstruction. However, if correct, some experiments may need to be redone, and this would be a relevant issue to discuss (in a sense, the model would have no mechanism to add details to its reconstruction based on the sensory input, as e.g. a U-net with horizontal connections can, or a hierarchical predictive coding network where a compromise between bottom-up and top-down information is created). ## Minor issues - The Tourbillon still seems to require clocking of the propagation: During training only certain parts of the network are activated for propagation and recirculation happens only in one CAE at a time. Also, it seems that the propagation procedure is different at train and at test times, while a biological architecture would likely not distinguish these. - point 5) Labeling, in sect.1: Self-supervised learning is also normally done by backpropagation, and this is explicitly compared to in the manuscript. This is therefore not a reason why BP is not compatible with biology. - An important point which I was missing from the discussion of biologically plausible learning is that biological networks typically deal with time-dependent input sequences. 
This is a complication which the proposed Tourbillon model does not take into account. ### Small questions and typos - line 160 (and several other places): The eqref goes to the redundant eq.7 in the appendix, not to eq.1 in the main text. - line 170-171: Why is recirculation in the top layer identical to BP? - The question of distance plausibility was not clear to me. E.g. line 242, why are CAEs with less hidden layers more plausible? - line 327-333: Is the resulting model tourbillon-like or exactly a tourbillon model? - Sect. 4.3 and Table 4: Does the converted U-net have horizontal connections between the compressive and expansive branches as the original model? - Fig. 5: The t-SNE plots show that the representation has not lost the class structure. But to conclude that the model has improved the clustering a comparison to the input representation would be needed. (But in general shape and distances in t-SNE visualizations are hard to interpret) - Refs [5] and [6] are redundant. - Appendix sect. A.1.2: The explanation and discussion in lines 570-579 was unclear to me. - eq. 7 (and 1): It is not explained why versions b) and c) could be interesting. Is a) proposed here or is it the existing recirculation rule? - lines 641-645: It was unclear to me why the difference between activations from two sequential passes can be interpreted as a rate. In my understanding STDP depends on the relative timing of spikes in addition to the spike rate. So maybe a comparison to (Anti-)Hebbian plasticity can be made but I did not see how spike-time dependence arises here. - In Alg. A.1, while likely referring to encoder and decoder, E and D are not defined - Alg. A.2 in the def of circular_ae, the inverse of $L_i$ may not be defined, and if it exists the output of the CAE would perfectly match the input. confidence: 4 justification: While an interesting proposal is made in the manuscript, currently there seem to be factual errors in the figures and text (weakness 1) and the model may behave differently than expected (weakness 3). Therefore I recommend to reject the current version of the manuscript, as long as these issues can not be clarified as misunderstandings or corrected.
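To make the concern in major issue 3) concrete, the sketch below contrasts the two reconstruction paths being compared on a clean input; it illustrates the interpretation described above, not the authors' code, and enc_i/dec_i are hypothetical stand-ins for the trained encoder/decoder halves of CAE i.

```python
def reconstruct_through_stack(x, enc1, dec1, enc2, dec2):
    """Test-time reconstruction through a hypothetical 2-level stack of CAEs."""
    h1 = enc1(x)          # bottom CAE encodes the input
    h2 = enc2(h1)         # top CAE encodes the bottom code
    h1_hat = dec2(h2)     # only an approximation of h1 ...
    return dec1(h1_hat)   # ... so the bottom decoder sees a perturbed code

def reconstruct_bottom_only(x, enc1, dec1):
    """The code path the bottom decoder was actually trained on."""
    return dec1(enc1(x))

# On clean inputs dec2(enc2(h1)) != h1 in general, so the stacked path adds an
# approximation error that the bottom-only path avoids, consistent with smaller
# stack depths showing lower loss in Table 3.
```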
maUaYMbmYX
Towards Biologically Plausible Learning By Stacking Circular Autoencoders
[]
Training deep neural networks in biological systems is faced with major challenges such as scarce labeled data and obstacles for propagating error signals in the absence of symmetric connections. We introduce Tourbillon, a new architecture that uses circular autoencoders trained with various recirculation algorithms in a self-supervised mode, with an optional top layer for classification or regression. Tourbillon is designed to address biological learning constraints rather than enhance existing engineering applications. Preliminary experiments on small benchmark datasets (MNIST, Fashion MNIST, CIFAR10) show that Tourbillon performs comparably to models trained with backpropagation and may outperform other biologically plausible approaches. The code and models are available at \url{https://anonymous.4open.science/r/Circular-Learning-4E1F}.
[ "biologically plausible architectures", "self-supervised learning", "autoencoders", "recirculation", "local learning", "tourbillon", "feedback alignment", "forward forward", "target", "target propagation" ]
https://openreview.net/pdf?id=maUaYMbmYX
https://openreview.net/forum?id=maUaYMbmYX
N2yiwbDdCV
decision
1,730,901,554,862
maUaYMbmYX
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Reject
maUaYMbmYX
Towards Biologically Plausible Learning By Stacking Circular Autoencoders
[]
Training deep neural networks in biological systems is faced with major challenges such as scarce labeled data and obstacles for propagating error signals in the absence of symmetric connections. We introduce Tourbillon, a new architecture that uses circular autoencoders trained with various recirculation algorithms in a self-supervised mode, with an optional top layer for classification or regression. Tourbillon is designed to address biological learning constraints rather than enhance existing engineering applications. Preliminary experiments on small benchmark datasets (MNIST, Fashion MNIST, CIFAR10) show that Tourbillon performs comparably to models trained with backpropagation and may outperform other biologically plausible approaches. The code and models are available at \url{https://anonymous.4open.science/r/Circular-Learning-4E1F}.
[ "biologically plausible architectures", "self-supervised learning", "autoencoders", "recirculation", "local learning", "tourbillon", "feedback alignment", "forward forward", "target", "target propagation" ]
https://openreview.net/pdf?id=maUaYMbmYX
https://openreview.net/forum?id=maUaYMbmYX
KPm5YEAoob
official_review
1,728,404,057,641
maUaYMbmYX
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission14/Reviewer_PEcT" ]
NLDL.org/2025/Conference
2025
title: Nearly biologically plausible stacked circular autoencoders summary: While artificial deep neural networks have demonstrated success across most machine learning applications, they lack the biological constraints biological neural networks face, such as limited examples, local synaptic or weight learning, and directionality. To bridge the gap between those constraints and artificial deep networks, the authors introduce a new architecture with circular autoencoders as a backbone, stacked into a multilayer structure. This architecture and self-supervised training meet more biological constraints than existing methods. They test the model on three datasets, providing competitive performance compared to alternative solutions. strengths: The manuscript is presented clearly, and it is technically sound. The model was tested on three publicly available datasets used to benchmark machine learning models. They compared the performance results to existing solutions, showing comparable results. They performed ablation studies to test the relevance of model components. They also clearly address some of the limitations of the model. weaknesses: Biological constraints are relevant for the study of biological systems, like the brain. However, the authors only benchmarked their model using datasets that test engineering applications. Since the authors fail to demonstrate a clear advantage over alternative models (e.g. higher performance or lower data demands), the impact of the solution is limited. To fully assess the potential, one would have to validate the model against state-of-the-art solutions (not off-the-shelf methods as shown here) to illustrate the technological advantages; or one could test it on biological data to derive academic insights. The authors mention the critical limitations of the model, and it would be important to include them in future work. The readability of the figures must be improved. All models should be used for all comparisons and results. confidence: 4 justification: The research was performed adequately and provides a new solution to bringing together biological and artificial neural networks, with the associated implications for neuroscience and AI.
maUaYMbmYX
Towards Biologically Plausible Learning By Stacking Circular Autoencoders
[]
Training deep neural networks in biological systems is faced with major challenges such as scarce labeled data and obstacles for propagating error signals in the absence of symmetric connections. We introduce Tourbillon, a new architecture that uses circular autoencoders trained with various recirculation algorithms in a self-supervised mode, with an optional top layer for classification or regression. Tourbillon is designed to address biological learning constraints rather than enhance existing engineering applications. Preliminary experiments on small benchmark datasets (MNIST, Fashion MNIST, CIFAR10) show that Tourbillon performs comparably to models trained with backpropagation and may outperform other biologically plausible approaches. The code and models are available at \url{https://anonymous.4open.science/r/Circular-Learning-4E1F}.
[ "biologically plausible architectures", "self-supervised learning", "autoencoders", "recirculation", "local learning", "tourbillon", "feedback alignment", "forward forward", "target", "target propagation" ]
https://openreview.net/pdf?id=maUaYMbmYX
https://openreview.net/forum?id=maUaYMbmYX
ITWldChl5E
official_review
1,728,405,088,927
maUaYMbmYX
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission14/Reviewer_jicL" ]
NLDL.org/2025/Conference
2025
title: Novel idea with potential summary: This research paper proposes a biologically plausible deep-learning architecture called Tourbillon, which is a stack of circular autoencoders trained using recirculation algorithms. Tourbillon aims to overcome the challenges of training deep neural networks as biologically plausible systems. The paper highlights the limitations of existing biologically plausible approaches like Feedback Alignment, Difference Target Propagation, and Stacked Autoencoders. It demonstrates how Tourbillon addresses these limitations through its novel architecture and training algorithms. The authors present preliminary experiments on MNIST, Fashion MNIST, and CIFAR10 datasets, showing that Tourbillon achieves comparable performance to models trained with backpropagation and potentially outperforms other approaches. The foundations of the architecture and experiments are sound, but many aspects of the benchmarking of a novel architecture have been neglected (parameter size, training resources, limitations) strengths: - Fairly novel architectural design with unique claims, though a combination of different ideas has been employed. - Extensive experiments, including variable depth and architectural designs, have been conducted. - The explanations for obstacles and architecture are clear, with sufficient figures and tables. - Clear comparisons in functionality of the proposed model with others using a table. Furthermore, the claims have been sufficiently substantiated by the experiments. - Implementation and codes made available weaknesses: - Experiments limited to small datasets. - Though mentioned in different parts of the paper, there is no concise related works section, especially regarding previous works and citations in biological plausibility in NNs. There is a lack of mention of other optimization techniques, such as the Adjoint Sensitivity method in NDEs or many other DE Solvers suitable to different systems and conditions. - Some terms or phrases like ‘recirculating activities,’ ‘postsynaptic,’ and ‘presynaptic,’ etc., could use more explicit definitions. - Though the solution offered claims to address all the issues mentioned in the introduction, the offered explanation does not seem sufficient for some problems like: - symmetry of connections - forward non-linearities - Clocked Computation etc. - The memory, time, and other aspects of training, performance, and inference aren’t mentioned in the paper. . Questions: - The performance drop in the classification task for MNIST, one of the simpler tasks in vision, is concerning. Do you think applications on larger and more complicated data (which is available to biological systems) would be lacking more? - By spike in a biological system, do you mean neurons communicate by electrical pulses? aren’t they analogous to the binary form of numbers (as in a series of pulses)? - Does this architecture training have memory or any other advantages over the existing algorithms? Were there any limitations (resource-wise or general) faced during the training or inference? (I am curious as to whether it is practically advantageous in any one aspect) - Stacked AEs are the closest architecture (as far as intuition goes) among the mentioned baselines, so I don't see a performance or loss comparison between them. Is there any reason why it wasn't mentioned? 
- Though the objective is to create biologically plausible systems, if the resulting architecture doesn’t have advantages over existing ones (even if those aren’t biologically plausible), what is the motivation to pursue these constraints? (personally curious) confidence: 4 justification: Though more experiments are needed to explore the extent of biological plausibility, the paper puts forward a fairly novel architecture combining the advantages of many. Hence, the decision. final_rebuttal_confidence: 4 final_rebuttal_justification: The authors’ response and revision have effectively addressed several areas that were initially lacking. The updates to performance parameters for training and inference, along with the comparison to stacked autoencoders, were particularly helpful in addressing my concerns. Despite improvements, scalability remains a concern. The model’s struggle with large datasets limits its real-world applicability, questioning the practicality of a biologically plausible approach if it lacks clear advantages. Addressing this seems essential to unlock any potential in brain-inspired learning, where such large datasets play a crucial role. As previously noted, the paper introduces a novel approach to biological plausibility and reveals hidden potential through diverse experiments. I will, therefore, maintain my initial judgment.
maUaYMbmYX
Towards Biologically Plausible Learning By Stacking Circular Autoencoders
[]
Training deep neural networks in biological systems is faced with major challenges such as scarce labeled data and obstacles for propagating error signals in the absence of symmetric connections. We introduce Tourbillon, a new architecture that uses circular autoencoders trained with various recirculation algorithms in a self-supervised mode, with an optional top layer for classification or regression. Tourbillon is designed to address biological learning constraints rather than enhance existing engineering applications. Preliminary experiments on small benchmark datasets (MNIST, Fashion MNIST, CIFAR10) show that Tourbillon performs comparably to models trained with backpropagation and may outperform other biologically plausible approaches. The code and models are available at \url{https://anonymous.4open.science/r/Circular-Learning-4E1F}.
[ "biologically plausible architectures", "self-supervised learning", "autoencoders", "recirculation", "local learning", "tourbillon", "feedback alignment", "forward forward", "target", "target propagation" ]
https://openreview.net/pdf?id=maUaYMbmYX
https://openreview.net/forum?id=maUaYMbmYX
2ONISKYsa5
meta_review
1,730,458,868,049
maUaYMbmYX
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission14/Area_Chair_VCcH" ]
NLDL.org/2025/Conference
2025
metareview: This work introduces Tourbillon, a novel self-supervised stack of circular auto-encoders trained via recirculation algorithms. The authors propose this method to tackle core challenges in biologically inspired learning, particularly in error signal propagation across symmetric connections. The approach is innovative, and all reviewers express interest in its potential, noting that it is the first practical implementation of such a concept inspired by previous methods. However, there are notable concerns regarding the robustness of the experimental evaluation. Review sdnL highlights issues related to the upper CAE reconstruction with noise, questioning the method’s alignment with biologically inspired learning principles—a concern also echoed in review pejn. Additionally, the work lacks significant comparative analyses that could better contextualise its advantages and limitations, thereby diminishing the rigor of its experimental foundation. While the strengths and novelty of the approach are clear, there is strong consensus among reviewers that the current evaluation lacks the depth needed to establish the method’s generalisability, scalability, and practical viability. Although the authors provide some discussion on scalability challenges, further analysis would enhance confidence in the method’s robustness. Notably, several reviewers emphasise the need for performance or computational improvements to substantiate the method’s significance. In line with the reviewers’ feedback, I recommend that further revisions focus on strengthening the evaluation and providing a more comprehensive justification for the conclusions drawn. recommendation: Reject suggested_changes_to_the_recommendation: 3: I agree that the recommendation could be moved up confidence: 4: The area chair is confident but not absolutely certain
fgs9wdfLkn
YOLOv8++ with Weights Pruning for Road Object Detection in Rainy Environment
[]
Object detection on roadways is crucial for autonomous driving and advanced driver assistance systems. However, adverse weather conditions, particularly rain, significantly degrade the performance of these systems. This paper presents a novel approach to enhance road object detection in rainy weather scenarios by applying a modified YOLOv8 model. The proposed YOLOv8++ model includes specialized data augmentation techniques to simulate rainy conditions, adjustments in the network architecture to improve robustness against rain-induced noise, and optimized training strategies to enhance model performance. The study leverages BDD100K, Cityscapes and DAWN-Rainy datasets consisting of various road scenarios under different intensities of rain. We systematically augment these datasets to ensure the model learns to identify objects obscured by rain streaks and reflections. Our YOLOv8++ model introduces enhancements in the feature extraction layers, enabling better handling of occlusions and reduced visibility. Extensive experiments demonstrate that our model outperforms the baseline YOLOv8 and other state-of-the-art object detection models in terms of mean Average Precision (mAP) under rainy conditions. Additionally, to ensure the model's efficiency and suitability for real-time applications, we apply a network pruning technique, which reduces the model size and computational requirements without sacrificing performance. This research contributes to the field of autonomous driving by providing a more reliable object detection system for adverse weather conditions, enhancing overall road safety.
[ "Autonomous Driving", "Object Detection", "YOLOv8", "Weight Pruning" ]
https://openreview.net/pdf?id=fgs9wdfLkn
https://openreview.net/forum?id=fgs9wdfLkn
oNOqZtAcAw
official_review
1,728,212,290,601
fgs9wdfLkn
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission35/Reviewer_vnBQ" ]
NLDL.org/2025/Conference
2025
title: A very good idea and topic, but unfortunately the work is too lacking summary: The paper proposes a modified version of the YOLOv8 model, called YOLOv8++, which is designed to improve object detection under adverse weather conditions, and specifically rain. The proposed modifications can be categorised into 3 groups: (1) Integration of specialised layers and modules designed to handle noise and distortion more effectively; (2) Advanced data augmentation techniques to simulate rain streaks, droplets, fog, and glare; and (3) weight pruning to ease complexity and reduce inference time. All these proposed modifications are perfectly sound, and have the potential to substantially improve performance; however, from the paper it is not clear whether they have actually been applied. There is no mention, other than in the introduction, of any specialised layer or module being added. If such layers/modules are present, their properties are not described anywhere, nor is there any explanation as to why or how they would be specialised to handle noise and distortion, as claimed. The claim of "advanced data augmentation simulating rain streaks, droplets, fog, and glare" made in the introduction also seems to be unsatisfied, as the Datasets Description section actually states "we applied basic data augmentation techniques [such as] horizontal flips, scaling, brightness adjustment, and blurring". The authors claim these modifications can mimic the effect of rain, which is clearly incorrect. The weight pruning is indeed applied correctly, but it's clear from Table 2 of the results that applying pruning reduces the performance of the proposed model back to being equal to the original model without modifications, de facto nullifying any possible improvement. Even without pruning, the improvements of the proposed model shown in Table 1 seem very minor, being restricted mostly to the third decimal of an already very low precision of 50% (toss of a coin). I actually question whether these differences in the average precision are statistically relevant, but no statistical test is presented. The topic of object detection is definitely an important area of research, especially in the context of driving in adverse weather conditions, and the ideas the authors propose in points (1)-(3) above are definitely sound, but their application in the paper is either incorrect, not explained, or appears not to have been done. The results are too weak, and statistical significance tests are not performed, so it is unclear whether there is any actual improvement brought by the proposed method. strengths: The paper touches upon the important and timely topic of object recognition in driving under adverse weather conditions. This is a safety-critical application for which additional research is necessary and essential to ensure robust and safe deployment of ML models, either to fully-autonomous driving vehicles or just to assisted driving. In the introduction, the authors describe their proposed model in terms of three modifications to the YOLOv8 model: (1) Integration of specialised layers and modules designed to handle noise and distortion more effectively; (2) Advanced data augmentation techniques to simulate rain streaks, droplets, fog, and glare; and (3) weight pruning to ease complexity and reduce inference time. These ideas as presented in the introduction gave the impression of exceptionally sound and strong work. 
I have no doubts that there is a lot of potential in investigating these three avenues, and especially points (1) and (2). weaknesses: Unfortunately, despite the impressive claims and ideas from the introduction, I struggle to find in the paper whether these things were actually accomplished or not. I will divide the weaknesses into major and minor ones: * Major weakness 1: The claim of "Integration of specialised layers and modules designed to handle noise and distortion more effectively", arguably the most important modification from a technical/research point of view, is never detailed in the paper. Image 1 shows the proposed architecture, but there is no explanation regarding which are the new layers/modules introduced. The fact that they should be "specialised to handle noise and distortion" is an interesting idea but its implementation and functioning need to be explained in detail. How are these components specialised, and why should such specialisation make them handle noise/distortion better? There is no discussion on why particular layers/modules were chosen, and how they influence the final results with respect to the original. Note that the Compound Scaling technique cannot be equated to the claim of introducing new layers and modules specialised as described above. Compound Scaling is a hyperparameter tuning technique which, while surely useful, is not particularly novel or groundbreaking. * Major weakness 2: The authors describe at length how simulation of rain, droplets, fog, glare, etc. can be applied to augment the dataset and consequently make the model robust against those interferences. However, in the Dataset Description section, the authors only state that they apply "basic data augmentation techniques" (line 353), such as flipping, scaling, brightness and blurring adjustments. This is in direct contrast to the introductory claim of "advanced data augmentation techniques" (line 095). The claim that these augmentations would mimic the effect of rain is clearly not believable and incorrect. Perhaps only blurring could, somehow, mimic reduced visibility thanks to fog or rain, but even so, it depends greatly on the type of blurring applied, which is not described. If any of the important data augmentation techniques (actual simulations of rain, droplets, fog, etc. either via a physics engine or other methods) are applied, it is not evident from the paper. * Major weakness 3: I disagree with the claim at lines 384-385 that the modifications led to substantial improvement. The final performance of YOLOv8++ is only very marginally better than the original YOLOv8. No statistical test is performed to check whether the difference shown in the table is significant. I am also unsure about the validity of calculating the average, at the bottom of Table 1, in that way, basically taking the average mAP score of the 5 datasets. Each dataset contains a different number of images, and perhaps a relative weighting should be applied. In addition, and connected to point 2 above regarding augmentation, as far as I understand, the only dataset that natively contains rainy images is DAWN-rainy. Therefore, I think this dataset should be the one used for comparison, as it is a faithful example of real-life rainy conditions, and not a synthesised one. From Table 1, it is clear that performance on DAWN-rainy is equal between the original and proposed method. 
* Major weakness 4: There is no mention of the train/test split or other essential components of training, for instance how the Compound Scaling was performed (and how the corresponding validation dataset is chosen). My suggestion would be to train the model on BDD100K and Cityscapes (with and without augmentations), and use DAWN-rainy as the test dataset only. This would give a better estimate of real-world performance of the proposed method. * Minor weakness 1: The authors spend an enormous amount of space (1 full page, 20% of the paper!) explaining weight pruning, which is an extremely basic and well-known concept. I suggest summarising weight pruning in no more than two sentences, and using the space for more important and interesting topics, like the explanation of the additional specialised layers/modules. * Minor weakness 2: The authors base their work on the YOLOv8 model, which seems to be 2 versions behind the current latest YOLO (see line 053). There is no explanation as to why they chose an outdated version. Even assuming that, for some reason, the v8 version was the most appropriate, the performance of YOLOv8++ should then be compared with that of YOLOv10. * Minor weakness 3: The claims at lines 400-401 are unsubstantiated, as the authors do not provide a measure of improved robustness or efficiency. The claim of improved accuracy is disputed by the points I raise above. The claims in lines 411-421 are likewise unsubstantiated as no comparison of efficiency (for instance by measuring training time, number of parameters, or memory footprint) is provided between the original and proposed model. confidence: 4 justification: The ideas of the paper are interesting, and the field of application is both important and timely, but it is not evident from the body of the work that the proposed ideas were actually carried out as presented in the introduction. Several claims are unsubstantiated or wrong. What could be the main technical/research contributions of the work are not discussed or detailed at a sufficient level. The improvements in performance appear to be so marginal that I doubt their statistical relevance. The level of detail is such that the work is not reproducible or verifiable even for an experienced researcher. final_rebuttal_confidence: 4 final_rebuttal_justification: The authors do not implement the ideas they put forward. They claim to integrate "specialized layers and modules that are designed to handle noise and distortions more effectively", but these "layers and modules" are only 1 CNN layer, which is neither specialised nor designed. This would have been the one novel, and important, contribution of this paper, but it is not done. The rest of the techniques are interesting, but are straightforward applications of well-established concepts (e.g. weight pruning), which yield results well within expectations. One technique, namely the generation of rain patterns in the datasets, is perhaps more interesting than the others. But no test is performed for other weather conditions for which the authors also claim their method works, for example fog. All in all, this paper contains good ideas, but most of the claims are not substantiated or verified, and the major point that could be novel is actually not done.
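As an illustration of what a rain-specific augmentation (as opposed to the reported flips, scaling, brightness adjustment and blurring) could look like, here is a minimal sketch that overlays synthetic rain streaks on an 8-bit BGR image. This is not the paper's method; the function name and all parameter values are arbitrary choices made for the sketch.

```python
import numpy as np
import cv2

def add_rain_streaks(image, density=0.0015, length=15, angle_deg=-10, alpha=0.6):
    """Overlay synthetic rain streaks and slight blur on a uint8 BGR image."""
    h, w = image.shape[:2]
    # Random bright points that will be smeared into streaks.
    streaks = np.zeros((h, w), dtype=np.float32)
    n_drops = int(density * h * w)
    streaks[np.random.randint(0, h, n_drops), np.random.randint(0, w, n_drops)] = 1.0
    # Motion-blur the points along one direction to form slanted streaks.
    kernel = np.zeros((length, length), dtype=np.float32)
    kernel[length // 2, :] = 1.0 / length
    rot = cv2.getRotationMatrix2D((length / 2, length / 2), angle_deg, 1.0)
    kernel = cv2.warpAffine(kernel, rot, (length, length))
    streaks = cv2.filter2D(streaks, -1, kernel)
    rain_layer = (255 * np.clip(streaks, 0, 1)).astype(image.dtype)
    rainy = cv2.addWeighted(image, 1.0, cv2.merge([rain_layer] * 3), alpha, 0)
    # Mild blur to mimic the reduced visibility discussed above.
    return cv2.GaussianBlur(rainy, (3, 3), 0)
```

Comparing a model trained with such an augmentation against one trained with only the basic augmentations, and testing both on DAWN-Rainy, would address weaknesses 2 and 4 directly.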
fgs9wdfLkn
YOLOv8++ with Weights Pruning for Road Object Detection in Rainy Environment
[]
Object detection on roadways is crucial for autonomous driving and advanced driver assistance systems. However, adverse weather conditions, particularly rain, significantly degrade the performance of these systems. This paper presents a novel approach to enhance road object detection in rainy weather scenarios by applying a modified YOLOv8 model. The proposed YOLOv8++ model includes specialized data augmentation techniques to simulate rainy conditions, adjustments in the network architecture to improve robustness against rain-induced noise, and optimized training strategies to enhance model performance. The study leverages BDD100K, Cityscapes and DAWN-Rainy datasets consisting of various road scenarios under different intensities of rain. We systematically augment these datasets to ensure the model learns to identify objects obscured by rain streaks and reflections. Our YOLOv8++ model introduces enhancements in the feature extraction layers, enabling better handling of occlusions and reduced visibility. Extensive experiments demonstrate that our model outperforms the baseline YOLOv8 and other state-of-the-art object detection models in terms of mean Average Precision (mAP) under rainy conditions. Additionally, to ensure the model's efficiency and suitability for real-time applications, we apply a network pruning technique, which reduces the model size and computational requirements without sacrificing performance. This research contributes to the field of autonomous driving by providing a more reliable object detection system for adverse weather conditions, enhancing overall road safety.
[ "Autonomous Driving", "Object Detection", "YOLOv8", "Weight Pruning" ]
https://openreview.net/pdf?id=fgs9wdfLkn
https://openreview.net/forum?id=fgs9wdfLkn
edHzSBCKkS
decision
1,730,901,556,084
fgs9wdfLkn
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Reject
fgs9wdfLkn
YOLOv8++ with Weights Pruning for Road Object Detection in Rainy Environment
[]
Object detection on roadways is crucial for autonomous driving and advanced driver assistance systems. However, adverse weather conditions, particularly rain, significantly degrade the performance of these systems. This paper presents a novel approach to enhance road object detection in rainy weather scenarios by applying a modified YOLOv8 model. The proposed YOLOv8++ model includes specialized data augmentation techniques to simulate rainy conditions, adjustments in the network architecture to improve robustness against rain-induced noise, and optimized training strategies to enhance model performance. The study leverages BDD100K, Cityscapes and DAWN-Rainy datasets consisting of various road scenarios under different intensities of rain. We systematically augment these datasets to ensure the model learns to identify objects obscured by rain streaks and reflections. Our YOLOv8++ model introduces enhancements in the feature extraction layers, enabling better handling of occlusions and reduced visibility. Extensive experiments demonstrate that our model outperforms the baseline YOLOv8 and other state-of-the-art object detection models in terms of mean Average Precision (mAP) under rainy conditions. Additionally, to ensure the model's efficiency and suitability for real-time applications, we apply a network pruning technique, which reduces the model size and computational requirements without sacrificing performance. This research contributes to the field of autonomous driving by providing a more reliable object detection system for adverse weather conditions, enhancing overall road safety.
[ "Autonomous Driving", "Object Detection", "YOLOv8", "Weight Pruning" ]
https://openreview.net/pdf?id=fgs9wdfLkn
https://openreview.net/forum?id=fgs9wdfLkn
QV7o6VQTXD
meta_review
1,730,468,366,240
fgs9wdfLkn
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission35/Area_Chair_YF74" ]
NLDL.org/2025/Conference
2025
metareview: This paper deals with improving the YOLOv8 object detector to perform road object detection under rainy conditions. Several positive aspects (motivation and importance of the problem, clarity and simplicity of the model, use of public datasets) were identified by the reviewers. However, more numerous negative aspects were also raised. While the authors provide a systematic rebuttal to reviewers' comments and their additional details were judged valuable, there are still some major concerns that remain after the rebuttal, among which are the lack of novelty and the lack of detail/justification in the methodology and the experiments. For instance, adding a convolutional layer is neither original nor specialized to handle noise or distortion. Weight pruning has been extensively studied. Comparison with rain-specific models is limited. Data augmentations seem basic with respect to the objective of simulating rainy conditions. Based on these weaknesses, all reviewers suggest rejection. recommendation: Reject suggested_changes_to_the_recommendation: 2: I'm certain of the recommendation. It should not be changed confidence: 5: The area chair is absolutely certain
fgs9wdfLkn
YOLOv8++ with Weights Pruning for Road Object Detection in Rainy Environment
[]
Object detection on roadways is crucial for autonomous driving and advanced driver assistance systems. However, adverse weather conditions, particularly rain, significantly degrade the performance of these systems. This paper presents a novel approach to enhance road object detection in rainy weather scenarios by applying a modified YOLOv8 model. The proposed YOLOv8++ model includes specialized data augmentation techniques to simulate rainy conditions, adjustments in the network architecture to improve robustness against rain-induced noise, and optimized training strategies to enhance model performance. The study leverages BDD100K, Cityscapes and DAWN-Rainy datasets consisting of various road scenarios under different intensities of rain. We systematically augment these datasets to ensure the model learns to identify objects obscured by rain streaks and reflections. Our YOLOv8++ model introduces enhancements in the feature extraction layers, enabling better handling of occlusions and reduced visibility. Extensive experiments demonstrate that our model outperforms the baseline YOLOv8 and other state-of-the-art object detection models in terms of mean Average Precision (mAP) under rainy conditions. Additionally, to ensure the model's efficiency and suitability for real-time applications, we apply a network pruning technique, which reduces the model size and computational requirements without sacrificing performance. This research contributes to the field of autonomous driving by providing a more reliable object detection system for adverse weather conditions, enhancing overall road safety.
[ "Autonomous Driving", "Object Detection", "YOLOv8", "Weight Pruning" ]
https://openreview.net/pdf?id=fgs9wdfLkn
https://openreview.net/forum?id=fgs9wdfLkn
QUIlu1K65P
official_review
1,728,321,429,654
fgs9wdfLkn
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission35/Reviewer_zRDL" ]
NLDL.org/2025/Conference
2025
title: YOLOv8++ with Weights Pruning for Road Object Detection in Rainy Environment summary: The authors present a methodology for image object detection, using the YOLO model, that is intended to also perform well under rainy conditions by using augmentation techniques, in particular horizontal flips, image scaling, brightness adjustments and blurring. The authors also present a pruning method to achieve a sparse neural network for efficiency considerations. The work uses known data sets as benchmarks: BDD100K, Cityscapes and DAWN-Rainy. The methodology, called YOLOv8++, outperforms previous YOLO versions on average across the data sets. The authors moreover show, for the particular datasets, that the pruning technique does not significantly reduce the performance of the model for up to 50 % pruning. strengths: 1: The authors describe their work in simple terms, and the purpose is clearly defined, namely the use of augmentation techniques to improve on object detection under rainy conditions. 2: The authors are using several data sets as benchmarks, which enables a robust comparison with other methods/models. 3: Ablation study of the pruning technique to see the trade-off between degree of pruning and loss in performance. weaknesses: 1. The methodologies do not present anything particularly innovative: The augmentation techniques are well-known. The pruning technique is also well-known. See "Learning both weights and connections for efficient neural networks" by Han et al. published in NIPS'15. This paper should have been cited. 2. The improvement in performance is not particularly large, and whether the method is significantly better than the others could be better inferred if the models were retrained multiple times per data set, and the results were given in mean +- standard deviation. 3. The paper did not include sufficient information on how the augmentation was achieved, and instead the pruning technique took up more space. Different augmentation techniques should be considered and compared against each other, for instance in an ablation study. confidence: 4 justification: The assessment is mostly based on the fact that the work is not considered innovative, as the techniques presented in the paper do not bring anything new to the table (classic augmentation of images and pruning techniques for neural networks). Moreover, it is unclear whether the model in fact, statistically speaking, significantly outperforms the other models. final_rebuttal_confidence: 4 final_rebuttal_justification: I consider the revised version improved, with the details in Algorithms 1 and 2 as well as Figure 2 explaining the specifics and making the results reproducible. However, importantly, novelty in the paper is still missing. It is still the case that "application of AI"-papers can be highly valuable, especially for considering the effect of certain AI methodologies in different scenarios. However, in this case, whether there is a true positive effect of their procedure and how large that would be is unclear to me. Correctness would be improved by presenting the variation in model performance, such as in mean +- std. An ablation study of the image augmentation, which could reveal the significance of each augmentation procedure, is missing.
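For context, the magnitude-based pruning of Han et al. (NIPS 2015) that point 1 refers to can be sketched in a few lines; this is an illustration of the general technique, not the paper's exact procedure, and the default 50 % compression ratio simply mirrors the setting discussed above.

```python
import torch

def magnitude_prune(model: torch.nn.Module, compression_ratio: float = 0.5):
    """Zero out the globally smallest-magnitude weights (Han et al., 2015 style).

    Illustrative sketch: prunes weight matrices/kernels (not biases) and returns
    binary masks so pruned connections can be kept at zero during fine-tuning.
    """
    weights = [p for p in model.parameters() if p.dim() > 1]
    all_mags = torch.cat([p.detach().abs().flatten() for p in weights])
    k = max(1, int(compression_ratio * all_mags.numel()))
    threshold = torch.kthvalue(all_mags, k).values
    masks = []
    with torch.no_grad():
        for p in weights:
            mask = (p.abs() > threshold).to(p.dtype)
            p.mul_(mask)          # remove the pruned connections in place
            masks.append(mask)
    return masks
```

During fine-tuning, one would typically re-apply each mask after every optimizer step so the pruned weights stay at zero; reporting mean +- std of mAP over several such prune-and-retrain runs would also address point 2.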
fgs9wdfLkn
YOLOv8++ with Weights Pruning for Road Object Detection in Rainy Environment
[]
Object detection on roadways is crucial for autonomous driving and advanced driver assistance systems. However, adverse weather conditions, particularly rain, significantly degrade the performance of these systems. This paper presents a novel approach to enhance road object detection in rainy weather scenarios by applying a modified YOLOv8 model. The proposed YOLOv8++ model includes specialized data augmentation techniques to simulate rainy conditions, adjustments in the network architecture to improve robustness against rain-induced noise, and optimized training strategies to enhance model performance. The study leverages BDD100K, Cityscapes and DAWN-Rainy datasets consisting of various road scenarios under different intensities of rain. We systematically augment these datasets to ensure the model learns to identify objects obscured by rain streaks and reflections. Our YOLOv8++ model introduces enhancements in the feature extraction layers, enabling better handling of occlusions and reduced visibility. Extensive experiments demonstrate that our model outperforms the baseline YOLOv8 and other state-of-the-art object detection models in terms of mean Average Precision (mAP) under rainy conditions. Additionally, to ensure the model's efficiency and suitability for real-time applications, we apply a network pruning technique, which reduces the model size and computational requirements without sacrificing performance. This research contributes to the field of autonomous driving by providing a more reliable object detection system for adverse weather conditions, enhancing overall road safety.
[ "Autonomous Driving", "Object Detection", "YOLOv8", "Weight Pruning" ]
https://openreview.net/pdf?id=fgs9wdfLkn
https://openreview.net/forum?id=fgs9wdfLkn
A4UKMfqyth
official_review
1,727,080,476,959
fgs9wdfLkn
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission35/Reviewer_KSSN" ]
NLDL.org/2025/Conference
2025
title: Lack of novelty and detail in augmentation and architectural adjustments summary: The paper proposes YOLOv8++, a modified version of YOLOv8 aimed at improving road object detection in rainy conditions. It claims to introduce new data augmentation techniques, make architectural adjustments for handling rain-induced noise, and apply weight pruning to optimize the model for real-time applications. The model is tested using several datasets, including BDD100K and DAWN-Rainy, and reportedly outperforms the baseline YOLOv8 and other detection models in terms of mean Average Precision (mAP) under rainy conditions. strengths: Relevant problem: The paper addresses an important issue in autonomous driving: improving object detection in bad weather conditions. Efficiency considerations: The use of weight pruning is a good approach for improving the model’s real-time performance, which is key for practical applications like autonomous driving. weaknesses: Limited detail in augmentation techniques: The paper mentions using advanced data augmentation for rainy conditions but does not give details beyond basic transformations like flipping, color distortion, and blurring. This lack of specific techniques weakens the contribution, especially when other works have explored more rain-specific augmentations. Unclear architectural modifications: The claimed architectural changes are vague. Simply tuning hyperparameters like depth and width does not qualify as meaningful model innovation. Figure 1, intended to explain the improved architecture, does not highlight what specifically sets YOLOv8++ apart from YOLOv8. Additionally, there is no clear explanation of how these hyperparameters were optimized in practice. Missing comparative analysis: The paper does not compare its performance with other models specifically designed for rain, even though it acknowledges the existence of such work. Over-explanation of basic methods: The section on weight pruning is overly long and focuses on basic concepts, offering little new insight. The pruning results do not add much scientific value, as they fail to demonstrate notable improvements or trade-offs beyond what is already known or expected. Limited scientific contribution: The paper's overall contribution is weak. The augmentation techniques and architectural changes are either too simple or poorly explained, while the pruning results are not particularly innovative or useful. confidence: 4 justification: While the paper addresses a relevant and practical problem—improving object detection in rainy conditions—it lacks sufficient depth and novelty to warrant acceptance. The augmentation techniques are not clearly explained beyond basic transformations, and the architectural modifications appear to be limited to simple hyperparameter tuning. Moreover, the pruning approach is standard and does not offer new insights or significant improvements. Overall, the paper's contributions are too incremental, with insufficient detail and innovation, making it hard to consider it a substantial addition to the field.
fgs9wdfLkn
YOLOv8++ with Weights Pruning for Road Object Detection in Rainy Environment
[]
Object detection on roadways is crucial for autonomous driving and advanced driver assistance systems. However, adverse weather conditions, particularly rain, significantly degrade the performance of these systems. This paper presents a novel approach to enhance road object detection in rainy weather scenarios by applying a modified YOLOv8 model. The proposed YOLOv8++ model includes specialized data augmentation techniques to simulate rainy conditions, adjustments in the network architecture to improve robustness against rain-induced noise, and optimized training strategies to enhance model performance. The study leverages BDD100K, Cityscapes and DAWN-Rainy datasets consisting of various road scenarios under different intensities of rain. We systematically augment these datasets to ensure the model learns to identify objects obscured by rain streaks and reflections. Our YOLOv8++ model introduces enhancements in the feature extraction layers, enabling better handling of occlusions and reduced visibility. Extensive experiments demonstrate that our model outperforms the baseline YOLOv8 and other state-of-the-art object detection models in terms of mean Average Precision (mAP) under rainy conditions. Additionally, to ensure the model's efficiency and suitability for real-time applications, we apply a network pruning technique, which reduces the model size and computational requirements without sacrificing performance. This research contributes to the field of autonomous driving by providing a more reliable object detection system for adverse weather conditions, enhancing overall road safety.
[ "Autonomous Driving", "Object Detection", "YOLOv8", "Weight Pruning" ]
https://openreview.net/pdf?id=fgs9wdfLkn
https://openreview.net/forum?id=fgs9wdfLkn
2vW1EvDAFE
official_review
1,728,515,403,904
fgs9wdfLkn
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission35/Reviewer_PW4f" ]
NLDL.org/2025/Conference
2025
title: Interesting and Many Branched Work, but Incomplete summary: The paper attempts to address three areas in the field of object detection: - Object Detection for Adverse Weather (ODAW) - Efficiency through Weight Pruning - Model Changes via Compound Scaling While the project is ambitious and attempts to tackle important challenges, it appears incomplete and lacks clarity in several critical areas. The combination of multiple partially addressed topics obscures the potential valuable contributions. strengths: - Strong Motivation for ODAW: The paper provides a solid rationale for focusing on object detection in adverse weather, highlighting the importance of this challenge. - Clear Weight Pruning Methodology: The approach to weight pruning is well-described, allowing for replication. - Interesting Findings on Pruning Impact: Results indicate that pruning does not significantly deteriorate the model performance when using the proposed model, which is a noteworthy contribution although limited by the somewhat lacking description of the model. This would be stronger if using the known YOLOv8 for the pruning experiments. weaknesses: The authors have attempted several improvements to the object detector model YOLOv8. First, they apply data augmentation to improve performance on data from adverse weather conditions. Second, they apply changes to the model architecture based on the success of EfficientNet, and finally they apply a weight pruning technique to reduce the size of the proposed model. The experiments indicate that the weight pruning does not severely affect the performance of the model on the datasets BDD100K, CityScapes and DAWN-Rainy. However, there are many unsupported claims, the methodological section does not properly cover the experiments and components used, and the authors have not chosen experiments that isolate variables and thus reveal the effects they seek to understand, so the learning potential is mostly lost. confidence: 4 justification: This section is long, but consider it an effort to provide proper feedback aiming for you to gain and share more knowledge from the work you have done: Overall, the questions being tackled are important and interesting and the methods seem relevant, but the work is incomplete. The authors attempt to address 3 main points: 1. Object detection for Adverse weather (ODAW). 2. Model Efficiency based on weight pruning 3. Model efficiency based on compound scaling. Let us take the main points one at a time: 1. Object detection for adverse weather: 1. Motivation: Good. 3. Method: Lacking. For the paper to be informative about the method used, it should be clear to the reader what has actually been done. The authors quickly refer to how such augmentations are typically done and describe the used method in general terms, but without being specific enough for someone to replicate the method. 5. Experimental design and evaluation: Lacking. With benchmark datasets and a well established metric, this part should not need a lot of information and it is not too short. However, there are some unclear details here that are very important. The paper states that the target is to apply data augmentations to improve the model for adverse weather conditions without depending on real data with adverse weather. The authors have used datasets both with and without adverse weather. Have the augmentations been applied for training and real adverse weather data for testing? 
If so, the paper would benefit greatly from making this point clearer, as this would then indicate whether the augmentation is similar enough to the real thing to allow us to do without the extra real data. If not, I would suggest setting up experiments to compare the same model trained with and without the augmentation and then testing both on real adverse weather to see if the augmentation helped. Cross-validation would not reveal the useful effects here, since the cross-validation would both train and test on the augmented data. 4. Results: Table 1 relates perhaps more to model comparison than to augmentation in practice. Model comparison we will get back to. 5. Discussion and reflection: Lacking. It is not obvious what we learn related to the augmentation in Table 1. If it was not meant to be tested, it might be better not to put so much weight on it in the rest of the paper. 2. Efficiency with weight pruning (overall, better addressed than the augmentation part): 1. Motivation: Good. 2. Method: Good, but with a major problem! It is clear what has been done for the pruning, and others could use this work for replication. However, the model part, and especially the measure of efficiency, is not as clear. To claim anything about efficiency, there should be an evaluation based on the number of parameters or FLOPs required, or a similar measure of the resources needed for training or use of the model (a minimal example of the kind of reporting I mean is sketched at the end of this review). Thus it is not clear from the paper whether this is a model that has been upscaled in all directions and then pruned, or whether it is actually downscaled as well. This would be important for the results and should be discussed. 3. Experimental design and evaluation: Small, but reasonably so. The evaluation metric is clearly stated and the table is relatively self-explanatory, although it might be helpful to use the term "compression ratio" instead of "pruning", as this is the term you introduce earlier. 4. Results: OK (given the experimental section). The presentation of the results is relatively clear, although the number of digits should be lower. The high number of digits makes many interesting results in the table less visible while not giving more useful information, as the values appear relatively noisy considering the variability of the numbers. It even appears that the deterioration of the model performance is far from monotone. 5. Discussion and reflection: OK. You show that the pruning does not seem to affect the model performance too much. This is interesting and a good finding to show. 3. Model changes by compound scaling: 1. Motivation and related work: OK. 2. Method: Lacking. The components of the scaling are discussed, but not the actual scaling method, apart from the final network. 3. Experimental design and evaluation: Based on the motivation, the target here would be to improve the efficiency of the model without too high a penalty on performance. To understand more about the trade-off between efficiency and performance, the results should again include some measure of the size or efficiency of the models. Note that the methodological section does not actually make it clear that the proposed model is more efficient or smaller than the original YOLOv8. The figure might show it for those who know YOLOv8 intimately, but this should not be assumed. Note that this also influences the takeaways from the weight pruning. 4. Results: Without a better understanding of the model in terms of size or compute requirements, it is difficult to make sense of the numbers and thus gain knowledge from the findings. 5.
Discussion and reflection: Lacking. The paper addresses important challenges in object detection but falls short due to incomplete methodologies, unclear experimental setups, and insufficient analysis of results. The lack of detailed descriptions prevents replication and understanding of the work's impact. Key metrics and discussions are missing, making it difficult to assess the contributions fully. Recommendation: Not accepted. To strengthen the paper, the authors should: - Provide detailed methodologies for data augmentation in adverse weather and compound scaling. - Clarify experimental setups, making sure they effectively measure the impact of the proposed methods. - Include essential metrics such as model size, number of parameters, and computational requirements for efficiency evaluation. - Reflect on the findings to highlight how each component contributes to new knowledge. By addressing these issues, the paper could provide more substantial contributions to research.
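Sketch referred to above (efficiency reporting): for concreteness, here is a minimal example of the kind of parameter accounting I am asking for. It is my own illustration using a standard torchvision backbone as a placeholder, not the authors' model, and FLOP counting would need an additional profiler.

```python
import torch
from torchvision.models import resnet18  # placeholder backbone, NOT the authors' detector

model = resnet18()  # stands in for the proposed (pruned) model
total = sum(p.numel() for p in model.parameters())
nonzero = sum((p != 0).sum().item() for p in model.parameters())
print(f"total parameters:    {total / 1e6:.2f} M")
print(f"non-zero parameters: {nonzero / 1e6:.2f} M "
      f"(compression ratio {total / max(nonzero, 1):.2f}x)")
```

Reporting numbers like these (and ideally FLOPs or latency) next to the accuracy tables would make the efficiency and compression claims verifiable.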
ex52UHBCUh
FGGP: Fixed-Rate Gradient-First Gradual Pruning
[]
In recent years, the increasing size of deep learning models and their growing demand for computational resources have drawn significant attention to the practice of pruning neural networks, while aiming to preserve their accuracy. In unstructured gradual pruning, which sparsifies a network by gradually removing individual network parameters until a targeted network sparsity is reached, recent works show that both gradient and weight magnitudes should be considered. In this work, we show that such mechanism, e.g., the order of prioritization and selection criteria, is essential. We introduce a gradient-first magnitude-next strategy for choosing the parameters to prune, and show that a fixed-rate subselection criterion between these steps works better, in contrast to the annealing approach in the literature. We validate this on CIFAR-10 dataset, with multiple randomized initializations on both VGG-19 and ResNet-50 network backbones, for pruning targets of 90, 95, and 98% sparsity and for both initially dense and 50% sparse networks. Our proposed fixed-rate gradient-first gradual pruning (FGGP) approach outperforms its state-of-the-art alternatives in most of the above experimental settings, even occasionally surpassing the upperbound of corresponding dense network results, and having the highest ranking across the considered experimental settings.
[ "Sparse networks; Gradual pruning; Weight selection criteria" ]
https://openreview.net/pdf?id=ex52UHBCUh
https://openreview.net/forum?id=ex52UHBCUh
yGIFpDHi0x
meta_review
1,730,460,797,386
ex52UHBCUh
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission36/Area_Chair_xjsr" ]
NLDL.org/2025/Conference
2025
metareview: This work introduces a new criterion for pruning neural networks based on gradient magnitude, asserting that large gradients in weights indicate contribution regardless of the weight magnitude. The authors propose an innovative two-step approach to address theoretical conjectures on gradient magnitude, aiming to avoid pruning inactive weights too early by focusing only on those that are not actively updating. The paper is well-written, clear, and thorough, with interpretable explanations that support reproducibility. While the pruning criterion is interesting, there is a consensus among reviewers that the experimental gains are modest and exhibit high variance, which limits the perceived benefits of the proposed method. Although the authors rightly point out that other approaches often yield minimal improvements, this should not be taken as a benchmark. Given the modest empirical improvements demonstrated here and the use of only one dataset, it is challenging to justify the broader applicability of the proposed method. Strengthening the empirical analysis through a more comprehensive study on generalization and pre-convergence pruning performance would improve the rigor and justification of the method. recommendation: Reject suggested_changes_to_the_recommendation: 2: I'm certain of the recommendation. It should not be changed confidence: 4: The area chair is confident but not absolutely certain
ex52UHBCUh
FGGP: Fixed-Rate Gradient-First Gradual Pruning
[]
In recent years, the increasing size of deep learning models and their growing demand for computational resources have drawn significant attention to the practice of pruning neural networks, while aiming to preserve their accuracy. In unstructured gradual pruning, which sparsifies a network by gradually removing individual network parameters until a targeted network sparsity is reached, recent works show that both gradient and weight magnitudes should be considered. In this work, we show that such mechanism, e.g., the order of prioritization and selection criteria, is essential. We introduce a gradient-first magnitude-next strategy for choosing the parameters to prune, and show that a fixed-rate subselection criterion between these steps works better, in contrast to the annealing approach in the literature. We validate this on CIFAR-10 dataset, with multiple randomized initializations on both VGG-19 and ResNet-50 network backbones, for pruning targets of 90, 95, and 98% sparsity and for both initially dense and 50% sparse networks. Our proposed fixed-rate gradient-first gradual pruning (FGGP) approach outperforms its state-of-the-art alternatives in most of the above experimental settings, even occasionally surpassing the upperbound of corresponding dense network results, and having the highest ranking across the considered experimental settings.
[ "Sparse networks; Gradual pruning; Weight selection criteria" ]
https://openreview.net/pdf?id=ex52UHBCUh
https://openreview.net/forum?id=ex52UHBCUh
mzWwqDjuw1
official_review
1,728,328,874,463
ex52UHBCUh
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission36/Reviewer_ZXiL" ]
NLDL.org/2025/Conference
2025
title: This paper proposes a new pruning strategy, FGGP, but with very limited empirical evidence or theoretical proof. summary: While magnitude pruning is a very established method, the paper argues that if the gradients of weights are large, those weights are still contributing to the loss, and magnitude pruning alone may not be effective at minimizing the loss. Based on this rationale, this paper proposes a two-step pruning algorithm that considers gradient magnitude before weight magnitude. Specifically, it first creates a subset of weights by selecting those with the smallest gradient magnitudes and then applies magnitude pruning within this subset. As a result, it excludes weights with large gradient magnitudes from pruning, even if they have the smallest magnitudes (which traditional magnitude pruning would prune); a sketch of my reading of this procedure is included at the end of this review. The method is evaluated by measuring accuracy on CIFAR-10 using VGG-19 and ResNet-50. The results show situational, marginal improvements over several other gradual pruning and sparse training methods. strengths: - The paper introduces a unique two-step pruning method that prioritizes gradient before magnitude. While the justification may be limited, the idea itself is a creative attempt to determine the saliency of weights. - The description of the proposed method is very clear and easy to follow. weaknesses: The introduction reads as very 'scattered', without emphasis on what the problem is, where the gap is, and what problem the paper is proposing to resolve. It lacks a summary of key contributions, typically listed at the end of the introduction for such papers. Core (Method+Conclusion): - "Assuming a simpler first-order... several works aim to minimize this by simply pruning parameters with large magnitudes, but this only applies if those gradients are not large" - they actually remove the smallest-magnitude weights (not the largest), as they contribute the least to the output. Also, stating that magnitude pruning only works when gradients are small is oversimplified, because it is based on the assumption that small-magnitude weights have a small contribution to the network, regardless of gradient. - The author seems to have a misunderstanding of GraNet - in the method (Fig 1b) and conclusion, the paper seems to claim GraNet uses a two-step ranking strategy, similar to what this paper proposes, but in reverse - magnitude first, gradient second. However, to the best of my understanding, it prunes by magnitude and regrows by gradient - these are two separate processes. Please refer to the original GraNet paper: "We prune the weights with the smallest magnitude, as it has evolved as the standard method when pruning happens during training" and "Again, we use the gradient as the importance score for regeneration, same as the regrow method as used in RigL". - While the core idea that considering gradient magnitude first ensures weights contributing more to the loss are protected from pruning is somewhat reasonable in theory, the paper fails to provide a thorough argument for why sorting by gradient first is better than the other way around. Why is this sorting algorithm even necessary? Results: - Results are only marginally better sometimes, and it is not clear whether this is just by chance. Overall, the results are unconvincing. - A 2024/2025 pruning paper should include at least some form of ImageNet or Tiny-ImageNet benchmarks. It needs more than CIFAR-10. - Ablation study a is well designed - the author picks r={0.20, 0.50, 0.80, 0.95} for the gradient-based subset cutoff, which is then used as a pool for magnitude pruning.
However, Figure 2a fails to convey a story. For instance, at 95% sparsity it is red>green>blue>orange, but at 98% sparsity it is blue>green>orange>red. Red (r=0.95), which approaches an extreme case resembling gradual magnitude pruning, performs much better at 95%. This seems contradictory to your hypothesis. The author then claims r=0.5 performs the best overall - but does it? It is worse than r=0.8 at both 90% and 95%, and only marginally better at 98%. If r=0.95, or 1 (this should be tested as well), is better, doesn't that just kill the novelty of this paper? - Ablation b once again fails to provide a good narrative. The results are different at different sparsities - it lacks consistency. Even at the same sparsity, it is hard to conclude anything. For example, at 98% it goes gradient-first, fixed > magnitude-first, fixed > gradient-first, cos. We see that gradient-first, cos is worse than magnitude-first, so why gradient-first? Then again, 95% has a completely different order. The author puts emphasis on the 'order of prioritization', but fails to provide any support here. Background: - The Background is too long. It has a whole section on structured pruning when the proposed method is exclusively unstructured. Since the author is not re-inventing scheduling, the literature on this may be excessive. - The Background contains technical errors. For instance, "RigL [16] uses a two-step pruning criterion by first selecting a subset of weights with the smallest magnitudes and second selecting a subset therein with the smallest gradients to prune." - it is not a two-step criterion; it simply prunes by magnitude and regrows by gradient simultaneously at pre-defined intervals. Moreover, it is not a pruning paper but sparse training, where the sparsity stays uniform throughout training. Another example: "GraNet [17] combines the pruning schedule of [14] with the pruning criterion of [16], achieving the state-of-the-art results in unstructured gradual pruning. Note that although GraNet calls the subset selection process as weight "addition" (where the second stage is explained as if adding [back] high-gradient parameters), this is somewhat a misnomer as GraNet does not aim and cannot grow synapses inexistent at the beginning of pruning" - GraNet indeed adds weights back; refer to Figure 1 of that paper. Although GraNet is used as a key benchmark method in this paper, the author does not seem to grasp key concepts of this method or of dynamic sparse training in general. confidence: 5 justification: The decision to prune within a subset of weights with the smallest gradients aims to reduce the risk of pruning important weights. However, the effectiveness of this strategy is not well supported by strong theoretical or empirical evidence in the text. It needs much stronger, comprehensive evidence, such as running on CIFAR-100, Tiny-ImageNet or ImageNet. Moreover, to the best of my understanding, the paper also has technical inaccuracies and misunderstandings in the interpretation of other papers. final_rebuttal_confidence: 5 final_rebuttal_justification: The author has addressed some of my earlier points, so I have adjusted the original score. However, it is highly advisable to conduct additional experiments to further support this empirical study.
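Sketch referred to in the summary above: my reading of the gradient-first, magnitude-next selection, as a minimal PyTorch-style example. This is my own illustration, not the authors' code; the global, dense-tensor ranking and the pool fraction r are simplifying assumptions.

```python
import torch

def gradient_first_magnitude_next_mask(weights, grads, sparsity, r=0.5):
    """Two-step selection: restrict candidates to the fraction r of parameters
    with the smallest gradient magnitudes, then prune the smallest-magnitude
    weights within that pool until the target sparsity is reached.
    Returns a boolean mask (True = keep, False = prune)."""
    w, g = weights.flatten(), grads.flatten()
    n = w.numel()
    n_prune = int(sparsity * n)

    # Step 1: candidate pool of smallest-|gradient| parameters (at least n_prune of them)
    n_pool = max(n_prune, int(r * n))
    pool = torch.argsort(g.abs())[:n_pool]

    # Step 2: within the pool, prune the smallest-|weight| parameters
    order = torch.argsort(w[pool].abs())
    prune = pool[order[:n_prune]]

    mask = torch.ones(n, dtype=torch.bool)
    mask[prune] = False
    return mask.view_as(weights)

# toy usage
w, g = torch.randn(4, 8), torch.randn(4, 8)
mask = gradient_first_magnitude_next_mask(w, g, sparsity=0.9)
print(mask.float().mean())  # fraction of weights kept
```

Making this mechanism explicit in the paper (e.g. as pseudocode), together with a clear contrast to GraNet's separate prune/regrow steps, would remove much of the ambiguity discussed above.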
ex52UHBCUh
FGGP: Fixed-Rate Gradient-First Gradual Pruning
[]
In recent years, the increasing size of deep learning models and their growing demand for computational resources have drawn significant attention to the practice of pruning neural networks, while aiming to preserve their accuracy. In unstructured gradual pruning, which sparsifies a network by gradually removing individual network parameters until a targeted network sparsity is reached, recent works show that both gradient and weight magnitudes should be considered. In this work, we show that such mechanism, e.g., the order of prioritization and selection criteria, is essential. We introduce a gradient-first magnitude-next strategy for choosing the parameters to prune, and show that a fixed-rate subselection criterion between these steps works better, in contrast to the annealing approach in the literature. We validate this on CIFAR-10 dataset, with multiple randomized initializations on both VGG-19 and ResNet-50 network backbones, for pruning targets of 90, 95, and 98% sparsity and for both initially dense and 50% sparse networks. Our proposed fixed-rate gradient-first gradual pruning (FGGP) approach outperforms its state-of-the-art alternatives in most of the above experimental settings, even occasionally surpassing the upperbound of corresponding dense network results, and having the highest ranking across the considered experimental settings.
[ "Sparse networks; Gradual pruning; Weight selection criteria" ]
https://openreview.net/pdf?id=ex52UHBCUh
https://openreview.net/forum?id=ex52UHBCUh
k3vlpGYLhS
official_review
1,728,327,627,075
ex52UHBCUh
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission36/Reviewer_TQmq" ]
NLDL.org/2025/Conference
2025
title: Review for "FGGP: Fixed-Rate Gradient-First Gradual Pruning" summary: The paper presents a method for performing neural network pruning. It relies on considering both the gradient value and the absolute weight value for pruning, in a two-step fashion. The proposed approach is then compared to a few approaches from the literature. strengths: - The Background section is especially thorough. - Both the presentation of the algorithm and the experimental section are clear and sufficiently detailed. The experiments include a variety of pruning thresholds and benchmarks. weaknesses: 1. I feel like the theoretical contribution is somewhat thin. When compared to GraNet, the proposed approach keeps the two-step fashion where one step considers the absolute value of the weights, while the other step considers the gradient of the weights. The sorting is applied for similar purposes. Thus, even though Section 3.2 thoroughly explains the choices in the various steps of the algorithm, I feel like most of the theoretical contribution has already been made somewhere else. 2.1. The experimental section does not lead to the conclusion that the proposed approach has a significant impact on the obtained performance. It is said that « We herein argue that the criteria and the mechanism (the order, thresholds, etc) used in the prioritization of pruned parameters are essential » (line 390), yet Table 1 does not depict this specific choice of order in prioritization as « essential ». 2.2. It can be found that in most of the experiments, FGGP is the method with the largest variance, sometimes a few times larger than the runner-up's. Considering this, and the fact that only 3 random seeds were used for the results presented in Table 1, I think using many more random seeds is necessary. **Typos and such** - Lines 78 and 96: please use inline citation syntax, as in Line 92. - Lines 92 and 110 vs. 136 and 151: please be consistent in how subsection titles are used. - It would be best to name the terms from Equation (3), for « the first term » and « the second term » are confusing names. - Line 322: "initiatlization". - Though what FGGP stands for is in the title of the work, please present the name and the abbreviation in due form. - Line 376: text → test confidence: 4 justification: The theoretical contribution is somewhat thin, and the experimental section (though the diversity of experiments is remarkable) is insufficient to conclude that there are any significant empirical gains from the proposed approach (see **Weaknesses**).
ex52UHBCUh
FGGP: Fixed-Rate Gradient-First Gradual Pruning
[]
In recent years, the increasing size of deep learning models and their growing demand for computational resources have drawn significant attention to the practice of pruning neural networks, while aiming to preserve their accuracy. In unstructured gradual pruning, which sparsifies a network by gradually removing individual network parameters until a targeted network sparsity is reached, recent works show that both gradient and weight magnitudes should be considered. In this work, we show that such mechanism, e.g., the order of prioritization and selection criteria, is essential. We introduce a gradient-first magnitude-next strategy for choosing the parameters to prune, and show that a fixed-rate subselection criterion between these steps works better, in contrast to the annealing approach in the literature. We validate this on CIFAR-10 dataset, with multiple randomized initializations on both VGG-19 and ResNet-50 network backbones, for pruning targets of 90, 95, and 98% sparsity and for both initially dense and 50% sparse networks. Our proposed fixed-rate gradient-first gradual pruning (FGGP) approach outperforms its state-of-the-art alternatives in most of the above experimental settings, even occasionally surpassing the upperbound of corresponding dense network results, and having the highest ranking across the considered experimental settings.
[ "Sparse networks; Gradual pruning; Weight selection criteria" ]
https://openreview.net/pdf?id=ex52UHBCUh
https://openreview.net/forum?id=ex52UHBCUh
fSvKrsIMxH
official_review
1,728,543,503,797
ex52UHBCUh
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission36/Reviewer_27e6" ]
NLDL.org/2025/Conference
2025
title: New pruning technique for neural networks summary: This work proposes a new method for pruning neural networks. This method is based on a gradient-first, magnitude-next strategy. They conduct experiments on various models and datasets and demonstrate experimentally that this method beats the state-of-the-art methods. They consider the gradients first, focusing on the parameters with small gradient magnitudes as those to be omitted, and then they focus on small magnitudes - selecting the parameters with minimal effect on the loss. In more detail, their method chooses the parameters to prune in two steps: in the first step it ranks the parameters by their gradient magnitudes, and then it selects the smallest of these to prune. strengths: * Intuitive method that is well motivated and presented. The paper is well written and the idea is conveyed very clearly to the reader. * The proposed method beats the state-of-the-art pruning methods. The paper also includes an extensive comparison with the other methods. weaknesses: I would say that the ablation study is somewhat limited, though the paper has limited pages for publication. I would suggest extending it and including more experiments exploring the proposed method. For example, Figure 2 has different sparsity levels, but it would be more informative to include lower levels of sparsity too (e.g. starting at 10% and gradually increasing to 99%). confidence: 4 justification: The paper is good and has a good idea. I strongly believe that the readers will benefit from it. final_rebuttal_confidence: 4 final_rebuttal_justification: I have read all the reviews and comments on the paper and I believe that the paper should get accepted to this venue.
ex52UHBCUh
FGGP: Fixed-Rate Gradient-First Gradual Pruning
[]
In recent years, the increasing size of deep learning models and their growing demand for computational resources have drawn significant attention to the practice of pruning neural networks, while aiming to preserve their accuracy. In unstructured gradual pruning, which sparsifies a network by gradually removing individual network parameters until a targeted network sparsity is reached, recent works show that both gradient and weight magnitudes should be considered. In this work, we show that such mechanism, e.g., the order of prioritization and selection criteria, is essential. We introduce a gradient-first magnitude-next strategy for choosing the parameters to prune, and show that a fixed-rate subselection criterion between these steps works better, in contrast to the annealing approach in the literature. We validate this on CIFAR-10 dataset, with multiple randomized initializations on both VGG-19 and ResNet-50 network backbones, for pruning targets of 90, 95, and 98% sparsity and for both initially dense and 50% sparse networks. Our proposed fixed-rate gradient-first gradual pruning (FGGP) approach outperforms its state-of-the-art alternatives in most of the above experimental settings, even occasionally surpassing the upperbound of corresponding dense network results, and having the highest ranking across the considered experimental settings.
[ "Sparse networks; Gradual pruning; Weight selection criteria" ]
https://openreview.net/pdf?id=ex52UHBCUh
https://openreview.net/forum?id=ex52UHBCUh
dQy3XQl4kJ
decision
1,730,901,556,270
ex52UHBCUh
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Reject
ea0YJaJShO
Deep Learning for Localization of White Matter Lesions in Neurological Diseases
[ "Julia Machnio", "Mads Nielsen", "Mostafa Mehdipour Ghazi" ]
White Matter (WM) lesions, commonly observed as hyperintensities on FLAIR MRIs or hypointensities on T1-weighted images, are associated with neurological diseases. The spatial distribution of these lesions is linked to an increased risk of developing neurological conditions, emphasizing the need for location-based analyses. Traditional manual identification and localization of WM lesions are labor-intensive and time-consuming, highlighting the need for automated solutions. In this study, we propose novel deep learning-based methods for automated WM lesion segmentation and localization. Our approach utilizes state-of-the-art models to concurrently segment WM lesions and anatomical WM regions, providing detailed insights into their distribution within the brain's anatomical structure. By applying k-means clustering to the regional WM lesion load, distinct subject groups are identified to be associated with various neurological conditions, validating the method's alignment with established clinical findings. The robustness and adaptability of our method across different scanner types and imaging protocols make it a valuable tool for research and clinical practice, offering potential improvements in diagnostic efficiency and patient care.
[ "Deep learning", "segmentation", "localization", "white matter hyperintensity", "neurological disease" ]
https://openreview.net/pdf?id=ea0YJaJShO
https://openreview.net/forum?id=ea0YJaJShO
yjAigCfj8t
decision
1,730,901,556,517
ea0YJaJShO
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Accept (Oral) comment: We recommend an oral and a poster presentation, given the AC's and reviewers' recommendations.
ea0YJaJShO
Deep Learning for Localization of White Matter Lesions in Neurological Diseases
[ "Julia Machnio", "Mads Nielsen", "Mostafa Mehdipour Ghazi" ]
White Matter (WM) lesions, commonly observed as hyperintensities on FLAIR MRIs or hypointensities on T1-weighted images, are associated with neurological diseases. The spatial distribution of these lesions is linked to an increased risk of developing neurological conditions, emphasizing the need for location-based analyses. Traditional manual identification and localization of WM lesions are labor-intensive and time-consuming, highlighting the need for automated solutions. In this study, we propose novel deep learning-based methods for automated WM lesion segmentation and localization. Our approach utilizes state-of-the-art models to concurrently segment WM lesions and anatomical WM regions, providing detailed insights into their distribution within the brain's anatomical structure. By applying k-means clustering to the regional WM lesion load, distinct subject groups are identified to be associated with various neurological conditions, validating the method's alignment with established clinical findings. The robustness and adaptability of our method across different scanner types and imaging protocols make it a valuable tool for research and clinical practice, offering potential improvements in diagnostic efficiency and patient care.
[ "Deep learning", "segmentation", "localization", "white matter hyperintensity", "neurological disease" ]
https://openreview.net/pdf?id=ea0YJaJShO
https://openreview.net/forum?id=ea0YJaJShO
hVMqyguma9
official_review
1,728,511,529,782
ea0YJaJShO
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission46/Reviewer_1gxr" ]
NLDL.org/2025/Conference
2025
title: Some unclarity regarding the method design and conclusions from the experiments summary: The proposed method performs simultaneous segmentation of white matter lesions and anatomical white matter regions. Some essential aspects of the methodology remained unclear to me. In general, the manuscript is well written and understandable, but it could benefit from rewriting some parts on why and how the loss terms are combined and what the benefit is of combining the localization and segmentation for the downstream task (of localization?). The results in the appendix show that many experiments have been performed, but some more work is needed to guide the reader through the experiments and their conclusions. strengths: - Relevant biomedical application - Dataset publicly available - Dataset from three different hospitals - Dataset covering multiple vendors - Many experimental results in the appendix weaknesses: - It would have been interesting to see generalization capabilities by training only on data from e.g. Utrecht and Singapore and testing on Amsterdam, or similar. - Limited novelty regarding the methodology. Why is the segmentation needed if there are ways to automatically produce the ground truth? - Double-check that the full name is introduced before using the abbreviation (e.g. MRI, MNI, ...). - The manuscript is generally clear and well written, but there are some things that are unclear regarding the method. How is the ground truth for the WM region segmentation generated? Referring to Fig. 1A was not enough to understand the method. Why is a network trained to predict it if there is already a method to produce it? - What is the "load of WM lesions"? Please define/introduce it. - The tables in the appendix are not addressed in the text. While they show the extent of the study, they should be put in context and discussed in the text. - How are the loss components weighted? - How should k be chosen in the k-means clustering step? (One standard option is sketched at the end of this review.) confidence: 3 justification: Some more clarity is needed to understand why the tasks combined in the loss are relevant and provide an advantage over doing only the eventual downstream task. Some more explanation about the experiments reported in the appendix is needed to understand the study's findings. final_rebuttal_confidence: 3 final_rebuttal_justification: The authors addressed many of the points that I found unclear in the initial submission. There may be a lack of novelty regarding the methodology in segmentation, as also pointed out by reviewer DDNP, but I do agree with both reviewers on the work's potential relevance to the application of localization and segmentation of white matter lesions.
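Sketch referred to above (choosing k): one standard option would be to compare silhouette scores over a range of candidate k values on the regional lesion-load matrix. This is my own minimal illustration with synthetic data; the array shapes and the candidate range are assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical regional lesion-load matrix: one row per subject, one column per WM region.
rng = np.random.default_rng(0)
X = rng.random((170, 20))

scores = {}
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print(scores)
print("chosen k:", best_k)
```

Reporting such a criterion (or a clinically motivated choice of k) would make the clustering step easier to interpret.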
ea0YJaJShO
Deep Learning for Localization of White Matter Lesions in Neurological Diseases
[ "Julia Machnio", "Mads Nielsen", "Mostafa Mehdipour Ghazi" ]
White Matter (WM) lesions, commonly observed as hyperintensities on FLAIR MRIs or hypointensities on T1-weighted images, are associated with neurological diseases. The spatial distribution of these lesions is linked to an increased risk of developing neurological conditions, emphasizing the need for location-based analyses. Traditional manual identification and localization of WM lesions are labor-intensive and time-consuming, highlighting the need for automated solutions. In this study, we propose novel deep learning-based methods for automated WM lesion segmentation and localization. Our approach utilizes state-of-the-art models to concurrently segment WM lesions and anatomical WM regions, providing detailed insights into their distribution within the brain's anatomical structure. By applying k-means clustering to the regional WM lesion load, distinct subject groups are identified to be associated with various neurological conditions, validating the method's alignment with established clinical findings. The robustness and adaptability of our method across different scanner types and imaging protocols make it a valuable tool for research and clinical practice, offering potential improvements in diagnostic efficiency and patient care.
[ "Deep learning", "segmentation", "localization", "white matter hyperintensity", "neurological disease" ]
https://openreview.net/pdf?id=ea0YJaJShO
https://openreview.net/forum?id=ea0YJaJShO
f32kJGUxxW
official_review
1,728,498,418,769
ea0YJaJShO
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission46/Reviewer_DDNP" ]
NLDL.org/2025/Conference
2025
title: Lack of novelty, but could be beneficial for an audience with a clinical background summary: The author developed several segmentation models for WMH lesion and region segmentation, with refined atlas labels created and used. The proposed framework can be used for WM lesion segmentation and localization on the MICCAI 2017 WMH Segmentation Challenge dataset. Detailed analysis and discussions were conducted. strengths: The author performed comprehensive experiments on WM lesion and region segmentation. The discussion of the results in relation to subject anatomy is detailed. The figures for the method and results are illustrative. weaknesses: However, there is a certain lack of novelty, in contrast to what the author claims for segmentation and localization. The discussion sometimes lacks depth. For example, when observing that T1 is more significant for region/anatomical segmentation, compared to FLAIR for lesion segmentation, the author could explore and explain such differences. There are several possible technical errors and points to be improved: 1. In Table 1, the #scans for train and test were swapped. 2. What are the meanings behind the various regions? For example, in Figure 3 the author directly mentions that regions 10 and 12 are "larger regions with simpler curvatures". 3. Why, when reporting results, does the author sometimes use the CE+DS+SR loss and in other places the CE+DS loss (e.g., in the Appendix), especially since the author concluded that there are no significant differences between these loss functions (lines 276-280)? confidence: 4 justification: Although the method lacks novelty, the discussion and analysis could be beneficial to readers with a clinical background.
ea0YJaJShO
Deep Learning for Localization of White Matter Lesions in Neurological Diseases
[ "Julia Machnio", "Mads Nielsen", "Mostafa Mehdipour Ghazi" ]
White Matter (WM) lesions, commonly observed as hyperintensities on FLAIR MRIs or hypointensities on T1-weighted images, are associated with neurological diseases. The spatial distribution of these lesions is linked to an increased risk of developing neurological conditions, emphasizing the need for location-based analyses. Traditional manual identification and localization of WM lesions are labor-intensive and time-consuming, highlighting the need for automated solutions. In this study, we propose novel deep learning-based methods for automated WM lesion segmentation and localization. Our approach utilizes state-of-the-art models to concurrently segment WM lesions and anatomical WM regions, providing detailed insights into their distribution within the brain's anatomical structure. By applying k-means clustering to the regional WM lesion load, distinct subject groups are identified to be associated with various neurological conditions, validating the method's alignment with established clinical findings. The robustness and adaptability of our method across different scanner types and imaging protocols make it a valuable tool for research and clinical practice, offering potential improvements in diagnostic efficiency and patient care.
[ "Deep learning", "segmentation", "localization", "white matter hyperintensity", "neurological disease" ]
https://openreview.net/pdf?id=ea0YJaJShO
https://openreview.net/forum?id=ea0YJaJShO
VjbkI72xZq
meta_review
1,729,843,160,576
ea0YJaJShO
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission46/Area_Chair_8o9F" ]
NLDL.org/2025/Conference
2025
metareview: The paper has potential novelty and the authors answered the reviewers' comments. recommendation: Accept (Oral) suggested_changes_to_the_recommendation: 3: I agree that the recommendation could be moved up confidence: 5: The area chair is absolutely certain
ea0YJaJShO
Deep Learning for Localization of White Matter Lesions in Neurological Diseases
[ "Julia Machnio", "Mads Nielsen", "Mostafa Mehdipour Ghazi" ]
White Matter (WM) lesions, commonly observed as hyperintensities on FLAIR MRIs or hypointensities on T1-weighted images, are associated with neurological diseases. The spatial distribution of these lesions is linked to an increased risk of developing neurological conditions, emphasizing the need for location-based analyses. Traditional manual identification and localization of WM lesions are labor-intensive and time-consuming, highlighting the need for automated solutions. In this study, we propose novel deep learning-based methods for automated WM lesion segmentation and localization. Our approach utilizes state-of-the-art models to concurrently segment WM lesions and anatomical WM regions, providing detailed insights into their distribution within the brain's anatomical structure. By applying k-means clustering to the regional WM lesion load, distinct subject groups are identified to be associated with various neurological conditions, validating the method's alignment with established clinical findings. The robustness and adaptability of our method across different scanner types and imaging protocols make it a valuable tool for research and clinical practice, offering potential improvements in diagnostic efficiency and patient care.
[ "Deep learning", "segmentation", "localization", "white matter hyperintensity", "neurological disease" ]
https://openreview.net/pdf?id=ea0YJaJShO
https://openreview.net/forum?id=ea0YJaJShO
UuK1wyktHC
meta_review
1,730,827,216,155
ea0YJaJShO
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission46/Area_Chair_1y1g" ]
NLDL.org/2025/Conference
2025
metareview: The authors present an automated deep learning method for segmenting and localising white matter lesions (WMH) in neurological diseases. The methodology involved using state-of-the-art deep learning models, such as U-Net, UNETR, MultiResUNet, and MedNeXt, to segment WMH and anatomical white matter regions simultaneously. The study is based on the MICCAI 2017 WMH Segmentation Challenge dataset, which includes 3D T1 and FLAIR images from 170 subjects across three cohorts. The results showed that deep learning-based methods achieved high accuracy in WM lesion segmentation and localisation, with FLAIR images providing a clearer distinction between WM lesion and tissue intensities. The study also demonstrated the importance of location-specific analysis, highlighting the significance of WMH location in disease risk. Overall, the reviewers were positive in terms of the writing and clarity of the paper. Some concerns around novelty were well rebutted, given that the paper primarily focuses on the application of deep learning towards creating an automated system for WMH segmentation rather than a new deep learning approach. My only concern is that the authors proposed some changes to the paper, e.g. changes to Figure 1 and providing the code and atlas. Therefore, there is an expectation that all these changes will be added to the camera-ready version and will be verified by the AC and/or PCs. recommendation: Accept (Poster) suggested_changes_to_the_recommendation: 3: I agree that the recommendation could be moved up confidence: 5: The area chair is absolutely certain
ea0YJaJShO
Deep Learning for Localization of White Matter Lesions in Neurological Diseases
[ "Julia Machnio", "Mads Nielsen", "Mostafa Mehdipour Ghazi" ]
White Matter (WM) lesions, commonly observed as hyperintensities on FLAIR MRIs or hypointensities on T1-weighted images, are associated with neurological diseases. The spatial distribution of these lesions is linked to an increased risk of developing neurological conditions, emphasizing the need for location-based analyses. Traditional manual identification and localization of WM lesions are labor-intensive and time-consuming, highlighting the need for automated solutions. In this study, we propose novel deep learning-based methods for automated WM lesion segmentation and localization. Our approach utilizes state-of-the-art models to concurrently segment WM lesions and anatomical WM regions, providing detailed insights into their distribution within the brain's anatomical structure. By applying k-means clustering to the regional WM lesion load, distinct subject groups are identified to be associated with various neurological conditions, validating the method's alignment with established clinical findings. The robustness and adaptability of our method across different scanner types and imaging protocols make it a valuable tool for research and clinical practice, offering potential improvements in diagnostic efficiency and patient care.
[ "Deep learning", "segmentation", "localization", "white matter hyperintensity", "neurological disease" ]
https://openreview.net/pdf?id=ea0YJaJShO
https://openreview.net/forum?id=ea0YJaJShO
UT5gBXOzGd
official_review
1,728,175,671,432
ea0YJaJShO
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission46/Reviewer_kk7q" ]
NLDL.org/2025/Conference
2025
title: Review for "Deep Learning for Localization of White Matter Lesions in Neurological Diseases" summary: The authors present a pipeline for segmenting white matter lesions and regions. Their findings indicate that grouping patients based on their detected regional white matter hyperintensity load relates to clinical conditions. strengths: 1) The paper is well-written, without apparent errors. 2) Clustering patients based on their regional white matter hyperintensity load is an interesting approach, though further clarification could enhance understanding. 3) The use of publicly available data (MICCAI 2017 WMH Segmentation Challenge) adds transparency and reproducibility to the research. 4) The authors provide comprehensive experimental results using various architectures and inputs. The Appendix section is particularly valuable. weaknesses: 1) The authors do not share their code, which makes it very difficult to reproduce or understand some parts of the paper. 2) Clarity and presentation could be further improved: * Figure 1 is not very informative and it should be the main figure of the paper. Apart from reformating it, minor things like increasing the fonts could help. * Contributions (1) and (2) are closely related but abruptly mentioned without the previous introduction of the underlying problem. Additionally, contribution (1) would rely upon the subsequent sharing of the "atlas". * Contribution (3), "training" deep learning models cannot be considered a contribution by itself. Throughout the paper, the novelty does not lie in the models used, as they are state-of-the-art architectures. The paper would benefit from a clearer emphasis on its unique contributions, which can be argued to exist but are not highlighted effectively. 3) The limitations of the paper are not discussed. 4) In the Introduction, the previous and related work should be clearer, which can be exemplified by the following questions: * Is this "the first method to fully automate WM region segmentation within a subject's anatomical space" (line 103)? Or is it the localization that is done for the first time in the native space? * What are the disadvantages of working in the MNI space? Speed? Is the potential application of this method speed-dependent? confidence: 4 justification: The paper does not have any noticeable writing errors and the appendix section is actually helpful. The experimentation on a public dataset and the idea of grouping the patients by their regional WMH load are noteworthy as well. However, the paper lacks clarity in key areas. The reader struggles to know whether the authors present novel methods for segmentation, apply available methods in novel ways, or neither. Sentences such as "This study introduces deep learning-based methods" (line 089) are vague and contribute to this misunderstanding. This should not be a problem as long as this distinction is clear. Sharing the code would also address some misunderstandings and further improve reproducibility. Additionally, clarifying the contributions and reformatting Figure 1 to better communicate the main information would address the bigger issues. Better framing of the problem in the Introduction and a discussion of the study’s limitations would further aid readers in following the paper’s structure. Overall, while these issues are fixable, given the current state of the paper, my recommendation is borderline rejection. final_rebuttal_confidence: 4 final_rebuttal_justification: Thank you to the authors for their thorough revision of the paper. 
Here are my key points: * The public availability of the code is essential for this kind of publication. Additionally, sharing their atlas would enhance their contributions (claim 2). * The process for generating the ground truth has been better articulated, and I hope the revised Figure 1 further supports this aspect. * Detailed explanations of the background for the contributions are crucial for understanding the paper's significance. I look forward to these in the revised version. * I appreciate the addition of a limitations section in the discussion, especially considering the other reviewers' comments. Overall, if the authors implement the changes mentioned in their rebuttal, I would be willing to raise my initial rating.
e6JyXSp6sm
Style-Quizzes for Content-Based Fashion Recommendation in Extreme Cold Start Scenarios
[]
This article presents Style-Quiz, a novel method for circumventing the user-based cold start problem in the context of content-based recommender systems. We construct a content-based recommender system for a given environment and generate a quiz built upon its underlying embeddings. During the course of the quiz, the embedding space of the recommender system is segmented via unsupervised hierarchical clustering. The user is presented with a series of images representative of each cluster and tasked with choosing one of them. The chosen cluster is then segmented in the same way as its parent cluster. This process is repeated until the user has honed in on a point in the embedding space that adequately represents that user's tastes. As a user interested in renting or purchasing fashion items is likely to be interested in several different kinds of fashion articles, we also introduce Style-Vectors. A representation of our items, built on deep-learning encoders and triplet loss, that is indicative of their underlying style, not just physical attributes. Our results indicate that Style-Quiz significantly improves early personalized recommendation as compared to recommending globally popular items. To improve reproducibility, we publish both the code and dataset used for the project.
[ "Recommender Systems", "Cold Start Problem", "Fashion" ]
https://openreview.net/pdf?id=e6JyXSp6sm
https://openreview.net/forum?id=e6JyXSp6sm
p8qV9QTQvp
official_review
1,726,569,763,610
e6JyXSp6sm
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission26/Reviewer_uEC4" ]
NLDL.org/2025/Conference
2025
title: NLDL 2025 review for Style-Quizzes for Content-Based Fashion Recommendation in Extreme Cold Start Scenarios summary: This work presents Style-Quiz, a method for dealing with so-called extreme cold start problems in recommender systems. The cold start problem itself comes from how recommender systems work: they use previous user data to recommend, for example, products for the user to buy. If we don't know much about the user, then it is quite hard to recommend anything, thereby causing the cold start problem, which occurs during onboarding of the user. The article presents itself as introducing a concept used in industry, but not so much in academia. Some brands, such as Nordstrom and Stitch Fix, use these personalized quizzes to onboard new customers, and this paper seeks to introduce and explore this concept in academic circles. The main method of the paper uses a quiz which, through user input, converges to different convergence points, which are presented as cluster centroids for 30 different items. This convergence point is then used as the customer's "initial state", if that is a correct way of stating it, which is much more personalized than using the globally most popular items as an initial point. strengths: The introduction sets up an interesting story, and I would consider this section solid. The problem and key terms are clearly presented and explained in such a way that those not familiar with the recommender-system subfield can still understand what is being explained and appreciate the novelty of the article's findings. The methods section presents clearly what dataset has been used and where to find the code for this work (I assume; the link is anonymized). weaknesses: Related Works: This section comes off as a bit short and lacking. I'm not going to hold this too much against the author, as there do not seem to be that many works addressing the cold start problem. Maybe some other works on recommender systems or on the metrics used could be placed here, but I'm not familiar enough with the field to know exactly what could be included. I'm mainly placing this comment here as there isn't a dedicated "comments" tab in this reviewing software. Method: The methods section lacks an explanation of how the authors approach the question of which metric determines the "goodness" of the method, and what it can be compared to. One thing missing in this section is how to measure any sort of "goodness" or improvement over other methods. Other papers might use accuracy, AUC, R^2 values, or other measurements to argue for their method, but especially for this work it is not obvious what a good method should look like and how to measure this. This hurts the article, in my mind, because when we get to the results section we see assumptions about a model's goodness come forward that have not been discussed in depth at that point. Paragraph 2 of Section 3.1 could use a small clarification on "biasing towards similarity between outfit categories". It is not clear why we would want to do this. My guess would be that recommender systems tend to recommend similar objects while users want diversity, as shown using Simpson's Diversity Index. It would be nice to have this specified. In Section 3.2 the work presents a scheme to segment the data into smaller and smaller sub-clusters of similar items until each cluster has 30 samples.
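To make sure I have understood this scheme, here is a minimal sketch of how I read the recursive segmentation. This is my own illustration on synthetic embeddings, not the authors' code; the branching factor of 4 is an assumption, the cluster-size threshold of 30 is taken from the text, and the quiz interaction itself is not modeled.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment(embeddings, indices, n_children=4, min_size=30):
    """Recursively split the embedding space until each cluster has <= min_size items."""
    if len(indices) <= min_size:
        return [indices]
    labels = KMeans(n_clusters=n_children, n_init=10, random_state=0).fit_predict(
        embeddings[indices]
    )
    leaves = []
    for c in range(n_children):
        leaves += segment(embeddings, indices[labels == c], n_children, min_size)
    return leaves

rng = np.random.default_rng(0)
items = rng.normal(size=(5000, 64))            # synthetic item embeddings
leaves = segment(items, np.arange(len(items)))
print(len(leaves), "leaf clusters; largest has", max(len(l) for l in leaves), "items")
```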
It is mentioned that this number is arbitrary, which makes sense, but here it would be nice if the authors justified this specific number. Is it somehow derived from the dataset, or is this somewhat common practice? If this number is not derived from the dataset in any way, could it be? It is also not exactly clear how the quiz would be set up. Does this have an effect, or can it be set up in several different ways? This should be explained. Results: The results section suffers from a lack of buildup from the methods section. The only real metric considered in this section is the mean distance from points in the dataset to either the closest convergence point or the globally most popular cluster (a sketch of how I read this metric is included at the end of this review). The authors show that their method reduces the mean distance to the closest cluster representative. Intuitively we can understand that this would lead to a more personalized initial start, but this way of assessing "goodness" has not been explained, and is not explored further. See for example source [13], where they address extreme cold start scenarios and take a section to explain their metrics and what they will show. The cluster size is touched upon, but not further explored. The t-SNE plots need a bit more explanation and exploration. It is not clear why only a few clusters in Figure 2 are colored, and it is hard to know whether Figure 3 is there to make a point or not. confidence: 3 justification: This work does present an interesting view of a problem which, from what I can find looking for similar work, has not seen much exploration. The problem setting is interesting, and the proposed solution sounds like it makes sense. The article does, however, fall short in its exploration and explanation of the method it presents. Too many questions are left up in the air by the end of the paper. How is the quiz structured? Why is the presented metric considered a good metric for this task? Why are there no comparisons to other methods dealing with a similar problem? Why do we not see a more in-depth exploration of this method? The introduction sets the stage for this paper to tell a great story, but the following sections lack the substance, exploration and rigor to make this article an academic piece.
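Sketch referred to above (the evaluation metric): my reading of it, on synthetic data. The dimensionality, the number of convergence points, and the use of Euclidean distance are my assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(1000, 64))            # e.g. held-out user/item taste vectors
convergence_points = rng.normal(size=(50, 64))  # one per possible quiz outcome
popular_centroid = rng.normal(size=(1, 64))     # single globally-most-popular baseline point

def mean_min_distance(points, targets):
    d = np.linalg.norm(points[:, None, :] - targets[None, :, :], axis=-1)
    return d.min(axis=1).mean()

print("mean distance to closest convergence point:", mean_min_distance(points, convergence_points))
print("mean distance to global popular centroid:  ", mean_min_distance(points, popular_centroid))
```

Spelling the metric out like this in the methods section, together with an argument for why a smaller value should imply better early recommendations, would make the results much easier to interpret.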
e6JyXSp6sm
Style-Quizzes for Content-Based Fashion Recommendation in Extreme Cold Start Scenarios
[]
This article presents Style-Quiz, a novel method for circumventing the user-based cold start problem in the context of content-based recommender systems. We construct a content-based recommender system for a given environment and generate a quiz built upon its underlying embeddings. During the course of the quiz, the embedding space of the recommender system is segmented via unsupervised hierarchical clustering. The user is presented with a series of images representative of each cluster and tasked with choosing one of them. The chosen cluster is then segmented in the same way as its parent cluster. This process is repeated until the user has honed in on a point in the embedding space that adequately represents that user's tastes. As a user interested in renting or purchasing fashion items is likely to be interested in several different kinds of fashion articles, we also introduce Style-Vectors. A representation of our items, built on deep-learning encoders and triplet loss, that is indicative of their underlying style, not just physical attributes. Our results indicate that Style-Quiz significantly improves early personalized recommendation as compared to recommending globally popular items. To improve reproducibility, we publish both the code and dataset used for the project.
[ "Recommender Systems", "Cold Start Problem", "Fashion" ]
https://openreview.net/pdf?id=e6JyXSp6sm
https://openreview.net/forum?id=e6JyXSp6sm
cGaXQTrr0f
meta_review
1,730,383,407,064
e6JyXSp6sm
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission26/Area_Chair_vnZo" ]
NLDL.org/2025/Conference
2025
metareview: This paper introduces a StyleQuiz to tackle the cold start problem in recommender systems. The reviewers agree that the paper is well written, and the problem addressed is interesting and relevant. However, they express concerns about the insufficient evaluation to fully support the effectiveness of the proposed approach and the need for a more detailed explanation of the method and its design. Therefore, the current version of the paper is not yet ready for publication. We encourage the authors to address the reviewers’ concerns to enhance the manuscript’s quality. recommendation: Reject suggested_changes_to_the_recommendation: 3: I agree that the recommendation could be moved up confidence: 4: The area chair is confident but not absolutely certain
e6JyXSp6sm
Style-Quizzes for Content-Based Fashion Recommendation in Extreme Cold Start Scenarios
[]
This article presents Style-Quiz, a novel method for circumventing the user-based cold start problem in the context of content-based recommender systems. We construct a content-based recommender system for a given environment and generate a quiz built upon its underlying embeddings. During the course of the quiz, the embedding space of the recommender system is segmented via unsupervised hierarchical clustering. The user is presented with a series of images representative of each cluster and tasked with choosing one of them. The chosen cluster is then segmented in the same way as its parent cluster. This process is repeated until the user has honed in on a point in the embedding space that adequately represents that user's tastes. As a user interested in renting or purchasing fashion items is likely to be interested in several different kinds of fashion articles, we also introduce Style-Vectors. A representation of our items, built on deep-learning encoders and triplet loss, that is indicative of their underlying style, not just physical attributes. Our results indicate that Style-Quiz significantly improves early personalized recommendation as compared to recommending globally popular items. To improve reproducibility, we publish both the code and dataset used for the project.
[ "Recommender Systems", "Cold Start Problem", "Fashion" ]
https://openreview.net/pdf?id=e6JyXSp6sm
https://openreview.net/forum?id=e6JyXSp6sm
WuzVSTuZO0
official_review
1,728,283,320,313
e6JyXSp6sm
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission26/Reviewer_Syhz" ]
NLDL.org/2025/Conference
2025
title: Modification of experiments and methods summary: The paper introduces Style-Quiz, a method for solving the user-based cold start problem in content-based recommender systems. The method can also obtain user preference information in the absence of user history. strengths: 1. The paper is well thought out. 2. The introduction to the concept is complete. weaknesses: 1. Style testing has been used in many scenarios and is well developed, but the paper's approach is too simplistic. 2. The comparison experiments only include a baseline that recommends the most popular items, which is not enough to show that the method is good enough. 3. Experiments should use multiple datasets; using only one does not demonstrate generalizability. confidence: 3 justification: 1. The proposed method is simplistic. 2. There is only one comparison baseline and only one dataset.
e6JyXSp6sm
Style-Quizzes for Content-Based Fashion Recommendation in Extreme Cold Start Scenarios
[]
This article presents Style-Quiz, a novel method for circumventing the user-based cold start problem in the context of content-based recommender systems. We construct a content-based recommender system for a given environment and generate a quiz built upon its underlying embeddings. During the course of the quiz, the embedding space of the recommender system is segmented via unsupervised hierarchical clustering. The user is presented with a series of images representative of each cluster and tasked with choosing one of them. The chosen cluster is then segmented in the same way as its parent cluster. This process is repeated until the user has homed in on a point in the embedding space that adequately represents that user's tastes. As a user interested in renting or purchasing fashion items is likely to be interested in several different kinds of fashion articles, we also introduce Style-Vectors: a representation of our items, built on deep-learning encoders and triplet loss, that is indicative of their underlying style, not just their physical attributes. Our results indicate that Style-Quiz significantly improves early personalized recommendation as compared to recommending globally popular items. To improve reproducibility, we publish both the code and dataset used for the project.
[ "Recommender Systems", "Cold Start Problem", "Fashion" ]
https://openreview.net/pdf?id=e6JyXSp6sm
https://openreview.net/forum?id=e6JyXSp6sm
HXmPgXt71Y
official_review
1,728,550,531,013
e6JyXSp6sm
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission26/Reviewer_KEpS" ]
NLDL.org/2025/Conference
2025
title: Official review summary: In this submission, the authors design a style-quiz that helps users of recommender systems identify their own fashion style preferences in cold-start scenarios. A hierarchical tree is generated over the style embeddings of different items; the user answers the questions in the quiz from the root of the tree to the leaves, and the system obtains their preferences at increasingly fine levels. strengths: 1. The idea is simple but reasonable. 2. The writing of this paper is clear. weaknesses: My main concern is the lack of experiments: 1. Cold-start recommendation is a classic and challenging problem for recommender systems, and many methods have been proposed to solve it. However, this work neither provides any experimental result in the main paper nor analyzes the differences between the proposed method and existing ones. Without solid experiments and analysis, the rationality and advantages of the proposed method are not convincing. 2. In practice, the efficiency of the quiz mechanism is important and should be analyzed in detail. For example, how many selections should a user make? How long does it take? Is there any trade-off between the number of selections and the recommendation performance? Without such analysis, this submission is not solid enough. confidence: 5 justification: Although anonymous code is provided, this submission does not show any comparisons or analytic experiments in the main paper, making the rationality of the proposed method questionable.
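To make the coarse-to-fine quiz described in the review and abstract above concrete, here is a minimal sketch of one plausible implementation. It assumes item embeddings are already available as a NumPy array; the number of clusters per question, the medoid-style choice of representative image, and the stopping rule are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def representative(embeddings: np.ndarray, indices: np.ndarray) -> int:
    """Pick the item closest to the cluster centroid as the image shown for that cluster."""
    centroid = embeddings[indices].mean(axis=0)
    dists = np.linalg.norm(embeddings[indices] - centroid, axis=1)
    return int(indices[np.argmin(dists)])

def style_quiz(embeddings: np.ndarray, choose, n_clusters: int = 4,
               min_items: int = 20) -> np.ndarray:
    """Coarse-to-fine quiz: split the current candidate set, show one representative
    per cluster, recurse into the cluster the user picks, and return a taste vector."""
    candidates = np.arange(len(embeddings))
    while len(candidates) > min_items:
        k = min(n_clusters, len(candidates))
        labels = AgglomerativeClustering(n_clusters=k).fit_predict(embeddings[candidates])
        clusters = [candidates[labels == c] for c in range(k)]
        reps = [representative(embeddings, idx) for idx in clusters]
        picked = choose(reps)              # index of the option the user clicked
        candidates = clusters[picked]
    return embeddings[candidates].mean(axis=0)

# Toy usage: random embeddings and a simulated user who always picks the first option.
items = np.random.default_rng(0).normal(size=(1000, 64))
taste_vector = style_quiz(items, choose=lambda reps: 0)
```

A real deployment would display the representative items' images and record the user's click in place of the `choose` callback; the returned taste vector can then seed nearest-neighbour recommendations.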
e6JyXSp6sm
Style-Quizzes for Content-Based Fashion Recommendation in Extreme Cold Start Scenarios
[]
This article presents Style-Quiz, a novel method for circumventing the user-based cold start problem in the context of content-based recommender systems. We construct a content-based recommender system for a given environment and generate a quiz built upon its underlying embeddings. During the course of the quiz, the embedding space of the recommender system is segmented via unsupervised hierarchical clustering. The user is presented with a series of images representative of each cluster and tasked with choosing one of them. The chosen cluster is then segmented in the same way as its parent cluster. This process is repeated until the user has homed in on a point in the embedding space that adequately represents that user's tastes. As a user interested in renting or purchasing fashion items is likely to be interested in several different kinds of fashion articles, we also introduce Style-Vectors: a representation of our items, built on deep-learning encoders and triplet loss, that is indicative of their underlying style, not just their physical attributes. Our results indicate that Style-Quiz significantly improves early personalized recommendation as compared to recommending globally popular items. To improve reproducibility, we publish both the code and dataset used for the project.
[ "Recommender Systems", "Cold Start Problem", "Fashion" ]
https://openreview.net/pdf?id=e6JyXSp6sm
https://openreview.net/forum?id=e6JyXSp6sm
Gn5NjLdzgw
decision
1,730,901,555,617
e6JyXSp6sm
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Reject
e6JyXSp6sm
Style-Quizzes for Content-Based Fashion Recommendation in Extreme Cold Start Scenarios
[]
This article presents Style-Quiz, a novel method for circumventing the user-based cold start problem in the context of content-based recommender systems. We construct a content-based recommender system for a given environment and generate a quiz built upon its underlying embeddings. During the course of the quiz, the embedding space of the recommender system is segmented via unsupervised hierarchical clustering. The user is presented with a series of images representative of each cluster and tasked with choosing one of them. The chosen cluster is then segmented in the same way as its parent cluster. This process is repeated until the user has homed in on a point in the embedding space that adequately represents that user's tastes. As a user interested in renting or purchasing fashion items is likely to be interested in several different kinds of fashion articles, we also introduce Style-Vectors: a representation of our items, built on deep-learning encoders and triplet loss, that is indicative of their underlying style, not just their physical attributes. Our results indicate that Style-Quiz significantly improves early personalized recommendation as compared to recommending globally popular items. To improve reproducibility, we publish both the code and dataset used for the project.
[ "Recommender Systems", "Cold Start Problem", "Fashion" ]
https://openreview.net/pdf?id=e6JyXSp6sm
https://openreview.net/forum?id=e6JyXSp6sm
6Z9MemDwng
official_review
1,726,581,243,143
e6JyXSp6sm
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission26/Reviewer_d8jo" ]
NLDL.org/2025/Conference
2025
title: review summary: The authors present a sequential clustering-based approach to identify items of interest to a user in order to remedy the cold-start problem. The idea is to gradually reduce the candidate set of items until only a few remain, via a series of questions used to dive into subsets of the item pool. strengths: Interesting and challenging problem that has many degrees of freedom. weaknesses: There is unfortunately no technical contribution and also no quantitative evaluation. It does not become obvious whether the approach works; there are no baselines to compare against, etc. Many design choices are not motivated well, e.g., how many selections are presented to the user, why there is only one choice per question, etc. Model selection seems necessary. confidence: 4 justification: This may be an interesting poster at a workshop, but it is not yet mature enough for a conference. final_rebuttal_confidence: 5 final_rebuttal_justification: To me this is a clear reject. More than one reviewer commented on the lack of experimental evidence, and the authors did not actually propose to provide it but called it future work. For a workshop this would certainly be OK, but for a conference we need more evidence to support the idea. From the rebuttal, I would also assume that they understood why the paper is going to be rejected and just replied out of politeness.
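The abstract above also introduces Style-Vectors, item representations learned with deep encoders and a triplet loss. Below is a minimal, hedged PyTorch sketch of that kind of training step; the ResNet-18 backbone, embedding size, margin, and the way anchor/positive/negative triplets are mined (e.g., items sharing a style tag) are assumptions, not the paper's actual setup.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class StyleEncoder(nn.Module):
    """Image encoder producing an L2-normalised style embedding."""
    def __init__(self, dim: int = 128):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any CNN/ViT backbone would do
        backbone.fc = nn.Linear(backbone.fc.in_features, dim)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.backbone(x)
        return nn.functional.normalize(z, dim=-1)

encoder = StyleEncoder()
criterion = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# One toy training step: anchor and positive share a style, negative does not.
anchor, positive, negative = (torch.randn(8, 3, 224, 224) for _ in range(3))
loss = criterion(encoder(anchor), encoder(positive), encoder(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

After training, embeddings from such an encoder would be what the quiz clustering sketched earlier operates on.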
e1dpokhi0c
Connecting Concept Convexity and Human-Machine Alignment in Deep Neural Networks
[ "Teresa Dorszewski", "Lenka Tětková", "Lorenz Linhardt", "Lars Kai Hansen" ]
Understanding how neural networks align with human cognitive processes is a crucial step toward developing more interpretable and reliable AI systems. Motivated by theories of human cognition, this study examines the relationship between convexity in neural network representations and human-machine alignment based on behavioral data. We identify a correlation between these two dimensions in pretrained and fine-tuned vision transformer models. Our findings suggest the convex regions formed in latent spaces of neural networks to some extent align with human-defined categories and reflect the similarity relations humans use in cognitive tasks. While optimizing for alignment generally enhances convexity, increasing convexity through fine-tuning yields inconsistent effects on alignment, which suggests a complex relationship between the two. This study presents a first step toward understanding the relationship between the convexity of latent representations and human-machine alignment.
[ "human-machine alignment", "convexity", "deep neural networks", "representation learning" ]
https://openreview.net/pdf?id=e1dpokhi0c
https://openreview.net/forum?id=e1dpokhi0c
smkTY8O9Of
meta_review
1,730,712,831,852
e1dpokhi0c
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission22/Area_Chair_RniS" ]
NLDL.org/2025/Conference
2025
metareview: The paper examines the relationship between convexity in the representations produced by neural networks and human-machine alignment. They establish a correlation between these quantities in real-world models. Additionally, they find that convexity tends to increase when optimizing for alignment. The reviewers agree that the topic of the paper is interesting and provides insights worth publication. Given the nature of the reviews and that the authors have improved the paper by addressing the concerns raised by the reviewers during the rebuttal phase, I recommend accepting the paper. recommendation: Accept (Poster) suggested_changes_to_the_recommendation: 1: I agree that the recommendation could be moved down confidence: 4: The area chair is confident but not absolutely certain
e1dpokhi0c
Connecting Concept Convexity and Human-Machine Alignment in Deep Neural Networks
[ "Teresa Dorszewski", "Lenka Tětková", "Lorenz Linhardt", "Lars Kai Hansen" ]
Understanding how neural networks align with human cognitive processes is a crucial step toward developing more interpretable and reliable AI systems. Motivated by theories of human cognition, this study examines the relationship between convexity in neural network representations and human-machine alignment based on behavioral data. We identify a correlation between these two dimensions in pretrained and fine-tuned vision transformer models. Our findings suggest the convex regions formed in latent spaces of neural networks to some extent align with human-defined categories and reflect the similarity relations humans use in cognitive tasks. While optimizing for alignment generally enhances convexity, increasing convexity through fine-tuning yields inconsistent effects on alignment, which suggests a complex relationship between the two. This study presents a first step toward understanding the relationship between the convexity of latent representations and human-machine alignment.
[ "human-machine alignment", "convexity", "deep neural networks", "representation learning" ]
https://openreview.net/pdf?id=e1dpokhi0c
https://openreview.net/forum?id=e1dpokhi0c
WkFsgDYOpG
official_review
1,728,503,619,599
e1dpokhi0c
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission22/Reviewer_hHhT" ]
NLDL.org/2025/Conference
2025
title: Connecting Concept Convexity and Human-Machine Alignment in Deep Neural Networks summary: The paper provides insights into how convexity in latent representations and human-machine alignment are connected. It looks at convexity scores and alignment for 3 different transformer-based networks at different stages/layers, using both pretrained-only and fine-tuned versions, as well as how directly improving human-machine alignment via a dedicated transform affects the convexity. While the results are not entirely consistent across settings, they identify some positive correlation between alignment and convexity and a potential causal relationship between them. strengths: - the paper is well written and provides a clear description of the background work as well as the topic of investigation. - all the main concepts are well described and illustrated - it provides useful insight into the relationship between convexity and human-machine alignment, which may inform further work on the topic (e.g., it shows that the middle stages of the networks may be more reasonable to study in human-machine alignment). - the code is available and the process well documented weaknesses: A few points I would like to be addressed for clarity: - what is meant by 'centered representations' in 3.3? - You mention that the convexity and OOOA follow bell-shaped curves. However, there are sudden increases again at the last layers for some of the methods. Any comments on that? - Figure 2C has an inadequate caption/description: For which network is that plotted? The networks don't all have the same number of layers... Is it averaged? - It seems that convexity changes most (for the better) under transformations at the first layer, while the last layer is more important for OOOA. Any thoughts on why that could be? - I find the part on confounding factors (e.g., in A.3) lacking/not worked out. I would expect you to at least speculate about the topic and further research directions here, instead of simply mentioning the results. After all, this paper is not a method-contribution paper, but one trying to look under the hood and explain or at least indicate the underlying relationships. - I'm not entirely convinced the findings on pretrained vs. finetuned models are generalizable; have you looked at the specific tasks (both for finetuning and pretraining) and how well they resemble the alignment task? The potential problem I see here is that the task for measuring alignment is biased towards models trained for similar tasks (e.g., classification of natural images). I would thus be careful when using its results in arguments comparing networks that were trained for sufficiently different tasks. Some smaller comments that would further improve the paper: - you only provide a definition of graph convexity and then describe the convexity score in the text. I would recommend providing a formal definition of the score as well. - legend missing in Figure A1 - more networks and specifically different tasks would be needed to evaluate properly and in a more generalizable way, although I suppose the scope of this paper was really just to present the idea and not a thorough evaluation.
- clearly mention in the main text the existence of additional results (specifically, results from layers other than the last) that are in the supplementary material confidence: 3 justification: There are of course points that could be improved, and the paper as it is provides only an incremental improvement in our understanding of the interplay between convexity and alignment (this could be improved with more experiments/evaluations and targeted discussion). However, I think it is definitely of high enough quality in terms of writing and, more importantly, illustrates a new way of looking at convexity & human-machine alignment (and ultimately explainability) that is worth sharing more broadly and could very well inform and direct further research on this. final_rebuttal_confidence: 3 final_rebuttal_justification: The authors have further improved the paper according to the reviews. All the comments point to the fact that while the experiments (or the argumentation for some of the choices made) are not the strongest, the presented idea and insights are interesting for the wider community.
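The review above asks for a formal definition of the convexity score. For orientation, here is a hedged sketch of one common way to operationalise graph convexity for a class: build a nearest-neighbour graph over the latent representations (Euclidean distances, as in the paper's setup), sample pairs of same-class points, and measure how often the vertices on their shortest paths also belong to that class. The sampling, the value of k, and the exact aggregation are assumptions; the paper's precise definition may differ.

```python
import numpy as np
import networkx as nx
from sklearn.neighbors import kneighbors_graph

def graph_convexity(X: np.ndarray, labels: np.ndarray, k: int = 10,
                    n_pairs: int = 200, seed: int = 0) -> float:
    """Average fraction of intermediate vertices on shortest paths between
    same-class points that also carry that class label."""
    adjacency = kneighbors_graph(X, n_neighbors=k, mode="distance")
    graph = nx.from_scipy_sparse_array(adjacency)
    rng = np.random.default_rng(seed)
    fractions = []
    for c in np.unique(labels):
        members = np.flatnonzero(labels == c)
        if len(members) < 2:
            continue
        for _ in range(n_pairs):
            a, b = rng.choice(members, size=2, replace=False)
            try:
                path = nx.shortest_path(graph, int(a), int(b), weight="weight")
            except nx.NetworkXNoPath:
                continue
            inner = path[1:-1]          # endpoints are same-class by construction
            if inner:
                fractions.append(np.mean(labels[inner] == c))
    return float(np.mean(fractions))

# Toy usage: random features with three fake classes.
X = np.random.default_rng(1).normal(size=(300, 16))
y = np.repeat(np.arange(3), 100)
print(graph_convexity(X, y))
```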
e1dpokhi0c
Connecting Concept Convexity and Human-Machine Alignment in Deep Neural Networks
[ "Teresa Dorszewski", "Lenka Tětková", "Lorenz Linhardt", "Lars Kai Hansen" ]
Understanding how neural networks align with human cognitive processes is a crucial step toward developing more interpretable and reliable AI systems. Motivated by theories of human cognition, this study examines the relationship between convexity in neural network representations and human-machine alignment based on behavioral data. We identify a correlation between these two dimensions in pretrained and fine-tuned vision transformer models. Our findings suggest the convex regions formed in latent spaces of neural networks to some extent align with human-defined categories and reflect the similarity relations humans use in cognitive tasks. While optimizing for alignment generally enhances convexity, increasing convexity through fine-tuning yields inconsistent effects on alignment, which suggests a complex relationship between the two. This study presents a first step toward understanding the relationship between the convexity of latent representations and human-machine alignment.
[ "human-machine alignment", "convexity", "deep neural networks", "representation learning" ]
https://openreview.net/pdf?id=e1dpokhi0c
https://openreview.net/forum?id=e1dpokhi0c
SmsCcr5jyK
official_review
1,728,132,613,216
e1dpokhi0c
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission22/Reviewer_uXoU" ]
NLDL.org/2025/Conference
2025
title: Review of "Connecting Concept Convexity and Human-Machine Alignment in Deep Neural Networks" summary: The authors empirically study, for various transformer-based vision models, the relationship between the two concepts _human-machine alignment_ and _concept convexity_. For both of these concepts they evaluate an existing proxy/metric, namely _graph convexity_ and _odd-one-out accuracy_, and evaluate these on the latent representations of pictures from the classes after various transformer layers in the model. The authors mention that this study provides the first evidence of a relationship between the two concepts. Although it is difficult to draw conclusions from correlation coefficients, the authors also performed an intervention study where they optimize for odd-one-out accuracy to see the influence on convexity, showing that, especially for models that are not finetuned, convexity also increases. strengths: Especially at the high level, I believe the question of the relationship between the representations in machine-learning models and human-machine alignment is interesting. The article is neatly written, and on a high level the story becomes clear quickly. It is also interesting to study these numbers, and one can probably get more interesting statistics out of them. It is difficult to analyze correlation scores to interpret relationships, but the authors alleviate this at least partly by doing an intervention study as well: in one of their experiments, they optimize for human-alignment and measure the effect on convexity. weaknesses: In contrast to the larger story, the details were more difficult to figure out for me, and I am still uncomfortable pointing out exactly how various quantities are computed. The authors mention for instance "We performed a correlation analysis of the two scores using Pearson’s R on a layer-wise basis across all models." It was originally unclear to me whether the number of data points in the computation of such a correlation would just be the twelve averaged scores corresponding to the twelve trained models, or whether there would be a data point for every triplet of classes, for instance. A formula might help, or an even more precise wording. Similarly, the appendix mentions "The latent representations were extracted according to the way they were used by the classifier in the original implementation. Hence, we took the vector corresponding to the classifier token for ViT and averaged over all the other tokens for data2vec and BEiT." I have difficulty understanding what this means. In general, I am unsure how and where the latent representation of a class is actually computed. This is relevant for the interpretation of the convexity concept. Although not necessarily a weakness of the paper, it is in a way difficult to connect the proxy to the intended concept, i.e. graph convexity to a measure of convexity, and odd-one-out accuracy to human-machine alignment. For instance, whereas the authors in the conclusion talk about the formation of convex regions, it is not necessary for achieving a high correlation score that latent representations of a certain class form a mathematically convex set. On a related note, the mathematical concept of convexity would not be affected by an affine transformation, while graph convexity would be, because of the construction of the nearest-neighbor graph. The article might benefit from a discussion of these issues.
More detailed remark: I am wondering why the vectors appearing in the inner product in the definition of Z are not normalized. confidence: 3 justification: The high-level story is interesting to read and it addresses a concrete version of the interesting question of how human-machine alignment and latent representations in machine-learning models relate. I also think it is nice that the experiments have been done and that anybody can look at the numbers and ponder about what they mean. That also immediately gives the counterside: it is a bit difficult to interpret the results back in terms of the higher-level question. final_rebuttal_confidence: 3 final_rebuttal_justification: The high-level story is interesting to read and it addresses a concrete version of the interesting question of how human-machine alignment and latent representations in machine-learning models relate. I also think it is nice that the experiments have been done and that anybody can look at the numbers and ponder about what they mean. That also immediately gives the counterside: it is a bit difficult to interpret the results back in terms of the higher-level question. I also think the authors responded well to the reviews, by incorporating concerns in the text.
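The reviewer's question about normalisation is essentially about cosine similarity versus raw inner products. For orientation, here is a hedged sketch of how a triplet odd-one-out accuracy is typically computed from embeddings: the model agrees with the human judgment when the non-odd pair is the most similar of the three pairs. The cosine normalisation, the (i, j, odd) triplet format, and the tie handling are assumptions in the spirit of the THINGS odd-one-out setup, not necessarily the paper's exact implementation.

```python
import numpy as np

def oooa(embeddings: np.ndarray, triplets: np.ndarray) -> float:
    """Odd-one-out accuracy: for each human triplet (i, j, odd), the model is
    counted as correct if sim(i, j) is the largest of the three pairwise
    similarities, i.e. the model would also single out `odd`."""
    Z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)  # cosine similarity
    correct = 0
    for i, j, odd in triplets:
        s_ij, s_io, s_jo = Z[i] @ Z[j], Z[i] @ Z[odd], Z[j] @ Z[odd]
        correct += s_ij > max(s_io, s_jo)
    return correct / len(triplets)

# Toy usage: 100 random triplets over 50 items.
rng = np.random.default_rng(0)
emb = rng.normal(size=(50, 32))
trips = np.array([rng.choice(50, size=3, replace=False) for _ in range(100)])
print(oooa(emb, trips))
```

Dropping the normalisation line turns the similarity into a raw inner product, which is one way to test how much the reviewer's concern matters in practice.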
e1dpokhi0c
Connecting Concept Convexity and Human-Machine Alignment in Deep Neural Networks
[ "Teresa Dorszewski", "Lenka Tětková", "Lorenz Linhardt", "Lars Kai Hansen" ]
Understanding how neural networks align with human cognitive processes is a crucial step toward developing more interpretable and reliable AI systems. Motivated by theories of human cognition, this study examines the relationship between convexity in neural network representations and human-machine alignment based on behavioral data. We identify a correlation between these two dimensions in pretrained and fine-tuned vision transformer models. Our findings suggest the convex regions formed in latent spaces of neural networks to some extent align with human-defined categories and reflect the similarity relations humans use in cognitive tasks. While optimizing for alignment generally enhances convexity, increasing convexity through fine-tuning yields inconsistent effects on alignment, which suggests a complex relationship between the two. This study presents a first step toward understanding the relationship between the convexity of latent representations and human-machine alignment.
[ "human-machine alignment", "convexity", "deep neural networks", "representation learning" ]
https://openreview.net/pdf?id=e1dpokhi0c
https://openreview.net/forum?id=e1dpokhi0c
OUqBlbiQLB
official_review
1,728,370,449,241
e1dpokhi0c
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission22/Reviewer_Qz4L" ]
NLDL.org/2025/Conference
2025
title: Review summary: The topic is representation learning. In particular, the paper studies graph convexity and human-machine alignment of the representations learned by neural networks trained either self-supervised or supervised on ImageNet-data. Graph convexity is measured for each class in the THINGS dataset. For human-machine alignment, triplet odd-one-out-accuracy (OOOA) on the THINGS dataset is used. The paper finds that human-machine alignment is typically highest in middle layers of networks, while graph convexity typically increases monotonically with deeper layers. In the authors' words, correlation scores of the two measures over several networks "indicate a significant correlation between these two measures in some scenarios". The paper also studies what happens to the graph convexity when optimizing an affine transformation to increase the OOOA. The graph convexity increases for some networks and decreases for others. The scale of the experiments (12 networks) is ultimately too small to say anything conclusively about the relationship between OOOA and graph convexity. As the authors note, "Further research is warranted to explore under which conditions convexity and human-machine alignment align.". strengths: - Results on OOOA and graph convexity individually are interesting. - The experiments seem correctly conducted. weaknesses: - It is not clear why the correlation between graph convexity and OOOA is studied. Is the aim to test Gärdenfors' theory by arguing that if human concepts are convex in neural network latent spaces, then human concepts are convex? If so, what are the conclusions from this study? - The experiments are too small-scale to give any conclusive results. - The paper divides the studied networks into "pretrained" and "finetuned", but the networks were pretrained in different ways. As noted in Appendix A.3, perhaps the more important division is self-supervised vs supervised. Again, the number of studied networks is too small to determine this. - For graph convexity, Euclidean distance is used to build the graph. However, cosine similarity is used to measure similarity in the OOOA measure. This means that if a class has embeddings close to zero, it can be graph convex while having embeddings in all directions from zero, leading automatically to low OOOA. Do the conclusions from the experiments change if the same similarity is used for both measures? One could switch to either use cosine similarity to build the graph for measuring graph convexity or use Euclidean distance to measure similarity in the OOOA. confidence: 3 justification: The conclusion of the paper is that there are some situations in which graph convexity and human-machine alignment correlate. This is a quite weak conclusion, but it was impossible to draw stronger conclusions due to the small-scale experiments. Further, the similarities used for graph convexity (Euclidean distance) and OOOA (cosine similarity) are not the same, which could influence results. I believe the paper would be significantly improved if the authors more clearly stated the study's aim and contribution and resolved the mentioned similarity discrepancy. final_rebuttal_confidence: 3 final_rebuttal_justification: The authors have addressed the concerns of all the reviewers. The paper contains insights worth discussing at the conference, although the experiments are perhaps not large-scale enough to draw far-reaching conclusions.
e1dpokhi0c
Connecting Concept Convexity and Human-Machine Alignment in Deep Neural Networks
[ "Teresa Dorszewski", "Lenka Tětková", "Lorenz Linhardt", "Lars Kai Hansen" ]
Understanding how neural networks align with human cognitive processes is a crucial step toward developing more interpretable and reliable AI systems. Motivated by theories of human cognition, this study examines the relationship between convexity in neural network representations and human-machine alignment based on behavioral data. We identify a correlation between these two dimensions in pretrained and fine-tuned vision transformer models. Our findings suggest the convex regions formed in latent spaces of neural networks to some extent align with human-defined categories and reflect the similarity relations humans use in cognitive tasks. While optimizing for alignment generally enhances convexity, increasing convexity through fine-tuning yields inconsistent effects on alignment, which suggests a complex relationship between the two. This study presents a first step toward understanding the relationship between the convexity of latent representations and human-machine alignment.
[ "human-machine alignment", "convexity", "deep neural networks", "representation learning" ]
https://openreview.net/pdf?id=e1dpokhi0c
https://openreview.net/forum?id=e1dpokhi0c
GR9HemkgAl
decision
1,730,901,555,417
e1dpokhi0c
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Accept (Poster) comment: We recommend a poster presentation given the AC and reviewers recommendations.
e1dpokhi0c
Connecting Concept Convexity and Human-Machine Alignment in Deep Neural Networks
[ "Teresa Dorszewski", "Lenka Tětková", "Lorenz Linhardt", "Lars Kai Hansen" ]
Understanding how neural networks align with human cognitive processes is a crucial step toward developing more interpretable and reliable AI systems. Motivated by theories of human cognition, this study examines the relationship between convexity in neural network representations and human-machine alignment based on behavioral data. We identify a correlation between these two dimensions in pretrained and fine-tuned vision transformer models. Our findings suggest the convex regions formed in latent spaces of neural networks to some extent align with human-defined categories and reflect the similarity relations humans use in cognitive tasks. While optimizing for alignment generally enhances convexity, increasing convexity through fine-tuning yields inconsistent effects on alignment, which suggests a complex relationship between the two. This study presents a first step toward understanding the relationship between the convexity of latent representations and human-machine alignment.
[ "human-machine alignment", "convexity", "deep neural networks", "representation learning" ]
https://openreview.net/pdf?id=e1dpokhi0c
https://openreview.net/forum?id=e1dpokhi0c
78GlcGsnw3
official_review
1,728,774,554,767
e1dpokhi0c
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission22/Reviewer_95m5" ]
NLDL.org/2025/Conference
2025
title: Interesting unusual metrics with potential benefits summary: The paper describes a novel approach for measuring convexity in neural networks and relating it to alignment with humans, measured by performance on the OOOA (odd-one-out) task. The study demonstrates that convexity in neural networks seems to be related to human alignment in the way information is organized in the latent space. The authors show that optimizing for the OOOA task has a positive effect on convexity in neural networks. strengths: The paper provides an interesting and novel method to study neural networks and their alignment with humans with respect to the odd-one-out task. weaknesses: I might have missed it, but I feel that while the concept of convexity is nicely introduced, a clearer explanation of why it is desirable is missing. What do we get out of models that are more convex? Are they more interpretable? The authors mention briefly in the intro that convex systems are more robust, but no evidence for this is presented in the paper. Could a robustness study be included to show how the increased convexity helped the models after the alignment was applied? It is also not clear why there is no mention of convolutional models, which have been shown to be more aligned with humans in terms of architecture. confidence: 4 justification: I think there is novelty in including convexity as a measurement of capacity for models, but the results seem to stop halfway: the authors show that increasing OOOA accuracy helps increase convexity, but what happens next? More work on demonstrating this benefit would make the paper stronger and give it a clearer, higher impact for the community. final_rebuttal_confidence: 4 final_rebuttal_justification: Given the positive points that the other reviewers have raised, I upgraded my score.
dFKeHbPkew
Hybrid Concept-based Models: Using Concepts to Improve Neural Networks' Accuracy
[]
Most datasets used for supervised machine learning consist of a single label per data point. However, in cases where more information than just the class label is available, would it be possible to train models more efficiently? We introduce two novel model architectures, which we call hybrid concept-based models, that train using both class labels and additional information in the dataset referred to as concepts. In order to thoroughly assess their performance, we introduce ConceptShapes, an open and flexible class of datasets with concept labels. We show that the hybrid concept-based models can outperform standard computer vision models and previously proposed concept-based models with respect to accuracy. We also introduce an algorithm for performing adversarial concept attacks, where an image is perturbed in a way that does not change a concept-based model's concept predictions, but changes the class prediction. The existence of such adversarial examples raises questions about the interpretable qualities promised by concept-based models.
[ "Deep learning", "computer vision", "concept-based models", "data efficient models." ]
https://openreview.net/pdf?id=dFKeHbPkew
https://openreview.net/forum?id=dFKeHbPkew
y3r4lJaGIj
official_review
1,728,530,051,730
dFKeHbPkew
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission3/Reviewer_AF2y" ]
NLDL.org/2025/Conference
2025
title: Review on the paper "Achieving Data Efficient Neural Networks with Hybrid Concept-based Models" summary: The paper proposes an interesting idea: to include additional fine-grained features alongside the actual images to improve image classification. The paper proposes several ideas, such as two model architectures to incorporate concept labels, a new concept dataset, and a method to generate adversarial samples. While the new ideas are a strength of the paper, the authors could not develop any of them clearly with adequate discussion given the page restrictions of the submission, which weakens the paper. The paper is missing some key elements needed to concretely demonstrate the effectiveness of the proposed models and ideas. At the start of the introduction the authors comment on the interpretability of DL models, which is a well-known issue. Using concepts in model training to aid interpretability is not new. The authors should provide more evidence regarding the effectiveness of the proposed models with respect to the concept leakage problem. The idea of achieving better performance with less data with the aid of concept labels is also an interesting finding that is not supported by convincing experiments or discussion. strengths: The paper is well written and easy to follow. It proposes a few interesting ideas that make the paper interesting, such as: - Using concepts to increase interpretability - A new concept dataset - Concepts to aid training with small datasets - Models to integrate concepts with training data weaknesses: I think the authors should consider the following weaknesses of the paper: 1. In the introduction, from line 25 to 51, they discuss the interpretability of concepts, and in line 52 they say interpretability is not their main goal. Instead, the authors could skip the interpretability discussion and focus on data efficiency. 2. I am unsure why the authors use the term data efficient. I assume the claim is due to achieving good performance with a smaller number of samples plus concepts. To support this claim, the authors should experiment with a large dataset, take a few samples from it, and show that the same level of performance as with the large dataset can be achieved using the smaller sample size and concepts. Figure 7 shows a different thing - it actually shows a counterexample, as increasing the number of samples increases the performance of the models. 3. More details are required to understand the models in Figure 2 and Figure 3. I assume that the bottleneck layer and output layer are multilayer perceptrons, with the output layer generating the class labels. However, the overall discussion of the models must be improved. 4. The rightmost plot in Fig. 7 shows that the proposed models perform better than the oracle - this result is surprising but not explained by the authors. confidence: 3 justification: The paper should be rejected in its current form, as the claimed ideas and conclusions have significance in this area but lack appropriate justification and/or empirical evidence. The study lacks robust data to support its conclusions. The methodology (the proposed models) is inadequately described. Given this feedback, I recommend rejecting the paper. The authors are encouraged to revise and resubmit in the future with substantial improvements. They should focus on one main contribution and conduct additional empirical research to support their claim. Building upon one contribution, they can provide additional contributions supporting the main claim.
They should also provide more comprehensive evidence for their conclusions.
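The reviews of this paper describe hybrid architectures in which a concept bottleneck is combined with a skip connection, so the class head can use both the predicted concepts and features that bypass them. Below is a hedged PyTorch sketch of that general idea, assuming precomputed backbone features; the layer sizes, the sigmoid on the concept logits, and the 0.5 loss weight are illustrative assumptions, not the paper's actual design (Figures 2 and 3 of the paper define the real architectures).

```python
import torch
import torch.nn as nn

class HybridConceptModel(nn.Module):
    """Concept-bottleneck classifier with a skip connection: the class head sees
    both the predicted concepts and raw features that bypass the bottleneck."""
    def __init__(self, in_dim: int, n_concepts: int, n_classes: int, hidden: int = 128):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.concept_head = nn.Linear(hidden, n_concepts)        # the "bottleneck"
        self.class_head = nn.Sequential(                         # concepts + skip path
            nn.Linear(n_concepts + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        h = self.features(x)
        concept_logits = self.concept_head(h)
        class_logits = self.class_head(torch.cat([torch.sigmoid(concept_logits), h], dim=-1))
        return concept_logits, class_logits

model = HybridConceptModel(in_dim=512, n_concepts=9, n_classes=10)
concept_loss, class_loss = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

x = torch.randn(4, 512)                   # e.g. features from a frozen backbone
c = torch.randint(0, 2, (4, 9)).float()   # binary concept labels
y = torch.randint(0, 10, (4,))            # class labels
concept_logits, class_logits = model(x)
loss = class_loss(class_logits, y) + 0.5 * concept_loss(concept_logits, c)  # 0.5 is an assumed weight
loss.backward()
```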
dFKeHbPkew
Hybrid Concept-based Models: Using Concepts to Improve Neural Networks' Accuracy
[]
Most datasets used for supervised machine learning consist of a single label per data point. However, in cases where more information than just the class label is available, would it be possible to train models more efficiently? We introduce two novel model architectures, which we call hybrid concept-based models, that train using both class labels and additional information in the dataset referred to as concepts. In order to thoroughly assess their performance, we introduce ConceptShapes, an open and flexible class of datasets with concept labels. We show that the hybrid concept-based models can outperform standard computer vision models and previously proposed concept-based models with respect to accuracy. We also introduce an algorithm for performing adversarial concept attacks, where an image is perturbed in a way that does not change a concept-based model's concept predictions, but changes the class prediction. The existence of such adversarial examples raises questions about the interpretable qualities promised by concept-based models.
[ "Deep learning", "computer vision", "concept-based models", "data efficient models." ]
https://openreview.net/pdf?id=dFKeHbPkew
https://openreview.net/forum?id=dFKeHbPkew
t6lRhmsxyF
official_review
1,728,456,445,381
dFKeHbPkew
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission3/Reviewer_EuRs" ]
NLDL.org/2025/Conference
2025
title: Interesting paper with useful takeaway messages summary: The authors study two main themes in the context of concept-based models. The first theme concerns issues with existing benchmark datasets, and proposes a synthetic dataset for concepts (an MNIST of sorts). The second theme concerns the robustness of concept-based predictions and is pursued along two directions. The first direction concerns enhanced variants of the CBM architecture, basically involving a skip connection offering an alternate route to task predictions that doesn't rely on concept predictions. The second direction demonstrates adversarial concept attacks that change task predictions without changing concept predictions, raising questions about the interpretable qualities promised by concept-based models. strengths: - The paper is well-written and easy to read. - Experiments fit the key questions identified, and are easy to reproduce. - Presents ample opportunity to investigate further along the themes/directions presented. weaknesses: - The paper feels a bit "lite", trying to fit different themes/directions within 5 pages. - See L053 where sparsity considerations are introduced. - IMHO, on a first reading of the paper, I didn't get a strong impression of the data efficiency considerations. - It would seem more natural to use a title such as "On the Robustness of Concept-Based Models: Benchmarks, Architectures, and Attacks" - Note that even with that long title, I still didn't get to data efficiency (!) - The proposed synthetic dataset, with combinations of geometric shapes, is much simpler than birds with rich textures and intricate combinations of body parts, etc. - The presentation needs a minor revision - Figure 1 is too verbose, essentially repeating the subfigures to highlight concept predictions being invariant. - Most plots are accompanied by only descriptive captions that fall short of offering a summary or conclusion to look for. - Relevant parts in some plots are too small to see due to the choice of scale or axis range. - Nitpicking - We will deviate -- interesting choice of words, which warrants elaboration on the motivation and connection to the main theme - L245: majority votes -- If a composite verb is absolutely necessary, I'd write it with a hyphen as majority-votes. Still, this language is not standard to the reviewer's knowledge. Please rephrase for clarity. confidence: 3 justification: Overall, the submission raises important questions, includes sound results, and makes for interesting discussion and follow-up work.
dFKeHbPkew
Hybrid Concept-based Models: Using Concepts to Improve Neural Networks' Accuracy
[]
Most datasets used for supervised machine learning consist of a single label per data point. However, in cases where more information than just the class label is available, would it be possible to train models more efficiently? We introduce two novel model architectures, which we call hybrid concept-based models, that train using both class labels and additional information in the dataset referred to as concepts. In order to thoroughly assess their performance, we introduce ConceptShapes, an open and flexible class of datasets with concept labels. We show that the hybrid concept-based models can outperform standard computer vision models and previously proposed concept-based models with respect to accuracy. We also introduce an algorithm for performing adversarial concept attacks, where an image is perturbed in a way that does not change a concept-based model's concept predictions, but changes the class prediction. The existence of such adversarial examples raises questions about the interpretable qualities promised by concept-based models.
[ "Deep learning", "computer vision", "concept-based models", "data efficient models." ]
https://openreview.net/pdf?id=dFKeHbPkew
https://openreview.net/forum?id=dFKeHbPkew
ggoYj51wtS
official_review
1,728,249,038,924
dFKeHbPkew
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission3/Reviewer_2rhX" ]
NLDL.org/2025/Conference
2025
title: Interesting study with nice experiments summary: The paper is an interesting study which questions the interpretability of concept-based methods. The authors perform a thorough analysis by: (a) creating a synthetic concept-based dataset that does not suffer from the problems of data-labelling noise and is efficient to generate, (b) proposing new architectures for concept-based learning, and (c) utilizing this dataset to show that it is easy to perform adversarial attacks and fool models such that the underlying concept predictions are kept the same but the final output changes. The authors' final claim is that concept-based methods are not interpretable because of adversarial attacks. strengths: (a) Very clearly written, with good explanations of the motivations for the experiments (b) Experiments are well explained and clear (c) Good motivation on why the CUB dataset's concepts are lacking and on the need for a better dataset. weaknesses: (a) It seems that there are two different contributions in this paper - (i) a new concept-based architecture and (ii) showing the lack of interpretability of concept networks. It would be clearer and more thorough if the paper focused more deeply on one of these contributions rather than trying to put two distinct contributions into the same research work. (b) While it has been shown that concept-based models are susceptible to adversarial attacks, I would not agree that this nullifies the whole interpretability argument for concept-based networks, as argued in the paper. The adversarial attacks simply show that concept-based models are not interpretable when data is out of distribution relative to the training data. They do not prove that concept-based models are uninterpretable for in-distribution data. (c) The main problem with the synthetic dataset is that it is not faithful to real images and might be easier than natural images. confidence: 3 justification: While I think the paper does have certain weaknesses in terms of its claims that concept-based models are uninterpretable, I do think the paper is a good contribution because of the dataset, along with its results pointing out the dissonance between concepts and final model outputs. The dataset that they share might be a good springboard for further research on concept-based learning. final_rebuttal_confidence: 4 final_rebuttal_justification: I believe the authors have some interesting insights to share in this paper. They fully acknowledge the shortcomings related to synthetic data, but despite that I think it is an interesting dataset for ablations on the effect of concepts in neural network training. The one real weakness is the number of different ideas shared in this one paper - it would have been clearer and made for a much stronger paper if there were fewer ideas explored more deeply.
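The adversarial concept attacks discussed in this review perturb an input so that the concept predictions stay (approximately) fixed while the class prediction changes. Here is a hedged PGD-style sketch of one way such a search could be set up, assuming a two-headed model that returns (concept_logits, class_logits), similar to the hybrid sketch above; the untargeted loss, the MSE penalty keeping concepts fixed, and all step sizes are assumptions rather than the algorithm actually proposed in the paper.

```python
import torch
import torch.nn as nn

def adversarial_concept_attack(model, x, y, eps=0.03, alpha=0.005, steps=40, lam=10.0):
    """PGD-style search for a perturbation that pushes the class prediction away
    from label y while keeping the concept logits close to their original values.
    `model` is assumed to return (concept_logits, class_logits)."""
    with torch.no_grad():
        concepts_orig, _ = model(x)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        concepts, logits = model(x + delta)
        objective = (nn.functional.cross_entropy(logits, y)                    # move class away from y
                     - lam * nn.functional.mse_loss(concepts, concepts_orig))  # keep concepts unchanged
        objective.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend the objective
            delta.clamp_(-eps, eps)              # stay inside the epsilon-ball
            delta.grad.zero_()
    return (x + delta).detach()

# Toy usage with a dummy two-headed model on feature vectors.
class DummyConceptModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.concept_head = nn.Linear(32, 5)
        self.class_head = nn.Linear(32, 10)
    def forward(self, x):
        return self.concept_head(x), self.class_head(x)

model = DummyConceptModel()
x, y = torch.randn(4, 32), torch.randint(0, 10, (4,))
x_adv = adversarial_concept_attack(model, x, y)
```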
dFKeHbPkew
Hybrid Concept-based Models: Using Concepts to Improve Neural Networks' Accuracy
[]
Most datasets used for supervised machine learning consist of a single label per data point. However, in cases where more information than just the class label is available, would it be possible to train models more efficiently? We introduce two novel model architectures, which we call hybrid concept-based models, that train using both class labels and additional information in the dataset referred to as concepts. In order to thoroughly assess their performance, we introduce ConceptShapes, an open and flexible class of datasets with concept labels. We show that the hybrid concept-based models can outperform standard computer vision models and previously proposed concept-based models with respect to accuracy. We also introduce an algorithm for performing adversarial concept attacks, where an image is perturbed in a way that does not change a concept-based model's concept predictions, but changes the class prediction. The existence of such adversarial examples raises questions about the interpretable qualities promised by concept-based models.
[ "Deep learning", "computer vision", "concept-based models", "data efficient models." ]
https://openreview.net/pdf?id=dFKeHbPkew
https://openreview.net/forum?id=dFKeHbPkew
fuku2nq04q
decision
1,730,901,554,447
dFKeHbPkew
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Reject
dFKeHbPkew
Hybrid Concept-based Models: Using Concepts to Improve Neural Networks' Accuracy
[]
Most datasets used for supervised machine learning consist of a single label per data point. However, in cases where more information than just the class label is available, would it be possible to train models more efficiently? We introduce two novel model architectures, which we call hybrid concept-based models, that train using both class labels and additional information in the dataset referred to as concepts. In order to thoroughly assess their performance, we introduce ConceptShapes, an open and flexible class of datasets with concept labels. We show that the hybrid concept-based models can outperform standard computer vision models and previously proposed concept-based models with respect to accuracy. We also introduce an algorithm for performing adversarial concept attacks, where an image is perturbed in a way that does not change a concept-based model's concept predictions, but changes the class prediction. The existence of such adversarial examples raises questions about the interpretable qualities promised by concept-based models.
[ "Deep learning", "computer vision", "concept-based models", "data efficient models." ]
https://openreview.net/pdf?id=dFKeHbPkew
https://openreview.net/forum?id=dFKeHbPkew
INbYrCdO5Z
meta_review
1,730,391,723,406
dFKeHbPkew
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission3/Area_Chair_6mHp" ]
NLDL.org/2025/Conference
2025
metareview: Given the reviewers' comments on the authors' rebuttal and my own assessment of the paper, I suggest that this paper be rejected, primarily because of its lack of novelty and because the experimental results do not fully support the stated objectives. recommendation: Reject suggested_changes_to_the_recommendation: 2: I'm certain of the recommendation. It should not be changed confidence: 3: The area chair is somewhat confident
bkQRCWYrMb
BoRA: Bayesian Hierarchical Low-Rank Adaption for Multi-Task Large Language Models
[ "Simen Eide", "Arnoldo Frigessi" ]
This paper introduces Bayesian Hierarchical Low-Rank Adaption (BoRA), a novel method for finetuning multi-task Large Language Models (LLMs). Current finetuning approaches, such as Low-Rank Adaption (LoRA), perform exceptionally well in reducing training parameters and memory usage but face limitations when applied to multiple similar tasks. Practitioners usually have to choose between training separate models for each task or a single model for all tasks, both of which come with trade-offs in specialization and data utilization. BoRA addresses these trade-offs by leveraging a Bayesian hierarchical model that allows tasks to share information through global hierarchical priors. This enables tasks with limited data to benefit from the overall structure derived from related tasks while allowing tasks with more data to specialize. Our experimental results show that BoRA outperforms both individual and unified model approaches, achieving lower perplexity and better generalization across tasks. This method provides a scalable and efficient solution for multi-task LLM finetuning, with significant practical implications for diverse applications.
[ "LLM", "bayesian", "multi-task learning" ]
https://openreview.net/pdf?id=bkQRCWYrMb
https://openreview.net/forum?id=bkQRCWYrMb
w2H6MRFEJ7
official_review
1,727,623,397,029
bkQRCWYrMb
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission1/Reviewer_YdRC" ]
NLDL.org/2025/Conference
2025
title: Review of BoRA summary: The authors propose a new method for fine-tuning multi-task Large Language Models called Bayesian Hierarchical Low-Rank Adaptation (BoRA). Instead of training a model for each task or a single model for all tasks without specialization, this fine-tuning approach allows the tasks to share global information while remaining specialized for their respective tasks. They do that by defining a posterior distribution that combines the likelihood of the data belonging to the respective task with a Gaussian hierarchical prior over the task parameters, which controls how much structure and information the tasks share with each other. strengths: The paper presents a new fine-tuning method that performs multitasking in a single network. The authors also present easy-to-follow, step-by-step equations leading to the final posterior distribution. I first wondered why the authors selected the first 25 speakers of the dataset, but this selection allowed for different numbers of speeches, which is important for seeing how the method performs with different data sizes. weaknesses: I missed a comparison with another multi-task learning method. The authors provide a perplexity evaluation with different precision (tau) hyperparameter values, but it would be better to have another method for comparison. The same can be said about Section 2.1: multi-task learning is popular, but only one work on it is cited, without much detail, even though it is an important part of this work. The related-work section needs to be improved to better position the work in the literature. A few references need to be updated with publications outside arXiv. The paper needs an English review to avoid inconsistencies, such as using "dataset" and "data set" in the same paragraph. confidence: 4 justification: This paper should be accepted because it provides a new method for fine-tuning multi-task Large Language Models that needs only a single network to train every task. The authors present an explanatory step-by-step derivation of the posterior distribution, and the experiment's dataset is well chosen. Some of the weaknesses are not too relevant, such as the references and the English revision. So, with that, I propose the acceptance of the paper.
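The summary above says the method works by combining the per-task likelihood with a Gaussian hierarchical prior over the task parameters. As a hedged sketch (reconstructed from the abstract and reviews, not copied from the paper), the log-posterior being maximized plausibly has the form below, where $\theta_d$ are the LoRA parameters of task $d$, $\Theta$ is the shared prior mean with a uniform hyperprior, and $\tau$ is the precision controlling how strongly tasks are pulled together:

```latex
\log p(\theta_{1:D}, \Theta \mid \mathcal{D})
  \;\propto\; \sum_{d=1}^{D} \log p(\mathcal{D}_d \mid \theta_d)
  \;-\; \frac{\tau}{2} \sum_{d=1}^{D} \lVert \theta_d - \Theta \rVert_2^2 .
```

Under this reading, $\tau \to 0$ recovers independent LoRA modules per task, while a very large $\tau$ forces every $\theta_d$ toward $\Theta$, i.e. effectively a single shared model, which matches the trade-off described in the abstract.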
bkQRCWYrMb
BoRA: Bayesian Hierarchical Low-Rank Adaption for Multi-Task Large Language Models
[ "Simen Eide", "Arnoldo Frigessi" ]
This paper introduces Bayesian Hierarchical Low-Rank Adaption (BoRA), a novel method for finetuning multi-task Large Language Models (LLMs). Current finetuning approaches, such as Low-Rank Adaption (LoRA), perform exceptionally well in reducing training parameters and memory usage but face limitations when applied to multiple similar tasks. Practitioners usually have to choose between training separate models for each task or a single model for all tasks, both of which come with trade-offs in specialization and data utilization. BoRA addresses these trade-offs by leveraging a Bayesian hierarchical model that allows tasks to share information through global hierarchical priors. This enables tasks with limited data to benefit from the overall structure derived from related tasks while allowing tasks with more data to specialize. Our experimental results show that BoRA outperforms both individual and unified model approaches, achieving lower perplexity and better generalization across tasks. This method provides a scalable and efficient solution for multi-task LLM finetuning, with significant practical implications for diverse applications.
[ "LLM", "bayesian", "multi-task learning" ]
https://openreview.net/pdf?id=bkQRCWYrMb
https://openreview.net/forum?id=bkQRCWYrMb
qgTAsLaULo
official_review
1,728,547,684,986
bkQRCWYrMb
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission1/Reviewer_3ife" ]
NLDL.org/2025/Conference
2025
title: Low-rank adaptation method for LLMs in the context of fine-tuning for multiple tasks at once summary: **Summary**: The authors propose a low-rank adaptation method for LLMs in the context of fine-tuning for multiple tasks at once. In contrast to standard `LoRA`, where one module would be trained **independently** for each task, `BoRA` aims to leverage possible transferability across tasks. More specifically, BoRA also introduces separate module parameters $\theta_d$ for each task; but, to model the relationship between tasks, the joint distribution of $\theta_{1\dots D}$ is modelled with a Gaussian hierarchical prior, with a uniform hyperprior on $\Theta$. strengths: **Strengths:** * The paper is well presented and clear * The core idea of using a shared prior/hyperprior is sound * The experiments section contains interesting ablations on the impact of dataset/task size on the improvement observed in the multi-task BoRA weaknesses: **Weaknesses:** * By design, BoRA seems to assume the tasks are related enough to be modelled by a joint prior. However, task interference / negative transfer is a key issue in the multi-task literature: while the model does allow the BoRA module parameters to deviate from the common prior, it would be interesting to test this in practice. In fact, the results of Table 1 seem to imply that the benchmark considered in the paper does not suffer much from task interference, since a single LoRA module trained on all tasks performs better than training independent LoRA modules for each task (13.91 vs 16.70 perplexity) * In terms of related work, the paper could also mention the task-merging literature, as it seems closely related to multi-task PEFT in general. **Minor notes:** * Some notations are a bit cumbersome (e.g. introducing the intermediate notion of documents in Section 3.1 seems superfluous; for instance, most recent LLMs tend to directly refer to the number of tokens they have been trained on) * In Figure 2, it would be useful to plot a constant line for the perplexity value of $\tau = 0$ ($\sim$ LoRA baseline) confidence: 4 justification: While I think the experimentation has some shortcomings (only one benchmark with positive transfer among tasks, and only a comparison to the straightforward LoRA baseline), I find the paper to be well presented and the idea well grounded, and I also appreciated the ablation experiments exploring not only the impact of $\tau$ but also of the dataset size. final_rebuttal_confidence: 4 final_rebuttal_justification: Taking into account the rebuttal and other reviews, I am inclined to keep my original rating: while I think the experimental section could include more benchmarks and/or baselines, the proposed idea is interesting and well explained, and the experiments include interesting ablations on $\tau$.
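To illustrate how lightweight the sharing mechanism discussed in this review is in practice, here is a hedged PyTorch sketch of the hierarchical-prior penalty added to the per-task fine-tuning loss. The way the task-specific and global LoRA tensors are stored, their shapes, and the placeholder likelihood term are assumptions for illustration; this is not the authors' implementation.

```python
import torch

def bora_regularizer(task_lora_params, global_params, tau: float) -> torch.Tensor:
    """Gaussian hierarchical prior term: (tau/2) * sum_d ||theta_d - Theta||^2,
    where theta_d are the task-specific LoRA tensors and Theta the shared prior mean."""
    reg = torch.zeros((), device=global_params[0].device)
    for theta_d in task_lora_params:            # one list of tensors per task
        for p_task, p_global in zip(theta_d, global_params):
            reg = reg + 0.5 * tau * (p_task - p_global).pow(2).sum()
    return reg

# Assumed setup: two tasks, each with one pair of LoRA matrices (A, B) for one layer.
global_AB = [torch.zeros(8, 512, requires_grad=True),     # shared prior mean for A
             torch.zeros(512, 8, requires_grad=True)]      # shared prior mean for B
task_AB = [[torch.randn_like(p, requires_grad=True) for p in global_AB] for _ in range(2)]

nll = torch.tensor(0.0)   # placeholder for the summed per-task language-modelling losses
loss = nll + bora_regularizer(task_AB, global_AB, tau=0.1)
loss.backward()           # gradients flow to both the task-specific and the shared parameters
```

Setting tau to 0 removes the penalty (independent LoRA per task), while a very large tau pins every task to the shared tensors, matching the two extremes this review compares via Table 1.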
bkQRCWYrMb
BoRA: Bayesian Hierarchical Low-Rank Adaption for Multi-Task Large Language Models
[ "Simen Eide", "Arnoldo Frigessi" ]
This paper introduces Bayesian Hierarchical Low-Rank Adaption (BoRA), a novel method for finetuning multi-task Large Language Models (LLMs). Current finetuning approaches, such as Low-Rank Adaption (LoRA), perform exceptionally well in reducing training parameters and memory usage but face limitations when applied to multiple similar tasks. Practitioners usually have to choose between training separate models for each task or a single model for all tasks, both of which come with trade-offs in specialization and data utilization. BoRA addresses these trade-offs by leveraging a Bayesian hierarchical model that allows tasks to share information through global hierarchical priors. This enables tasks with limited data to benefit from the overall structure derived from related tasks while allowing tasks with more data to specialize. Our experimental results show that BoRA outperforms both individual and unified model approaches, achieving lower perplexity and better generalization across tasks. This method provides a scalable and efficient solution for multi-task LLM finetuning, with significant practical implications for diverse applications.
[ "LLM", "bayesian", "multi-task learning" ]
https://openreview.net/pdf?id=bkQRCWYrMb
https://openreview.net/forum?id=bkQRCWYrMb
eXRDS7NsYc
meta_review
1,730,309,568,377
bkQRCWYrMb
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission1/Area_Chair_Ljha" ]
NLDL.org/2025/Conference
2025
metareview: The paper proposes to use a Bayesian model for fine-tuning optimization of a multi-task LLM model. The contribution could be considered incremental. Experiments are weak (no comparison) and the plots are low-quality. The paper is evaluated as clear and well-presented. Extending the training to a multi-task scenario is reasonable, and using the Bayesian model for the training is a sound approach. pros: 1. fine-tuning of LLM in a multi-task scenario could be a realistic scenario 2. The Bayesian framework is reasonable cons: 1. weak evaluation, with no comparison 2. low-quality plots 3. incremental contribution recommendation: Accept (Poster) suggested_changes_to_the_recommendation: 1: I agree that the recommendation could be moved down confidence: 3: The area chair is somewhat confident
bkQRCWYrMb
BoRA: Bayesian Hierarchical Low-Rank Adaption for Multi-Task Large Language Models
[ "Simen Eide", "Arnoldo Frigessi" ]
This paper introduces Bayesian Hierarchical Low-Rank Adaption (BoRA), a novel method for finetuning multi-task Large Language Models (LLMs). Current finetuning approaches, such as Low-Rank Adaption (LoRA), perform exceptionally well in reducing training parameters and memory usage but face limitations when applied to multiple similar tasks. Practitioners usually have to choose between training separate models for each task or a single model for all tasks, both of which come with trade-offs in specialization and data utilization. BoRA addresses these trade-offs by leveraging a Bayesian hierarchical model that allows tasks to share information through global hierarchical priors. This enables tasks with limited data to benefit from the overall structure derived from related tasks while allowing tasks with more data to specialize. Our experimental results show that BoRA outperforms both individual and unified model approaches, achieving lower perplexity and better generalization across tasks. This method provides a scalable and efficient solution for multi-task LLM finetuning, with significant practical implications for diverse applications.
[ "LLM", "bayesian", "multi-task learning" ]
https://openreview.net/pdf?id=bkQRCWYrMb
https://openreview.net/forum?id=bkQRCWYrMb
doxZghDw8n
decision
1,730,901,554,360
bkQRCWYrMb
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Accept (Poster) comment: We recommend a poster presentation given the AC and reviewers recommendations.
bkQRCWYrMb
BoRA: Bayesian Hierarchical Low-Rank Adaption for Multi-Task Large Language Models
[ "Simen Eide", "Arnoldo Frigessi" ]
This paper introduces Bayesian Hierarchical Low-Rank Adaption (BoRA), a novel method for finetuning multi-task Large Language Models (LLMs). Current finetuning approaches, such as Low-Rank Adaption (LoRA), perform exceptionally well in reducing training parameters and memory usage but face limitations when applied to multiple similar tasks. Practitioners usually have to choose between training separate models for each task or a single model for all tasks, both of which come with trade-offs in specialization and data utilization. BoRA addresses these trade-offs by leveraging a Bayesian hierarchical model that allows tasks to share information through global hierarchical priors. This enables tasks with limited data to benefit from the overall structure derived from related tasks while allowing tasks with more data to specialize. Our experimental results show that BoRA outperforms both individual and unified model approaches, achieving lower perplexity and better generalization across tasks. This method provides a scalable and efficient solution for multi-task LLM finetuning, with significant practical implications for diverse applications.
[ "LLM", "bayesian", "multi-task learning" ]
https://openreview.net/pdf?id=bkQRCWYrMb
https://openreview.net/forum?id=bkQRCWYrMb
E0Dd4Aas5f
official_review
1,728,518,464,793
bkQRCWYrMb
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission1/Reviewer_kKbg" ]
NLDL.org/2025/Conference
2025
title: This paper has already been published on arXiv (https://arxiv.org/abs/2407.15857), and provides the BoRA method which is a new approach that enhances the performance of Large Language Models (LLMs) when trained for multiple tasks summary: This paper has already been published on arXiv (https://arxiv.org/abs/2407.15857) and provides the BoRA method, a new approach that enhances the performance of Large Language Models when trained for multiple tasks. Using Bayesian hierarchical priors, BoRA balances training separate models for each task and using a single model for all tasks. This allows tasks with limited data to benefit from the knowledge shared by other tasks. Experiments on Norwegian parliamentary speeches demonstrate that BoRA outperforms traditional methods. The paper is methodologically sound. strengths: The authors enhanced existing techniques, including LoRA and multi-task learning, by applying Bayesian priors, which are theoretically sound. Their method, BoRA, provides a scalable and efficient approach to fine-tuning LLMs, allowing them to perform well across various tasks while preserving their versatility. This is particularly valuable for applications that require a broad range of functionalities. The paper is well-structured and written in clear, polished English. weaknesses: The method’s performance heavily relies on the precision hyperparameter (τ), but the paper lacks a thorough investigation of its sensitivity across different tasks. The absence of a complete Bayesian analysis prevents a deeper understanding of uncertainty estimates, limiting the discussion on the reliability of the model’s predictions. More experiments exploring how different values of τ impact model performance in various settings would be beneficial for practitioners. The paper could benefit from using additional metrics (e.g., BLEU score, task-wise variance, Mean Reciprocal Rank) to better understand the model’s performance, particularly in multi-task learning and fine-tuning. The paper does not focus on computational cost metrics (e.g., training time, memory usage, GPU utilization) associated with BoRA compared to LoRA or other methods. Such details could further enhance the understanding of BoRA’s practical efficiency in real-world applications. The discussion would be enriched by including quantitative measurements of training time, energy consumption, or resource allocation for BoRA. confidence: 5 justification: The paper, already published on arXiv (https://arxiv.org/abs/2407.15857), presents the BoRA method, a novel approach for improving Large Language Models (LLMs) in multi-task learning by utilizing Bayesian hierarchical priors. This approach balances training separate models for each task and using a single model, allowing tasks with limited data to benefit from shared knowledge. BoRA, tested on Norwegian parliamentary speeches, outperforms traditional methods and enhances existing techniques like LoRA and multi-task learning. Although BoRA shows promise, the authors suggest further exploration of the hyperparameter τ's sensitivity and full Bayesian analysis for uncertainty estimates. The paper also highlights the need for more metrics to assess performance across tasks and a deeper focus on computational cost, particularly in comparison to LoRA.
bkQRCWYrMb
BoRA: Bayesian Hierarchical Low-Rank Adaption for Multi-Task Large Language Models
[ "Simen Eide", "Arnoldo Frigessi" ]
This paper introduces Bayesian Hierarchical Low-Rank Adaption (BoRA), a novel method for finetuning multi-task Large Language Models (LLMs). Current finetuning approaches, such as Low-Rank Adaption (LoRA), perform exceptionally well in reducing training parameters and memory usage but face limitations when applied to multiple similar tasks. Practitioners usually have to choose between training separate models for each task or a single model for all tasks, both of which come with trade-offs in specialization and data utilization. BoRA addresses these trade-offs by leveraging a Bayesian hierarchical model that allows tasks to share information through global hierarchical priors. This enables tasks with limited data to benefit from the overall structure derived from related tasks while allowing tasks with more data to specialize. Our experimental results show that BoRA outperforms both individual and unified model approaches, achieving lower perplexity and better generalization across tasks. This method provides a scalable and efficient solution for multi-task LLM finetuning, with significant practical implications for diverse applications.
[ "LLM", "bayesian", "multi-task learning" ]
https://openreview.net/pdf?id=bkQRCWYrMb
https://openreview.net/forum?id=bkQRCWYrMb
4fGadd6FfV
official_review
1,728,588,574,603
bkQRCWYrMb
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission1/Reviewer_X5J4" ]
NLDL.org/2025/Conference
2025
title: Promising but Requires Broader Validation and Deeper Analysis summary: The paper titled *"BoRA: Bayesian Hierarchical Low-Rank Adaption for Multi-task Large Language Models"* introduces BoRA, a novel method designed to fine-tune Large Language Models (LLMs) in multi-task environments. The approach addresses the limitations of existing fine-tuning techniques, specifically the trade-off between task specialization and the effective utilization of data. BoRA leverages a Bayesian hierarchical model, which allows multiple tasks to share information through global hierarchical priors. This enables tasks with limited data to benefit from the shared structure derived from related tasks, while tasks with more data can focus on specializing. In this way, BoRA strikes a balance between the two main approaches in multi-task learning: training separate models for each task and training a single model for all tasks. By doing so, it effectively mitigates the trade-offs that typically arise in such scenarios. In addition to this hierarchical structure, BoRA extends the Low-Rank Adaption (LoRA) technique, a method commonly used to reduce the number of trainable parameters in LLMs. BoRA enhances LoRA by introducing hierarchical priors over the task-specific parameters, which enables a more structured and data-efficient approach to multi-task learning. The authors validate their method using a dataset of Norwegian parliament speeches, where each speaker is treated as a distinct task. BoRA outperforms both models trained on individual tasks and a unified model trained across all tasks. The results show that BoRA achieves lower perplexity and better generalization across tasks, particularly benefiting tasks with less available data. This demonstrates that BoRA provides a scalable and efficient solution for fine-tuning LLMs across multiple tasks with varying data sizes, reducing the complexity of managing individual models while maintaining strong task-specific performance. From a theoretical standpoint, BoRA is grounded in Bayesian principles, offering a solid framework for combining task-specific learning with global knowledge. The Bayesian hierarchical approach ensures that even tasks with limited data can leverage global parameters to improve performance without overfitting. Furthermore, the empirical results, based on a realistic dataset, confirm the correctness of the method. The Talk of Norway dataset, with tasks of varying data sizes, serves as an appropriate testbed for evaluating BoRA's effectiveness in real-world scenarios. Overall, BoRA presents a methodologically sound and practically useful technique for fine-tuning LLMs in multi-task settings. By introducing Bayesian hierarchical priors, it allows the model to adapt to the specific needs of each task while maintaining efficiency and scalability. The results show that BoRA significantly improves upon traditional fine-tuning methods for multi-task problems, particularly in cases where data availability varies across tasks. strengths: The paper titled *"BoRA: Bayesian Hierarchical Low-Rank Adaption for Multi-task Large Language Models"* introduces a novel and theoretically grounded method for fine-tuning Large Language Models (LLMs) in multi-task environments. The primary innovation lies in the integration of Bayesian hierarchical priors into Low-Rank Adaption (LoRA), addressing typical trade-offs in multi-task learning between task specialization and shared learning across related tasks. 
One of the strongest aspects of the paper is its correctness. The methodology is rooted in solid theoretical foundations. By utilizing Bayesian hierarchical priors, the approach ensures that tasks with limited data can still benefit from shared global knowledge, while tasks with ample data can specialize accordingly. The mathematical framework underpinning BoRA is sound, and the authors have carefully constructed the optimization method, using AdamW for Maximum a Posteriori (MAP) estimation, which is appropriate given the model’s scale and objectives. The empirical results further validate the correctness of the approach. The authors employ a realistic dataset—the Talk of Norway, a collection of parliamentary speeches—and treat each speaker as an individual task. The results clearly demonstrate that BoRA outperforms both individual task-specific models and a unified model trained across all tasks. The improvements in perplexity and generalization across tasks provide concrete evidence of the model’s effectiveness, and the authors' use of established benchmarks and evaluation metrics further strengthens the case for BoRA’s correctness. In terms of quality, the paper is well-structured and logically presented. The authors provide a clear explanation of the problem, highlighting why current fine-tuning techniques, such as LoRA, fall short in multi-task settings. They build a convincing argument for the introduction of BoRA, which strikes a balance between training separate models for each task and training a single model for all tasks. The experimental results are thoroughly analyzed, and the visualizations effectively communicate the model’s performance under different conditions, making the findings easy to follow and interpret. Despite its strengths, there are areas where the paper could be further improved. For example, the authors could expand on their discussion of the precision hyperparameter τ, particularly regarding its selection process and sensitivity. While the introduction of Bayesian priors is well-motivated, it would be useful to understand whether τ's value is sensitive to specific tasks or datasets. Additionally, it would be interesting to see more insights into the scalability of BoRA, particularly when applied to larger models than the ‘opt-350m’ used in the experiments. The paper is also clear and accessible, even when discussing complex concepts such as Bayesian hierarchical models and low-rank adaptation. The equations are well-explained, and the authors do an excellent job of breaking down the key components of the model. The figures and tables effectively support the narrative, particularly those that show how task dataset size influences performance. However, some sections could benefit from additional elaboration. For example, more insights into the scaling behavior of BoRA and its performance with a larger number of tasks or significantly larger models would enhance the paper’s clarity. Similarly, the authors could offer a more detailed discussion of why BoRA performs particularly well for tasks with limited data. In terms of significance, this work is a considerable contribution to the field. Multi-task learning is an important area of research in machine learning, and the ability to efficiently fine-tune LLMs across multiple tasks has broad implications. BoRA’s ability to handle varying data sizes across tasks, while maintaining both task-specific performance and scalability, makes it a promising solution for real-world applications. 
The method could be particularly useful in settings where training and maintaining separate models for each task is computationally prohibitive. Moreover, because BoRA builds on LoRA, a widely used fine-tuning technique, the approach is more likely to be adopted by practitioners in the field. The significance of the paper is enhanced by its potential to improve model performance without the need for multiple task-specific models. This opens new possibilities for applying BoRA in various industries where data availability and task complexity can vary significantly. The empirical results provide strong evidence of the method’s potential, making it an exciting contribution to multi-task learning research. Given the overall quality and contribution of this paper, there are several questions that could be clarified in a rebuttal. For example, it would be helpful to understand how sensitive BoRA’s performance is to the choice of the precision hyperparameter τ and whether different strategies for setting τ have been explored. Additionally, the authors could elaborate on the scalability of BoRA when applied to larger models, and whether any computational or optimization challenges arise when scaling up to models with billions of parameters. Further insights into which tasks benefited the most from BoRA and why would also provide a deeper understanding of its effectiveness. In conclusion, this paper presents a valuable and well-executed contribution to the field of multi-task learning for LLMs. The theoretical soundness of BoRA, combined with strong empirical results, demonstrates its potential as a significant advancement in fine-tuning techniques. While there are a few areas where further clarification would be beneficial, these do not detract from the paper’s overall strength. Given its correctness, clarity, quality, and significance, I would strongly recommend accepting this paper for publication. It represents an important step forward in developing scalable, efficient methods for fine-tuning LLMs in multi-task settings, and it has the potential to influence both academic research and real-world applications. weaknesses: The paper *"BoRA: Bayesian Hierarchical Low-Rank Adaption for Multi-task Large Language Models"* presents an interesting approach to fine-tuning Large Language Models (LLMs) for multi-task learning. While the overall framework is novel and grounded in strong theoretical principles, there are some notable weaknesses that could benefit from further elaboration and improvement. One issue lies in the correctness of the assumptions and choices made in the model’s design. Although the use of Gaussian hierarchical priors is reasonable, it might not be sufficient to capture the more complex relationships that exist between tasks in real-world settings. The assumption that tasks share a common Gaussian prior may oversimplify the diversity of tasks, especially in scenarios where tasks are not closely related. The authors do not address how BoRA would handle highly divergent tasks or those that introduce noise into the global structure, which raises concerns about its robustness in practical applications. Moreover, while the model’s optimization relies on Maximum a Posteriori (MAP) estimation, which is efficient, it lacks the uncertainty quantification typically provided by full Bayesian approaches such as Markov Chain Monte Carlo (MCMC) or Variational Inference. The authors mention these more advanced methods but do not explore their potential impact on the model’s overall performance. 
This raises the question of whether the current approach could lead to overconfidence in the learned task parameters, especially in tasks with limited data. A fuller discussion on the trade-offs between MAP estimation and these alternative Bayesian methods could provide more clarity on the consequences of the chosen optimization technique. Another area where the paper falls short is in the experimental setup. The authors validate BoRA using the Talk of Norway dataset, which, while offering a structured set of tasks (parliamentary speeches), may not reflect the complexity of more varied real-world tasks. It is unclear how BoRA would generalize to more heterogeneous datasets where the relationships between tasks are less clear or more noisy. The choice of dataset, while providing a reasonable first test, could limit the generalizability of the method, and this is not adequately addressed in the paper. In terms of quality, while the theoretical formulation of the model is well-done, certain methodological choices are not fully explored. For instance, the precision hyperparameter τ is a critical component of the hierarchical model, yet the authors provide little insight into how its value is selected or how sensitive the model’s performance is to this choice. More thorough sensitivity analyses would have been useful in demonstrating the robustness of the model across different configurations of τ. Without this, the reader is left wondering whether the results are highly dependent on specific hyperparameter settings or if they would generalize to other scenarios. Additionally, the experimental results are somewhat limited in scope. While BoRA is compared to individual models and a unified model trained on all tasks, the paper lacks a broader set of baselines, particularly comparisons with other state-of-the-art multi-task learning methods. This makes it difficult to gauge how much of an improvement BoRA truly offers over existing solutions. Including comparisons with recent multi-task fine-tuning methods or hierarchical Bayesian models could have provided a stronger validation of the proposed approach. Another limitation of the paper is its focus on a relatively small model ('opt-350m'). While this is a reasonable choice for experimentation, it raises concerns about BoRA’s scalability to larger LLMs, which are commonly used in practice. The authors do not discuss whether BoRA’s computational overhead scales efficiently with model size or whether the approach might face challenges when applied to models with billions of parameters. This is a significant omission, as one of the paper’s claims is that BoRA provides a scalable solution for multi-task learning. Clarity is another aspect where the paper could be improved. While the authors do a good job of explaining complex concepts like Bayesian hierarchical modeling and LoRA, some sections could benefit from more detailed explanations. The discussion of how the hierarchical model influences task-specific parameters, for example, remains somewhat abstract. Providing more concrete examples or intuitively explaining how this process works in practice would make the paper more accessible to a broader audience. The presentation of the results also lacks sufficient interpretation. While the authors report perplexity improvements, they do not fully explain why perplexity was chosen as the key metric or how it relates to multi-task learning performance across a variety of domains. 
Perplexity is standard in language modeling, but it may not always be the most informative measure for evaluating the effectiveness of multi-task models. Offering more detailed interpretations of the results and linking them to practical outcomes could provide better context for the reader. Moreover, the paper’s discussion of related work is somewhat brief and does not sufficiently explore how BoRA fits into the broader landscape of multi-task learning methods. There is little discussion on how BoRA compares to other parameter-efficient fine-tuning methods, and the novelty of BoRA could be better highlighted by positioning it more clearly within the existing literature. Finally, while the method shows promise, its broader significance is difficult to assess given the narrow scope of the experimental validation. The authors test BoRA on a single dataset, which limits the claims they can make about its generalizability. Although the paper demonstrates improvements on parliamentary speeches, it is unclear how BoRA would perform in tasks that are less structured or more diverse, such as dialogue generation or machine translation. The authors do not discuss whether BoRA is robust to tasks that might introduce noise or complexity into the hierarchical structure, which raises questions about its applicability in more challenging real-world settings. Another issue related to significance is that the authors do not provide a clear analysis of BoRA’s computational efficiency. While they claim that the method is scalable, they offer little evidence to support this claim, particularly when it comes to scaling to larger models or more complex task distributions. Without this, it is difficult to fully assess the practical relevance of BoRA for large-scale applications, which weakens the broader impact of the paper. In conclusion, while the paper makes an important contribution to the field of multi-task learning, it has several weaknesses that reduce its overall impact. These include the lack of generalization to more diverse tasks, the absence of detailed sensitivity analyses for key hyperparameters, limited comparisons with other methods, and unclear scalability to larger models. These limitations suggest that while BoRA is a promising approach, more work is needed to fully validate its claims and demonstrate its robustness across a wider range of scenarios. confidence: 5 justification: My assessment of the paper is based on a balanced consideration of its innovative contributions, theoretical soundness, and areas that require further development. The paper introduces a novel approach, *BoRA* (Bayesian Hierarchical Low-Rank Adaption), which successfully extends the LoRA method to multi-task learning by using Bayesian hierarchical priors. This is a strong contribution that addresses a significant problem in fine-tuning Large Language Models (LLMs) for multiple tasks, offering an efficient and scalable method to balance task specialization and global knowledge sharing. The core strength of the paper lies in the soundness of the theoretical framework. The authors have carefully articulated how Bayesian hierarchical priors can allow tasks with limited data to benefit from shared global parameters while enabling tasks with larger datasets to specialize. This balance effectively mitigates the trade-off seen in traditional multi-task learning approaches. 
The empirical results, demonstrating BoRA’s improved perplexity on the Talk of Norway dataset, offer convincing evidence of the model's capabilities within this structured, controlled setting. These strengths reflect the paper's potential to impact multi-task learning approaches in practical settings where data distribution is uneven across tasks. However, despite these contributions, the paper has some critical limitations that affect its overall impact. The primary concern is the narrow scope of the experimental validation. The reliance on a single dataset of parliamentary speeches limits the generalizability of BoRA, especially when considering tasks that are more diverse, noisy, or heterogeneous in nature. This restricted validation raises concerns about whether BoRA can be applied to more complex or less structured real-world tasks. The lack of broader experimentation undermines the claim that BoRA is widely applicable to multi-task learning in diverse domains. Further, the choice of key hyperparameters, such as the precision hyperparameter τ, is not sufficiently explored. The authors do not provide enough details on how τ is chosen or its sensitivity to performance across different tasks, which leaves an important aspect of the model's functionality unexplored. Additionally, while BoRA is claimed to be scalable, the paper does not provide concrete evidence of how it performs with larger LLMs or in larger-scale settings. The absence of this scalability analysis limits the paper’s claims of BoRA’s practical relevance for modern language models, which often consist of billions of parameters. Moreover, the paper could have benefited from more comprehensive comparisons with existing state-of-the-art methods for multi-task fine-tuning. Without such comparisons, it is difficult to assess how BoRA fares against other parameter-efficient fine-tuning techniques. This lack of context makes it harder to fully appreciate the improvement BoRA offers. In terms of clarity, while the explanation of the theoretical model is generally solid, some sections, particularly those detailing how task-specific parameters interact with the global model, could be more accessible. Providing more intuitive or practical examples would have made the technical content easier to grasp for a broader audience. In conclusion, my assessment is that the paper makes an important and innovative contribution to multi-task learning for LLMs.
alnaQJdBNs
Deep Q-Learning with Whittle Index for Contextual Restless Bandits: Application to Email Recommender Systems
[ "Ibtihal El Mimouni", "Konstantin Avrachenkov" ]
In this paper, we introduce DQWIC, a novel algorithm that combines Deep Reinforcement Learning and Whittle index theory within the Contextual Restless Multi-Armed Bandit framework for the discounted criterion. DQWIC is designed to learn in evolving environments typical of real-world applications, such as recommender systems, where user preferences and environmental dynamics evolve over time. In particular, we apply DQWIC to the problem of optimizing email recommendations, where it tackles the dual challenges of enhancing content relevance and reducing spam messages, thereby addressing ethical concerns related to intrusive emailing. The algorithm leverages two neural networks: a Q-network for approximating action-value functions and a Whittle-network for estimating Whittle indices, both of which integrate contextual features to inform decision-making. In addition, the inclusion of context allows us to handle many heterogeneous users in a scalable way. The learning process occurs through a two time scale stochastic approximation, with the Q-network updated frequently to minimize the loss between predicted and target Q-values, and the Whittle-network updated on a slower time scale. To evaluate its effectiveness, we conducted experiments in partnership with a company specializing in digital marketing. Our results, derived from both synthetic and real-world data, show that DQWIC outperforms existing email marketing baselines.
[ "Deep reinforcement learning", "Restless multi-armed bandits", "Whittle index", "Deep Q-learning", "Recommender systems", "Responsible email marketing" ]
https://openreview.net/pdf?id=alnaQJdBNs
https://openreview.net/forum?id=alnaQJdBNs
R9dWvzPxVN
meta_review
1,730,367,657,253
alnaQJdBNs
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission53/Area_Chair_H5f5" ]
NLDL.org/2025/Conference
2025
metareview: The paper presents a DQN-based method combined with the Whittle index to address the CRMAB problem. It is well-written, and the overall approach appears to be sound and novel. However, there is room for improvement in positioning the proposed approach within the literature, providing a stronger motivation for modeling the problem this way, incorporating some more background on Whittle index, and clarifying the evaluation, as suggested by the reviewers. With the expectation that these will be updated in the camera-ready version, I recommend accepting the paper. recommendation: Accept (Poster) suggested_changes_to_the_recommendation: 1: I agree that the recommendation could be moved down confidence: 5: The area chair is absolutely certain
alnaQJdBNs
Deep Q-Learning with Whittle Index for Contextual Restless Bandits: Application to Email Recommender Systems
[ "Ibtihal El Mimouni", "Konstantin Avrachenkov" ]
In this paper, we introduce DQWIC, a novel algorithm that combines Deep Reinforcement Learning and Whittle index theory within the Contextual Restless Multi-Armed Bandit framework for the discounted criterion. DQWIC is designed to learn in evolving environments typical of real-world applications, such as recommender systems, where user preferences and environmental dynamics evolve over time. In particular, we apply DQWIC to the problem of optimizing email recommendations, where it tackles the dual challenges of enhancing content relevance and reducing spam messages, thereby addressing ethical concerns related to intrusive emailing. The algorithm leverages two neural networks: a Q-network for approximating action-value functions and a Whittle-network for estimating Whittle indices, both of which integrate contextual features to inform decision-making. In addition, the inclusion of context allows us to handle many heterogeneous users in a scalable way. The learning process occurs through a two time scale stochastic approximation, with the Q-network updated frequently to minimize the loss between predicted and target Q-values, and the Whittle-network updated on a slower time scale. To evaluate its effectiveness, we conducted experiments in partnership with a company specializing in digital marketing. Our results, derived from both synthetic and real-world data, show that DQWIC outperforms existing email marketing baselines.
[ "Deep reinforcement learning", "Restless multi-armed bandits", "Whittle index", "Deep Q-learning", "Recommender systems", "Responsible email marketing" ]
https://openreview.net/pdf?id=alnaQJdBNs
https://openreview.net/forum?id=alnaQJdBNs
QAY5IwbZ5G
official_review
1,727,956,981,766
alnaQJdBNs
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission53/Reviewer_aNd2" ]
NLDL.org/2025/Conference
2025
title: Recommend to accept summary: This submission studies Deep Q-Learning with the Whittle index for the Contextual Restless Bandits problem, with an application to email recommender systems, and proposes DQWIC; you aim to enhance content relevance and reduce spam messages, and you employ a Q-network featuring two hidden layers with 130 and 50 neurons. strengths: 1. This draft is easy to follow. 2. Your core idea of combining deep Q-learning with the Whittle index in the CRMAB setting is reasonable, and shares the same spirit as clustering. 3. This manuscript has the potential to work on large-scale data, especially in decentralised scenarios. weaknesses: 1. Your baselines in the experiments are a bit weak. 2. The data scale adopted in Section 5.3 is not massive; you are encouraged to significantly enlarge it. 3. Related state-of-the-art works you may want to compare against: Fast Distributed Bandits for Online Recommendation Systems, The Art of Clustering Bandits. confidence: 5 justification: Remaining comments: adding more experimental results would help improve this work; a theoretical analysis is also encouraged to make it more solid. Overall, it is enjoyable to read this intriguing work, though some work remains to better shape its merits; in short, it is a pleasure to recommend acceptance. final_rebuttal_confidence: 5 final_rebuttal_justification: I've read the rebuttal and, again, would love to recommend acceptance, and assume that the authors will polish the paper based on all comments and suggestions.
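For context on the architecture mentioned in the summary (a Q-network with 130 and 50 hidden units plus a separate Whittle-index network), here is a hedged sketch of one plausible wiring in PyTorch; the input dimensions, the way the index estimate feeds the Q-network, and all names are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch of a two-network DQWIC-style setup: a Q-network over
# (context, arm state, penalty lambda) and a Whittle network over
# (context, arm state). Layer sizes follow the 130/50 units noted above.
import torch
import torch.nn as nn

CONTEXT_DIM, STATE_DIM, N_ACTIONS = 8, 2, 2  # assumed dimensions


class QNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        # input: context features, arm-state features, and the penalty lambda
        self.net = nn.Sequential(
            nn.Linear(CONTEXT_DIM + STATE_DIM + 1, 130), nn.ReLU(),
            nn.Linear(130, 50), nn.ReLU(),
            nn.Linear(50, N_ACTIONS),
        )

    def forward(self, context, state, lam):
        return self.net(torch.cat([context, state, lam], dim=-1))


class WhittleNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CONTEXT_DIM + STATE_DIM, 50), nn.ReLU(),
            nn.Linear(50, 1),
        )

    def forward(self, context, state):
        return self.net(torch.cat([context, state], dim=-1))


q_net, w_net = QNetwork(), WhittleNetwork()
ctx, st = torch.randn(4, CONTEXT_DIM), torch.randn(4, STATE_DIM)
lam = w_net(ctx, st)                     # slow-timescale index estimate
q_values = q_net(ctx, st, lam.detach())  # fast-timescale Q-update consumes it
print(q_values.shape)                    # torch.Size([4, 2])
```

In the two-timescale scheme described in the abstract, the Q-network would be updated frequently on a TD loss while the Whittle network is updated on a slower schedule from the current Q-values.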
alnaQJdBNs
Deep Q-Learning with Whittle Index for Contextual Restless Bandits: Application to Email Recommender Systems
[ "Ibtihal El Mimouni", "Konstantin Avrachenkov" ]
In this paper, we introduce DQWIC, a novel algorithm that combines Deep Reinforcement Learning and Whittle index theory within the Contextual Restless Multi-Armed Bandit framework for the discounted criterion. DQWIC is designed to learn in evolving environments typical of real-world applications, such as recommender systems, where user preferences and environmental dynamics evolve over time. In particular, we apply DQWIC to the problem of optimizing email recommendations, where it tackles the dual challenges of enhancing content relevance and reducing spam messages, thereby addressing ethical concerns related to intrusive emailing. The algorithm leverages two neural networks: a Q-network for approximating action-value functions and a Whittle-network for estimating Whittle indices, both of which integrate contextual features to inform decision-making. In addition, the inclusion of context allows us to handle many heterogeneous users in a scalable way. The learning process occurs through a two time scale stochastic approximation, with the Q-network updated frequently to minimize the loss between predicted and target Q-values, and the Whittle-network updated on a slower time scale. To evaluate its effectiveness, we conducted experiments in partnership with a company specializing in digital marketing. Our results, derived from both synthetic and real-world data, show that DQWIC outperforms existing email marketing baselines.
[ "Deep reinforcement learning", "Restless multi-armed bandits", "Whittle index", "Deep Q-learning", "Recommender systems", "Responsible email marketing" ]
https://openreview.net/pdf?id=alnaQJdBNs
https://openreview.net/forum?id=alnaQJdBNs
Mp2odAgMw3
official_review
1,727,081,794,958
alnaQJdBNs
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission53/Reviewer_D4bP" ]
NLDL.org/2025/Conference
2025
title: review summary: The authors propose to learn two networks for recommendation problems, one estimating a traditional contextual bandit and the other the Whittle index. The results are aggregated and used for email recommendation. Technically, the approach is interesting but lacks some motivation. strengths: Interesting novel combination of MAB and Whittle index but also somewhat unmotivated (see weaknesses). Empirical evaluation on toy and real data. weaknesses: The combination of MAB and Whittle index is not motivated at all. The abstract should already tell us why this is a good idea. What novelty does the second network computing the Whittle index bring into the game, what problem does it solve/remedy, and how could we motivate the idea of bringing Whittle into this in the first place? Regarding experimentation, I also see the necessity to present an ablation w/ and w/out Whittle to show that it actually has an effect and quantify how big it is, under what circumstances etc. confidence: 4 justification: Interesting idea that needs a better motivation and further empirical evidence. final_rebuttal_confidence: 4 final_rebuttal_justification: I'd follow the authors' arguments. It would be good to incorporate some of the rebuttal into the main text of the manuscript when preparing a possible camera-ready copy.
alnaQJdBNs
Deep Q-Learning with Whittle Index for Contextual Restless Bandits: Application to Email Recommender Systems
[ "Ibtihal El Mimouni", "Konstantin Avrachenkov" ]
In this paper, we introduce DQWIC, a novel algorithm that combines Deep Reinforcement Learning and Whittle index theory within the Contextual Restless Multi-Armed Bandit framework for the discounted criterion. DQWIC is designed to learn in evolving environments typical of real-world applications, such as recommender systems, where user preferences and environmental dynamics evolve over time. In particular, we apply DQWIC to the problem of optimizing email recommendations, where it tackles the dual challenges of enhancing content relevance and reducing spam messages, thereby addressing ethical concerns related to intrusive emailing. The algorithm leverages two neural networks: a Q-network for approximating action-value functions and a Whittle-network for estimating Whittle indices, both of which integrate contextual features to inform decision-making. In addition, the inclusion of context allows us to handle many heterogeneous users in a scalable way. The learning process occurs through a two time scale stochastic approximation, with the Q-network updated frequently to minimize the loss between predicted and target Q-values, and the Whittle-network updated on a slower time scale. To evaluate its effectiveness, we conducted experiments in partnership with a company specializing in digital marketing. Our results, derived from both synthetic and real-world data, show that DQWIC outperforms existing email marketing baselines.
[ "Deep reinforcement learning", "Restless multi-armed bandits", "Whittle index", "Deep Q-learning", "Recommender systems", "Responsible email marketing" ]
https://openreview.net/pdf?id=alnaQJdBNs
https://openreview.net/forum?id=alnaQJdBNs
M2jIBg4B5G
decision
1,730,901,556,868
alnaQJdBNs
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Accept (Poster) comment: We recommend a poster presentation given the AC and reviewers recommendations.
alnaQJdBNs
Deep Q-Learning with Whittle Index for Contextual Restless Bandits: Application to Email Recommender Systems
[ "Ibtihal El Mimouni", "Konstantin Avrachenkov" ]
In this paper, we introduce DQWIC, a novel algorithm that combines Deep Reinforcement Learning and Whittle index theory within the Contextual Restless Multi-Armed Bandit framework for the discounted criterion. DQWIC is designed to learn in evolving environments typical of real-world applications, such as recommender systems, where user preferences and environmental dynamics evolve over time. In particular, we apply DQWIC to the problem of optimizing email recommendations, where it tackles the dual challenges of enhancing content relevance and reducing spam messages, thereby addressing ethical concerns related to intrusive emailing. The algorithm leverages two neural networks: a Q-network for approximating action-value functions and a Whittle-network for estimating Whittle indices, both of which integrate contextual features to inform decision-making. In addition, the inclusion of context allows us to handle many heterogeneous users in a scalable way. The learning process occurs through a two time scale stochastic approximation, with the Q-network updated frequently to minimize the loss between predicted and target Q-values, and the Whittle-network updated on a slower time scale. To evaluate its effectiveness, we conducted experiments in partnership with a company specializing in digital marketing. Our results, derived from both synthetic and real-world data, show that DQWIC outperforms existing email marketing baselines.
[ "Deep reinforcement learning", "Restless multi-armed bandits", "Whittle index", "Deep Q-learning", "Recommender systems", "Responsible email marketing" ]
https://openreview.net/pdf?id=alnaQJdBNs
https://openreview.net/forum?id=alnaQJdBNs
ILxJAxEE6t
official_review
1,728,512,034,431
alnaQJdBNs
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission53/Reviewer_3bkE" ]
NLDL.org/2025/Conference
2025
title: Review summary: This paper tackles the contextual restless multi-armed bandit problem, and proposes to approach it with a solution inspired by deep Q-learning. The paper uses the Whittle index, which allows rewriting the CRMAB problem without constraints by relaxing the "number of arms" constraint with a Lagrangian. Then, they use a modified DQN algorithm with an additional network. Their Q-network is learnt just like in DQN, but with an extra dependency on a parameter. This parameter is an estimate of a Whittle index, computed by the second network. Then, the algorithm is applied to email recommender systems, on both real and synthetic data, showing a clear improvement over rule-based baselines. strengths: The algorithm is clearly presented and evaluated. There is no theoretical evaluation of the method (and I don't think it is needed) but the methodology makes sense overall. weaknesses: **Baseline** I think one thing missing from the paper is a comparison to a baseline that would show the importance and effectiveness of the Whittle network. For example, what happens if one gets rid of it completely and simply runs DQN on the problem? I think this baseline (or a similar idea) should be presented alongside the results. **Scaling** The authors mention: "We also tried other architectures where we increased the complexity [...]. We noticed that a moderate increase in network capacity, can enhance performance, while excessive complexity, can degrade the learning [...] ". Scaling neural networks is usually a tricky subject in deep RL, and thus I think these experimental results would be very valuable if added to the paper or appendix. **Few comments** - Eq. 4 is not very clear: it looks like all Q-values should be the same; should the Q depend on \lambda here? - Q-values and values are not defined properly in the paper - The concept of the Whittle index is used in the introduction, but is only explained later. For the deep RL audience, it could be useful to shortly describe what it is used for in the introduction. confidence: 3 justification: The paper is overall sound and clear, with no major errors or concerns. A main improvement direction is a more relevant baseline that would showcase the empirical role of the Whittle network.
alnaQJdBNs
Deep Q-Learning with Whittle Index for Contextual Restless Bandits: Application to Email Recommender Systems
[ "Ibtihal El Mimouni", "Konstantin Avrachenkov" ]
In this paper, we introduce DQWIC, a novel algorithm that combines Deep Reinforcement Learning and Whittle index theory within the Contextual Restless Multi-Armed Bandit framework for the discounted criterion. DQWIC is designed to learn in evolving environments typical of real-world applications, such as recommender systems, where user preferences and environmental dynamics evolve over time. In particular, we apply DQWIC to the problem of optimizing email recommendations, where it tackles the dual challenges of enhancing content relevance and reducing spam messages, thereby addressing ethical concerns related to intrusive emailing. The algorithm leverages two neural networks: a Q-network for approximating action-value functions and a Whittle-network for estimating Whittle indices, both of which integrate contextual features to inform decision-making. In addition, the inclusion of context allows us to handle many heterogeneous users in a scalable way. The learning process occurs through a two time scale stochastic approximation, with the Q-network updated frequently to minimize the loss between predicted and target Q-values, and the Whittle-network updated on a slower time scale. To evaluate its effectiveness, we conducted experiments in partnership with a company specializing in digital marketing. Our results, derived from both synthetic and real-world data, show that DQWIC outperforms existing email marketing baselines.
[ "Deep reinforcement learning", "Restless multi-armed bandits", "Whittle index", "Deep Q-learning", "Recommender systems", "Responsible email marketing" ]
https://openreview.net/pdf?id=alnaQJdBNs
https://openreview.net/forum?id=alnaQJdBNs
BxeHztrIU3
official_review
1,728,482,685,376
alnaQJdBNs
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission53/Reviewer_aM1w" ]
NLDL.org/2025/Conference
2025
title: Review summary: This paper proposes a new, deep-learning based, algorithm for contextual restless bandits. The contextual restless bandit is a model which combines the dependence on exogenous features ('context') with (potentially action dependent) changing action states, and has been studied in two recent papers (Chen et al (2024) and Liang et al. (2024)). The proposed algorithm is the novel aspect of this paper, which combines two neural networks: one to learn relationships between context and rewards and one to learn Whittle indices (popular tools in tackling restless bandit problems). A problem in email marketing serves as a motivation for the problem and algorithm, and data from this problem serve as a basis for empirical work where the algorithm is found to learn a good strategy and outperform simple benchmarks. strengths: The paper is generally clear and correct. The challenges of context and non-stationarity are well explained and the proposed algorithm is a sensible candidate to handle these two aspects. Further to this the algorithm appears to work well on the problem, and the methodology is described in a level of detail sufficient to allow reproduction of the methods. I think there would be interest in the problem and algorithm from the NLDL community and beyond. weaknesses: While I found the method to be appropriate to the problem posed, I did feel that the relationship between this problem and other better studied problems could be better explained. Firstly, I struggled to find a clear explanation of how this problem differs from other contextual MDP settings. Here each arm is modelled as its own MDP and the problem is considered as a restless bandit. But if the combination of arm states was treated as the state in a wider contextual MDP (potentially partially observed) would there not be some equivalence to the current setting meaning a wider range of algorithms may be applicable? Second, while there is little work on contextual *restless* bandits, there does seem to be a line of work on non-stationary contextual bandits that is of relevance. See e.g. Luo et al. (2018), Wu et al. (2018), Russac et al. (2019). While Whittle index based approaches may not be directly applicable here, and some of these may e.g. operate with changing arm contexts or common regression parameters in a way your model does not, it does not seem to be an entirely disconnected literature. The other issue I found was that the experiments appear to only compare to non-adaptive approaches (though I couldn't tell whether the Q-value approach was based on a static Q-table or is adaptive but just fails to learn anything at all). This seemed a surprising choice, and I wondered how the approach would compare to either other methods from the literature, be those earlier contextual restless MAB algorithms, or choices from the non-stationary contextual bandits literature I have indicated. Ultimately I have three questions for the authors which would influence my overall recommendation: 1. How does the problem differ from a contextual MDP? If it does, what are the situations where modelling it this way introduces a benefit and how can a practitioner be sure they are facing such a situation? 2. How do your methods and the problem relate to the literature on non-stationary contextual bandits? 3. Are all of your baseline methods non-adaptive? Is it possible to introduce an adaptive approach or explain why approaches previously existing in the literature were not suitable for comparison here? 
Luo, Wei, Agrawal, Langford (2018) Efficient Contextual Bandits in Non-stationary Worlds. Conference on Learning Theory. Wu, Iyer, Wang (2018) Learning Contextual Bandits in a Non-stationary Environment. SIGIR '18. Russac, Vernade, Cappé (2019) Weighted Linear Bandits for Non-stationary Environments. NeurIPS 2019. confidence: 4 justification: While the paper is clear and correct, I have outstanding concerns about its relationship to the existing literature. I have asked questions regarding this and hope that a convincing response, which either highlights a misunderstanding on my part or promises how (specifically) the paper could be updated to reflect its connections to the literature and address the limitations of the experiments, could improve my rating. final_rebuttal_confidence: 4 final_rebuttal_justification: On balance, I feel the paper is appropriate for acceptance, but I would echo the comment of the other reviewer that aspects of the rebuttal should be (substantively) added to the paper.
ZF64XEUgHm
Learning incomplete factorization preconditioners for GMRES
[ "Paul Häusner", "Aleix Nieto Juscafresa", "Jens Sjölund" ]
Incomplete LU factorizations of sparse matrices are widely used as preconditioners in Krylov subspace methods to speed up solving linear systems. Unfortunately, computing the preconditioner itself can be time-consuming and sensitive to hyper-parameters. Instead, we replace the hand-engineered algorithm with a graph neural network that is trained to approximate the matrix factorization directly. To apply the output of the neural network as a preconditioner, we propose an output activation function that guarantees that the predicted factorization is invertible. Further, applying a graph neural network architecture allows us to ensure that the output itself is sparse which is desirable from a computational standpoint. We theoretically analyze and empirically evaluate different loss functions to train the learned preconditioners and show their effectiveness in decreasing the number of GMRES iterations and improving the spectral properties on synthetic data. The code is available at https://github.com/paulhausner/neural-incomplete-factorization.
[ "graph neural networks", "preconditioner", "data-driven optimization" ]
https://openreview.net/pdf?id=ZF64XEUgHm
https://openreview.net/forum?id=ZF64XEUgHm
zztRsE7v1P
meta_review
1,730,561,242,616
ZF64XEUgHm
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission30/Area_Chair_TCbh" ]
NLDL.org/2025/Conference
2025
metareview: The reviewers are all convinced of accepting this paper. recommendation: Accept (Poster) suggested_changes_to_the_recommendation: 2: I'm certain of the recommendation. It should not be changed confidence: 4: The area chair is confident but not absolutely certain
ZF64XEUgHm
Learning incomplete factorization preconditioners for GMRES
[ "Paul Häusner", "Aleix Nieto Juscafresa", "Jens Sjölund" ]
Incomplete LU factorizations of sparse matrices are widely used as preconditioners in Krylov subspace methods to speed up solving linear systems. Unfortunately, computing the preconditioner itself can be time-consuming and sensitive to hyper-parameters. Instead, we replace the hand-engineered algorithm with a graph neural network that is trained to approximate the matrix factorization directly. To apply the output of the neural network as a preconditioner, we propose an output activation function that guarantees that the predicted factorization is invertible. Further, applying a graph neural network architecture allows us to ensure that the output itself is sparse which is desirable from a computational standpoint. We theoretically analyze and empirically evaluate different loss functions to train the learned preconditioners and show their effectiveness in decreasing the number of GMRES iterations and improving the spectral properties on synthetic data. The code is available at https://github.com/paulhausner/neural-incomplete-factorization.
[ "graph neural networks", "preconditioner", "data-driven optimization" ]
https://openreview.net/pdf?id=ZF64XEUgHm
https://openreview.net/forum?id=ZF64XEUgHm
mEQ7XaTpEd
official_review
1,728,977,114,217
ZF64XEUgHm
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission30/Reviewer_h7hq" ]
NLDL.org/2025/Conference
2025
title: Review of "Learning incomplete factorization preconditioners for GMRES" summary: This paper proposes to learn incomplete LU factorisations of sparse matrices through graph neural network data driven methods to be used a preconditioner for linear equations of GMRES. The paper highlights the flaws in traditional handcrafted methods that require significant time and strong heuristics to obtain a fast and accurate solutions for equation systems. This works proposes that the data driven approaches aim to learn tailored systems thus improving speed when adapting to different systems. The work is trained and evaluated on a synthetic dataset comprising problems arising from the discretization of the Poisson equation. Here results are presented for different losses and predconditioners. The results demonstrate improved performance both in terms of computational efficiency and task performance. strengths: 1. The paper is very well presented and written, while the flow allows for a pleasant read. The background presents enough information such that all readers can grasp key concepts of the paper, while the method explains the rationale clearly. Generally this work is very well presented. 2. Limitations are for the most part addressed, identifying issues regarding the distribution of problems and thus lack of generalisation to new problem domains. The issue of neural network training is also addressed. 3. The method itself provides a simple yet seemingly effective method to learn preconditioners. The method takes care to ensure desirable and essential properties are maintained such as invertibility, while doing so without overly elaborate mechanisms by employing a modified activation function. 4. The evaluation for the most part is effective at showing the computational performance of the approach while maintaining the task specific performance evaluated for. 5. The authors provide extensive details regarding the algorithmic implementation of their method that provides confidence in the reproducibility of the algorithm itself. All parameterisation of the method is also provided with extensive supplementary material. weaknesses: Major Comments: 1. The clear weakness of this work lies in the experimental analysis. The use of one problem set, and synthetic data limits the understanding of method generalisation given there is an assumption that is not validated that the test data lies significantly outside the training distribution as to be a fair evaluation of performance. I understand the nature of this problem limits possible evaluation scenarios, however, further empirical analyses would significantly strengthen the work. 2. The proposed combined loss seemingly performs worse than individual losses but does achieve this result faster. However, the concern here is that the GMRES method with no preconditioner is also performing at a similar standard, yet much slower. Can the authors quantify more clearly how much benefit they obtain from their method given pre-training and performance gains not being drastic. 3. No comparisons made against existing works. Although, your related works clearly state there are other data driven methods to generate preconditioners these are not empirically analysed. How do I know your method is better than these alternatives? 4. What is the distinct purpose of employing a graph neural network? Correct me if I’m mistaken, however, there seems to be no advantage or reason to employ a graph neural network over a standard ANN or CNN approach? 
While you employ a GNN to enforce the sparsity, the same effect can be achieved with other NN approaches and slightly modified constraints. Minor Comments: 1. Your introduction could perhaps better explain the challenges and limitations of existing approaches to better define the problem statement. 2. The caption for Table 1 could be more informative; it is initially hard to interpret without careful reading of the main text. A more effective caption would help the reader understand which metrics are being used to evaluate performance and which methods are being analysed. Also, I assume time is in seconds? This needs clarification. 3. Given that your method is primarily compared in terms of iterations and time, and as mentioned in your limitations, it would be beneficial to state more clearly the compute time for pre-training and the (at least approximate) overall cost once pre-training is taken into account. 4. The background on graph neural networks could perhaps be moved to the appendix given the audience of the conference, and the resulting space used to explain prior misconceptions and misunderstandings in further detail. Questions: 1. Regarding the limits you define as bounds of the training case, which only explore the edges of the equation system: I understand that investigating the whole system is not tractable and your lemma provides the bounds; however, were any further analyses performed on cases outside of these bounds? 2. How does epsilon in the activation function affect the performance of the method? How was this value found? confidence: 3 justification: This work presents an interesting approach that is simple in design yet effective in some settings. Although more empirical analysis is needed to better validate the authors' claims, the paper presents a novel method that is well grounded and justified in proof and literature. I therefore believe the contributions of this work to be valuable to the community and the level of novelty appropriate for the venue. final_rebuttal_confidence: 4 final_rebuttal_justification: The authors address my weaknesses and answer my questions; given my already positive outlook, I maintain my score.
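The final major comment above (why a GNN rather than a standard ANN or CNN) and the question about epsilon both concern mechanisms that are easy to illustrate in isolation. The sketch below is a hedged illustration, not the paper's implementation: placing predictions only on the nonzero pattern of the matrix (which is what message passing along the matrix graph's edges amounts to) keeps the predicted factor exactly as sparse as that pattern, and an epsilon-shifted diagonal activation keeps every diagonal entry away from zero, which is sufficient for a triangular factor to be invertible. The function names, the specific shift, and the toy pattern are assumptions for illustration only.

```python
# Hedged sketch (not the paper's code): sparsity by construction plus an
# epsilon-shifted diagonal activation that guarantees an invertible factor.
import numpy as np

def shifted_diag_activation(raw_diag, eps=1e-4):
    # Push every value away from zero while keeping its sign; 0 maps to +eps.
    return np.where(raw_diag >= 0, raw_diag + eps, raw_diag - eps)

def assemble_lower_factor(raw_entries, pattern, n, eps=1e-4):
    """Place predicted values on a fixed lower-triangular sparsity pattern."""
    L = np.zeros((n, n))
    for value, (i, j) in zip(raw_entries, pattern):
        L[i, j] = value                      # only pattern entries can be nonzero
    np.fill_diagonal(L, shifted_diag_activation(np.diag(L).copy(), eps))
    return L

pattern = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)]   # hypothetical sparsity pattern
raw = np.array([0.0, 0.5, -0.3, 0.2, 1e-9])          # stand-in for GNN edge outputs
L = assemble_lower_factor(raw, pattern, n=3)
print(np.linalg.cond(L))                              # finite: L is invertible by construction
```

A dense ANN or CNN could of course be constrained to the same pattern with a mask, as the comment suggests; the usual practical argument for the GNN formulation is that it handles varying matrix sizes and sparsity patterns without retraining, but that trade-off is exactly what the question asks the authors to spell out.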
ZF64XEUgHm
Learning incomplete factorization preconditioners for GMRES
[ "Paul Häusner", "Aleix Nieto Juscafresa", "Jens Sjölund" ]
Incomplete LU factorizations of sparse matrices are widely used as preconditioners in Krylov subspace methods to speed up solving linear systems. Unfortunately, computing the preconditioner itself can be time-consuming and sensitive to hyper-parameters. Instead, we replace the hand-engineered algorithm with a graph neural network that is trained to approximate the matrix factorization directly. To apply the output of the neural network as a preconditioner, we propose an output activation function that guarantees that the predicted factorization is invertible. Further, applying a graph neural network architecture allows us to ensure that the output itself is sparse which is desirable from a computational standpoint. We theoretically analyze and empirically evaluate different loss functions to train the learned preconditioners and show their effectiveness in decreasing the number of GMRES iterations and improving the spectral properties on synthetic data. The code is available at https://github.com/paulhausner/neural-incomplete-factorization.
[ "graph neural networks", "preconditioner", "data-driven optimization" ]
https://openreview.net/pdf?id=ZF64XEUgHm
https://openreview.net/forum?id=ZF64XEUgHm
bh9yfgcs0K
decision
1,730,901,555,822
ZF64XEUgHm
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Accept (Oral) comment: We have decided to offer opportunities for oral presentations in the remaining available slots in the NLDL program. Thus, despite the AC's poster recommendation, we recommend an oral presentation in addition to the poster presentation, given the AC's and reviewers' recommendations.
ZF64XEUgHm
Learning incomplete factorization preconditioners for GMRES
[ "Paul Häusner", "Aleix Nieto Juscafresa", "Jens Sjölund" ]
Incomplete LU factorizations of sparse matrices are widely used as preconditioners in Krylov subspace methods to speed up solving linear systems. Unfortunately, computing the preconditioner itself can be time-consuming and sensitive to hyper-parameters. Instead, we replace the hand-engineered algorithm with a graph neural network that is trained to approximate the matrix factorization directly. To apply the output of the neural network as a preconditioner, we propose an output activation function that guarantees that the predicted factorization is invertible. Further, applying a graph neural network architecture allows us to ensure that the output itself is sparse which is desirable from a computational standpoint. We theoretically analyze and empirically evaluate different loss functions to train the learned preconditioners and show their effectiveness in decreasing the number of GMRES iterations and improving the spectral properties on synthetic data. The code is available at https://github.com/paulhausner/neural-incomplete-factorization.
[ "graph neural networks", "preconditioner", "data-driven optimization" ]
https://openreview.net/pdf?id=ZF64XEUgHm
https://openreview.net/forum?id=ZF64XEUgHm
ZpzzkyE53q
official_review
1,728,723,995,149
ZF64XEUgHm
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission30/Reviewer_3QJn" ]
NLDL.org/2025/Conference
2025
title: Interesting paper! summary: The paper proposes an approach to replace hand-engineered algorithms for incomplete factorizations with a data-driven model based on GNNs that can be trained to optimize the preconditioner for specific problem distributions, leading to faster and more reliable performance in solving linear systems. In particular, they use a GNN model that learns and outputs incomplete LU factors for sparse matrices, providing a non-singular preconditioner. The work is aligned with the line of research on using GNNs to solve linear algebra problems, as described in this paper: https://arxiv.org/abs/2310.14084. One of the main contributions is the derivation of a new loss function that accounts for both the large and small singular values of the system, improving the spectral properties and accelerating the GMRES iterations. Another is a particular activation function that ensures invertibility. strengths: The paper is well written, the contribution seems valuable, and the problem is interesting. weaknesses: The only downside is the experimental evaluation, which is a bit limited and might benefit from an ablation study, e.g., comparing the proposed activation function with existing ones. Nevertheless, I believe the paper is worth publishing. confidence: 4 justification: The pros outweigh the downsides.
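To make the spectral motivation in the summary above concrete, the sketch below shows one way the extreme singular values of a preconditioned operator M^{-1}A can be monitored and combined into a scalar objective. The toy preconditioner, the weighting, and the loss form are illustrative assumptions; the exact training losses used in the paper may differ.

```python
# Hedged sketch: extreme singular values of the preconditioned operator M^{-1} A.
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))       # toy system matrix
L, U = np.tril(A), np.triu(A)
M = L @ np.diag(1.0 / np.diag(A)) @ U                    # toy LU-style preconditioner

P = np.linalg.solve(M, A)                                # preconditioned operator M^{-1} A
sigma = np.linalg.svd(P, compute_uv=False)               # singular values, descending
sigma_max, sigma_min = sigma[0], sigma[-1]

# A combined objective could penalise both extremes; the weighting is arbitrary here.
loss = sigma_max + 1.0 / sigma_min
print(f"sigma_max={sigma_max:.3f}, sigma_min={sigma_min:.3f}, "
      f"cond={sigma_max / sigma_min:.3f}, loss={loss:.3f}")
```

In a learned setting the dense SVD would be replaced by differentiable estimates (e.g. a few steps of power iteration), but the quantity being controlled is the same.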
ZF64XEUgHm
Learning incomplete factorization preconditioners for GMRES
[ "Paul Häusner", "Aleix Nieto Juscafresa", "Jens Sjölund" ]
Incomplete LU factorizations of sparse matrices are widely used as preconditioners in Krylov subspace methods to speed up solving linear systems. Unfortunately, computing the preconditioner itself can be time-consuming and sensitive to hyper-parameters. Instead, we replace the hand-engineered algorithm with a graph neural network that is trained to approximate the matrix factorization directly. To apply the output of the neural network as a preconditioner, we propose an output activation function that guarantees that the predicted factorization is invertible. Further, applying a graph neural network architecture allows us to ensure that the output itself is sparse which is desirable from a computational standpoint. We theoretically analyze and empirically evaluate different loss functions to train the learned preconditioners and show their effectiveness in decreasing the number of GMRES iterations and improving the spectral properties on synthetic data. The code is available at https://github.com/paulhausner/neural-incomplete-factorization.
[ "graph neural networks", "preconditioner", "data-driven optimization" ]
https://openreview.net/pdf?id=ZF64XEUgHm
https://openreview.net/forum?id=ZF64XEUgHm
IAedBGpZK3
official_review
1,728,701,345,421
ZF64XEUgHm
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission30/Reviewer_dXKW" ]
NLDL.org/2025/Conference
2025
title: Review summary: This paper explores using neural networks to improve the solution of linear systems of equations. Specifically, the paper looks at the GMRES method and at improving the Krylov subspaces using a preconditioner. The LU factorization of the preconditioner is found using a GNN. The paper presents two loss functions and compares the method against baselines. strengths: The idea of learning the LU factorization is new and relevant to the community. Prior work has looked at computing eigendecompositions; however, these rely on the matrices being dense and PSD. Here the matrices being factored are sparse, and sparse partial factorizations are obtained as well. The paper is very well written. weaknesses: I'm not sure if the method in the paper works as intended. The loss function that optimizes for the top singular value results in very large condition numbers. The same is true for the method that optimizes for both the max and min singular values. The number of iterations to convergence is reduced, but it is unclear why this occurs. The GNN used is not clear. What are the node features? These are not defined. Additionally, the GNN only passes messages on the edges and not the nodes, which is quite non-standard. confidence: 4 justification: The paper presents a new method and an interesting idea. The method seems to have reasonable properties, and I think the idea can be of interest.
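On the reviewer's puzzlement about reduced iteration counts despite large reported condition numbers: GMRES convergence is governed by how the preconditioned spectrum clusters, not by the condition number alone, so both observations can coexist. The sketch below makes the iteration-count comparison concrete using standard SciPy tools and a classical ILU preconditioner; it does not reproduce the paper's learned preconditioner, and the grid size, tolerances, and ILU parameters are arbitrary choices for illustration.

```python
# Hedged sketch: GMRES iteration counts with and without an ILU preconditioner.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson2d(m):
    """Standard 5-point finite-difference Laplacian on an m x m grid."""
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
    I = sp.identity(m)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsc()

A = poisson2d(30)
b = np.ones(A.shape[0])

class Counter:
    def __init__(self):
        self.n = 0
    def __call__(self, _):
        self.n += 1

c0 = Counter()
x0, info0 = spla.gmres(A, b, callback=c0, callback_type="pr_norm")

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)            # classical ILU, not a learned one
M = spla.LinearOperator(A.shape, matvec=ilu.solve)
c1 = Counter()
x1, info1 = spla.gmres(A, b, M=M, callback=c1, callback_type="pr_norm")

print(f"no preconditioner:  {c0.n} iterations (info={info0})")
print(f"ILU preconditioner: {c1.n} iterations (info={info1})")
```

The same harness could report the condition number of the preconditioned matrix alongside the iteration count, which would directly test whether the two quantities move together in the paper's experiments.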
ZF64XEUgHm
Learning incomplete factorization preconditioners for GMRES
[ "Paul Häusner", "Aleix Nieto Juscafresa", "Jens Sjölund" ]
Incomplete LU factorizations of sparse matrices are widely used as preconditioners in Krylov subspace methods to speed up solving linear systems. Unfortunately, computing the preconditioner itself can be time-consuming and sensitive to hyper-parameters. Instead, we replace the hand-engineered algorithm with a graph neural network that is trained to approximate the matrix factorization directly. To apply the output of the neural network as a preconditioner, we propose an output activation function that guarantees that the predicted factorization is invertible. Further, applying a graph neural network architecture allows us to ensure that the output itself is sparse which is desirable from a computational standpoint. We theoretically analyze and empirically evaluate different loss functions to train the learned preconditioners and show their effectiveness in decreasing the number of GMRES iterations and improving the spectral properties on synthetic data. The code is available at https://github.com/paulhausner/neural-incomplete-factorization.
[ "graph neural networks", "preconditioner", "data-driven optimization" ]
https://openreview.net/pdf?id=ZF64XEUgHm
https://openreview.net/forum?id=ZF64XEUgHm
7gojEqnHrU
official_review
1,728,906,469,302
ZF64XEUgHm
[ "everyone" ]
[ "NLDL.org/2025/Conference/Submission30/Reviewer_NxYE" ]
NLDL.org/2025/Conference
2025
title: Interesting data-driven approach to lower-upper factorisation of large-scale sparse matrices using graph neural networks summary: This manuscript introduces a data-driven approach to perform lower-upper (LU) factorisation of large-scale sparse matrices. The starting point is the generalised minimal residual method (GMRES) algorithm, which is one of the most popular iterative methods for solving linear systems involving large-scale sparse matrices. This task is typically computationally demanding, and iterative methods are necessary. These iterative methods are highly dependent on the preconditioning method, which is hyperparameter-sensitive and can have difficulties converging. Therefore, a graph neural network (GNN)-based approach is proposed to learn the preconditioner instead. By precomputing matrix factorisations to create a supervised dataset, the GNN can learn to perform the task so that new systems are solved in fewer iterations. The writing is clear and the results are encouraging and well presented. strengths: 1. LU factorisation of large-scale sparse matrices is highly common in machine learning problems, and contributions toward making this process faster and more reliable could have a great impact on numerous branches of machine learning. 2. GNNs seem like a suitable tool for the task, and the loss functions follow nicely from established theory. 3. The writing is clear, the manuscript is well structured, and the math is presented in an understandable manner at a suitable level of detail. weaknesses: 1. Other deep learning-based baselines could improve the impression of the results. The comparison with the classical preconditioners is suitable and welcome, but it would also be useful to see how alternative deep learning-based approaches perform in this problem setting. For instance, the work of Chen [1] would be impractical due to the requirement of retraining for each problem, but would at least give an indication of what the performance of a different deep learning-based approach could be. I do not expect this baseline to be implemented in an updated version, as I believe it would be too much work for this iteration. But a discussion of which deep learning-based baseline would be most suitable to compare against in future work would be beneficial. 2. The evaluation could be made more robust. Training on 200 samples and testing on 10 is reasonable, but generating a larger dataset would be useful to ensure that the performance estimates are reliable. It would also be interesting to see the variation in performance between different training runs to shed light on the stability of the proposed methodology. 3. The introduction could be made more friendly towards readers who are unfamiliar with the field of LU factorisation. The introduction starts with "The GMRES algorithm" without defining the acronym and without a reference. Similarly, LU factorisation is also presented without being defined. While both of these are well-known methods, it would increase the clarity of the writing if they were written out on their first appearance. [1] J. Chen. "Graph Neural Preconditioners for Iterative Solutions of Sparse Linear Systems". In: arXiv preprint arXiv:2406.00809 (2024) confidence: 3 justification: I think this is an interesting paper that is well written, with encouraging results. There are some limitations related to alternative baselines and the evaluation, but I do not consider these major limitations. Therefore, I recommend that the paper be accepted.
final_rebuttal_confidence: 4 final_rebuttal_justification: I think the authors have done a good job with the rebuttal. The addition of another data-driven approach is welcome, and it is also interesting to see the effect of different hyperparameters. My initially positive impression has been reinforced, and I keep my original recommendation.
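On weakness 2 above (200 training and 10 test samples), generating a larger synthetic set is cheap if the problems are Poisson-type discretizations, as the other reviews describe. The sketch below is an assumption-laden illustration of one way such a set could be enlarged, with grid sizes and a random diagonal perturbation varied per sample; the paper's actual generation procedure is not specified here and may differ.

```python
# Hedged sketch: sampling a larger family of Poisson-like sparse systems.
import numpy as np
import scipy.sparse as sp

def poisson2d(m):
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
    I = sp.identity(m)
    return sp.kron(I, T) + sp.kron(T, I)

def sample_problem(rng, min_grid=20, max_grid=40):
    m = int(rng.integers(min_grid, max_grid + 1))              # random grid size
    A = poisson2d(m).tocsr()
    A = A + sp.diags(rng.uniform(0.0, 0.5, size=A.shape[0]))   # random diagonal shift
    b = rng.standard_normal(A.shape[0])
    return A, b

rng = np.random.default_rng(42)
train = [sample_problem(rng) for _ in range(1000)]
test = [sample_problem(rng) for _ in range(100)]
print(len(train), len(test), train[0][0].shape)
```

Repeating training over several seeds on such a set would also address the reviewer's request to quantify run-to-run variation.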
XCUzATsVdU
One-Class SVM-guided Negative Sampling for Enhanced Contrastive Learning
[ "Dhruv Jain", "Tsiry Mayet", "Romain HÉRAULT", "Romain MODZELEWSKI" ]
Recent studies on contrastive learning have emphasized carefully sampling and mixing negative samples. This study introduces a novel and improved approach for generating synthetic negatives. We propose a new method that uses a One-Class Support Vector Machine (OCSVM) to guide the selection process before mixing, named **Mixing OCSVM negatives (MiOC)**. Our results show that our approach creates more meaningful embeddings, which lead to better classification performance. We evaluate our method on publicly available datasets (Imagenet100, Cifar10, Cifar100, Cinic10, and STL10). We observe that MiOC exhibits favorable performance compared to state-of-the-art methods across these datasets. By presenting a novel approach, this study emphasizes the exploration of alternative mixing techniques that expand the sampling space beyond the conventional confines of hard negatives produced by the ranking of the dot product.
[ "Contrastive Learning", "Self Supervised Learning", "One-Class SVM", "Deep Learning" ]
https://openreview.net/pdf?id=XCUzATsVdU
https://openreview.net/forum?id=XCUzATsVdU
kYci58m2rF
decision
1,730,901,555,930
XCUzATsVdU
[ "everyone" ]
[ "NLDL.org/2025/Conference/Program_Chairs" ]
NLDL.org/2025/Conference
2025
title: Paper Decision decision: Accept (Oral) comment: We recommend an oral and a poster presentation, given the AC's and reviewers' recommendations.
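To make the abstract above more concrete, the sketch below shows one plausible reading of OCSVM-guided negative selection followed by mixing: fit a One-Class SVM on a bank of negative embeddings, score candidates with its decision function, keep the lowest-scoring ones, and synthesize new negatives as convex combinations of the selected pairs. The selection criterion, mixing coefficients, and how the result enters the contrastive loss are assumptions, not the paper's exact method.

```python
# Hedged sketch: OCSVM-scored negative selection and mixing for contrastive learning.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
negatives = rng.standard_normal((256, 128))                  # stand-in for negative embeddings
negatives /= np.linalg.norm(negatives, axis=1, keepdims=True)

ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(negatives)
scores = ocsvm.decision_function(negatives)                  # low scores lie near/outside the boundary

k = 32
selected = negatives[np.argsort(scores)[:k]]                 # one possible selection criterion
lam = rng.uniform(0.3, 0.7, size=(k, 1))
partner = selected[rng.permutation(k)]
mixed = lam * selected + (1.0 - lam) * partner               # convex mixing of selected negatives
mixed /= np.linalg.norm(mixed, axis=1, keepdims=True)        # back to the unit sphere
print(mixed.shape)
```

The mixed embeddings would then be appended to the negative set used by the contrastive objective; whether scoring happens per batch or over a memory bank is another detail the sketch leaves open.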