paper_id | title | abstract | pdf_url | reviews | markdown
---|---|---|---|---|---|
yURca4wi2L | Temporally Consistent Atmospheric Turbulence Mitigation with Neural Representations | Atmospheric turbulence, caused by random fluctuations in the atmosphere's refractive index, introduces complex spatio-temporal distortions in imagery captured at long range. Video Atmospheric Turbulence Mitigation (ATM) aims to restore videos affected by these distortions. However, existing video ATM methods, both supervised and self-supervised, struggle to maintain temporally consistent mitigation across frames, leading to visually incoherent results. This limitation arises from the stochastic nature of atmospheric turbulence, which varies across space and time. Inspired by the observation that atmospheric turbulence induces high-frequency temporal variations, we propose ConVRT, a novel framework for consistent video restoration through turbulence. ConVRT introduces a neural video representation that explicitly decouples spatial and temporal information into a spatial content field and a temporal deformation field, enabling targeted regularization of the network's temporal representation capability. By leveraging the low-pass filtering properties of the regularized temporal representations, ConVRT effectively mitigates turbulence-induced temporal frequency variations and promotes temporal consistency. Furthermore, our training framework seamlessly integrates supervised pre-training on synthetic turbulence data with self-supervised learning on real-world videos, significantly improving the temporally consistent mitigation of ATM methods on diverse real-world data. More information can be found on our project page: https://convrt-2024.github.io/ | https://openreview.net/pdf/b24f5f37299924fd9dcef2c90341e7676d541ddb.pdf | [
{
"confidence": 5,
"rating": 4,
"review_id": "K23YgwPkO7",
"review_text": "The study focuses on Video Atmospheric Turbulence Mitigation (ATM), which aims to restore videos that are affected by distortions caused by atmospheric turbulence. Specifically, the proposed ConVRT introduces a neural video representation that decouples spatial and temporal information, allowing targeted regularization of the network's temporal representation capability. Also, this paper integrates supervised and self-supervised learning, significantly improving the temporally consistent mitigation of ATM methods on diverse real-world data.\n\nThe study introduces a novel framework called ConVRT for video ATM, which addresses the challenge of maintaining temporal consistency in turbulence mitigation. It proposes a neural video representation that decouples spatial and temporal information, allowing targeted regularization and effective mitigation of turbulence-induced temporal frequency variations. Furthermore, the study integrates supervised and self-supervised learning, improving the temporally consistent mitigation of ATM methods on diverse real-world data.\n\n1.\tAs a neural representation method, ConVRT is designed to handle video clips with a limited number of frames, making it challenging to handle larger video sequences and more significant motions without compromising accuracy.\n2.\tThe related work section lacks completeness. More test-time methods should be carefully reviewed. Also, no SOTA methods were involved in the experimental section for comparison, which is unacceptable.\n3.\tIn addition to VRT, more SOTA transformer-based methods, as well as the recent-popular diffusion-based models, should be added for analysis.\n4.\tConVRT employs test-time optimization, which can be computationally intensive. Therefore, model efficiency should be discussed.\n5.\tSome intermediate results should be given.\n\nPlease refer to the detailed comments!"
},
{
"confidence": 4,
"rating": 5,
"review_id": "L8Zb0wdSL1",
"review_text": "This paper proposes a method for improving the temporal consistency of turbulence-affected videos. The proposed method uses neural representations (MLP layers) to separately model the spatial and temporal deformations caused by air turbulence and is able to improve the temporal consistency of restoration results. It seems that the proposed method needs to be used in conjunction with another base turbulence restoration method. The prosed method (by combining with several other SOTA approaches) is evaluated on existing real turbulent video dataset, and a small dataset collected by the authors. Temporal consistency, especially for videos with moving objects, is apparently improved after applying the proposed method.\n\n- The paper is generally well-written and easy to follow.\n- The MLP-based network for mitigating spatial and temporal deformations is new and seems to be effective, especially on maintaining temporal consistency. \n- Qualitative evaluation is performed on videos with moving objects and have shown apparent improvement.\n- A small real-world dataset was captured and used for testing, but details about the dataset are not clear.\n\n- It seems that the method is an add-on approach for regularizing the temporal consistency of videos restored by another turbulence restoration methods. Since turbulent degradation is more complex than temporal inconsistency and spatial distortions (e.g., there might be blurriness and color aberration beyond the spatial and temporal deformations), being able to only handle these two types of artifacts seems quite limited for turbulence mitigation. \n- The quantitative evaluation results (Table 2) are confusing. This table shows metric scores for using the proposed method as a stand-alone (i.e., using the original turbulent video as input). However, the \"no base results\" are not demonstrated in any visual comparison figures (not even in supplementary materials). It would be useful to see visual comparison between \"no base results\" and others. What really confuses me about this table is that the metric scores of the original turbulent images (denoted as \"ori\") are even better than processed results in many cases (for example, its PSNR is higher than DATUM results for the HeatChamber dataset, and there many such cases). But according to visual results, most processed results have apparent improvement. Besides, after applying the proposed method, some metric scores become much worse (for example, the slice_tv scores for most datasets and method combinations). There should be some discussions explaining the metric results. \n- The paper missed some relevant prior works (see below). These works either use MLP for modeling turbulent distortions or use similar idea to enforce temporal consistency and they should be discussed and compared with the proposed method.\nLi et al., \"Unsupervised Non-Rigid Image Distortion Removal via Grid Deformation,\" ICCV 2021.\nThapa et al. \"Learning to Remove Refractive Distortions from Underwater Images,\" ICCV 2021.\n\n- It seems that motions in demonstrated results are quite slight and slow. Is the method also robust to large motions? How is this related to turbulence strength? \n- I'm interested at knowing more details about the airport dataset acquired by the authors, like size of the dataset, type of scenes, turbulence strength, etc."
},
{
"confidence": 4,
"rating": 5,
"review_id": "OIcW9k1tA7",
"review_text": "This paper introduced ConVRT, a novel method for video atmospheric turbulence mitigation. \nThis paper has a good structure and is well-written.\n\nThis paper proposed a new method to deal with turbulence mitigation.\n\n1. limited real-world case visualization\n2. limited proof of algorithm effectiveness\n3. limited comparison with classic algorithm\n\n1. Test cases with visualization are so limited, even in the supplemental material. Please show more real-world cases to show the effectiveness of the algorithm.\n2. As the representative unsupervised method, \"Unsupervised non-rigid image distortion removal via grid deformation,\" it is important to compare with it, no matter from the algorithm or qualitative result.\n3. Lack of ablation studies to prove the effectiveness of the proposed algorithm.\n4. Can the method deal with the video with moving objects?\n5. No matter whether it is for a single image-based or multiple frames-based turbulence mitigation, most existing algorithms can deal with them very well. With the help of the diffusion model, the resulting image can be refined further. It means that it could generate a good final result in a short time. Then, what is the advantage of your algorithm?\n6. It is important to visualize the temporal and spatial field to verify the algorithm's effectiveness."
},
{
"confidence": 4,
"rating": 6,
"review_id": "N5tYR5Hcf2",
"review_text": "This paper presents an implicit neural representation (INR) framework for taking a pre-trained supervised video atmospheric turbulence mitigation (ATM) model and regularizing its output to be more temporally consistent. The main components are (1) an INR called the temporal deformation field; and (2) a subsequent INR called the spatial content field to output RGB intensity at a pixel in the video (at a certain time). These two INRs are trained on the output of a pre-trained ATM model, and regularized using a disparity loss (with MiDas pre-trained network) for temporal consistency, and a similarity loss for the content of the video. Experiments are conducted on real-world datasets with comparison to state-of-the-art ATM models recently proposed as well as some simulated ablation studies.\n\n+ Method can improve a variety of existing state-of-the-art ATM models, and the use of INRs with different feature representations used as inputs seems like an original contribution (at least to this application field, if not video representation in general)\n\n+ The use of KLT tracker for visualizing the temporal variability across the frames is a good visualization and helps show the improvement of the method in a qualitative way\n\n+ Supplemental videos on the website show the method stabilizing video in the presence of turbulence\n\n+ Extensive quantification of the method shown in Table 2 to illustrate the effectiveness of ConVRT\n\n- There is little detail about the spatial feature map M, temporal feature map N, canonical spatial feature map C. What does it mean to call these feature maps, and why are they chosen the way they are? For instance, why the Hadamard product for M and N, and not just learning a 3D feature map at that place instead directly? I also don't see how the C map is \"canonical\" to me in any obvious way (for instance, you could change the dimensions of Q1, Q2 and I don't see why that couldn't work in the method?). \n\n- The method seems to be focused primarily on fixing errors for supervised ATM methods. However, some of the classical approaches such as [Mao 2020] that utilize lucky frames, would they have this problem of temporal variability? I'm not necessarily asking for a comparison or new experiments, but it would be good to discuss if this problem primarily is for supervised methods. \n\nReference: Zhiyuan Mao, Nicholas Chimitt, and Stanley H. Chan, ‘‘Image Reconstruction of Static and Dynamic Scenes through Anisoplanatic Turbulence’’, IEEE Transactions on Computational Imaging, vol. 6, pp. 1415-1428, Oct. 2020\n\n- Table 3 is a very modest improvement . Does the Ltemp really help? A qualitative example would really help clear up that this L_temp is working (show a figure with and without Ltemp).\n\n- How would the method handle issues such as camera shake (common in long-range videos that are shot with high optical zoom)? \n\nMinor suggestions:\n- Line 187 - TurbNet, shouldn't it be whatever ATM method you are comparing with? \n- Line 205 - One shouldn't be capitalized\n- Line 109 - unresolved citation\n- Table 1 - more stylistic, but I don't think its necessary to put down the venue into the table. We shouldn't judge methods based on their venue, but on the content of the method itself, so having the citation alone is enough to let readers draw their own conclusions about the papers. I would remove this column from the table. \n- Table 3 - there is a typo in the PSNR_Img column where the lower number is bolded rather than the higher one\n\n1. 
Can the authors explain (1) how the feature maps M, N, C are novel compared to other INR video representations? Is it the use of the Hadamard product, and the two stage architecture? If so, an ablation study of comparing the Hadamard product of M and N to just a 3D spatial feature map directly is warranted. (2) I would be good to visualize these features after optimization (what do they look like), and what information/interpretability can be gleaned from them? \n\n2. There are missing details: what is the details of the MLP layers, any positional encoding? These should be added to a supplemental file. \n\n3. I am interested if L_temp is the key factor that improves the method, or its a minor improvement. Showing a qualitative example as discussed earlier would be beneficial here. \n\n4. Can there be a discussion about issues involving camera shake? I assume the method would require stabilized videos first, and if there is any residual motion leftover, this would cause errors in the reconstruction. \n\n5. What is the wall clock time of the method? How long does it take (in actual seconds) from start to finish? The paper only states 80 epochs, I'm curious how long that actually takes. \n\n6. In line 134 - it says an additional enhancement module is applied after S_field. This was never discussed again. How important is this module? What's the performance with and without the module?"
}
] | |
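
A recurring question in the reviews above is why the paper fuses a spatial feature map M and a temporal feature map N with a Hadamard product rather than learning a dense 3D feature grid. A minimal sketch of that factorization may make the trade-off concrete; the names, sizes, and deformation head below are illustrative assumptions, not the authors' code. The factorized version stores C(HW + T) parameters instead of C·H·W·T for a full spatio-temporal grid.

```python
import torch
import torch.nn as nn

class FactorizedVideoField(nn.Module):
    """Sketch of the decoupled representation the reviews discuss: a per-pixel
    spatial feature map M and a per-frame temporal feature map N, fused by a
    Hadamard product instead of storing a dense (H x W x T) feature grid."""

    def __init__(self, channels=32, height=64, width=64, frames=30):
        super().__init__()
        self.M = nn.Parameter(0.1 * torch.randn(channels, height, width))  # spatial
        self.N = nn.Parameter(0.1 * torch.randn(channels, frames))         # temporal
        self.head = nn.Sequential(nn.Linear(channels, 64), nn.ReLU(),
                                  nn.Linear(64, 2))  # e.g. a 2D deformation offset

    def forward(self, y, x, t):
        # y, x, t: 1-D LongTensors of query coordinates (pixel row/col, frame)
        feat = self.M[:, y, x].T * self.N[:, t].T  # Hadamard fusion -> (B, C)
        return self.head(feat)

field = FactorizedVideoField()
offsets = field(torch.tensor([3, 10]), torch.tensor([5, 20]), torch.tensor([0, 7]))
print(offsets.shape)  # torch.Size([2, 2])
```

Under this reading, the Hadamard product acts as a parameter-efficient low-rank approximation of the dense grid, which is exactly what the requested ablation (factorized M ⊙ N versus a direct 3D feature map) would test.
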
yTTomSJsSW | Aligning Large Language Models with Representation Editing: A Control Perspective | Aligning large language models (LLMs) with human objectives is crucial for real-world applications. However, fine-tuning LLMs for alignment often suffers from unstable training and requires substantial computing resources. Test-time alignment techniques, such as prompting and guided decoding, do not modify the underlying model, and their performance remains dependent on the original model's capabilities. To address these challenges, we propose aligning LLMs through representation editing. The core of our method is to view a pre-trained autoregressive LLM as a discrete-time stochastic dynamical system. To achieve alignment for specific objectives, we introduce external control signals into the state space of this language dynamical system. We train a value function directly on the hidden states according to the Bellman equation, enabling gradient-based optimization to obtain the optimal control signals at test time. Our experiments demonstrate that our method outperforms existing test-time alignment techniques while requiring significantly fewer resources compared to fine-tuning methods. Our code is available at [https://github.com/Lingkai-Kong/RE-Control](https://github.com/Lingkai-Kong/RE-Control). | https://openreview.net/pdf/5b01199621eef2e71cc22c61871a279fc51beeba.pdf | [
{
"confidence": 3,
"rating": 7,
"review_id": "xZJUOLLLyN",
"review_text": "In their paper, the authors introduce RE-CONTROL, a novel approach designed to align Large Language Models (LLMs) through representation editing. They view LLMs as discrete-time stochastic dynamical systems and propose the insertion of control signals into the internal representations. This technique allows for precise manipulation of the model's outputs during test time token by token. The experiments show that this method increases the win rate on HH dataset and does not need significant inference time.\n\n- Viewing LLMs as a dynamical system and interpret the steering vector as a kind of controlling signal to align models is innovative.\n- Make LLMs adjustable during the generation process, and the evaluation does not have to wait until the entire sentence is generated.\n- They empirically show that their method outperform some test-time alignment methods and does not need significant inference time, which makes the method be more practical usable.\n\n- Some parts of the paper are confusing, especially certain expressions. For example, they did not clarify some notations like a_t, V_{phi} etc.. The legend in figure 1 seems mismatched. And some figures are not mentioned in the paper.\n- I think the performance of this method is highly depend on the value model. However, the paper does not discuss the reliability of the value model, which is crucial since it needs to assess the alignment effectiveness of the entire result based on each newly generated token and do so before the results are generated.\n- The theoretical analysis and interpretation of their method is interesting, but lack rigor. e.g. the generated token (y_t) should be determined by logits (o_t), which is a part of state in the dynamic system. However, the paper interprets the generated token as kind of random variable or random noise (w_t).\n\nPlease refer to the part of weaknesses."
},
{
"confidence": 4,
"rating": 6,
"review_id": "a5aGs7Bay6",
"review_text": "The paper suggests editing language model features for alignment tasks. The authors first learn a value function of a language model from a human-preference dataset. They then increment feature representations in model layers to maximize test-time utility. Empirical evidence shows that this feature editing method surpasses both test-time and training-time alignment baselines.\n\nThe proposed method, RE-CONTROL, is a useful middle ground between current training-time and test-time alignment methods:\n\n- RE-CONTROL, unlike existing training-time methods, does not alter a language model’s parameters, reducing training costs. Instead, it learns a value function offline. \n\n- RE-CONTROL, unlike existing test-time methods, employs a learned value function to inject feature increments into features of language models.\n\nThe experiments are extensive in that they compared RE-CONTROL with both training-time and test-time alignment methods.\n\nWhile the paper is technically well-executed, I believe it has three main limitations: (i) the lack of compute--performance tradeoff analysis (ii) the lack of details in comparing RE-CONTROL with training-time alignment baselines. (iii) the limitation in application scope.\n\nFirst, a compute-performance tradeoff analysis would clarify the behavior of RE-CONTROL. RE-CONTROL is more compute-intensive than other test-time decoding alternatives because it requires gradient ascent steps at decoding time (Section 4.4). These steps add up and can become quite intensive for generating long text. Therefore, comparing RE-CONTROL with test-time alignment alternatives while considering compute time would be informative. For instance, the authors could display the win rate of different test-time decoding methods on the y-axis and their wallclock time on the x-axis.\n\nSecond, I think the performance comparison between RE-CONTROL and training-time alignment methods in Section 6.1 seems very preliminary. There, the authors empirically show that the test-time alignment method RE-CONTROL *outperforms* training-time alignment methods like PPO, by concluding that\n\n>We observe that RE-CONTROL achieves a higher GPT-4 win rate and average reward compared to both PPO and DPO. Furthermore, RE-CONTROL also outperforms these methods in terms of diversity and coherence.\n\nI'm puzzled by how to interpret the results here. Should the take-home message here be \"Decoding-time RE-CONTROL is better than training-time PPO in alignment. Period.\" or are there qualifications to this statement? I strongly suspect that some qualification is needed. To some extent, RE-CONTROL is a decoding-time approximation of PPO. Both methods use a learned value function to steer the model's behavior. At decoding time, RE-CONTROL does this in a more lossy (due to test-time gradient ascent) and shallower (because not all parameters are updated) way. Thus, with adequate training, I expected PPO to yield better results than RE-CONTROL. Note that this doesn't undermine RE-CONTROL's capability, as it is more lightweight than PPO. \n\n\nThirdly, while RE-CONTROL is technically sound, its application scope seems narrow. To my understanding, RE-CONTROL is most appealing to users who are unwilling to train a language model offline, who are willing to train a value function offline, who aim to save computing power during training, and who don't mind using more compute during decoding. These intersections of users seem limiting. 
This raises the question: Is it better to simply use a similar compute budget for efficient alignment (e.g., LoRa) of the LM model using standard methods (DPO, PPO, etc.) and avoid ongoing compute costs during decoding?\n\nAs mentioned above, in my opinion, it is surprising that decoding-time RE-CONTROL outperforms training-time PPO. To compare PPO and RE-CONTROL more carefully, could the authors consider some ablation studies? For example, you could use the same value function for both PPO and RE-CONTROL, one at training time to fine-tune the model parameters and the other at decoding time to produce the feature increment and compare the results."
},
{
"confidence": 3,
"rating": 5,
"review_id": "b19RrLSqZ6",
"review_text": "The paper introduces an alternative procedure for LLM alignment that does not fine-tune LLM weights, but instead learns a separate value function that is used to update hidden states. The value function is learned using a variation of temporal difference, then applied at inference time to modify hidden states by gradient ascent, maximizing the predicted state value. Authors evaluate their approach with multiple 7B LLMs on HH-RLHF data, comparing against both RLHF and training-free baselines. The paper also analyzes OOD generalization to HarmfulQA.\n\n- Authors propose an interesting approach to that can be used to alter LLM behavior in general\n- When experimenting with HH-RLHF dataset, authors evaluate against multiple types of baselines and provide additional analysis that was interesting to read\n- The paper is generally well-written and easy to follow\n- Authors made the code available, in a (mostly) serviceable state\n\n**1a. Motivation for the choice of baselines.**\n\nIn your work, you cite, among others, ARGS[26], DeAL [22], Value Augmented Sampling [21] that also learn value functions and use them to steer model outputs (in other ways), but, to the best of my knowledge, you do not compare against them as baselines, instead choosing a relatively older work on controlled decoding. While [21] may be dismissed as concurrent work, the other works appear to be a relevant alternative and it is not clear why they were not chosen as baselines.\n\nIf there is a reason why these works will, beyond reasonable doubt, fail at the task that you evaluate on, I would recommend that you explain this in the paper. If there is no such reason, the paper would benefit from comparing against them.\n\n**1b. Motivation for the choice of models**\n\nYour paper focuses on Llama, Vicuna and Falcon models, of the 7B variety. While these are indeed LLMs, the original Llama was released circa 1.5 years ago and since then, LLMs improved **significantly** across tasks.\nPicking older LLMs appears counterintuitive, as their generally worse quality makes it harder to measure possible drawdowns introduced by LLM alignment.\n\nIf you have a reason for choosing these models, please explain why you focus on older LLMs as compared to, for example, Llama 3 8B (or 70B), Qwen2, Gemma or other models near the top of https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard . If there is no such reason, the paper would benefit from switching to more accurate models.\n\n**2. Inference time exploration**\n\nLLM use cases are often sensitive to inference throughput (tokens per second) and latency (time to first / next token).\n\nTo the best of my understanding, RE-Control introduces an iterative optimization step to each forward pass during autoregressive inference. Depending on the configuration, this may result in a significant slowdown, which may limit the practical significance of your approach.\n\nI would argue that the work would benefit from analyzing this difference in speed in different settings (e.g. single-sequence vs batch inference, etc).\n\n**3. Main experiments are limited to one dataset and relatively small past generation LLMs, ranked by GPT-4**\n\nThis is definitely not a fault on authors' side, but the paper makes its main conclusions based on 7B models, using reward functions trained on a single dataset. 
This could result in accidental false conclusions if it turns out that, for instance, RE-Control harms the quality of stronger models or if it is somehow implicitly overfitting on on GPT4 opinions.\n\nThe standard way to minimize this risk is to diversify the experiments: try alternative alignment datasets (e.g. webgpt_comparisons, oasst1, etc), try larger models (llama-3 70B), introduce human rankings in some setups, etc. I understand that not all of these evaluations may be available to the authors, but for a NeurIPS publication, I would expect more variation in the experiments and, if there is a confounder that could not be eliminated (e.g. using GPT4 and no human eval), it should be stated among the limitations section.\n\n**Questions on the definition of state**\n\nTo the best of my (possibly wrong) understanding, when you apply Bellman equation, you assume that the dynamic system's state satisfies Markov assumption. [If not, please explain why not]\n\nSince LLMs use attention to previous hidden states, hidden vector for a specific state do not satisfy Markov assumption, since LLM's next token probability depends not only on them, but on a more distant past as well. In contrast, a fully markovian state would need to contain all previous hidden vectors, or the current hidden vectors and all past KV projections, or a sequence of all previous tokens (no hidden vectors necessary).\n\nIn other words, **when you define V(s), does s refer to just the current token's hiddens or a full state with Markov assumption?**\n\nIf you mean the latter state, then the test-time intervention (S4.4) needs to modify all previous hidden states of an LLM. This is important because modifying past hidden states may result in a very inefficient LLM inference algorithm.\n\nIf only the current state, you seem to apply policy iteration (S4.2-4.3) to a non-markov state. Please explain how you make sure that this algorithm still has the guarantees of optimal policy. If it doesn't, please clearly explain that the algorithm is a heuristic inspired by PI rather than actual PI.\n\n### On reproducibility\n\nTo reiterate, the fact that you publish the code is great. None of my complaints below affected the final score.\n\nThe codebase lacks library versions (requirements.txt / dockerfile / list them in the readme), which makes it difficult to reproduce, especially in the future. While I ultimately managed to run the code by choosing the libraries with an educated guess (and minor modifications to the code), I am still not sure if I got the method to work \"as intended\" and not introduce silent errors.\n\nFor legal reasons, it would be best to direct the users to a version of Llama 7B that contains its original license, at least in the final version of the paper.\n\nUsing GPT-4 opinion means that the experiments would be difficult to reproduce after it is cycled ou\n\n\n\n\n### Typos / minor:\n\n> L16 LLama\n\nThe capitalization for the first version was LLaMA, second and third are Llama.\n\n> supplementary code: intervented_model\n\nyou may have meant “intervened”"
},
{
"confidence": 3,
"rating": 8,
"review_id": "ZdxWPemKVG",
"review_text": "The paper \"Aligning Large Language Models with Representation Editing: A Control Perspective\" proposes a method for aligning large language models (LLMs) with human objectives through representation editing. Unlike fine-tuning, which is resource-intensive and unstable, or test-time alignment techniques like prompting that rely on the original model's capabilities, this method introduces external control signals into the hidden states of a pre-trained LLM. The method treats the LLM as a discrete-time stochastic dynamical system and applies control theory to train a value function on the hidden states, optimizing control signals at test time. The experiments show that this method, named RE-CONTROL, outperforms existing test-time alignment techniques and requires fewer resources compared to fine-tuning methods.\n\nInnovative Approach: The use of control theory to introduce control signals into the hidden states of LLMs is novel and provides a new perspective on alignment.\nResource Efficiency: RE-CONTROL is less resource-intensive than traditional fine-tuning methods, making it more practical for large-scale applications.\nEmpirical Success: The experiments demonstrate that RE-CONTROL outperforms existing test-time alignment methods, showing strong generalization and alignment capabilities.\nFlexibility: The method offers more flexibility than prompting or guided decoding as it perturbs the representation space dynamically during the generation process\n\nComplexity: The method involves sophisticated control theory and optimization techniques, which might be challenging to implement and understand for practitioners without a strong background in these areas.\nDependency on Value Function: The success of the method heavily relies on the accuracy and training of the value function, which might introduce additional challenges in terms of training and performance.\n\nWhat are the specific challenges encountered during the training of the value function, and how can they be mitigated?"
}
] | |
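
Several reviews above ask how the test-time intervention behaves mechanically and what it costs per token. The sketch below assumes a scalar value head over last-layer hidden states and performs a few gradient-ascent steps on the hidden state before the LM head is applied; the dimensions, step count, and step size are illustrative assumptions, not the released RE-Control code.

```python
import torch
import torch.nn as nn

# Hypothetical value head scoring a hidden state for alignment value.
value_head = nn.Sequential(nn.Linear(4096, 512), nn.ReLU(), nn.Linear(512, 1))

def control_hidden_state(h, steps=5, lr=0.5):
    """Nudge hidden state h toward higher predicted value V(h)."""
    h = h.detach().clone().requires_grad_(True)
    for _ in range(steps):
        v = value_head(h).sum()              # predicted value of this state
        (grad,) = torch.autograd.grad(v, h)  # dV/dh
        h = (h + lr * grad).detach().requires_grad_(True)  # ascent step
    return h.detach()

h = torch.randn(1, 4096)          # last-layer hidden state of the current token
h_edited = control_hidden_state(h)
# logits = lm_head(h_edited)      # then decode the next token as usual
```

Each generated token pays for `steps` extra forward/backward passes through the value head, which is the throughput and latency concern raised in the reviews; how this scales with batch size and sequence length is precisely the compute-performance analysis the reviewers request.
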
yS9xU6ANiA | Exogenous Matching: Learning Good Proposals for Tractable Counterfactual Estimation | We propose an importance sampling method for tractable and efficient estimation of counterfactual expressions in general settings, named Exogenous Matching. By minimizing a common upper bound of counterfactual estimators, we transform the variance minimization problem into a conditional distribution learning problem, enabling its integration with existing conditional distribution modeling approaches. We validate the theoretical results through experiments under various types and settings of Structural Causal Models (SCMs) and demonstrate that it outperforms other existing importance sampling methods on counterfactual estimation tasks. We also explore the impact of injecting structural prior knowledge (counterfactual Markov boundaries) on the results. Finally, we apply this method to identifiable proxy SCMs and demonstrate the unbiasedness of the estimates, empirically illustrating the applicability of the method to practical scenarios. | https://openreview.net/pdf/d87540bc855077295fdd9a86acc73b4ca59a2097.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "IfRfFLofWP",
"review_text": "The manuscript introduces exogenous matching, an importance sampling method for efficient estimation of counterfactual expressions in various settings. This method transforms the variance minimization problem into a conditional distribution learning problem, allowing integration with existing modeling approaches. The authors validate their theoretical findings through experiments with different Structural Causal Models (SCMs), showing competetive performance in a range of counterfactual estimation tasks. They also examine the impact of structural prior knowledge and demonstrate the method's unbiased estimates and practical applicability in identifiable proxy SCMs.\n\nUpdate: revising score upward following author rebuttal.\n\nThe topic is timely and important, as counterfactual estimation has become an increasingly popular subject in statistics and machine learning. The proposal builds on recent results in neural causal models, specifically with normalizing flows. The ability to incorporate prior knowledge in the form of Markov boundaries is especially welcome, since computing counterfactuals is often intractable without such constraints. The theoretical results appear sound (though I confess I did not go closely through the proofs) and the empirical results are compelling.\n\nThe manuscript is not always clear, probably because a great deal of material has been moved to the appendix to accommodate page count. The result is a somewhat disjointed text that would likely be better served by a journal publication than a conference paper. That said, I am generally supportive of this submission and would be willing to revise my score upward if my questions are adequately addressed (see below).\n\nWhen we say the causal model is “not fully specified”, does that just refer to the structural equations or to the graphical structure as well? In general, I was not always certain just how much causal information is used as input to this method.\n\nI’m a bit confused by Eq. 5. If this is meant to be a variance estimator, then presumably the RHS should be something like $\\mathbb{E}[X^2] – (\\mathbb{E}[X])^2$, where $X$ denotes the likelihood ratio $p(u) / q(u)$, correct? This is almost but not quite what we find here. Looking at the appendix, I don’t see why Eq. 48 follows from Eq. 47. Why do we get to drop the square from the first term?\n\nWhat does the constant $c$ denote in Eq. 6? Is it just the entropy of $P$? A brief word on this would help with intuition.\n\nShould there definitely be a negative both in front of the expectation *and* within it? I assume the first summand should just be $-\\mathbb{E} [ \\log Q(u \\mid Y_* (u)) ] $? This would look more like the classic cross entropy formula, as indicated in the following line. Eq. 7 also suggests this.\n\nOn “augmented graphs” – does this “reverse projection” of the ADMG always work? Different DAGs can have the same ADMG, for instance if two latent variables have all the same endogenous children. Perhaps there’s an unstated minimality assumption at work here?\n\nWhat is $m$ in Eq. 13?\n\nIt is not clear to me from Sect. 5 what the sample size and data dimensionality are for these tasks? In general, this section appears rushed. The performance metrics are also somewhat surprising. If data is simulated, then we presumably have ground truth with respect to counterfactual probabilities. If so, then why not just compute the mean square error of the proposed estimator, perhaps as a function of sample size?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "7uawmz8Auh",
"review_text": "This paper introduces an importance sampling method for efficient estimation of counterfactual expressions within general settings. It transforms the variance minimization problem into a conditional distribution learning issue, allowing integration with existing modeling approaches. The paper also explores the impact of incorporating structural prior knowledge, i.e. Markov boundaries, and applies the method to identifiable proxy SCMs, proving the unbiasedness of estimates and illustrating the method's practical applicability.\n\n1. The paper is well-structured, with many subsections and bullet points summarizing paragraphs.\n2. The approach proposed in this paper has clear intuition and is easy to implement.\n\n1. Contributions are not disentangled well. All three points involve experimental or empirical findings.\n2. Some results of the ablation study are abnormal. First, the results show that the approach proposed is not robust. Under setting SIMPSON-NLIN and M, the inclusion of Markov Boundary Mask significantly improves ESP. However, under setting LARGEBD-NLIN and NAPKIN, the Markov Boundary Mask harms the performance, especially with backbone SOSPF. Second, the ESPs under setting LARGEBD-NLIN with backbone SOSPF are almost 0, even when $|s|=1$, which is hard to explain if including Markov Boundary Mask is effective. Third, the variance under setting LARGEBD-NLIN with backbone NICE is extremely large. \n3. Insufficient explanation or legend for figures, making it difficult for readers to understand. For example, in Figure 1, $\\mathcal{M}$ with a subscripted hammer is not explained. In Figure 3, the legend does not indicate what different colors mean.\n4. Hard to follow. Some terminologies need explanation or reference. For example, in line 234, *faithfulness* is not defined.\n\nI would like to know whether Theorem 2 and 3 are novel. Have the papers cited (like 81, 3, 94, 111) and other papers proposed methods to obtain Markov boundaries? If so, what is the improvement of the method proposed in this paper?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "X4z3BYCgKl",
"review_text": "This paper presents Exogenous Matching (EXOM), a new importance sampling method for estimating counterfactual probabilities in Structural Causal Models (SCMs). EXOM transforms variance minimization into a conditional distribution learning problem, providing an upper bound on counterfactual estimator variance as per Theorem 1. It outperforms existing methods across various SCM settings and integrates well with identifiable neural proxy SCMs for practical applications. By incorporating prior knowledge through Markov boundaries, EXOM further enhances performance, demonstrating its potential as an efficient tool for counterfactual estimation in diverse scenarios.\n\n1. EXOM provides a tractable and efficient approach for counterfactual estimation in general settings, including scenarios with discrete or continuous exogenous variables and various observations and interventions. This flexibility makes it applicable to a wide range of causal inference problems.\n \n2. The method is built on solid theoretical grounds, with the authors deriving an optimizable variance upper bound for counterfactual estimators.\n \n3. The authors incorporate structural prior knowledge, specifically Markov boundaries, into the neural networks used for parameter optimization. They empirically validate the effectiveness of this approach across various scenarios.\n \n4. EXOM consistently outperforms other importance sampling methods in various SCM settings, as demonstrated by the experimental results. Its compatibility with identifiable neural proxy SCMs further enhances its practical applicability.\n\n1. Theorem 1 relies on the assumption that the density ratio $q(\\mathbf{u}|\\mathbf{y}_ {\\ast})/q(\\mathbf{u}|\\mathbf{y}_ {\\ast}^\\prime)\\leq \\kappa$ holds for all $\\mathbf{u} \\in \\Omega_{\\mathbf{u}}$ and $y_{\\ast}$, $y_ {\\ast} ^\\prime \\in \\Omega_{\\mathbf{Y}_{\\ast}}$. This assumption may be overly stringent, as probability measures with infinite support sets might easily violate it. Could the authors elaborate on this assumption and provide examples of distributions that satisfy it?\n \n2. In the Sampling and Optimization section, the distribution of the exogenous variable $\\mathbf{U}$ is assumed to be known. However, in practical scenarios, $\\mathbf{U}$ is often unknown, necessitating additional efforts to estimate $\\mathbf{P_U}$ [A]. Could the authors provide further clarification on this assumption and discuss potential methods for estimating $\\mathbf{P_U}$?\n \n3. I understand the authors only consider models that provide identifiability results. However, it is encouraged to include neural proxy SCM methods based on VAE and DDPM as experimental baselines. While these may lack identifiability guarantees, comparing against them would further illustrate the superiority of the proposed method in relation to current state-of-the-art techniques.\n \n4. While the method shows good performance on the tested SCMs, it's unclear how well it scales to larger, more complex causal models. The experiments are conducted on relatively small SCMs, and scalability to high-dimensional or densely connected causal graphs isn't thoroughly addressed.\n \n\n[A] Ren, Shaogang, and Xiaoning Qian. \"Causal Bayesian Optimization via Exogenous Distribution Learning.\" *arXiv preprint arXiv:2402.02277* (2024).\n\n1. 
In Table 1, the EXOM method with MAP shows significantly better performance on the SIMPSON-NLIN and NAPKIN datasets compared to EXOM with GMM, whereas the performance on the FAIRNESS-XW dataset is similar for both methods. Could the authors provide further explanation for this discrepancy? Why does the GMM approach underperform in these specific cases, and what factors contribute to the similar performance on FAIRNESS-XW?\n \n2. In the ablation study investigating the impact of injecting Markov boundaries, could the authors please include the performance results of EXOM without Markov boundaries? This comparison would further illustrate the benefit of incorporating Markov boundaries."
},
{
"confidence": 3,
"rating": 7,
"review_id": "LWxBWZsbKk",
"review_text": "Based on the importance sampling methods, the authors propose an exogenous matching approach to estimate counterfactual probability in general settings. They derive the variance upper bound of counterfactual estimators and transform it into the conditional learning problem. They also employ the Markov boundaries information in the inference to improve the learning performances further. Extensive experiments validate the superiority and practicality of their method.\n\n- This paper is clearly and well written.\n\n- The authors give a theoretical analysis of their estimator, its log-variance upper bound in the general settings, and the counterfactual Markov boundary, and they also perform extensive experiments to demonstrate effectiveness in several cases: with two types of stochastic counterfactual processes, with three categories of fully specified SCMs, etc.\n\n- This paper makes it clear in lines 140-144 about the assumptions needed for the proposed method. Regarding assumption ii), I think it is not mild, and I am wondering if the proposed method would be sensitive to the specified distribution $P_{\\textbf{U}}$ of $\\textbf{U}$. The authors might have performed such experiments, but it is not quite clear. \n\n- Are the Markov boundaries learned from the observational data via the d-separation, or they are given prior? The authors claimed that such Markov boundaries are structural prior knowledge in line 223, whereas they gave Theorem 3 to demonstrate how to obtain them. \n\n- It is suggested to offer the whole procedure or pseudo code of their proposed algorithm somewhere.\n\n- In line 239, “augmentied” might be a typo error.\n\nPlease see my questions in Weaknesses."
}
] | |
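
The variance question in the first review above (Eq. 5 and the step from Eq. 47 to Eq. 48) turns on the standard importance-sampling decomposition. With $w(\mathbf{u}) = p(\mathbf{u})/q(\mathbf{u})$ the likelihood ratio and the support of $q$ covering that of $p$, the following identities hold:

```latex
\mathbb{E}_{q}\bigl[w(\mathbf{u})\,f(\mathbf{u})\bigr] = \mathbb{E}_{p}\bigl[f(\mathbf{u})\bigr],
\qquad
\mathrm{Var}_{q}\bigl[w(\mathbf{u})\,f(\mathbf{u})\bigr]
  = \underbrace{\mathbb{E}_{q}\bigl[w(\mathbf{u})^{2}\,f(\mathbf{u})^{2}\bigr]}_{\text{depends on } q}
  - \underbrace{\bigl(\mathbb{E}_{p}\bigl[f(\mathbf{u})\bigr]\bigr)^{2}}_{\text{constant in } q}.
```

Because the subtracted term does not depend on the proposal $q$, minimizing the variance over $q$ is equivalent to minimizing the second moment alone; dropping the subtracted square when passing to an optimizable objective is therefore a standard move, though whether this is exactly what happens between Eqs. 47 and 48 of the paper is for the authors to confirm.
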
yRuJqoWoCs | $SE(3)$ Equivariant Ray Embeddings for Implicit Multi-View Depth Estimation | Incorporating inductive bias by embedding geometric entities (such as rays) as input has proven successful in multi-view learning. However, the methods adopting this technique typically lack equivariance, which is crucial for effective 3D learning. Equivariance serves as a valuable inductive prior, aiding in the generation of robust multi-view features for 3D scene understanding. In this paper, we explore the application of equivariant multi-view learning to depth estimation, not only recognizing its significance for computer vision and robotics but also addressing the limitations of previous research. Most prior studies have either overlooked equivariance in this setting or achieved only approximate equivariance through data augmentation, which often leads to inconsistencies across different reference frames. To address this issue, we propose to embed $SE(3)$ equivariance into the Perceiver IO architecture. We employ Spherical Harmonics for positional encoding to ensure 3D rotation equivariance, and develop a specialized equivariant encoder and decoder within the Perceiver IO architecture. To validate our model, we applied it to the task of stereo depth estimation, achieving state-of-the-art results on real-world datasets without explicit geometric constraints or extensive data augmentation. | https://openreview.net/pdf/e8039c44b88c7c3803572ada21d4746ef0778d7d.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "Vxjny9S0tW",
"review_text": "This paper introduces an SE(3)-equivariant multi-view depth estimation model based on the Perceiver IO framework. Specifically, each feature ray is treated as a token, and the feature vector of each ray is concatenated with an equivariant positional embedding. To achieve equivariance, the authors propose using spherical harmonics to encode the ray poses. Ray features are treated as type-0 (rotation-invariant) irreps. These equivariant ray encodings are processed through several equivariant self-attention layers and aggregated into global features and a canonical reference frame. The camera pose encoding is first inverse-transformed into this inferred canonical frame, resulting in an SE(3)-invariant query. A series of cross-attention layers between the encoded global features and the query features is then used to predict pixel colors. The authors demonstrate the effectiveness of the proposed approach on the ScanNet and DeMoN datasets.\n\n1. To the best of the reviewers' knowledge, this is the first paper to address SE(3)-equivariant positional embedding for the transformer/PerceiverIO framework for multiview applications. While Fuchs et al [1]. and Liao et al [2,3]. have addressed SE(3)-equivariant attention for GNNs, their methods are more complex and computationally inefficient compared to the proposed approach.\n2. The proposed method shows competitive benchmark results compared to state-of-the-art methods across multiple datasets. The ablation study convincingly demonstrates the significance of the equivariant embedding.\n3. In the appendix, the authors put significant effort to make the concepts accessible to beginners, including detailed visualizations for how the computations are done. This contrasts with typical papers on SE(3)-equivariance, which often include difficult equations that can be a barrier to entry for newcomers.\n\n\n\n\n[1] Fuchs et al., “SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks,” NeurIPS`20\n\n[2] Liao et al., “Equiformer: Equivariant graph attention transformer for 3d atomistic graphs,” ICLR’23\n\n[3] Liao et al., “Equiformerv2: Improved equivariant transformer for scaling to higher-degree representations,” ICLR’24\n\n1. The authors introduced a new equivariant nonlinearity inspired by [4], but the motivation and benefits are not clearly demonstrated. What is the distinctive advantage of this new nonlinearity, compared to existing SE(3)-equivariant nonlinearities?\n\n2. The number of parameters was not fixed during the ablation experiments regarding the maximum spherical harmonics degrees. A recent study [5] claimed that the reported increase in performance due to incorporating higher-type irreps in various works could actually be due to the increased number of parameters. It is essential to control the number of parameters to be similar between the ablated models and the proposed model.\n\n[4] Deng et al., \"Vector Neurons: A General Framework for SO(3)-Equivariant Networks,” ICCV’21\n\n[5] Wang et al., “Rethinking the Benefits of Steerable Features in 3D Equivariant Graph Neural Networks,” ICLR’24\n\n1. According to the equations in Appendix F, different types of irreps do not mix in the self-attention layer. They also do not mix in the proposed nonlinearities in Appendix F. It seems like in the proposed method, each of the irreps (except for type-0) can only indirectly modulate other irreps of different types via attention. Am I correct?\n\n2. 
Subtracting the mean of the center is not stable under the addition or removal of camera points. Is it possible to use relative positional encoding, similar to rotary embedding, to achieve translational equivariance without relying on centroid subtraction?"
},
{
"confidence": 4,
"rating": 7,
"review_id": "2NCIWK4dQy",
"review_text": "This paper introduces a ray embedding representation with rotational and translational equivariance, integrating the existing Perceiver IO architecture to achieve robust multi-view implicit depth estimation. The paper first utilizes the mean shift and spherical harmonics to achieve translational equivariance, and then builds upon this to use spherical harmonics to achieve a rotationally equivariant representation, ultimately combining to obtain a three-dimensional transformation embedding with equivariance. By further designing equivariant encoders and decoders, the paper realizes robust estimation of depth from new perspectives. Experiments on the ScanNet and DeMoN datasets demonstrate the effectiveness of the proposed method.\n\n-The motivation is clear, the algorithm design makes sense, and the experimental results are complete.\n\n-Ablation study: Since the equivariance consists of two parts, namely translation and rotation, what would be the qualitative and quantitative impact of removing these two parts respectively?\n\n-The task setting of implicit depth estimation seems to be very compatible with the existing sparse view NeRF/GS methods. Although the focus of the two is different, with NeRF/GS focusing more on rendering images, while DeFiNe and EPIO mainly focus on geometry, there is a possibility of mutual exchange between the two. Can you report the comparative results with such methods? For example, ENeRF.\n\nLin, et al. \"Efficient Neural Radiance Fields for Interactive Free-viewpoint Video\", SIGGRAPH-ASIA 2022.\n\n-DeFiNe can synthesize novel view images, can EPIO do the same? What are the results like?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "TkuBNfRs6c",
"review_text": "This paper presents a SE(3) rotational and translational equivariant variation of Perceive IO for multi-view depth estimation with known camera poses. The authors first encode both the pixel-wise ray direction and the camera translation using spherical harmonics as the position encoding, and then to maintain equivariance under global transformations through the network's forward pass, the authors modify several components, including the linear projection, the latent array construction, and the output decoding. To demonstrate the effectiveness of the proposed method, the authors conducted experiments on several RGBD datasets, including ScanNet, SUN3D, TUM-RGBD, and Scene11, and achieved better performance than existing implicit multi-view depth estimations, such as DeFiNe, and multi-view stereo (MVS) models, such as DPSNet.\n\n- The authors introduce the problem well, explaining the importance of equivariance to the task of multi-view depth estimation effectively. They also provide a brief yet sufficient review of existing works, clearly positioning this work within the field.\n- The authors have carefully designed several novel equivariant components:\n - A SE(3) equivariant positional encoding, where besides rotation, the authors smartly encode camera translation also using spherical harmonics.\n - An equivariant linear projection layer where the linear projection is applied to each group of features that corresponds to position embedding derived from the spherical harmonics of a specific order.\n - Equivariant latent array construction and the reversal of the rotation from the latent array before being cross-attended to the output queries.\n \n These designs, along with the adoption of existing equivariant components through the Perceive IO pipeline, ensure good performance and can be inspiring for other tasks that require equivariance.\n- The experiments are sufficient and demonstrate the equivariance of the output and the overall accuracy.\n\nThe major weakness of this paper lies in its presentation and organization, which makes the paper difficult to read:\n\n- Many important details from Sections 3.4 to 3.6 are placed in the appendix, making the main paper not self-contained. For instance, details in Appendices A.3 and E would be better suited in the main paper.\n\n- Sections 3.4 to 3.6 are organized into fragmented components, where the holistic process of the Perceiver IO is missing. Specifically, the authors should introduce each modification in the order of the Perceiver IO pipeline. \n\n- The description of individual components are also confusing:\n - It is better to only briefly discuss components that are equivariant itself, such as attention, and discuss only how they made the input to the attention equivariant, such as the latent array in Section 3.5.1. 
Otherwise, it might be misleading to suggest that there are new equivariant attention modules themselves.\n - Why is only rotation sampled and encoded when constructing the latent array in Section 3.5.2 and Figure 4, while the inputs have the encoded camera translation?\n - Similarly, in Section 3.6, only reverse rotation is applied to the latents after several self-attention transformation blocks, while the translation is omitted.\n - Line 261-262: \"which allows us to leverage higher frequency information beyond the dimensional constraints of SPH.\" The authors indicate that the Fourier encoding is not equivariant but use it for the output query, therefore, the authors should elaborate more on the insight behind this choice and provide sufficient proof to support this design.\n - Many illustrations (Figures 8-11) in the appendix are confusing and do not help to clarify the equations.\n\nPlease refer to the weakness."
}
] | |
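
The two mechanisms the reviews above probe, centroid subtraction for translation and spherical harmonics for rotation, can be sketched compactly. The snippet below is a hypothetical illustration, not the paper's exact encoder: the maximum degree, the shapes, and the simplified real-valued basis (normalization constants omitted) are all assumptions.

```python
import numpy as np
from scipy.special import sph_harm

def sph_encoding(directions, max_degree=2):
    """directions: (N, 3) unit vectors -> (N, D) spherical-harmonic features."""
    x, y, z = directions[:, 0], directions[:, 1], directions[:, 2]
    theta = np.arctan2(y, x)             # azimuth in [-pi, pi]
    phi = np.arccos(np.clip(z, -1, 1))   # polar angle in [0, pi]
    feats = []
    for l in range(max_degree + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(m, l, theta, phi)  # scipy convention: (m, n, azimuth, polar)
            feats.append(Y.real if m >= 0 else Y.imag)  # real-valued features
    return np.stack(feats, axis=-1)

centers = np.random.randn(8, 3)               # camera centers
centers = centers - centers.mean(axis=0)      # mean shift: translation handled here
rays = np.random.randn(8, 3)
rays /= np.linalg.norm(rays, axis=1, keepdims=True)
print(sph_encoding(rays).shape)               # (8, 9) for max_degree = 2
```

This also makes the centroid question above concrete: the subtracted mean moves whenever cameras are added or removed, so a relative encoding (in the spirit of the rotary embeddings the reviewer suggests) would indeed behave differently from this mean-shift scheme.
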
yRhrVaDOWE | Diffusion-based Curriculum Reinforcement Learning | Curriculum Reinforcement Learning (CRL) is an approach to facilitate the learning process of agents by structuring tasks in a sequence of increasing complexity. Despite its potential, many existing CRL methods struggle to efficiently guide agents toward desired outcomes, particularly in the absence of domain knowledge. This paper introduces DiCuRL (Diffusion Curriculum Reinforcement Learning), a novel method that leverages conditional diffusion models to generate curriculum goals. To estimate how close an agent is to achieving its goal, our method uniquely incorporates a $Q$-function and a trainable reward function based on Adversarial Intrinsic Motivation within the diffusion model. Furthermore, it promotes exploration through the inherent noising and denoising mechanism present in the diffusion models and is environment-agnostic. This combination allows for the generation of challenging yet achievable goals, enabling agents to learn effectively without relying on domain knowledge. We demonstrate the effectiveness of DiCuRL in three different maze environments and two robotic manipulation tasks simulated in MuJoCo, where it outperforms or matches nine state-of-the-art CRL algorithms from the literature. | https://openreview.net/pdf/c6d5c73ad71d17c7a0d816c227738b96c959bc7e.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "Gi31VZl0c5",
"review_text": "The paper presents an intuitive way to apply curriculum learning using diffusion based models to learn a goal distribution that can interpolate between the state-visitation distribution to states with high-value and high-intrinsic reward. As a result, the curriculum generates goals that lie at the edge of the states with non-zero occupancy, and higher value/ closeness to the target-goal.\n\nThe technical details are mostly complete and seem sound upon initial reading, I did not delve into the proofs/ derivations in the appendix. But the exposition of how we go from diffusion-models, to AIM, to visitation-count modelling, and to the newly proposed DiCuRL method, is mostly clear. \n\nAlthough there are multiple points of improvements, I think many practitioners will appreciate the author's work.\n\nThe technical details are quite clear upon initial reading, even if one is not familiar with AIM or diffusion models. I.e., the new method should be clear enough to reproduce from reading the paper.\n\n### Major comments\n - The introduction dumps too much related work together to find the actual point and criticism that the authors want to make on the current state of the field.\n - Before reading the background/ technical details, motivation DiCuRL 1) is unclear how noising/ denoising is unique to helping exploration? Why can't another method (like a VAE, or GAN) do this through modelling the state-visitation distribution. Aren't we just choosing another, perhaps more powerful, generative method? **After reading the full paper:** I disagree that this is a sound motivation, any stochastic method over the state-visitation distribution could achieve this. I agree that modelling the state-visitation distribution is useful as it allows learning of goals that the agent has seen and can reach. \n - 4.0 Line 222, it is not clear from the text what problem the authors are trying to solve through the graph construction and the optimization of the curiculum goal (Eq. 12). How is the 'optimal' curriculum goal even defined? Eq 12 of course shows the objective, but why do we need this? How is the graph even constructed (meaning the edges), is this fully-connected? Initial reading of this paragraph gives the impression of severe over-engineering of the goal-sampler. \n - Figure 1 overlaps with table 1 and contains too many overlapping lines to draw a conclusion. This must be improved for presentation. Reduce the number of unnecesary baselines, show these in the appendix.\n - The results section spends most of its time speculating why the baselines perform in a certain way but does not focus on the authors' method. Line 281, states that there is a difference between OUTPACE and DiCuRL, however, neither method statistically significantly outperforms the other. Too much of the experimental setup is moved to the appendix.\n - It is unclear from figure 3 at what point during training this plot was made. Now the baseline methods look arbitrarily bad compared to the authors' method. It is color-coded, but maybe add a colorbar to figure 3 indicating the training episodes.\n\n### Technical comments\n - 3.3 Slight confusion on the reward $r^\\pi_\\phi$, it's good to mention that you're actually learning $f(s)$ and using this to compute $r$.\n - 4.0 Explanation on the mixing parameter $\\bar{\\alpha}_k$ is omitted. Shortly state it in the main text.\n - 4.0 The definition of $g_d$ is too hidden. I infer from Alg.2 that this is supposed to represent the *true* goal distribution. 
\n - Results, figure 1, table 2. Why plot the standard-deviations? Why not a non-parametric tolerance interval to get a sense of spread, or plot a confidence interval for the expected success-rate?\n\n\n### Minor comments\n - Intro paragraph 1 should be split into separate paragraphs making distinct points. Not a lumpsum of information.\n - Intro paragraph 1, maybe make a distinction between hierarchical RL + curriculum RL for goal-generation. Even if HRL can implicitly generate curriculums, the motivation is often slightly different.\n - Direct reference to papers should be done with the author: 'Person et al., (year) showed ...', not '[1, 2] showed ...'. Or you could write, 'Other studies [1, 2, 3], investigated ...' or something similar.\n- Intro paragraph 2 is not a paragraph but 1 sentence.\n- Figure 3, since DiCuRL is mostly on par with OUTPACE this should be compared in the plot for comparing curriculum goals\n\n1) Could the authors revise the current version and improve upon (most) of my critiques, then I'd be willing to raise my score.\n2) Will the authors share code?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "YVjBfqjLo7",
"review_text": "This paper studies curriculum reinforcement learning (RL) in the context of multi-goal RL, which aims to generate a series of goals with increasing difficulty to facilitate guiding learning policies. To this end, the paper proposes a framework that employs a conditional diffusion model that learns to generate a goal conditioned on the current state. The experiments in three maze navigation tasks show that the proposed method can reliably solve the tasks and perform them comparably to existing methods. This work studies a meaningful problem and proposes a reasonable framework. Yet, I am concerned with the limited domain (navigation) and tasks (maze) used for evaluation, the significance of the results, and the limited applicability beyond multi-goal RL, etc. Therefore, I am slightly leaning toward rejecting this paper, but I am willing to adjust my score if the rebuttal addresses my concern.\n\n**Motivation and intuition**\n- The motivation for studying curriculum learning for multi-goal RL is convincing.\n- Leveraging diffusion models to generate goals is reasonable.\n\n**Clarity**\n- The overall writing is clear. The authors utilize figures well to illustrate the ideas. \n\n**Related work**\n- The authors provide comprehensive descriptions of existing works in curriculum RL.\n\n**Experimental results**\n- The experimental results show that the proposed method performs comparably to existing methods.\n\n**Reproducibility**\n- The code is provided, which helps understand the details of the proposed framework.\n\n**Clarity**\n- The first paragraph of the introduction is unnecessarily long, making it very difficult to follow.\n- While the related work section describes several existing works in detail, it fails to differentiate these works from the proposed method exactly.\n\n**Limited to goal-conditioned RL**\n- The proposed method is limited to multi-goal RL, which requires a given goal. However, in many real-world applications, specifying a goal could be difficult or even impossible, making using the proposed method undoable. I feel it is entirely possible to extend the proposed method to the general RL setup, where only the current state is given. This will greatly increase the applicability of the proposed method.\n\n**Evaluation is limited to the Maze navigation**\n- The proposed method was only compared to existing methods in the Maze navigation tasks, where goals are represented as coordinates. It would be a lot more convincing if the evaluation was also conducted in other domains, such as robot arm manipulation, locomotion, and games. Additionally, evaluating in grid-world navigation tasks can add value to the paper by exploring discrete state and action spaces. \n\n**Significance of the results**\n- According to Figure 1, I am not entirely convinced that the proposed method performs significantly better than the baselines. 
Also, the plotting scheme makes it difficult to interpret when many curves overlap.\n\n**Related work**\n- The related work section focuses on existing works in curriculum RL yet fails to discuss many works that use diffusion models for RL or imitation learning, including but not limited to\n\t- \"Learning Universal Policies via Text-Guided Video Generation\"\n\t- \"Diffusion Policy: Visuomotor Policy Learning via Action Diffusion\"\n\t- \"Learning to Act from Actionless Video through Dense Correspondences\"\n\t- \"Goal-conditioned imitation learning using score-based diffusion policies\"\n\t- \"Diffusion model-augmented behavioral cloning\"\n\t- \"Imitating human behaviour with diffusion models\"\n\n**Algorithm 2**\n- While Algorithm 2 is titled RL Training, Lines 15-21 are for evaluation/testing, which is a bit confusing.\n\n**Minor errors**\n- L282: It seems that a non-break newline is used here, which gives no space between this paragraph and the next paragraph starting from Line 283.\n\nSee above"
},
{
"confidence": 4,
"rating": 7,
"review_id": "b6CqQLktWj",
"review_text": "This work presents a novel diffusion model-based curriculum learning approach, called DiCURL, for multi-goal reinforcement learning, namely goal-conditioned RL. The proposed conditional diffusion model leverages a Q-function and a learned reward function based on the Adversarial Intrinsic Motivation principle to incentivize goals that are reachable yet challenging to an RL agent. The paper evaluates DiCURL against state-of-the-art curriculum learning approaches in maze environments with differing maps. In PointUMaze and PointNMaze, DiCURL matches or slightly outperforms OUTPACE, which seems to be the best-performing method in these maze environments. In the most challenging map, PointSpiralMaze, DiCURL outperforms OUTPACE, while the rest of the methods fail to yield an optimal policy at the end of the training.\n\n- The related work section is extensive in terms of content and covers most of the recent advances in automatic curriculum learning for RL. The background and methodology sections are also detailed, and the problem setting and the proposed approach are explained clearly.\n\n- The proposed curriculum learning approach is novel as it employs a conditional diffusion model. The idea of leveraging a Q-function and a learned intrinsic reward function to select achievable but challenging goals is intuitive, as well.\n\n- Table 1 highlights the advantages of DiCURL, and the introduction section also supports this table.\n\n- The curricula generated by DiCURL in Figures 2 and 3 (as well as the ones in the appendix) illustrate how DiCuRL yields optimal policies and outperforms some existing methods in evaluated environments.\n\n- The introduction section should be improved in terms of writing. Content-wise, it is informative but also too dense. Some of the paragraphs are either too long or too short. Restructuring this section and making it more to the point would improve the readers' experience immensely. \n\n- OUTPACE is the second best-forming automatic curriculum learning method in the evaluated environments. However, the paper does not demonstrate the curricula generated by OUTPACE, unlike the curricula of GRADIENT and HGG in Figure 3, which do not perform as well.\n\n- All environments (point maze domain in MuJoCo with different maps) in the empirical validation section have the same dynamics, low-dimensional state, and action spaces. Although DiCuRL's advantages seem apparent as the map gets more complex, the empirical validation is insufficient to conclude that DiCuRL can outperform state-of-the-art methods in various goal-conditioned domains.\n\n- The roles of loss components related to the Q-function and AIM reward function sound intuitive, yet they are explained briefly. I suggest the authors run an ablation study to highlight their separate contributions.\n\n- How do Q and AIM rewards differ in a goal-conditioned environment that provides a (sparse) reward for reaching the goal? Could you please give me an illustrative example to highlight how including both in the loss function of the diffusion model is better?\n\n- What is g_d that initializes g_c in Algorithm 2?\n\n- What do colors in figures illustrating curricula stand for specifically?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "UCnEZHqo5b",
"review_text": "This work introduces DiCuRL, a novel approach that uses diffusion models to generate curriculum goals for reinforcement learning agents. The method trains a model to capture the distribution of visited states, focusing on those with higher Q-values and intrinsic motivation rewards (i.e., AIM rewards). This approach aims to generate goals at an appropriate difficulty level while guiding the curriculum closer to the desired final goal. DiCuRL employs the Minimum Cost Maximum Flow algorithm to solve a bipartite matching problem to select curriculum goals.\n\n- Strong empirical evaluation against competitors (Fig. 1)\n- The paper is information-dense but reasonably well-written. It helps with the comprehension of the proposed ideas\n\n- The approach is quite complicated and possibly unnecessarily so. I'd like to emphasize that I did not find any faults with the proposed method. It's just that I do not see how it will scale to more challenging, realistic environments.\n- They missed citing a rich literature on exploration and curriculum RL. For example, see papers [1-5].\n- The reward function for the Maze envs is not provided. Is this dense or sparse reward env? Note that, dense reward would not be a justifiable choice in this case.\n\n\n*References*\n1. Riedmiller, M., Hafner, R., Lampe, T., Neunert, M., Degrave, J., Wiele, T., Mnih, V., Heess, N., and Springenberg, J. T. (2018). Learning by playing solving sparse reward tasks from scratch. In International conference on machine learning, pages 4344–4353. PMLR.\n2. Hertweck, T., Riedmiller, M., Bloesch, M., Springenberg, J. T., Siegel, N., Wulfmeier, M., Hafner, R., and Heess, N. (2020). Simple sensor intentions for exploration. arXiv preprint arXiv:2005.07541.\n3. Nair, A. V., Pong, V., Dalal, M., Bahl, S., Lin, S., and Levine, S. (2018). Visual reinforcement learning with imagined goals. Advances in neural information processing systems, 31.\n4. Korenkevych, D., Mahmood, A. R., Vasan, G., and Bergstra, J. (2019). Autoregressive policies for continuous control deep reinforcement learning. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 2754–2762.\n5. Narvekar, S., Peng, B., Leonetti, M., Sinapov, J., Taylor, M. E., & Stone, P. (2020). Curriculum learning for reinforcement learning domains: A framework and survey. Journal of Machine Learning Research, 21(181), 1-50.\n\n- In Fig. 3, what do the colours represent? Please be more elaborate. It is not clear at all at the moment\n- In the appendix, it's mentioned that \"The agent starts each episode from an initial state of [0, 0].\" In RL environments, environmental resets can implicitly help exploration [1]. How would DiCuRL + fixed start state fare against SAC only + random start states?\n- How does SAC only perform in the comparisons in Fig. 1?\n- How important is the AIM reward? It is a bit weird to sum the Q value and one-step intrinsic motivation reward. This results in different scales/magnitudes of values, which is why the authors needed to tune the coefficients.\n- To ask the previous question differently, can the AIM reward be substituted with simpler intrinsic motivation rewards like RND [2] or TD-error?\n- It seems SAC + HER would be a lot simpler to use computationally and algorithmically. How does DiCuRL compare against SAC + HER?\n\n\n*References*\n1. Vasan, G., Wang, Y., Shahriar, F., Bergstra, J., Jagersand, M., & Mahmood, A. R. (2024). Revisiting Constant Negative Rewards for Goal-Reaching Tasks in Robot Learning. 
arXiv preprint arXiv:2407.00324.\n2. Burda, Y., Edwards, H., Storkey, A., & Klimov, O. (2018). Exploration by random network distillation. arXiv preprint arXiv:1810.12894."
}
] | |
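The reviews above repeatedly ask how DiCuRL's goal sampler combines the Q-value and AIM-reward terms, and note that their differing scales force coefficient tuning. A minimal sketch of that weighted goal-scoring criterion, assuming hypothetical `q_fn` and `aim_fn` callables and a trade-off weight `lam` (an illustration of what the reviews describe, not the authors' implementation):

```python
import numpy as np

def score_goals(goals, q_fn, aim_fn, lam=1.0):
    """Rank candidate goals by a weighted sum of a value estimate and an
    intrinsic (AIM-style) reward; `lam` trades off the two terms, which is
    where the scale-mismatch concern raised in the reviews comes in."""
    scores = np.array([q_fn(g) + lam * aim_fn(g) for g in goals])
    return goals[np.argsort(-scores)]  # highest-scoring goals first

# Toy usage: 2-D maze goals with dummy value/intrinsic-reward functions.
rng = np.random.default_rng(0)
goals = rng.uniform(-1.0, 1.0, size=(16, 2))
ranked = score_goals(goals,
                     q_fn=lambda g: -np.linalg.norm(g - 1.0),  # closer to the target
                     aim_fn=lambda g: np.linalg.norm(g))       # farther from the start
print(ranked[0])
```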
yRRCH1OsGW | Generative Modeling of Molecular Dynamics Trajectories | Molecular dynamics (MD) is a powerful technique for studying microscopic phenomena, but its computational cost has driven significant interest in the development of deep learning-based surrogate models. We introduce generative modeling of molecular trajectories as a paradigm for learning flexible multi-task surrogate models of MD from data. By conditioning on appropriately chosen frames of the trajectory, we show such generative models can be adapted to diverse tasks such as forward simulation, transition path sampling, and trajectory upsampling. By alternatively conditioning on part of the molecular system and inpainting the rest, we also demonstrate the first steps towards dynamics-conditioned molecular design. We validate the full set of these capabilities on tetrapeptide simulations and show preliminary results on scaling to protein monomers. Altogether, our work illustrates how generative modeling can unlock value from MD data towards diverse downstream tasks that are not straightforward to address with existing methods or even MD itself. Code is available at https://github.com/bjing2016/mdgen. | https://openreview.net/pdf/da6d5f4f8604d64cfdf93e0c217c30eb2526a5cd.pdf | [
{
"confidence": 2,
"rating": 7,
"review_id": "tYSN9REPTg",
"review_text": "The paper suggests a flow-based generative framework on molecular trajectories, with various downstream tasks such as forward simulation and transition path sampling. Additionally, the model is trained in a transferable setting, across tetrapeptides.\n\n1. Extensive experiments over various downstream tasks\n2. Transferable settings for tetrapeptides\n\n1. Experiment baselines\n\nThe baselines of experiments are mostly the Markov State Models. I think it would also be good if there were some comparison between other models, though I understand that many prior works targeted Alanine dipeptide not tetrapeptides.\n- Forward simulation: ITO$^{[1]}$, Timewarp$^{[2]}$\n\n- Interpolation (Transition path sampling): PIPS$^{[3]}$\n\n2. (Minor) Necessity of additional tasks\n\nThe necessity of additional tasks relatively seems weak compared to tasks such as forward simulation, TPS, specially the inpainting design. Rather than additional tasks, ablations for stability might be a better? One can obviously see that scaling to long trajectories shows the stability against the time scale, and protein simulation shows the stability against space complexity.\n\n**Minor typos, suggestions**\n\n- Definition of S-MPNN only exists in the Appendix. It would great to point out that more details are in the appendix, in the first paragraph of section 4.4\n- Figure 6 is not referenced in the main paper, only the appendix\n- Figure 2F, reference of that blue indicates the side chains and orange indicates the backbones seems to be missing\n\n[1] Implicit transfer operator learning: Multiple time-resolution surrogates for molecular dynamics, NIPS 2023\n\n[2] Timewarp: transferable acceleration of molecular dynamics by learning time-coarsened dynamics, NIPS 2023\n\n[3] Stochastic Optimal Control for Collective Variable Free Sampling of Molecular Transition Paths, NIPS 2023\n\n1. Difference between downstream tasks\n\nCould the upsampling task be seen as a superset of interpolation? Since upsampling with two given frames would the same as interpolation.\n\n2. Training on one tetrapeptide\n\nJust curious, though the authors has presented a transferable setting, are there any results when the model is trained for a specific tetrapeptide and tested on downstream tasks?"
},
{
"confidence": 4,
"rating": 7,
"review_id": "DygF786bk1",
"review_text": "The authors propose MDGen -- a generative model to sample molecular dynamics trajectory conditioned on key frames. This is a direct application of video generation techniques to solve domain challenges in protein modeling. Specifically, SiT and flow matching models are used to sample SE(3)-invariant representation of all-atom protein representations. This work demonstrates the effectiveness of MDGen primarily on tetrapeptide systems, where the authors showcase four downstream application tasks including forward simulation, interpolation, upsampling, and dynamics-conditioned inpainting. \n\nIn general, I find this manuscript well-written and easy to follow. The model performance looks reasonable on tetrapeptides, yet the results are proof-of-concept in nature and generalization to larger proteins remain challenging. However, it is one of the pioneering work in AI protein modeling to directly emulate MD simulation trajectories using data-driven approaches. To that end, I think it would be beneficial for this work to gain visibility across the research community to inspire future studies.\n\n- It is one of the pioneering work to adopt video generation techniques for MD trajectory generation. Although conceptually straightforward, good practices to generate time-coherent MD trajectories across different protein systems remain underexplored.\n- The authors demonstrated a variety of downstream tasks using the same model architecture. The underlying modeling framework seems versatile and transferrable across different applications..\n- Performance benchmark and analysis on tetrapeptides are comprehensive and provides insights to modeling these peptide systems.\n- I think it is a good idea to model residue offsets relative to the key frames in order to bypass the need to learn sequence-to-structure mapping. MD simulations always start from a seed structure, so I do not think this is a key limitation as mentioned in L#310-312.\n\n- Benchmark and evaluation results on tetrapeptides, although comprehensive, are proof-of-concept in nature. It may not be sufficient to demonstrate transferability to general protein systems.\n- Performance on ATLAS (i.e., larger proteins instead of short peptides) does not seem promising. MDGen performance is worse than AlphaFlow in Table 4. I wonder if the main bottleneck is training data quality/availability, or model architecture?\n\n- L#141, when $K > 1$, how to ensure roto-translation prediction consistency across $K$ key frames and obtain a final $\\hat{g}_j$?\n- Table 2, with 100 ns being the ground truth, the non-zero JSD in the last column originates from subsampling the simulation trajectory?\n- Figure 2F. My understanding is that sidechains exhibit faster dynamics while backbone motions are slower. The low correlation for backbone suggests that MDGen is not good at learning slower dynamics, which are typically more interesting to researchers?\n- Temporal coherence between generated protein conformations is mainly evaluated using auto-correlation in this work. Is it possible to show other metrics to capture detailed structural quality and variation during time evolution?\n- Why is MDGen more effective at sequence recovery than MPNN? More explanation and analysis would be helpful here.\n- Would it be possible to emulate MD simulation trajectory of the 12 fast folding proteins from [Shaw 2009](https://dl.acm.org/doi/abs/10.1145/1654059.1654126)? 
They are smaller than ATLAS proteins and longer than tetrapeptides, with much longer simulation time and rich dynamics.\n- It would be nice to see if MDGen could infer a trajectory given an [apo/holo pair](https://arxiv.org/abs/2304.02198)."
},
{
"confidence": 4,
"rating": 4,
"review_id": "XSDFwbQNut",
"review_text": "The paper presents a new framework for generating trajectory of molecular geometries, ie, generative modeling for molecular dynamics. The paper proposes tokenization methods to tokenize the trajectory and learn flow models on the data. Experiments demonstrate the effectiveness of several tasks including forward sampling, interpolation, and up sampling.\n\n1. The paper tackles a new problem in molecular dynamics generation, which has not been explored in existing literature.\n\n2. The paper is in good structure and easy to follow.\n\n3. The paper provides a detailed analysis of several domain tasks on interested molecular structures, which demonstrate the critical usage in some scenarios.\n\n1. Limited ML technical contribution, as all components exist in previous molecular generative models.\n\n2. The experiment is comprehensive from a domain perspective. However, I feel the experiments lack some benchmarking comparison with state-of-the-art molecular generative models for related tasks. See my question below.\n\nI think existing methods can also tackle several tasks. For example, for the forward sampling task, previous generative MD models like Timewarp (Klein et al., 2024) and ITO (Schreiner et al., 2024) can also be used for the task. A numerical comparison with these baselines can help to justify the effectiveness of the proposed method."
},
{
"confidence": 3,
"rating": 6,
"review_id": "5iKIq4YDUd",
"review_text": "In this work, the authors proposed MDGen, a new framework that aims to model molecular dynamics trajectories via generative modeling techniques. By properly encoding the Protein MD trajectories according to the characteristics of key frames, MDGen adopts flow matching techniques (both continuous and discrete flow matching) to generatively model MD trajectories. As a unified framework, MDGen is able to perform diverse tasks including forward simulation, interpolation, upsampling and inpaiting. Extensive experiments are conducted to demonstrate the effectiveness of MDGen.\n\n1. The problem this work aims to tackle is of great significance in scientific domains lie computational biology.\n2. The formulation of molecular (protein) trajectories by using key frame references is reasonable and compact for reducing the modeling difficulties.\n3. The experiments are comprehensive.\n4. The paper is well-written and easy to follow.\n\n1. Lack of discussion on related works. This work does not discuss related works on the same topic. Some works are mentioned in the Introduction section, but I still recommend that there should be an independent Related Works section for comprehensive discussion. Here are also several works that are worth discussing: (1) EGNO, which uses neural operator learning approach to also model the trajectory dynamics of molecules; (2) DiffMD, which uses diffusion models to simulate molecular dynamics. The quality of this work should be further improved if the authors could carefully discuss the differences between MDGen and these works and the strengths of MDGen compared to these works.\n\n2. Lack of ablation studies. MDGen is composed of several parts, including the design of the backbone model, the design choices of flow matching framework, and the adoption of Hyena architecture for efficiency consideration. In addition to the aimed tasks, it would further improve the quality of this work if the authors could conduct ablation studies on these aspects to help readers know what the influence of each part of MDGen is.\n\nN/A"
},
{
"confidence": 3,
"rating": 5,
"review_id": "UNZ4f7c61A",
"review_text": "The paper presents a novel generative model for molecular dynamics (MD) trajectories called MDGEN. This model aims to serve as a flexible surrogate for MD simulations by generating entire trajectories conditioned on initial frames. It addresses tasks such as forward simulation, transition path sampling, trajectory upsampling, and dynamics-conditioned molecular design. The model is evaluated on tetrapeptide simulations and demonstrates its capability to generate reasonable ensembles of protein monomers.\n\nNovelty and Scope: The approach introduces a novel paradigm for surrogate modeling of MD, extending the capabilities of existing models to handle a variety of tasks that are not straightforward with current methods.\nGenerative Framework: The use of generative modeling for entire MD trajectories is a significant advancement, as it allows for a broader range of applications including forward and inverse problems.\nComprehensive Evaluation: The paper evaluates MDGEN on several tasks, demonstrating its effectiveness in forward simulation, interpolation, upsampling, and inpainting. The results show promising performance in terms of distributional similarity, dynamical content, and computational efficiency.\nTechnical Implementation: The detailed description of the tokenization process and the flow model architecture provides a clear understanding of how the model operates. The use of SE(3)-invariant tokens and the scalable interpolant transformer (SiT) backbone are well-motivated choices.\n\nComplexity and Accessibility: The model’s complexity might pose challenges for reproducibility and accessibility for researchers who are not deeply familiar with both molecular dynamics and advanced generative modeling techniques.\nEvaluation on Larger Systems: While the paper provides proof-of-concept evaluations on proteins, the primary focus remains on smaller tetrapeptides. 
The model's scalability and effectiveness on larger and more complex molecular systems need further exploration.\nDependence on Key Frames: The reliance on key frames for conditional generation limits the model’s ability to perform unconditional generation or inpainting of residue roto-translations, which could be a significant limitation in certain applications.\nComputational Resources: The paper lacks detailed information on the computational resources required for training and inference, which is crucial for understanding the practical implications of using MDGEN in various research settings.\n\nHow can the model be adapted or improved to reduce its reliance on key frames?\n\nExploring techniques for unconditional generation or alternative ways to handle the roto-translations without predefined key frames could enhance the model's flexibility and applicability.\nWhat architectural changes or enhancements could improve the model's performance on larger molecular systems such as proteins?\n\nInvestigating more scalable architectures or hybrid approaches that combine the current method with other techniques tailored for large systems could address this limitation.\nHow does the computational cost of training the model compare to traditional MD simulations, and what are the implications for its practical use?\n\nProviding detailed information on computational requirements and potential optimizations could help in assessing the model's feasibility for widespread use.\nWhat alternative tokenization strategies could be explored to extend the model's applicability to a wider range of molecular systems?\n\nResearch into tokenization methods that can handle diverse molecular structures and dynamics could broaden the model's utility.\nHow can additional conditioning types (e.g., textual descriptions, experimental data) be incorporated into the model, and what benefits might they provide?\n\nExperimenting with and integrating various forms of conditioning could enhance the model's ability to generate more accurate and contextually relevant trajectories.\nWhat are the potential impacts of data quality and availability on the model's performance, and how can these challenges be mitigated?\n\nAddressing data-related challenges through techniques like data augmentation, transfer learning, or synthetic data generation could improve the model's robustness and applicability.\nCan additional evaluation metrics be developed to provide a more comprehensive assessment of the generated trajectories' quality?\n\nIdentifying and implementing new evaluation criteria could offer deeper insights into the strengths and limitations of the model's output."
},
{
"confidence": 4,
"rating": 7,
"review_id": "sOvr0i0Q2A",
"review_text": "The authors introduce MDGen as a novel approach for modeling MD trajectories. They demonstrate the capabilities of this method in tasks such as interpolation, upsampling, and inpainting of small peptides. The accuracy as well as speed of the new approach compared to the ground truth baseline is quantitatively evaluated. Initial experiments toward upscaling to small proteins are shown.\n\nThe idea of MDGen is novel and very well presented in this manuscript. The results are convincing and interesting.\n\n1. Parts of Sections 3.1 and 3.2 are very condensed and hard to follow. A more detailed description in the SI would be helpful, where the most important aspects of the cited work is also repeated.\n2. The suitability of the chosen representation for longer amino acid chains is questionable. This is also mentioned in the manuscript, but nonetheless, proteins are mentioned many times (more than 30) in the manuscript, while almost all experiments are actually performed on very small peptides. It should be stated in a more prominent place that upscaling to proteins is not trivial.\n3. The representation limits the model to learn MD trajectories of natural amino acids, as no all-atom representation is used directly. This should be made clearer in the manuscript.\n\nMinor points: A lot of figures have no proper axis labels (e.g. Fig 3, 4, 5, 6). This should be fixed. The best models in Table 4 should be indicated in bold.\n\n1. How often do clashes and other high-energy structures occur in the generated trajectories?\n2. When comparing to other methods and approaches in the experimental section - do all of them use a similar reduced representation or do the other methods generate all-atom representations?"
}
] | |
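Several reviews above refer to MDGen's key-frame-conditioned flow matching. For readers unfamiliar with the setup, a generic (rectified-flow-style) conditional flow-matching training step looks as follows; `model` and `cond` are placeholders, and this sketch makes no claim about MDGen's actual tokenization, architecture, or objective:

```python
import torch

def flow_matching_loss(model, x1, cond):
    """Generic flow-matching step: regress the constant velocity of the
    straight path from noise x0 to data x1, given conditioning `cond`
    (e.g., key frames). Illustrative only."""
    x0 = torch.randn_like(x1)                             # noise endpoint
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)))  # per-sample time in [0, 1)
    xt = (1 - t) * x0 + t * x1                            # linear interpolant
    target = x1 - x0                                      # velocity to regress
    return ((model(xt, t.flatten(), cond) - target) ** 2).mean()
```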
yQL5tutdaH | Toward a Stable, Fair, and Comprehensive Evaluation of Object Hallucination in Large Vision-Language Models | Given different instructions, large vision-language models (LVLMs) exhibit different degrees of object hallucinations, posing a significant challenge to the evaluation of object hallucinations. Overcoming this challenge, existing object hallucination evaluation methods average the results obtained from a set of instructions. However, these methods fail to provide consistent evaluation across instruction sets that generate image descriptions of significantly different lengths. In this paper, we present the first systematic investigation of the effect of instructions on object hallucinations in LVLMs, with a specific focus on the role played by image description lengths. A valuable finding is that instructions indirectly affect hallucinations through the length of image descriptions. The longer the image description, the higher the object hallucination degree. Accordingly, we fit an informative length-hallucination curve, upon which a fine-grained evaluation framework named LeHaCE is introduced for evaluating object hallucinations at any given image description length. LeHaCE evaluates the object hallucination degree at a uniform image description length to mitigate the effect of description lengths, promoting stability and fairness. Moreover, LeHaCE incorporates the curve slope as an innovative hallucination evaluation metric, reflecting the extent to which the object hallucination degree is affected by the image description length, achieving a more comprehensive evaluation. Experimental results demonstrate that LeHaCE provides a more stable, fair, and comprehensive evaluation of object hallucinations in LVLMs compared to existing methods. | https://openreview.net/pdf/e83f0d24d3251d65852146a10845a58594271455.pdf | [
{
"confidence": 5,
"rating": 7,
"review_id": "K8uC3tSPep",
"review_text": "This paper explores the stable evaluation of object hallucinations, which is a crucial challenge in large vision-language models. The authors provide the first systematic analysis of the underlying mechanism through which instructions affect hallucinations, based on comprehensive experiments. They report a linear correlation between the length of descriptions and the levels of object hallucinations. Furthermore, the authors propose a curve-based framework that incorporates description lengths to enable a stable evaluation of hallucinations. What I find particularly novel is that the slope of the curve is incorporated as a metric, which achieves a more comprehensive evaluation.\n\n1. This work might provide valuable insights to the community. Firstly, while the impact of instructions on hallucinations is widely recognized, this work unveils a crucial aspect by demonstrating that instructions exert their influence through the modification of description lengths. This finding illuminates the previously unexplored mechanism underlying instruction-affected hallucinations. Secondly, they employ a curve-based evaluation method instead of relying solely on a single metric, which goes a new way in addressing hallucination evaluation. Thus, this work has the potential to inspire further research and exploration in hallucination evaluation. \n2. The proposed curve-based hallucination evaluation method in this paper is intuitively reasonable, and the author provides substantial experimental evidence to support the motivation behind this method. The experimental results are clearly presented, and the corresponding analyses further enhance the persuasiveness of this work. Overall, the combination of the intuitive approach, extensive experiments, clear presentation of results, and insightful analyses makes this work convincing.\n\n1. The proposed method realizes consistent evaluation by calculating the hallucination rate at a uniform length. However, the length distributions of descriptions generated by different LVLMs exhibit variations. In other words, some models tend to produce shorter descriptions while others generate longer ones. In light of this, I have concerns regarding the ability of this method to maintain its effectiveness under such circumstances.\n2. In my view, the hallucination evaluation of a LVLM in practical requires a large instruction set that could simulate real-world applications of the LVLM. If the authors can build such a large instruction set as the benchmark, it would yield a significant contribution to the community.\n3. The authors claim that their proposed evaluation method is fairer compared to other evaluation methods. However, the paper appears to lack experimental results to support this assertion.\n4. The analysis of the stability of the “LeHaCE_GR” is lacking.\n5. The selection of instructions may have a substantial impact on the fitted curve. It would be beneficial for the authors to provide further discussion on this aspect.\n\n1. Considering that shorter descriptions tend to have fewer hallucinations, have the authors explored the possibility of generating multiple concise descriptions with distinct focuses for the same image, and subsequently merging them into a comprehensive and detailed description?\n2. What factors determine the slope of the length-hallucination curve for the model?\n3. 
Since the authors introduce the slope of the length-hallucination curve as a valuable evaluation metric, it raises the question of what the intercept of the curve signifies. Is it feasible to incorporate the intercept into the evaluation framework?\n4. Why does the average length of the image description generated by the Otter model, specifically under instruction I12, amount to only 2? Is there any misunderstanding here?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "jY0TCs0BR0",
"review_text": "This work aims to establish a stable, fair, and comprehensive evaluation method for object hallucinations in large vision-language models. The authors discovered a positive correlation between the length of image descriptions and the degree of object hallucination. Building upon this observation, they developed a hallucination evaluation method named LeHaCE by fitting a length-hallucination curve. LeHaCE enables the evaluation at any given image description length, ensuring stability and fairness in the evaluation process. Additionally, LeHaCE involves the curve slope as a metric to evaluate the influence of image description length on the degree of object hallucination, thereby achieving a comprehensive evaluation. The motivation behind this work is reasonable, and the authors provide many experiments to support their claims. However, it is worth considering that the use of the linear fitting scheme, although straightforward, does somewhat diminish the novelty of the proposed method.\n\nThe experimental analysis conducted on instructions and hallucination is compelling and provides strong support for the main argument that the hallucination degree is positively correlated with the length of the description. While previous research (Yifan et al., 2023) has already shown the influence of instructions on hallucinations, this work takes it a step further by proposing that instructions indirectly influence hallucinations through the length of image descriptions. This sheds light on the reason behind the limitations of previous approaches that relied on average-based methods. Overall, this paper offers valuable insights into the evaluation of consistent hallucinations.\n\n1. Although the rationale behind the length-hallucination curve is compelling, it is fitted using a relatively simplistic linear approach. Exploring more flexible and intricate fitting approaches is worth considering, as it has the potential to achieve higher fitting accuracy and more effective hallucination evaluation.\n2. Since the proposed method relies on a fitted curve, it needs at least two instructions to evaluate LVLMs and cannot be used with just one instruction.The authors should discuss this limitation.\n3. Lack of indepth discussion on the shortcomings of the proposed method. For instance, as shown in Table 2, why does LeHaCE exhibit poor stability on a few LVLMs when the number of instructions is three?\n4. It seems that the selection of instructions might affect the stability of LeHaCE. It would be helpful to include more discussion on this aspect.\n5. The current paper seems to have lots of results and experiments. As a reader, it is not very easy for me to get the main conclusion for each experiment. It would be good to highlight the conclusions so that the readers can understand the point easier.\n6. Some typos need to be corrected: Line 79: lrv-instruction -> LRV-instruction. Line 92 Nope -> NOPE. Line 81 chatgpt -> ChatGPT. Table 2: Minigpt-4 -> MiniGPT-4.\n\n1. Does the complexity of the image content, such as the number of objects, influence the extent of hallucination in the model? It would be valuable to investigate additional factors that impact hallucination degrees.\n2. Intuitively, the average-based framework can also be effective as long as there are enough instructions, such as 200 instructions. I'm wondering if this viewpoint is accurate?\n3. 
Is the relative standard deviation an appropriate approach to evaluate stability, considering that stability in this context essentially refers to the consistency of multiple evaluation results?\n4. Why does this work exclusively focus on object hallucinations? Is this a choice made by the authors or a limitation of the proposed method?\n5. In Figure 5, why does LeHaCE show higher instability on LLaVA and Qwen-VL when the image description length is less than 20 words?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "tvGzE2hR1R",
"review_text": "The paper identifies a pitfall regarding the length of image descriptions in the current average-based LVLM hallucination evaluation framework. To address this, they propose a new Length-Hallucination Curve Based evaluation framework to enhance the fairness of evaluations. The paper observes that the degree of object hallucinations is primarily influenced by the length of image descriptions, with instructions indirectly affecting hallucinations through their impact on description lengths. They suggest using a linear regression curve for evaluation and develop two metrics based on this curve. Extensive experiments on multiple LVLMs with different instruction sets demonstrate the stability of their proposed new evaluation metrics.\n\n- The observation is intuitive and validate by extensive experiments\n\n- The paper is clearly written and easy to follow\n\n- The evaluation is comprehensive in terms of numerous instructions and LVLMs\n\n- Although paper observe the linear relation between the length of the image description and objection hallucination, there are still unanswered questions regarding the justification of the claim. Please see questions below.\n\n- Some minor inconsistent typo, for example, the AEF and ABF in Figure 4.\n\n- The evaluation only use CHAIR scores and scores of other aspects is not evaluated, for example, the detail or the coverage of the real objects in the description as in AMBER.\n\n- The paper grouped 25 different instructions to 5 instruction set. What’s the grouping strategy? How do the author group these instructions? \n\n- The paper claimed that object hallucination is primarily influenced by the length of image descriptions, with instructions only indirectly affecting hallucinations through their effect on description lengths. How is this claim being validated? Specifically, how do the author validate that the length of the image description is the primary cause and is not also affecting the hallucinations indirectly through their effect on some hidden factors ? The observation could be due to the spurious correlation.\n\n- Does the increased length of the image description also capture more real objects, or does it mainly consist of rephrasing and hallucinatory sentences?"
},
{
"confidence": 5,
"rating": 7,
"review_id": "zjiJ1J9yYI",
"review_text": "This work presents comprehensive experiments to study the relationship between description lengths and hallucinations in LVLMs. Based on the observed positive correlation, authors propose an approach of fitting a length-hallucination curve to evaluate object hallucinations. Speciffically, the curve allows for fair comparisons that are not influenced by varying lengths, through providing the hallucination degree corresponding to any given description length. Furthermore, the curve slope reflects the extent to which a LVLM's hallucination degree is affected by description lengths. The evaluation, considering both the value and slope, demonstrates stability and comprehensiveness, as supported by the conducted experiments. The authors' thorough and meticulous research on this issue is highly convincing, and the proposed method effectively showcases its effectiveness.\n\nHallucinations evaluation is a realistic and crucial task in the field of LVLMs, as hallucinations usually introduce misleading conclusions or even have disastrous outcomes. In this context, the authors perform a detailed experimental analysis on the impact of instructions on hallucinations, providing convincing evidence to support their motivation. Moreover, the proposed curve-based method is a simple yet effective approach, which is well-motivated by the observed linear correlation between description lengths and hallucination rates. The paper is well-written and effectively communicates its main contributions and techniques. Overall, the paper exhibits technical solidity.\n\n1. The authors conduct experiments using only the beam search setting. Although I understand that beam search is widely used in hallucination evaluation of LVLMs/LLMs, it remains uncertain whether the observed correlation between the hallucination degree and the description length holds true under different decoding strategies. Thus, I recommend that the authors explore additional commonly used decoding strategies, such as greedy decoding, to provide a more comprehensive analysis. \n2. The paper lacks a study about the influence of the instruction number on the length-hallucination curve. The fitted curve is directly affected by the number of samples, which corresponds to the number of instructions provided. It is therefore essential to thoroughly investigate the minimum number of instructions necessary for the proposed method. \n3. The authors mention in the paper that the proposed method can \"evaluate object hallucinations at any given image description length.\" In reality, when the given length deviates too much from the existing data, the fitting is likely to fail, leading to inaccurate results. The authors should use more cautious wording.\n4. In my opinion, the impact of length might be mitigated by simply controlling the maximum generation lengths.The authors only mention this method in a footnote and believe it does not align with the actual usage scenarios of LVLMs. More in-depth discussions should be provided.\n5. Some minor errors need to be corrected. For example, in line 42, \"Figure 2&3\" should be \"Figures 2&3\".\n6. It appears inappropriate to represent a variable using only two letters. Consider replacing \"hr\" with \"h_r\".\n\n1. Why is the proposed method limited to large vision-language models? Could it be extended to large language models as well? It would be beneficial for the authors to provide a clear explanation or justification for this limitation. \n2. 
Similarly, are the finding and method presented in this paper applicable to other forms of hallucination beyond object hallucinations, or other tasks, such as VQA?\n3. What could potentially explain the phenomenon observed in Figure 2, where longer output lengths result in higher object hallucination degrees?\n4. How are the 25 instructions used in experiments designed? Are they generated randomly or based on specific rules? Besides, why is it 25, and what difference would there be if there are more or less instructions?"
}
] | |
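Several reviews above revolve around LeHaCE's central construction: fit a linear length-hallucination curve per model, read off the hallucination rate at a uniform description length, and report the slope as a length-sensitivity metric. A minimal sketch of that idea with made-up numbers (not the paper's data or exact estimator):

```python
import numpy as np

# One (mean length, hallucination rate) point per instruction; values are invented.
lengths = np.array([18.0, 35.0, 52.0, 80.0])    # mean words per description
hall_rate = np.array([0.08, 0.14, 0.19, 0.28])  # e.g., a CHAIR-style rate

slope, intercept = np.polyfit(lengths, hall_rate, deg=1)  # fit the linear curve
uniform_len = 50.0  # compare all models at the same description length
print(f"rate at {uniform_len:.0f} words: {slope * uniform_len + intercept:.3f}")
print(f"slope (length sensitivity): {slope:.4f}")
```

This also makes two of the questions above concrete: the fit needs at least two distinct lengths, and evaluating far outside the observed length range amounts to extrapolating the line.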
yPPNi7vc7n | Local Curvature Smoothing with Stein's Identity for Efficient Score Matching | The training of score-based diffusion models (SDMs) is based on score matching. The challenge of score matching is that it includes a computationally expensive Jacobian trace. While several methods have been proposed to avoid this computation, each has drawbacks, such as instability during training and approximating the learning as learning a denoising vector field rather than a true score. We propose a novel score matching variant, local curvature smoothing with Stein's identity (LCSS). The LCSS bypasses the Jacobian trace by applying Stein's identity, enabling regularization effectiveness and efficient computation. We show that LCSS surpasses existing methods in sample generation performance and matches the performance of denoising score matching, widely adopted by most SDMs, in evaluations such as FID, Inception score, and bits per dimension. Furthermore, we show that LCSS enables realistic image generation even at a high resolution of $1024 \times 1024$. | https://openreview.net/pdf/952d80f8d8e4120ffa7bc5db0426136ea9a8fdb5.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "GhCfUMRLMl",
"review_text": "The paper proposes a novel score matching variant called Local Curvature Smoothing with Stein’s Identity (LCSS). This method addresses the computational challenges associated with the Jacobian trace in score matching, particularly for high-dimensional data, by leveraging Stein’s identity. LCSS aims to bypass the expensive computation of the Jacobian trace, offering both regularization benefits and efficient computation. The method is validated through experiments on synthetic and real datasets.\n\n1. the idea of LCSS is novel\n2. Jacobian is not computed directly, but implicitly respected.\n3. Experiments on high and low resolution are performed.\n\n1. In lines 161-162, interchangeability is assumed. However, in the analysis, interchangeability requires some properties of the interested function. The reason why the assumption holds is missing.\n2. This paper does not approximate the Jacobian but instead circumvents the Jacobian. The empirical and theoretical differences against the method using Jacobian should be discussed, such as the difference in the estimated error bound. \n3. In Tab. 3, the improvement seems to be marginal, while in figures, such as Fig. 4, the selected picture is much better under LCSS. The discrepancy should be discussed.\n\nsee weakness"
},
{
"confidence": 3,
"rating": 6,
"review_id": "VM8OIYL26Y",
"review_text": "This paper provides a new way for score matching with the purpose of resolving some of the limitations of the existing methods such as high variance of sliced score matching and Gaussian constraints of denoising score matching (DSM). The new method is based on the local curvature smoothing proposed in [15]. A new score matching objective function is proposed by combining the Stein's Identity with the local curvature smoothing. The authors empirically show that the new method is more efficient in training than DSM and also has comparable performance to DSM.\n\nAlthough DSM is the default method used nowadays for score matching, the authors provide a nice novel alternative which may have some advantages over DSM. I'm interested to see more theoretical study in the future of this new method.\n\nI think some parts of the paper are not stated clearly and further clarification is needed. See questions for more details.\n\n- In section 2.4, the authors criticize the DSM method for having a Gaussian constraint. However, later there is no clarification showing how the new method is different from DSM in this regard. Can you please clarify this?\n- In line 108, the authors criticize the DSM for having 0 numerator and denominator. However, in the final LCSS (equation (16)), the denominator can also be 0 and be problematic. Can the authors provide more discussion on why the new method is better in this regard?\n- In Corollary 2, there is an assumption that an integral must be 0. How restrictive is this assumption? It seems to me that later on when designing LCSS objective, formula (14) is directly used without any further discussion on this assumption. Can the authors explain why this assumption can be dropped?"
},
{
"confidence": 2,
"rating": 6,
"review_id": "gCX2eoETr7",
"review_text": "The paper proposes to use Stein's lemma to obtain a computationally efficient way in implementing a local-curvature regularized variant of the score matching objective. The main idea is to rewrite the Jacobian-trace term in a way that requires no Jacobian evaluations. In numerical experiments, the effectiveness of this approach is clearly demonstrated.\n\n- The paper is well-written and the main idea is clear and easy to understand. \n- Other works which the paper builds upon are referenced and fairly attributed. \n- Experiments on small-scale data clearly demonstrate the effectiveness of the approach. \n- Also on larger datasets, the method appears to give strong empirical results.\n\n- Approximating Jacobian trace through Stein's identity potentially leads to an estimator with large variance -- I found the claims that it solves Hutchinson's high variance problem to be a bit misleading.\n\nCan there be a formal argument that the proposed estimator has lower variance than random projections? Essentially, the gradient is estimated through random (zero-order) sampling, which is not exactly low-variance?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "1k0yc04OhR",
"review_text": "This manuscript proposes a new score matching method that bypasses the Jacobian trace by applying Stein’s identity, enabling effective regularization and efficient computation.\n\n1. The method is computationally efficient compared to other SSM variants.\n2. Experimental results demonstrate the effectiveness of the proposed method.\n\n1. The advantage of the proposed method compared to denoising score matching (DSM) is unclear. The manuscript mentions that it restricts the SDE to be affine, but it does not clarify the benefit of using a non-affine SDE. Furthermore, the influence of the SDE on the generative model needs to be elaborated.\n2. The experimental results do not show significant improvements over DSM. The proposed method achieves comparable sample quality, as shown in Table 3.\n\nPlease refer to weaknesses."
}
] | |
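The reviews above center on how Stein's identity lets LCSS sidestep Jacobian evaluations, and on whether the resulting zero-order estimator is really lower-variance than Hutchinson-style random projections. A toy sketch of the underlying identity, E[z · s(x+z)] = σ² E[tr J_s(x+z)] for z ~ N(0, σ²I), shown only to make the mechanism concrete; it is not the paper's exact objective:

```python
import numpy as np

def stein_trace(s, x, sigma=0.1, n=4096, rng=None):
    """Derivative-free estimate of the (locally smoothed) Jacobian trace of a
    vector field `s` at `x`: average z . s(x + z) over z ~ N(0, sigma^2 I)
    and divide by sigma^2. No Jacobian is ever formed."""
    rng = rng or np.random.default_rng(0)
    z = rng.normal(scale=sigma, size=(n, x.size))
    return np.mean(np.einsum("ij,ij->i", z, s(x + z))) / sigma**2

# Sanity check on the linear field s(x) = A x, whose Jacobian trace is tr(A) = 5.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
est = stein_trace(lambda X: X @ A.T, np.array([0.5, -0.2]))
print(est, np.trace(A))  # the estimate fluctuates around 5.0
```

The division by σ² is also where the variance question above bites: a smaller σ sharpens the local smoothing but amplifies the Monte Carlo noise.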
yOe6ajdslI | AUC Maximization under Positive Distribution Shift | Maximizing the area under the receiver operating characteristic curve (AUC) is a popular approach to imbalanced binary classification problems. Existing AUC maximization methods usually assume that training and test distributions are identical. However, this assumption is often violated in practice due to {\it a positive distribution shift}, where the negative-conditional density does not change but the positive-conditional density can vary. This shift often occurs in imbalanced classification since positive data are often more diverse and time-varying than negative data. To deal with this shift, we theoretically show that the AUC on the test distribution can be expressed by using the positive and marginal training densities and the marginal test density. Based on this result, we can maximize the AUC on the test distribution by using positive and unlabeled data in the training distribution and unlabeled data in the test distribution. The proposed method requires only positive labels in the training distribution as supervision. Moreover, the derived AUC has a simple form and thus is easy to implement. The effectiveness of the proposed method is shown with four real-world datasets. | https://openreview.net/pdf/6babeb80637c127be43e9a61c520ffe601db0123.pdf | [
{
"confidence": 5,
"rating": 6,
"review_id": "sb9O6MTkWa",
"review_text": "Due to a positive distribution shift, training and test distributions are not identical. However, existing AUC maximization methods don’t take it into account. To address this shift, this paper theoretically shows a new way to maximize the AUC on the test distribution by using positive and unlabeled data in the training distribution and unlabeled data in the test distribution. Finally, four real-world datasets validate the effectiveness of the proposed method.\n\n-\tThe proposed setting is novel and practical in AUC optimization. The distribution of negative data is generally stable but the distribution of positive data is more diverse or time-varying in medical diagnosis, intrusion detection, and visual inspection. \n-\tThe method presentation is easy to understand. This paper first introduces basic AUC fundamental knowledge. Then, it gives the problem setting of the proposed positive distribution shift. Based on this setting, the final expression is obtained through some intuitive and simple derivation. To be specific, the AUC maximization on the test distribution can be accomplished by using positive and unlabeled data in the training distribution and unlabeled data in the test distribution.\n\n- The effect of the proposed methods on MINST and Fashion MINST datasets is not significant, which is inconsistent with those on the other datasets. The authors don’t give any explanation.\n- The authors do not fully compare their method with the latest ones. For example, \n - Positive-Unlabeled Learning with Label Distribution Alignment. (TPAMI 2023)\n - Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective. (CVPR 2022)\n - Positive-unlabeled learning using random forests via recursive greedy risk minimization. (NeurIPS 2022)\n- All theoretical derivations are only based on the sigmoid surrogate loss. As far as I know, square loss is also popular. Can the theoretical results extend to the other losses?\n- There are some typos. For example,\n - In line 105, “However, these all methods assume that” should be “However, all these methods assume that”.\n\nplease refer to Weakness"
},
{
"confidence": 3,
"rating": 6,
"review_id": "kHrM20p0pj",
"review_text": "The paper proposes a method for AUC maximization in binary classification problems under positive distribution shift. They introduce their method, which is simple and easy to implement/understand, and then show it works well in some experiments.\n\n- The paper is well written and easy to understand;\n- The paper proposes a well-motivated method and show how it can be easily implemented in practice;\n- The experiments are convincing.\n\n- The authors do not discuss how the classification threshold can be chosen in a practical situation under positive distribution shift.\n\nHow should the practitioner choose classification threshold after training their classifiers using your method?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "GWG6fAkiwb",
"review_text": "This paper considers AUC maximization when the conditional probability distribution of the positive class changes in the test phase. To this end, the unbiased loss function is derived. The loss is approximated by positive and unlabeled data from training distribution, unlabeled data from test distribution, and class-prior of training distribution. In experiments, the proposed method outperformed the existing methods over the four benchmark datasets.\n\n- This is the first study on AUC maximization for positive distribution shift.\n- The proposed method outperformed the existing methods.\n- The proposed method does not require the class-prior of the test distribution.\n\n- Unlike the existing study [15, 42], the negative distribution is not considered.\n- It lacks theoretical analyses of the proposed method.\n- The extension of the proposed method is discussed but not evaluated in the experiments.\n\nIs the proposed loss function unbiased to its supervised counterpart? \n\nIt would be valuable if there were discussion or experimental results showing the effect of the number of unlabeled data from the test distribution. In some applications, collecting a lot of unlabeled data from the test distribution might be difficult. In such a situation, the experimental results would help practitioners understand how many samples are necessary to collect.\n\nAccording to the literature, the non-negative risk estimator plays a crucial role in training deep neural networks. However, the proposed method does not mention the non-negativity of the risk estimator. Did the authors encounter that the risk estimator went to a large negative value in experiments? If not, what points in the proposed method avoid the issue?\n\nRegarding the Extension in Section 4.4, it would be nice to cite the existing work."
},
{
"confidence": 4,
"rating": 4,
"review_id": "WBpncE1xiG",
"review_text": "This paper addresses the challenge of maximizing the Area Under the Receiver Operating Characteristic Curve (AUC) in imbalanced binary classification problems where there is a positive distribution shift--this shift is where negative data remains constant, but positive data varies. A new method is proposed that utilizes labeled positive and unlabeled data from the training distribution, along with unlabeled data from the test distribution, to maximize the AUC effectively in the presence of such shifts.\n\nThis paper introduces a new loss function designed for AUC maximization under positive distribution shifts. Previous research has focused separately on AUC maximization and positive distribution shifts, but this study found the intersection of these two areas. The authors have successfully identified and explored this new research niche. The proposed loss function, derived from mathematical foundations, can be readily integrated into neural network training, offering a practical application for enhancing model performance. This paper is well-structured and clearly written, making it easy to follow.\n\nDespite its strengths, this research primarily offers a simple proposal of a loss function, suggesting its contributions to the field might be limited. An expansion to include various metrics, such as F-1 and G-mean of TPR and TNR, which are also relevant for imbalanced data classification, could enrich this paper. Additionally, the experimental validation is somewhat restricted, utilizing only four datasets, all of which are image datasets. A more comprehensive evaluation using a broader range of datasets is necessary to fully assess the proposed loss function's effectiveness. Therefore, the reviewer believes that the contribution of this research may not be substantial enough for acceptance at a top-tier conference.\n\nQ1: Please specify the scenarios where both class imbalance and positive distribution shift occur. Providing detailed examples will help readers grasp the practical significance of this research problem.\n\nQ2: Why did the authors choose to conduct their experiments exclusively with image datasets? Are there any other real-world problems?\n\nQ3: AUC maximization can be implemented not just as a loss function for neural networks, but across various machine learning methods. Why did you choose to focus on proposing a loss function?\n\nQ4: The reviewer is not convinced that Lines 4-5 in Algorithm 1 sufficiently demonstrate the training process. There needs to be a more detailed and mathematical explanation of how model parameters are updated using the proposed loss function."
}
] | |
yO5DVyCHZR | A Simple and Optimal Approach for Universal Online Learning with Gradient Variations | We investigate the problem of universal online learning with gradient-variation regret. Universal online learning aims to achieve regret guarantees without prior knowledge of the curvature of the online functions. Moreover, we study the problem-dependent gradient-variation regret as it plays a crucial role in bridging stochastic and adversarial optimization as well as game theory. In this work, we design a universal approach with the *optimal* gradient-variation regret simultaneously for strongly convex, exp-concave, and convex functions, thus addressing an open problem highlighted by [Yan et al. [2023]](https://openreview.net/forum?id=AA1xrgAP5z). Our approach is *simple* since it is algorithmically efficient-to-implement with a two-layer online ensemble structure and only $1$ gradient query per round, and theoretically easy-to-analyze with a novel and alternative analysis to the gradient-variation regret. Concretely, previous works on gradient variations require controlling the algorithmic stability, which is challenging and leads to sub-optimal regret and less efficient algorithm design. Our analysis overcomes this issue by using a Bregman divergence negative term from linearization and a useful smoothness property. | https://openreview.net/pdf/a10687149c5522371473b053ba79e8a1fa64c75b.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "C1uUQ9dTKF",
"review_text": "This paper studies universal Online Convex Optimization (OCO) with gradient-variation-dependent regret bounds. That is, to design one single algorithm that is unaware of but is able to adapt to both two following groundtruth: 1) the type of curvatures: the loss functions could be convex, strongly convex, or exp-concave; 2) the curvature coefficient: exp-concavity $\\alpha$ or strong convexity $\\lambda$. As a result, the regret guarantee achieved by the algorithm scales with the cumulative gradient variation $V_T$ (which depends on the loss function sequence), rather than the time horizon $T$, as well as the corresponding curvature type of the underlying loss functions.\n\nThis paper proposes a new simple algorithm, that for the first time achieves optimal gradient-variation bounds for all three curvature types. Note that the gradient-variation bounds immediately imply small-loss (aka first-order) regret bounds as well as worst-case bounds. The #base learners is also improved from $(\\log T)^2$ to $\\log T$ due to the two-layer structure. The main result also finds broad applications including the SEA model and dynamic regret bounds.\n\nTechnique-side, the improvement comes from an alternative way to analyze the the empirical gradient variation w.r.t. surrogate losses and utilizing a negative Bregmen divergence term (due to linearization) to cancel other positive terms, which is often omitted in the analysis.\n\nI overall like such results. The authors present their observations and insights (from the regret analysis) in detail, leading to improved (and indeed optimal) regret bounds and even (conceptually) simpler algorithm design.\n\nI didn’t spot any significant technical issues, and I'm just suggesting some minor “weakness”.\n\n1. When the authors introduce the notion of $F_T$ and small-loss bound for the first time (around Eq. (1.2)), they may want to add that now the loss functions are non-negative (which I think should be necessary for all small-loss/first-order bounds?). Obviously, one can’t take squared root or logarithmic to a negative number.\n\n2. In the application to dynamic regret, the problem setup is not clearly defined. What is the type of loss function? Is the strong-convexity/log-concavity known? It is particularly confusing since it’s right after the universal OCO setup.\n\n1. The idea of utilizing negative Bregmen divergence terms also appeared in other problems, such as high-probability regrets in adversarial bandits [1]. Could the authors comment on the connection (if any) between the use therein and this work?\n\n2. Under the universal OCO setup, is it possible to handle time-varying curvatures, just like in [2]?\n\n3. Seems that a universal OCO algorithm cannot be anytime? The reason is that the number of base learner (for discretization) depends on $T$.\n\nReferences\n\n[1] Lee, Chung-Wei, Haipeng Luo, Chen-Yu Wei, and Mengxiao Zhang. \"Bias no more: high-probability data-dependent regret bounds for adversarial bandits and mdps.\" Advances in neural information processing systems 33 (2020): 15522-15533. https://arxiv.org/abs/2006.08040\n\n[2] Luo, Haipeng, Mengxiao Zhang, and Peng Zhao. \"Adaptive bandit convex optimization with heterogeneous curvature.\" In Conference on Learning Theory, pp. 1576-1612. PMLR, 2022. https://proceedings.mlr.press/v178/luo22a/luo22a.pdf"
},
{
"confidence": 3,
"rating": 5,
"review_id": "7iQvSgUhrj",
"review_text": "The paper studied the problem of regret minimization of a set of functions $\\{f_t\\}_{t=1}^{T}$ over a compact and convex constraint set $\\mathcal{X}$, i.e.,\n$\\sum{t=1}^{T}f_{t}(x_t) - \\text{min}{x\\in\\mathcal{X}}\\sum{t=1}^{T}f_{t}(x),$\nwhere $x_t$ is the output of the proposed algorithm at round $1\\leq t\\leq T$.\nThe set of functions ${f_t}_{t=1}^{T}$ potentially satisfy certain curvature assumptions, e.g., strong convexity, convexity, or exp-concavity. In the paper, it is unknown which curvature assumption the function satisfies. The main goals of the paper are the following:\n\n1. To construct a universal algorithm that adaptively acts on the curvature property of the function and achieves a proper regret bound.\n2. For the case where the function is $\\lambda$-strongly convex or $\\alpha$-exp-concave, the algorithm should be adapted with respect to the curvature parameter, $\\lambda$ to $\\alpha$.\n3. The algorithm should achieve a good problem-dependent regret bound: The goal of the paper is to attain a regret bound that depends on the following quantities:\n$V_T = \\sum_{t=1}^{T}\\text{sup}_{x\\in \\mathcal{X}} \\| \\nabla f_{t}(x) - \\nabla_f{t-1} (x) \\|^2, \\quad \\text{and}\\quad F_T = \\text{min}{x\\in\\mathcal{X}}\\sum{t=1}^{T}f_t(x).$\n\nThe proposed algorithm of the paper is a modification of the algorithm proposed by [1]. Similar to the approach introduced by [1], in Algorithm 1 (page 5) of the paper, the authors proposed $\\emph{base learners}$ that are aggregated by a meta-algorithm, which outputs the final output of the algorithm at round $t$, $x_t$. The contribution of the paper mainly concerns the technical aspects that outperform the performance of [1] from the following points of view:\n\n1.The paper improves the number of required base learners from $\\log(T)^2$ (in [1]) to $\\log(T)$.\n\n2.This improvement of the algorithm outperforms the algorithm proposed by [1] up to a logarithmic factor for the situation where the loss functions $f_t$ are convex.\n\n[1] Y.-H. Yan, P. Zhao, and Z.-H. Zhou. Universal online learning with gradual variations: A multi-layer online ensemble approach. In Advances in Neural Information Processing Systems 36 (NeurIPS), 2023.\n\nThe paper uses simple but interesting technique that contributes in tigher bounds for the case of convex losses. Inspired by [2] the authors used that exploit the imposed smoothness assumption of the loss function and Bregman divergence negative term from linearization of loss function, explained in Section 3.2.\n\n\n\n[2] P. Joulani, A. Raj, A. Gyorgy, and C. Szepesvari. A simpler approach to accelerated optimization: iterative averaging meets optimism. In Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.\n\nThe main weakness of the paper lies in its presentation. The content is too dense, and the last section on dynamic regret could be moved to the appendix. Some key parts of the paper are not well explained. For instance, it is unclear how the authors managed to outperform the number of required base learners in [1] by a logarithmic factor. Was this achieved through the application of a Bregman divergence negative term?\n\nThe contribution of the paper is limited to a simple technical improvement that enhances the achieved regret up to a logarithmic factor for convex functions.\n\nThe optimality of the result with respect to $V_T$ and $F_T$ has not been discussed by the authors.\n\n\n[1] Y.-H. Yan, P. Zhao, and Z.-H. Zhou. 
Universal online learning with gradual variations: A multi-layer online ensemble approach. In Advances in Neural Information Processing Systems 36 (NeurIPS), 2023.\n\n1. I do not understand the comment on the small $\\alpha$ and $\\lambda$ in lines 146-147. Can these cases be considered convex? For the convex case, the rate is of the order of $\\sqrt{T}$. How do the optimal minimax results hold for this regime, which indicates that the regret is linear?\n\n2. I would appreciate it if the authors could explain the question I raised in the weakness section and outline the main difference that helps them improve the number of base learners.\n\n3. I do not understand the comment in line 305: \"which can be easily canceled by the negative term from curvatures in the meta regret.\" For this cancellation to occur, the coefficient in the equation after line 304 really matters. Could the authors explain this further?\n\n4. Could the authors explain if the final result is optimal with respect to $V_T$ and $F_T$?\n\nMinor Points:\n\nThe terms $\\sigma_{\\text{\\max}}$ and $\\Sigma_{\\text{\\max}}$ are not defined in Theorem 3."
},
{
"confidence": 3,
"rating": 5,
"review_id": "FcUUJhYeVP",
"review_text": "This paper investigates the problem of universal online convex optimization to achieve problem dependent regret guarantees for different classes of convex functions (strongly convex, exp-concave, and convex) simultaneously. Problem/function/data dependent regret guarantees have become popular in literature to bridge stochastic and adversarial guarantees.\n\nS1) The paper is well written and easy to understand.\n\nS2) The literature review is comprehensive and up to date.\n\nS3) Simplicity of the incorporation of Bregman divergence is a plus.\n\nW1) The contribution seems limited in that the improvement is only logarithmic for both efficiency and regret results.\n\nW2) While the regret analysis is novel, algorithmic contribution is very limited, which leads me to believe this paper is more suitable to be a technical note.\n\nQ1) Why is $\\log^2 T$ computational complexity claimed to be inefficient throughtout the paper? In Table 1, the number of gradient queries and base learners are given as part of efficiency, however, a decrease on the number of queries seems much more significant to me.\n\nQ2) Improvement over the results of Yan et al. [2023] seems incremental. Are there scenarios where this improvement becomes significant?\n\nQ3) Is your approach the same as Zhang et al. [2022] but using Optimistic ADAPT-ML-PROD [Wei et al., 2016] instead of ADAPT-ML-PROD [Gaillard et al., 2014]?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "j01V6h0eLO",
"review_text": "The authors study the regret minimization problem in online convex optimization without access to curvature information. They tackle the task of achieving problem-dependent optimal regret while requiring no prior knowledge of the function class (convex, exp-concave, or strongly convex). They propose an efficient-to-implement two-layer online ensemble structure that requires only one gradient query within each round. Their main technical novelty lies in providing a novel approach for gradient-variation bounds.\n\nThe authors tackle a very interesting problem in online convex optimization. The paper is well-written and the presentation makes it easy to follow. The main novelty lies in Sections 3.2 and 3.3, where they provide a new way of tackling gradient variations by utilizing the Bregman divergence term. They also make clever use of Proposition 1 in their analysis. The overall method utilizes techniques from several existing works and cleverly combines them to achieve an impressive bound on the regret.\n\nThe proposed approach seems reasonable to me. While I have not gone through the technical details very carefully, I seek one clarification on the proof of Theorem 1. In my opinion, the bottleneck of the proof is in showing the existence of an appropriate choice of $C_3$ and $C_4$ (page 17, line 594, 596). Can the authors comment if such a setting always exists? I would at least expect certain conditions like $\\alpha_i^* > G^2/9L $ or $\\lambda_i^* > 1/9L$ for results to hold.\n\nAnother small thing: I understand the authors ignore very small terms like $\\log \\log T$ from the order notation. It might be good to put a note in the introduction about it while presenting the result. I understand that it is there in Section 3.1 -- it might be good to move it earlier.\n\nSee above."
}
] | |
yMS7ansbr6 | Lips Are Lying: Spotting the Temporal Inconsistency between Audio and Visual in Lip-Syncing DeepFakes | In recent years, DeepFake technology has achieved unprecedented success in high-quality video synthesis, but these methods also pose potential and severe security threats to humanity. DeepFake can be bifurcated into entertainment applications like face swapping and illicit uses such as lip-syncing fraud. However, lip-forgery videos, which neither change identity nor have discernible visual artifacts, present a formidable challenge to existing DeepFake detection methods. Our preliminary experiments have shown that the effectiveness of the existing methods often drastically decreases, or the methods even fail, when tackling lip-syncing videos.\nIn this paper, for the first time, we propose a novel approach dedicated to lip-forgery identification that exploits the inconsistency between lip movements and audio signals. We also mimic human natural cognition by capturing subtle biological links between the lip and head regions to boost accuracy. To better illustrate the effectiveness and advances of our proposed method, we create a high-quality LipSync dataset, AVLips, by employing state-of-the-art lip generators. We hope this high-quality and diverse dataset can well serve further research in this challenging and interesting field. Experimental results show that our approach gives an average accuracy of more than 95.3% in spotting lip-syncing videos, significantly outperforming the baselines. Extensive experiments demonstrate the capability to tackle deepfakes and the robustness in surviving diverse input transformations. Our method achieves an accuracy of up to 90.2% in real-world scenarios (e.g., WeChat video calls) and shows its powerful capabilities in real-world deployment.\nTo facilitate the progress of this research community, we release all resources at https://github.com/AaronComo/LipFD. | https://openreview.net/pdf/e4d265e599081369073e1436266147e9fe673842.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "4yIi2bI104",
"review_text": "This paper tackle deepfake detector problem with audio-visual data focusing on lipsync fake which generally a higher quality fake data. For that, this paper propose a dataset and a method. The dataset (AVLips) is formed using available datasets and 3 methods for the lipsync methods. The method (LipFD) extracts global and local features. For global features, transformer is utilized to get the context from all regions. For local regions, the video is cropped to different face areas: face + background, face, lips areas; and extract feature from each area. Weighting is used to select which cropped areas are more important.\n\n1. Contributions (dataset and method) in addressing lipsync based deepfake look sufficient.\n\n2. Fine-grained features are considered. \n\n3. Analysis in real scenarios is interesting.\n\n1. Ablation removing one or two out of the 3 branches for local feature extraction is missing. Figure 8 is just showing the important weight extracted from the overall framework doesn't show exactly how much the performance drop if the branches are not included.\n\n2. Details are not clear (see Questions).\n\n1. Line 267: is there statistics how often the latency below 100ms happen?\n\n2. Does this work utilize audio data or not? Figure 4 bottom-left and Line 83-84 indicate audio data is used but I can't find anything related to audio input in the equations.\n\n3. Eq. (5): Where is j index in the equation? And what is none-cropped region?\n\n4. Line 185: \\textit{notice} from where? \n\n5. Figure 8: why lip is not as important in real data? I assume synchronized lip is also sign of real."
},
{
"confidence": 3,
"rating": 6,
"review_id": "PX9NQIE1dn",
"review_text": "This paper focuses on a new setting in Deepfake detection called lip-syncing fraud, which only contains fewer minor cues on the leap region. To tackle this issue, the authors provide a novel method called LipFD to obtain the features from both a global view and a regional view. Also, with the new AVLips dataset, this method shows a SOTA result compared to the recent methods.\n\n1. This work provides a new setting on Deepfake called AVLips with a large number of high-quality samples.\n2. The method mainly focuses on generating the features from the lips region which is novel.\n\nAlthough the proposed method shows a good result, there are some confused expresses which may bring a hard understanding to readers:\n1. For equation 3, what is $RA(\\cdot)$ mean? What is $[F_G|\\{F_R\\}^i_j]$ means? There lack an explanation of these operations.\n2. It will be better to have an ablation study on the selection of a vision transformer. Including the pretrain, the structure, etc.\n3. It could be better to have more details about the dataset, including the number of samples, the visualization of samples with different methods, etc.\n\nSee weakness."
},
{
"confidence": 4,
"rating": 6,
"review_id": "CnxwfWw7qB",
"review_text": "The proposed work introduces a pioneering method for detecting lip-syncing forgery, an often overlooked threat in current research. By leveraging discrepancies between lip movements and audio signals, a dual-headed detection architecture significantly enhances detection accuracy. This work also contributes to the first large-scale audio-visual LipSync dataset, comprising nearly one hundred thousand samples, and conducts extensive experiments that demonstrate our method's efficacy. Results show up to 94% average accuracy in LipSync detection, with robust performance in real-world scenarios.\n\n1. this work proposes a new research problem -- lip forgery detection, which is meaningful and useful. A dataset for this research problem is also proposed.\n\n2. The anonymous github makes this work very convincing.\n\n3. The real-life applications shown in Fig. 6 is very impressive.\n\nthe proposed algoritm, LipFD does not have a strong techincal novelty in learning region and global features from the multi-modal input.\n\nN/A"
},
{
"confidence": 4,
"rating": 5,
"review_id": "qz2PyAMKkZ",
"review_text": "The paper introduces a novel method, LipFD, dedicated to detecting lip-syncing forgeries by exploiting temporal inconsistencies between lip movements and audio signals. This unique approach addresses a significant gap in existing DeepFake detection methods. Experimental results demonstrate that LipFD achieves high accuracy across multiple datasets, showcasing its effectiveness and robustness.\n\n- This paper addresses a novel problem by focusing on specific DeepFake types that are challenging to detect with current DeepFake detection algorithms but perform quite well in state-of-the-art models.\n- The paper is well-written and easy to follow. Experimental results indicate the effectiveness of the proposed method.\n- The proposed dataset provides a solid foundation for further research in this field.\n\n- The diversity of fake videos in the training set is limited, as it only includes three methods: MakeitTalk, Wav2Lip, and TalkLip. This limitation can lead to overfitting, as the classifier may easily learn the distinct patterns of these methods. For example, Wav2Lip produces blurry lip images and shows obvious artifacts when fusing lip and facial images. To demonstrate generalizability, testing on additional state-of-the-art generation methods is encouraged.\n- While the method performs well on the proposed LipSync dataset, there is some variability in performance across different datasets like FF++ and DFDC. This indicates potential limitations in generalizability across diverse datasets, possibly due to the limited variety of fake videos in the training set. A robust model should be capable of detecting both LipSync and general DeepFake videos effectively.\n\n- There is a question about the spectrogram in Figure 2. How is the spectrogram obtained? From Figure 4, it seems to be the audio spectrogram. However, the audio of the fake video is real, so why are there unexpected characters like \"the darkest part of the spectrum\"?\n- What is the meaning of \"static\" and \"dynamic\" in Line 122?\n- There is a typo: LRS2 in Table 1 should be AVLips.\n- Why use MakeitTalk? MakeitTalk generates the whole face instead of only the lip region, which does not align with the definition of LipSync as outlined in this paper."
}
] | |
yKvHJJE9le | Safe Time-Varying Optimization based on Gaussian Processes with Spatio-Temporal Kernel | Ensuring safety is a key aspect in sequential decision making problems, such as robotics or process control. The complexity of the underlying systems often makes finding the optimal decision challenging, especially when the safety-critical system is time-varying. Overcoming the problem of optimizing an unknown time-varying reward subject to unknown time-varying safety constraints, we propose TVSAFEOPT, a new algorithm built on Bayesian optimization with a spatio-temporal kernel. The algorithm is capable of safely tracking a time-varying safe region without the need for explicit change detection. Optimality guarantees are also provided for the algorithm when the optimization problem becomes stationary. We show that TVSAFEOPT compares favorably against SAFEOPT on synthetic data, both regarding safety and optimality. Evaluation on a realistic case study with gas compressors confirms that TVSAFEOPT ensures safety when solving time-varying optimization problems with unknown reward and safety functions. | https://openreview.net/pdf/8ccc16f75fc2b8ad413d7dc3ee80d673efeeab6c.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "ae4td0N9sA",
"review_text": "The authors propose a time-varying extension of SAFEOPT to overcome the problems of time-varying rewards under time-varying safety constraints.\n\nUnder stationarity conditions, optimality guarantees are provided and the numerical simluation shows a (favorable) comparison to the SAFEOPT.\n\n1. The paper is very well written and easy to follow.\n\n2. Based on related work, the problems of time-varying rewards under time-varying safety constraints are an open problem in literature, and his paper addresses that.\n\n3. The paper provides formal safety guarantees for their TVSAFEOPT algorithm.\n\n1. *Some delineation to related work seems rather vague and requires stronger justification.* An example for TVSBO: the time-variable and temporal aspect of the kernel can just as well be interpreted as context using existing results. Perhaps a table would help here to highlight key aspects. \n\n2. *Lack of real-world data experiments and comparison to related work.* To support the downsides of existing approaches, an empirical comparison to existing TVSBO approaches mentioned in the related work section would be needed.\n\n3. The *empirical results could be more convincing* by adding a variety of initial safe sets and revised plots. The current plots/results are hard to parse. \n\n4. It would be beneficial if the *theoretical/technical challenge of extending safety to the time-varying case were more detailed*. This would streamline the presentation and help in assessing the impact of the contribution.\n\n1. *In the Appendix, a spatio-temporal SE kernel is introduced. How is this construction different from using an SE-ARD kernel with a composite variable $z = [x^T,t]^T$?* If I am not mistaken, for the SE-ARD kernel there would be no different than defining a single kernel with $z$. \n\n2. It is mentioned that the Lipschitz constants are to be known beforehand. However, while commonly assumed, *how do you get a hold of an RKHS norm bound $B$ (related to Assumption 2.1) to compute the UCB?* \n\n3. *Could you provide Figure 1 sooner in the manuscript?* It would be super helpful to see this central illustration already on page 2."
},
{
"confidence": 3,
"rating": 5,
"review_id": "84Okown8ES",
"review_text": "This paper presents a safe Bayesian optimization algorithm TVSAFEOPT with a spatial-temporal kernel and time Lipschitz constants, which improves on SAFEOPT with time-varying reward and safety constraints. The optimality guarantee is proved for the stationary case and the safety guarantee for more general settings. The method is tested on a synthetic problem and gas compressors.\n\n1. The use of a spatio-temporal kernel in Bayesian optimization for time-varying safety constraints is novel.\n2. A formal proof of safety and optimality guarantee under certain assumptions.\n\n1. More discussion on how to make a tradeoff between optimality and safety is encouraged. \n2. Will this conservatism in safety become too large in high-dimensional problems?\n2. The method to choose the proper initial safe set and kernel parameters is unclear.\n\n1. How to find an initial safe set for complex problems?\n2. How to find the kernel parameter for each task?\n2. What is the computational complexity compared to other BO baselines?"
},
{
"confidence": 2,
"rating": 5,
"review_id": "A4IEyk6Dur",
"review_text": "The paper introduces the TVSAFEOPT algorithm, which is based on Gaussian processes with spatio-temporal kernels, designed specifically for optimizing time-varying rewards under time-varying safety constraints. The algorithm provides formal safety guarantees in a general time-varying setting, ensuring safety even when exploring non-stationary safe regions. It robustly subtracts safety margins to prevent unsafe decisions, adapting in real-time to changing environments. Furthermore, they provide optimality guarantees for locally stationary optimization problems, ensuring near-optimal solutions when the optimization problem becomes stationary.\n\nThey provide formal safety guarantees in dynamic environments, ensuring safe decision-making even in non-stationary settings. \n\nAdditionally, the algorithm offers optimality guarantees for stationary optimization problems, enhancing its reliability and performance\n\nExtensive numerical simulations were provided to validate the proposed approach.\n\nThey extend the Safeopt algorithm from literature. However, it is clear on what are the additional contributions and difference between these two different approaches.\n\n-"
}
] | |
yDo1ynArjj | Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion | This paper presents Diffusion Forcing, a new training paradigm where a diffusion model is trained to denoise a set of tokens with independent per-token noise levels. We apply Diffusion Forcing to sequence generative modeling by training a causal next-token prediction model to generate one or several future tokens without fully diffusing past ones. Our approach is shown to combine the strengths of next-token prediction models, such as variable-length generation, with the strengths of full-sequence diffusion models, such as the ability to guide sampling to desirable trajectories. Our method offers a range of additional capabilities, such as (1) rolling-out sequences of continuous tokens, such as video, with lengths past the training horizon, where baselines diverge and (2) new sampling and guiding schemes that uniquely profit from Diffusion Forcing's variable-horizon and causal architecture, and which lead to marked performance gains in decision-making and planning tasks. In addition to its empirical success, our method is proven to optimize a variational lower bound on the likelihoods of all subsequences of tokens drawn from the true joint distribution. Project website: https://boyuan.space/diffusion-forcing/ | https://openreview.net/pdf/5a6e9d157a4d33dc36773c5c32370c3c7941d6c2.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "USXEJoBrUA",
"review_text": "This work presents Diffusion Forcing, a new framework for probabilistic sequence modeling that combines diffusion models with Bayesian filtering. This framework builds on state of the art approaches to sequence modeling using diffusion models, but has several novel contributions.\nFirst, it allows the model to use *independent* noise levels per element in the sequence, which is a key factor for the stability of autoregressive generation and conditional, guided generation.\nSecond, this work casts the proposed method for sequential decision making, by defining a new guidance technique that allows the generation of the next toke by guidance on the full distribution of future tokens.\nThird, the authors go at great length in demonstrating empirically that their proposed framework is general and can be applied beyond text generation, as opposed to related work.\nDiffusion forcing relies on simple ideas: noising is understood as (partial) masking, and it is cast for sequential data, giving rise to a causal variant of diffusion forcing. In practical terms, we have a dynamical system modeled with a simple RNN, in which hidden states follow the Markovian principle: the next hidden state depends on the previous hidden state and a current observation. Previous, next and current are to be intended as indexes in the sequence. Observations are obtained by running a diffusion model with independent noise levels per sequence index, and noisy observations can be used to transition to the next hidden state. A connection with Bayesian filtering is made clear in the paper. Then, we end up with an observation model (for state transitions) and a denoising model (for the diffusion of the observations).\nThe authors provide a sound development of the training procedure and objective, by showing that their training algorithm optimizes a weighted ELBO on the expected log-likelihood.\n\n* This work presents substantial improvement over the literature on the joint application of diffusion models and autoregressive models\n* The proposed methodology is technically sound, and well supported by intuition, formal proofs and a wide range of experiments\n* The experimental section expands over the literature by focusing on several domains including long-range video generation, planning, compositional generation and multivariate time series forecasting\n\n* The intuition of the effects of noising on long-horizon generation (appendix B.2) is very similar to the ideas described in a related work AR-Diffusion [62]. This does not highlight the contribution of *independent* noise levels per sequence index\n* Experiments do not compare (at least to the best of my understanding) Causal Diffusion Forcing to AR-Diffusion, which would be the natural competitor. Nevertheless, I understand that this would require considerable amount of adaptation work, since AR-Diffusion tackles language modeling mainly\n* I liked Appendix B.6, but it is not referenced in the main, and I think this would be more helpful than the figures in sec 3.1\n\nQ.1: could you please provide a clear summary of why *independent* noise levels are key for your method, and substantiate the difference with respect to AR-Diffusion [62]? 
I have read Appendix B.2 and Appendix C, where you attempt at clarifying, but I think the benefits for stability, and conditioning on corrupted observations is not spelled out sufficiently\n\nQ.2: is there a way to compare your work to AR-Diffusion that would not require a substantial re-factoring of their code, such that it can be applied to one (e.g. video) use case in your experiments? Another way to go would be to modify your CDF method and use linearly dependent noise levels, to ablate on the requirement for independent noise levels\n\nMinor (no need to answer):\n* typos: I could spot one typo in line 186: $[x_1^0, x_2^0, x_3^{K/2}]$\n* please check the proofs in the appendix as there are some typos that slipped there, as well as the text in the appendix that has several grammar problems, missing verbs and the like\n\n\n====== POST REBUTTAL MESSAGE ======\n\nThank you for the rebuttal. I have raised my score."
},
{
"confidence": 4,
"rating": 8,
"review_id": "khcIDlwvi4",
"review_text": "The authors introduce Diffusion Forcing (DF), a method for diffusion of sequential data where the noise level at each token can be different (“independent”). The authors show that DF provides more flexible steerability properties and more stable rollouts compared to full-sequence diffusion and teacher forcing. Experimentally, these enable stable video prediction along several timesteps, improved performance on planning tasks, and more robust robotic visuomotor control, relative to the respective relevant baselines.\n\nThe following is a more detailed summary.\n\n### Overview of the method\n\nDiffusion Forcing (DF) denoises a sequence of noisy tokens $x^k\\_1, \\cdots, x^k\\_T$ at noise level $k$, starting at $k=K$ (maximum noise) and finishing at $k = 0$. A sequence of hidden states $z\\_1, \\cdots, z\\_T$ is also maintained throughout. Importantly, different tokens can be denoised by different amounts at each denoising step.\n\nThe architecture has two main components: \nan encoder $p\\_\\theta(z\\_t | z\\_{t-1}, x^{k\\_t}\\_t, k\\_t)$ mapping the previous hidden state $z\\_{t-1}$, the current noisy token $x^{k\\_t}\\_t$ and the noise level $k$ to the new value of the current hidden state $z\\_t$;\na denoiser $\\epsilon\\_\\theta(z\\_t, x^{k\\_t}\\_t, k\\_t)$, which is used to denoise $x^{k\\_t}\\_t$.\n\nAt training time, the noise levels $(k\\_t)\\_{1 \\leq t \\leq T}$ are sampled independently, and the encoder and denoiser are trained jointly using the usual diffusion loss on the output of $\\epsilon\\_\\theta$.\n\nAt inference time, the tokens are initialized with independent Gaussians. They are then denoised by first computing hidden states from left to right (via an RNN, in this case) using $p\\_\\theta$, and then by updating the values of the tokens using their current values and the hidden states.\n\nThe authors provide an ELBO interpretation for their loss function in the appendix.\n\n### Features of Diffusion Forcing \n\n- The authors highlight the following features of DF:\n- It supports classifier guidance, like ordinary diffusion;\n- It allows for keeping the noise level higher for future tokens. This makes intuitive sense in an auto-regressive setting, where future tokens depend on past tokens.\n- It supports a flexible planning horizon, as tokens are denoised sequentially.\n- It supports a more flexible form of classifier guidance (or reward guidance): past tokens can be guided by rewards that depend on future tokens, due to DF’s autoregressive architecture.\n\nWhen doing reward guidance, the authors propose drawing many samples of possible future trajectories, and averaging their rewards, rather than using a single sample as in ordinary classifier guidance. They term this approach Monte Carlo Tree Guidance (MCTG).\n\n### Overview of experimental findings\n\n- The authors evaluate Diffusion Forcing on video prediction, planning and robotics tasks. Their findings can be summarized as:\n- In video prediction (datasets: Minecraft gameplay and DMLab), DF provides more stable rollouts than full-sequence diffusion and teacher forcing. In particular, DF’s rollouts do not diverge as the number of tokens increases.\n- In planning (environment: Maze2d from D4RL), DF produces more more consistent trajectories, and executing the generated actions indeed produces a trajectory similar to that given by the generated states. 
- In addition, DF with MCTG significantly outperforms Diffuser on Maze2d environments.\n- In robotics, DF is robust to missing or noisy observations and can perform imitation learning with memory (as it maintains a hidden state, rather than directly mapping observations to actions).\nIn the appendix, the authors provide additional experiments on compositionality and time series prediction.\n\n1. The authors propose an original and performant method combining strengths of diffusion (steerability, robustness to noise, high-quality gradual sample generation) and auto-regressive sequence modelling (flexible horizons, temporal causality, memory in the case of RNNs).\n\n1. In addition, the authors provide a theoretical justification of their loss function in terms of an evidence lower bound (ELBO).\n\n1. The paper is written clearly, providing a clear motivation for the authors’ approach, contextualizing DF relative to existing work (especially Diffuser, AR-Diffusion and Diffusion Policy), and highlighting the main contributions of the method conceptually and experimentally.\n\n1. Trajectory inconsistency is a major limitation of Diffuser, which I have contended with in my own research. Mitigating this limitation is an important enabler of bringing the strengths of diffusion to bear in sequential decision making.\n\n1. Monte Carlo Tree Guidance can be seen as maximizing an empirical estimate of the expected future reward. From a policy optimization perspective, this seems more principled than doing gradient ascent on the realized cumulative reward of a given trajectory, as is done in full-sequence diffusion (e.g. Diffuser). As the authors explain in Appendix B.3, this technique relies on the architecture of DF to be effective.\n\n1. The results on video prediction, available in an anonymized project website provided in the abstract, are particularly impressive in terms of stability and 3D consistency. This, together with results on planning and robotics, indicates DF might contribute to advances in diffusion world models; a research area of established relevance that has received significant attention recently.\n\n1. Clarification on classifier guidance term $\\nabla\\_x \\log c (x^{\\textrm{new}}\\_{1:H})$: If this term is to be understood as the gradient of $x \\mapsto \\log c(x)$ evaluated at $x^{\\textrm{new}}\\_{1:H}$, then the gradients of $c$ on future tokens would not flow to previous tokens, as the inputs $x$ are “frozen” before being fed into $\\log c$. It seems that what the authors mean to say is that future tokens are treated as a differentiable function of past tokens when computing the gradients. It would strengthen the exposition if the authors either clarify this point in the paper, or update the notation to avoid confusion, as the current notation might lead the reader to believe that the gradients from future tokens do not flow into past ones.\n\n1. The naming of Monte Carlo Tree Guidance seems to misleadingly suggest a similarity with Monte Carlo Tree Search (MCTS). However, the method consists of sampling several future trajectories independently and averaging their guidance gradients, which seems quite divorced from MCTS, which involves actual search on a tree of states and actions and backpropagation of rewards through this tree. As such, I believe naming the technique Monte Carlo Guidance would be more appropriate.\n\n1. High-dimensional control evaluation: Janner et al. (2022) evaluate Diffuser on high-dimensional control locomotion tasks from D4RL. 
It would be interesting to see an evaluation of Diffusion Forcing in this setting, in particular regarding the consistency between states and actions. I recall from my own experience that executing longer plans from Diffuser in these locomotion environments in an open-loop fashion (i.e. no re-planning) led to trajectories diverging from the generated states, as noted by the authors. It would be interesting to see whether this is addressed by Diffusion Forcing on these higher-dimensional environments.\n\n1. The compositional generation environment referenced in Section 4.3 is very similar (if not identical) to the one used by Janner et al. (2022) in Figure 1b of their paper. I believe it is likely worth mentioning this in Section 4.3.\n\n1. Minor formatting problems\n \n 1. Line 186: $x^{K/2\\_3}$ -> $x^{K/2}\\_3$\n \n 1. Table 1 caption: “Diffusion Forcingkeeps” -> “Diffusion Forcing keeps”; “Diffusion Forcingachieves” -> “Diffusion Forcing achieves”\n \n 1. Line 495: “in full abstraction” -> “in full generality”\n Line 503: “likelihood for likelihood of all” -> “likelihood for all”\n \n 1. Equation A.3: superscript $k\\_2$ on the LHS should be $k\\_s$\n Line 516: revise the bracketing of the expression involving $p\\_\\theta$.\n \n 1. Line 522: “under uniform levels” -> “under uniformly sampled levels”\n \n 1. Line 524: “in the sequel” -> “in the following section”\n Equation A.5: $s \\leq T$ -> $1 \\leq s \\leq T$\n \n 1. Line 592: specify range for $s$ on the first expectation\n \n 1. Line 598: correct superscripts $t\\_k$ to $k\\_t$\n \n 1. Line 608: revise bracketing of the numerator inside the $\\ln$\n \n 1. Line 616: In the last and penultimate lines, replace $\\frac{\\ln p(...)}{q(...)}$ by $\\ln \\frac{p(...)}{q(...)}$\n \n 1. Line 628: “we” -> “we have”\n \n 1. Line 631: correct superscript of $x\\_t$ on the second line\n \n 1. Line 634: expression with $p\\_\\theta$ broken between lines\n \n 1. Line 635: capitalize Dirac\n \n 1. Equation B.1: include \\left[ \\right] in the brackets\n \n 1. Line 664: “we are” -> “we use”\n\n1. Does the flexible planning horizon (line 209) of Diffusion Forcing not derive from the choice of an RNN as the architecture, rather than e.g. a UNet? Would implementing existing methods such as Diffuser (Janner et al. 2022) not allow for a similar property?\n\n1. In the paragraph “Benefit of Modeling Causality”, the authors highlight that states and actions produced by DF are causally related, which does not hold in practice for Diffuser. Do the authors claim this is due to DF explicitly incorporating temporal structure into its architecture? Could it not also be due to the use of an observation model $p\\_\\theta(x^0\\_t|z\\_t)$ to predict the noise-free token $x^0\\_t$ from a hidden state $z\\_t$?\n\n1. At first sight and in its current form, the method seems tailored to the use of an RNN architecture, rather than a Transformer. For example, the denoiser is applied token-wise, with the information from previous tokens affecting the current token only via the hidden states $z\\_t$. How would the method have to be adapted, if at all, to work with transformers, in case one wants to scale up Diffusion Forcing?\n\n1. Janner et al. (2022) showcase in Section 5.4 show how to apply Diffuser with a variable planning budget, and study how the resulting performance varies with the planning budget. Can Diffusion Forcing also be run with a variable planning budget, through warm-starting (as for Diffuser) or otherwise? 
If so, it would strengthen the paper if the authors described how, and included a similar budget vs. performance analysis, especially in planning and robotics tasks."
},
{
"confidence": 4,
"rating": 7,
"review_id": "qF6W9kNF0j",
"review_text": "This paper proposes to augment autoregressive models with diffusion. Specifically, rather than generating every token in one shot (one neural network evaluation), the paper proposes to gradually denoise the tokens following an autoregressive order. That is, every token is given a different noise level (lower for former tokens and higher for latter ones), and the tokens are jointly denoised to generate better samples. Compared to pure autoregressive prediction, diffusion forcing allows the model to refine the samples through the diffusion process. Compared to diffusion models, the proposed model is capable of variable-length generation and extrapolation.\n\nThe authors also demonstrate additional potential generation tasks that can be done by diffusion-forcing models such as guided autoregressive sampling. \n\nEmpirical results demonstrate that diffusion forcing performs well on video prediction and various planning tasks.\n\nThis paper proposes an interesting combination of autoregressive models and diffusion models and demonstrates that the combination of both outperforms both individual models in terms of performance. Further, the diffusion-forcing paradigm offers many more applications that are otherwise impossible. For example, while doing variable-length generation, the model can leverage classifier-based/-free conditions. This provides much better flexibility to inference-demanding tasks such as planning and control.\n\nThe authors propose a training objective of diffusion forcing models based on noise prediction. The objective is proved to be a reweighted version of the evidence lower bound and thus is sound.\n\nDiffusion forcing achieves much better performance compared to autoregressive models and diffusion models in long-horizon generation tasks.\n\nA more detailed discussion of the noise schedule is desired to better understand the effectiveness of diffusion forcing. Is it necessary to use different noise schedules in different tasks to achieve good performance? Further, can we train the model with various/arbitrary noise schedules and at evaluation time find a good schedule? If any of these is possible it will greatly reduce the training complexity and extend diffusion forcing to more applications.\n\nTheorem 3.1 states that the proposed objective is equivalent to a reweighting of the evidence lower bound. However, it is unclear how the noise schedule biases the reweighting since a very badly balanced ELBO can render the training process unstable.\n\nHow diffusion forcing balances efficiency and performance. In the extreme case where only one denoising step per token is allowed, diffusion forcing reduces to autoregressive generation. How much performance gain can we expect if we allow for more computation time?\n\nHow easy or difficult can diffusion forcing be applied to non-autoregressive generation? Although diffusion forcing improves the performance in the autoregressive generation regime, some tasks (e.g., constrained text generation) require awareness of future tokens to generate the current ones. I wonder if diffusion forcing can be extended to this regime."
},
{
"confidence": 4,
"rating": 6,
"review_id": "87XcPcyG1o",
"review_text": "This paper introduces Diffusion Forcing, a novel training paradigm for sequential generative modeling using diffusion models. Diffusion Forcing learns from sequential tokens with varying independent noise levels, enabling more flexible sampling strategies and general capabilities such as guidance. The experimental results demonstrate that Diffusion Forcing outperforms existing methods, including full sequence diffusion and teacher forcing, across various tasks.\n\n1. The proposed Diffusion Forcing method is general and flexible, making it applicable to various tasks.\n2. The paper provides a comprehensive discussion on the capabilities of Diffusion Forcing.\n3. The experiments are well-designed and effectively demonstrate the proposed method's effectiveness.\n\n1. **Writing clarity and organization.** \n The writing style impacts readability, making the paper challenging to follow. It would benefit from a clearer organization. The paper primarily covers three points: (a) the proposed Diffusion Forcing (DF) method with independent noise levels and its theoretical analysis, (b) the capabilities of DF, including flexible sampling strategies, and (c) experimental results on various tasks. However, the current structure does not clearly present these points, particularly the DF method. Separating the design of DF and the intuitive explanation from the Bayesian filtering perspective, and listing the resulting capabilities in a separate section, would enhance clarity.\n\n2. **Clarity of figures.** \n The figures are not well-explained and are difficult to understand without referring to the text. For instance, Figure 1 omits latent states in the sampling process for both Diffusion Forcing and Teacher Forcing, which is confusing.\n\n3. **Minor issues and typos.** \n - Line 97: missing a \")\"\n - Line 139: \"nevel\" should be \"level\"\n - Line 186: \"$x^{K/2_3}$\" should be \"$x^{K/2}_3$\"\n - Line 178, 184, etc.: paragraph titles are inconsistently formatted\n - Line 522: missing a \"(\"\n\n1. **Consistency between training and sampling algorithms.** \n In Algorithms 1 and 2, there appear to be inconsistencies between the training and sampling algorithms. Can the authors provide an intuitive explanation for these inconsistencies? Specifically:\n - During training, the predicted noise $\\hat{\\epsilon}\\_t$ is calculated using the latent from the previous step $z\\_{t-1}$, whereas during sampling, $\\hat{\\epsilon}_t$ is calculated using the latent from the current step $z_t^{\\text{new}}$.\n - Similarly, during training, $\\hat{\\epsilon}\\_t = \\epsilon\\_\\theta(z, x\\_t\\^{k\\_t}, k\\_t)$ uses the same noise level $k\\_t$ of the noisy observation $x\\_t\\^{k\\_t}$, but during sampling, $x\\_t$ has a noise level $\\mathcal{K}\\_{m+1,t}$ instead of $k = \\mathcal{K}_{m,t}$.\n\n2. **Stabilizing auto-regressive generation.** \n The authors propose conditioning on the (latent of) slightly noisy previous tokens with a noise level $0 < k \\ll K$ to stabilize the auto-regressive generation. How were the values of $k$ chosen in the experiments? Could the authors provide ablation studies on the impact of using this trick?\n\n**I promise to raise the score once all the weaknesses/questions are solved.**"
}
] | |
yDjojeIWO9 | Transferable Adversarial Attacks on SAM and Its Downstream Models | The utilization of large foundational models has a dilemma: while fine-tuning downstream tasks from them holds promise for making use of the well-generalized knowledge in practical applications, their open accessibility also poses threats of adverse usage.\nThis paper, for the first time, explores the feasibility of adversarially attacking various downstream models fine-tuned from the segment anything model (SAM), by solely utilizing the information from the open-sourced SAM.\nIn contrast to prevailing transfer-based adversarial attacks, we demonstrate the existence of adversarial dangers even without accessing the downstream task and dataset to train a similar surrogate model.\nTo enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm to extract the intrinsic vulnerability inherent in the foundation model, which is then utilized as the prior knowledge to guide the generation of adversarial perturbations.\nMoreover, by formulating the gradient difference in the attacking process between the open-sourced SAM and its fine-tuned downstream models, we theoretically demonstrate that a deviation occurs in the adversarial update direction by directly maximizing the distance of encoded feature embeddings in the open-sourced SAM.\nConsequently, we propose a gradient robust loss that simulates the associated uncertainty with gradient-based noise augmentation to enhance the robustness of generated adversarial examples (AEs) towards this deviation, thus improving the transferability.\nExtensive experiments demonstrate the effectiveness of the proposed universal meta-initialized and gradient robust adversarial attack (UMI-GRAT) toward SAMs and their downstream models.\nCode is available at https://github.com/xiasong0501/GRAT. | https://openreview.net/pdf/57573eaf34a55e1f4cc6ab0db0b428f8ee35133a.pdf | [
{
"confidence": 4,
"rating": 8,
"review_id": "C9cma5vwTW",
"review_text": "This work discusses an interesting security issue of deploying a model fine-tuned on a large foundational model in private downstream applications. It proposes a universal meta-initialized and gradient robust adversarial attack (UMI-GRAT) to break the powerful SAM and its various downstream models, without requiring prior knowledge of the specific downstream task and data distribution. The author explores the challenges associated with threating transfer-based adversarial attack without the task-related prior knowledge and provides the theoretical insights on the deviation in updating the adversarial perturbation when using the open-sourced model as the surrogate model. An extensive evaluation of UMI-GRAT's performance, transferability, and efficiency was conducted across five datasets and three different downstream tasks (medical image segmentation, shadow segmentation and camouflaged object segmentation), demonstrating the high effectiveness of the UMI-GRAT approach.\n\n1. This work discusses a critical adversarial issue of deploying large foundation model in real-world applications and for the first time considers a more challenging and practical scenario where the adversarial attacker breaks SAM and its downstream models in the absence of prior knowledge on the task and data distribution.\n2. This work provides the in-depth analysis on the challenge of threating the transferable adversarial attack via open-sourced SAM and proposes the corresponding theoretical insights and solution. \n3. The work establishes a detailed experimental framework and the proposed UMI-GRAT shows superior performance on misleading various SAMs’ downstream models compared with previous methods, which serve as a preliminary exploration for future research.\n\n1.\tIt’s recommended to give more comprehensive analysis of the UMI noise, including the size of the natural image dataset and the effect of various hyperparameters.\n2.\tThere are more metrics such as $E_\\phi$, $F_\\beta^\\omega$ in the camouflaged object detection task. It would be beneficial if the author could provide further data pertaining to these evaluation metrics to enrich the analysis.\n\n1.\tIn Figure 3, the peak cosine similarity of the generated adversarial perturbation is observed between the 20th and 30th iteration. So will increasing the iterative step of generating adversarial perturbation enhance the transferability? \n2.\tModel ensemble is an effective method to enhance the adversarial attacks’ transferability. Will the ensemble of different SAMs benefit the UMI-GRAT?"
},
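For readers, a minimal sketch of the gradient-noise-augmented adversarial step discussed in the abstract and reviews above follows. It illustrates the general idea only: the function names, the feature-distance loss, and all hyperparameters (`alpha`, `eps`, `sigma`, `n_noise`) are assumptions for this sketch, not the paper's exact formulation.

```python
import torch

def grat_step(x_adv, x, encoder, dist_fn, alpha=2/255, eps=8/255, sigma=0.1, n_noise=4):
    # Illustrative sketch only; names and hyperparameters are assumptions.
    # One PGD-style step on a surrogate encoder (e.g., the open-sourced SAM
    # image encoder), maximizing the feature distance to the clean embedding.
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = dist_fn(encoder(x_adv), encoder(x).detach())
    grad = torch.autograd.grad(loss, x_adv)[0]
    # Gradient-noise augmentation: average sign updates over Gaussian-perturbed
    # copies of the surrogate gradient, simulating the unknown deviation toward
    # a fine-tuned downstream model's gradient.
    robust = torch.zeros_like(grad)
    for _ in range(n_noise):
        robust += (grad + sigma * grad.std() * torch.randn_like(grad)).sign()
    x_adv = x_adv.detach() + alpha * robust.sign()
    # Project back into the epsilon-ball around x and the valid pixel range.
    x_adv = x + (x_adv - x).clamp(-eps, eps)
    return x_adv.clamp(0, 1)
```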
{
"confidence": 3,
"rating": 4,
"review_id": "SxWzavlAfX",
"review_text": "In this paper, the authors present a new approach for adversarial attacks on Segment Anything Model (SAM)-based downstream models, addressing the challenge of attacking without prior knowledge of the downstream task or data distribution. Their key contribution is a universal meta initialization-based algorithm that exposes inherent vulnerabilities in the foundation model. The authors also introduce a gradient robust loss, which simulates uncertainty through gradient-based noise augmentation. This loss is derived from a theoretical formulation of adversarial update deviation between the open-sourced SAM and its fine-tuned downstream models. The authors provide an analytical demonstration of how their proposed method enhances attack transferability. The effectiveness of their approach is thoroughly validated through comprehensive experiments.\n\nOriginality: This is the first work to explore the feasibility of adversarially attacking various downstream models fine-tuned from the Segment Anything Model (SAM). The introduction of a universal meta initialization-based algorithm to uncover intrinsic vulnerabilities in foundation models is both effective and efficient. Additionally, the formulation of adversarial update deviation and the proposal of a gradient robust loss that simulates uncertainty with gradient-based noise augmentation further enhance the transferability of adversarial examples.\n\nQuality and Clarity: The writing is generally clear but has room for improvement. The methodology and results are well-structured, though some technical sections could benefit from additional clarification.\n\nSignificance: This work is highly significant given the increasing prevalence of foundation models like SAM. The proposed methods for enhancing attack transferability have important implications for AI system security and could influence future directions in both offensive and defensive strategies in adversarial machine learning for SAM.\n\n1 - My major concern is related to the novelty of the proposed approach. Although I agree that this is the first work in the context of SAMs, the main components, such as downstream agnostic adversarial examples and meta learning-based fast initialization, have already been proposed in the literature.\n\n2 - The authors, in line 45, briefly highlight downstream agnostic examples in just one line. They should clarify in the related work section how their work is different from references 55 and 56 of the main paper, beyond just applying it to SAM. Similarly, another related work that the authors missed is [1] (given below), in which the generated adversarial examples are agnostic to downstream tasks.\n\n3 - Similarly, the authors did not mention any work related to meta-learning-based adversarial examples in the paper. There are multiple works that use meta-learning to craft universal adversarial examples, such as [1, 2] below. The authors use these meta-learning-based methods for initialization of adversarial examples, but this has already been explored in [3] below. The authors should mention these meta-learning-based approaches in their paper and discuss how their method is different from these approaches, beyond just the application to SAMs.\n\n4 - It is not clear to me when the authors claim in line 8 that they are attacking \"without accessing the downstream task.\" What is the task here? Is it not the segmentation task? In [1], their task-agnostic adversarial examples are effective against classification, detection, and segmentation. 
Since the downstream task here is segmentation-based, is it not obvious what the task is? Please clarify this.\n\n5 - The authors should include some specific aspects of SAM to make their attack more unique. Currently, they are utilizing the SAM image encoder, which, in my opinion, is not much different from the previous works listed below.\n\n6 - For experiments, why have the authors compared their method only with intermediate-level feature-based approaches? They should also compare it with different downstream-agnostic adversarial approaches, as listed below.\n\n7 - In Equation 8, how did the authors choose the threshold $\lambda$? \n\n[1] A Self-supervised Approach for Adversarial Robustness (CVPR 2020)\n\n[2] Learning to Generate Image Source-Agnostic Universal Adversarial Perturbations (IJCAI 2022)\n\n[3] Meta Adversarial Perturbations (AAAI 2022 Workshop)\n\n[4] Adversarial Initialization with Universal Adversarial Perturbation: A New Approach to Fast Adversarial Training\n\nPlease see the weakness section. While the paper presents an approach to attacking SAM-based downstream models, it largely combines existing methods rather than introducing new techniques. The current strategy, though effective, does not fully exploit SAM's unique architecture."
},
{
"confidence": 4,
"rating": 7,
"review_id": "qlZmlzla2X",
"review_text": "This paper proposes an adversarial attack against fine-tuned derivatives to a publicly available foundation model, such as the Segment Anything Model (SAM). In the proposed threat model, attackers can potentially manipulate these downstream models even without knowing the specific task or data they are used for. Under this threat model, proposes a new attack method called UMI-GRAT (Universal Meta-initialized and Gradient Robust Adversarial Attack). Through a bi-level optimization procedure, this method leverages the information from the open-source SAM to create adversarial examples that can fool mislead the original SAM and its fine-tuned versions. Finally, this paper demonstrates the effectiveness of the proposed UMI-GRAT attack against SAM through extensive experiments.\n\n1. The paper is motivated by real-world safety concerns for fine-tuning a public foundation model on private domain-specific datasets.\n2. The figures and tables are well-polished and generally reflect the overall message of the paper.\n3. The proposed UMI-GRAT attack method is unique and backed by theoretical analysis.\n\n1. The effectiveness of the proposed attack is only demonstrated by attacking the SAM model. However, more experiment settings (e.g. against pretrained MAE models) are warranted to demonstrate the generalizability of the proposed attack.\n\n1. How does the domain gap between the natural image dataset used to obtain the universal adversarial trigger and the downstream dataset influence the effectiveness of the attack?\n2. How effective is the proposed method against adaptive defense? For example, if the downstream victim model has gone through adversarial training, how effective would the adversarial trigger obtained on the unguarded pretrained SAM be against the guarded victim model?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "9wtoTQ5pyF",
"review_text": "This paper investigates the vulnerability of Segment Anything Model (SAM) and its downstream models to transferable adversarial attacks. The authors propose a novel attack method called Universal Meta-Initialized and Gradient Robust Adversarial attack (UMI-GRAT) that leverages the open-sourced SAM to generate adversarial examples effective against fine-tuned downstream models, even without access to the downstream task or dataset.\n\n1. The paper tackles a practical and challenging problem of attacking downstream models fine-tuned from a publicly available foundation model without knowledge of the downstream task or data.\n2. The proposed UMI-GRAT method is well-motivated and technically sound. The authors provide theoretical insights into the gradient deviation problem and propose a robust solution using gradient noise augmentation.\n3. The paper presents extensive experiments demonstrating the effectiveness of UMI-GRAT in attacking SAM and its downstream models\n\nSee Questions.\n\n1. How does the performance of UMI-GRAT vary with different choices of hyperparameters, such as the perturbation bound ε and the number of iterations in UMI and LGR. Especially, line 278 mentions that the perturbation bound is 10, which is a bit too large.\n2. According to the latest benchmark **[R1]**, baselines used in the paper are not SOTA methods. How does it compare with NCS **[R2]**, ANDA **[R3]**, DeCowA **[R4]** and L2T **[R5]**?\n3. Is the proposed method a universal transfer attack method? Although this question is mentioned on line 548, can the performance of UMI-GRAT and SOTA be compared under a general transfer attack test setting?\n4. UMI-GRAT is a gradient-based attack method. How does the proposed method perform when the model has a certain robustness (such as adversarial training)?\n\n---\n**[R1]** Devling into Adversarial Transferability on Image Classification: A Review, Benchmark and Evaluation. \n**[R2]** Enhancing Adversarial Transferability Through Neighborhood Conditional Sampling. \n**[R3]** Strong Transferable Adversarial Attacks via Ensembled Asymptotically Normal Distribution Learning. CVPR. 2024. \n**[R4]** Boosting Adversarial Transferability across Model Genus by Deformation-Constrained Warping. \n**[R5]** Learning to Transform Dynamically for Better Adversarial Transferability"
},
{
"confidence": 3,
"rating": 5,
"review_id": "jeQw521yMz",
"review_text": "In this paper, the authors propose an adversarial attack method that can contaminate downstream tasks from the perspective of adversarial transferability. They address the problem that SAM models do not have similar optimisation routes after fine-tuning for different downstream tasks by designing universal meta initialization. In this paper, the authors address the problem that SAM models do not have similar optimisation routes after fine-tuning for different downstream tasks by designing UMI noise. The authors introduce the idea of meta-learning to allow their algorithm to quickly adapt to different situations, i.e., downstream tasks.\n\n1. The theoretical part of this paper is detailed, the experiments are sufficient. The comparison with other methods shows the sophistication of their approach.\n\n\n\n2. The attacks proposed in this paper are novel. It contributes to the topic of attacking downstream tasks of large models. A discussion on adversarial transferability is introduced under this topic.\n\n1. The readability of the Methodology section of this article is somewhat poor. The authors define the problem to be solved through the form of propositions. Similarly, if the authors could summarise the formulas as Theorem and put the proof process (both formulas and reasoning) specifically in the supplementary material, it would make the article more coherent.\n\n\n\n2. The randomness of the experimental results is unknown. I understand that due to the larger computational effort, it is not practical to report error lines on all major experiments. But it would be better for the authors to report a set of randomness on a smaller dataset and simpler settings, which will influence the reviewers' opinion of the results of this method.\n\nTwo questions listed, see Weaknesses for details. Note that if the authors can demonstrate the randomness of their algorithms, that will help to get a higher rating."
}
] | |
yCh1z6Dcto | Stepping Forward on the Last Mile | Continuously adapting pre-trained models to local data on resource constrained edge devices is the \emph{last mile} for model deployment. However, as models increase in size and depth, backpropagation requires a large amount of memory, which becomes prohibitive for edge devices. In addition, most existing low power neural processing engines (e.g., NPUs, DSPs, MCUs, etc.) are designed as fixed-point inference accelerators, without training capabilities. Forward gradients, solely based on directional derivatives computed from two forward calls, have been recently used for model training, with substantial savings in computation and memory. However, the performance of quantized training with fixed-point forward gradients remains unclear. In this paper, we investigate the feasibility of on-device training using fixed-point forward gradients, by conducting comprehensive experiments across a variety of deep learning benchmark tasks in both vision and audio domains. We propose a series of algorithm enhancements that further reduce the memory footprint, and the accuracy gap compared to backpropagation. An empirical study on how training with forward gradients navigates in the loss landscape is further explored. Our results demonstrate that on the last mile of model customization on edge devices, training with fixed-point forward gradients is a feasible and practical approach. | https://openreview.net/pdf/6c685c934218042e4f6892abccb9b4a82a86be97.pdf | [
{
"confidence": 3,
"rating": 7,
"review_id": "TbpBmqCCXq",
"review_text": "**Context**. The focus of the present paper is on-device fine-tuning (gradient computation and weight update **starting from a pre-trained model**) under limited memory budget. One way to cut the memory cost of storing the computational graph for gradient computation by standard backprop is the Memory Efficient Zeroth Order (MeZO) optimizer [Malladi et al, 2023], whereby a directional gradient is computed via weight perturbation (a.k.a SPSA): computing the loss $L$ difference yielded by two forward passes with weights differing by $\\epsilon u$ estimates $\\nabla L \\cdot u$. Since it is a purely forward procedure, it obviates the need to cache activations to execute a backward pass.\n\n**Core contribution**. The present paper proposes a quantized version of MeZO where weight perturbation, gradient computation and weight update are carried out on quantized quantities. The proposed algorithm, coined QZO-FF (Alg. 1), is tested against a variety of fine-tuning tasks (few-shot learning, cross-domain and in-domain adaptation), modalities (image and audio data), and architectures (convolutional, attention-based, recurrent), with several variants being explored (with fp8 / fp32 activations) and benchmarked against standard backprop. The efficiency of QZO-FF, both in terms of resulting performance and memory usage, is demonstrated.\n\n**Paper outline**. More precisely:\n- Section 2 provides background knowledge on memory-efficient backprop (2.1), forward-mode differentiation (2.2) and quantized training (2.3).\n- Section 3.1 and 3.2 formalizes further \"forward gradients\" (3.1) and the SPSA / weight perturbation procedure to estimate them (3.2). A hardware-friendly extension of SPSA coined as \"Sign-m-SPSA\", which estimates $\\text{sign}(\\nabla L \\cdot u) u$, , is introduced along with the resulting SGD update (3.2).\n- Section 3.3 presents the core algorithmic contribution by combining SPSA / weight perturbation and weight quantization (Alg. 1). More precisely, weights and perturbation are statically, symmetrically quantized (e.g. their range are estimated and set once, before fine-tuning), with one scale for each ($\\Delta_w$ and $\\Delta_q$). Therefore: i) $\\Delta_w$ and $\\Delta_q$ are fixed, with weights and perturbations quantized with 16 and 8 bits respectively, ii) the integer part of the perturbed weights is accumulated in 32 bit, iii) the dequantized perturbed weights is quantized-dequantized back into 16 bits using the same $\\Delta_w$ scale (Eq. 6). The Sign-m-SPSA gradient estimator is applied and quantized-dequantized using the perturbation scale ($\\Delta_z$, Eq. 7) . Finally, the weight update itself is quantized, such that it happens in the quantized integer part and is rescaled by $\\Delta_w$ (Eq. 8). Alg. 1 summarizes the procedure in the case where the number of perturbed directions at use is 1 ($m=1$).\n- Section 3.4 presents several algorithmic \"enhancements\" of the QZO-FF algorithm to improve the optimization procedure itself or its memory footprint.\n- Section 4 presents experimental results. First, few-shot learning is considered (4.1) on visual and audio data. Here, \"FF\" refers for short to \"QZO-FF\". A quantized version of FF, where 8 bits activations are used, is also tested. On vision, three architectures are tested (ResNet12, 18 and ViT tiny) on 5 different standard few-short learning datasets. Two scenarii are considered: full fine-tuning and linear probing. 
It is shown overall that FF always yields better performance than the zero-shot baseline and is within 5% accuracy of the BP baseline on 26/30 experiments, and that the ViT backbone yields the least degradation. On audio, a similar experiment is done with two architectures (CRNN, AST) on two audio datasets. On 11/16 experiments, FF accuracy is within 5% of the BP baseline. Then, a cross-domain adaptation task (4.2) is considered, where the different algorithmic enhancements previously introduced (e.g. quantized FF, gradient averaging, \"sharpness aware\" scheme...) are tested. Most importantly, it is observed that quantizing weights to 8 bits jeopardizes the FF algorithm. Finally, Section 4.3 presents in-domain OOD adaptation using the same fine-tuning schemes (LP, D-VPT) with three levels of corruptions of the CIFAR-10 dataset as OOD datasets. In this setting, FF achieves comparable performance with BP.\n\n- The problem tackled is highly relevant to on-device training, pragmatic, and builds upon recent work [Malladi et al, 2023].\n- The proposed method is well-motivated and the authors provide a clear explanation of the method.\n- There are a lot of experimental settings, data modalities and architectures being explored.\n- The proposed technique is effective in providing a learning signal, effectively training models and yielding relatively good performance compared to the BP baseline.\n\n- It is unclear what is kept in full precision in the proposed procedure (see my questions below).\n- On a related note, it is also unclear whether the proposed algorithm enhancements offset the advantages of manipulating statically quantized quantities (see my questions below). \n- The experiments aren't all sufficiently well explained, either in the main text or in the appendix, which is frustrating because there is a lot of work done there and we fail to deeply understand the proposed setups. I would even say that there are almost too many different experimental setups. Under a constrained time budget to write the paper, I would have prioritized a smaller number of better-detailed experiments rather than a lot of them left insufficiently explained.\n- **There aren't any error bars in any table or figure**, although the authors ticked in their checklist that they reported error bars and provided appropriate information about the statistical significance of their experiments (L. 520). For lack of this, it is very hard to draw any clear conclusion in terms of comparison between the different algorithms in use, e.g. is there a statistically significant gap here, or are these two results within error bars? We don't know.\n- I don't understand what the 2D plot of the loss landscape really brings here in terms of insights.\n\n- L. 135: \"in order to mitigate the noisy component of forward gradients estimated by SPSA, we propose sign-m-SPSA\": do you have evidence that sign-m-SPSA results in less noisy gradients? \n- Section 3.3: could you please clarify what is kept in full precision? I see at least three different quantities not being quantized: i) the scales $\Delta_w$ and $\Delta_z$, ii) the loss for each set of perturbed weights and therefore its difference, iii) the averaged gradient (Eq. 7). Most importantly, do you confirm that you need to accumulate gradients across each direction ($i=1 \cdots m$) in higher precision (32 bits I guess?) and then quantize-dequantize it using the scale of the perturbation $\Delta_z$? Your pseudo-algorithm only treats the case $m=1$ so it remains unclear how this all works when $m>1$ and you need to average gradients. 
**Could you please write a new pseudo-algorithm**, akin to Alg. 1, **in the case $m>1$**, highlighting with **two different color codes** the quantized (int8 and int16) and full-precision (fp32) quantities?\n- L.193-198 (momentum-guided sampling): I think that incorporating momentum into your approach is crucial. However, I still don't understand how it works. What do you mean by \"as training progresses, a history of the momentum $z$ is incorporated to guide the new sampling process\"?\n- L.199 (sharpness-aware perturbation): you mean an \"extra step of **directional** gradient ascent\"?\n- L.204-210 (sparse updates): which sparsity scheme did you employ? A top-k magnitude-based scheme may be quite costly, if it boils down to ranking all the weights by their magnitude.\n- L.211 (kernel-wise normalization): in this case, do we agree that $\hat{g}$ needs to be stored in full precision? Also, the computation of the norms of $z$ and $w$ is computationally expensive ($O(d)$, where $d$ denotes the dimension of $w$ or $z$), as expensive as it would be to dynamically recompute $\Delta_z$ and $\Delta_w$, which you avoided by statically quantizing them. Don't you lose the advantage of using static scales here if you need to perform these $O(d)$ operations anyway?\n- L.216 (few-shot learning): \"a few labeled samples are available\", but how many? Could you please clarify the experimental setup?\n- L.222: \"16w8a\" means 16 bits for weights and 8 for activations, correct? One may not take this for granted, so please clearly define this notation.\n- Table 2: I would rather compute the **relative** accuracy degradation ((acc_BP - acc_qFF) / acc_BP) rather than the **absolute** accuracy degradation.\n- Table 2: the accuracy degradation when employing FF in the FT setting compared to BP in the same setting is quite severe (11.08% gap), although you are using a relatively small architecture (ResNet12) with relatively small input dimensionality (32x32). Why is this the case?\n- Table 2: **there are not any error bars**, which makes it hard to make any sense of a $\sim 0.2/0.5$ difference between two experiments.\n- Could you please define precisely what you mean by \"zero shot\" (I assume no training at all?), \"linear probing\" (I assume only the last linear layer is learned?) and \"full fine-tuning\" (all parameters are learned)? \n- L. 238 (audio benchmark): could you please detail the few-shot setup for this task, and the tasks themselves? It is important for people not familiar with this literature.\n- L. 252: I really did not understand what \"cross-domain adaptation\" really is about. Could you please explain better what it is? \n- L.256: what is \"visual-prompt tuning with deep prompts\"?\n- Fig. 2: except for large discrepancies between bars, it is difficult to draw any conclusion from this figure **for lack of error bars**. Could you please add them?\n- Which conclusions / insights do you really gain from plotting the 2D contours of the loss landscape in the different settings?"
},
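The SPSA estimator this review describes is compact enough to sketch. The following is a minimal, illustrative Sign-SPSA step in PyTorch: two forward passes with weights perturbed by +/- eps*z yield a directional-derivative estimate, and the update follows sign(dL)*z. The in-place perturbation scheme, the `inputs`/`targets` interface, and the hyperparameters are assumptions for this sketch, not the paper's exact algorithm.

```python
import torch

@torch.no_grad()
def sign_spsa_step(model, loss_fn, inputs, targets, lr=1e-4, eps=1e-3):
    # Illustrative sketch only; interface and hyperparameters are assumptions.
    params = [p for p in model.parameters() if p.requires_grad]
    zs = [torch.randn_like(p) for p in params]  # z ~ N(0, I)
    for p, z in zip(params, zs):                # theta + eps * z
        p.add_(eps * z)
    loss_plus = loss_fn(model(inputs), targets)
    for p, z in zip(params, zs):                # theta - eps * z
        p.sub_(2 * eps * z)
    loss_minus = loss_fn(model(inputs), targets)
    for p, z in zip(params, zs):                # restore theta
        p.add_(eps * z)
    d = (loss_plus - loss_minus) / (2 * eps)    # directional derivative along z
    for p, z in zip(params, zs):                # sign-SPSA SGD update
        p.sub_(lr * torch.sign(d) * z)
```

Note that MeZO avoids materializing the `zs` tensors by re-sampling them from a fixed random seed before each of the three loops; they are stored explicitly here only for clarity.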
{
"confidence": 3,
"rating": 5,
"review_id": "BhPlMQicsC",
"review_text": "This paper explores the feasibility of on-device training using fixed-point forward gradients. The authors propose methods including sign-m-SPSA, Momentum Guided Sampling, Sharpness-aware Perturbation, Sparse Update, and Kernel-wise Normalization to reduce memory footprint and accuracy gaps and conduct experiments across various deep learning tasks in vision and audio domains. Key contributions of this paper include formulating forward gradients in the quantized space, demonstrating the feasibility of on-device training, and visualizing the neural loss landscape during training. The study shows that training with fixed-point forward gradients might be a practical approach for model customization on edge devices.\n\n++ This paper proposes an improved method for forward gradients, called Quantized Zeroth-order Forward Gradient (QZO-FF), which enables forward gradients training using quantization. \n\n++ QZO-FF is quantized and does not require backpropagation, thereby reducing memory overhead and eliminating the need for processors to have training capabilities. However, I doubt this because even though forward gradients do not require backpropagation, they still need to update weights and possibly save momentum and they need to perform additional quantization for $z$ in QZO-FF. Therefore, we may need some hardware adaption to assist feed-forward training.\n\n++ The experiments across various benchmarks show that there is only a slight degradation in accuracy while the memory cost is reduced.\n\n1. Some results are missing in the experiment. For example, (1) the memory cost of (BP, LP, fp16) is not measured. I think the memory cost of LP is important because it seems that the reduction of memory cost mainly comes from LP instead of FF and Quant in Figure 3 and Figure 4, and I think the claim that \"this number is further reduced to only 0.28MB\" and \"the saving increases to 8.1× when sparse update and fixed-point are enabled\" in Appendix B is totally misleading and unfair. (2) The accuracy of (BP, LP, quant) is not measured so there is no baseline for (FF, LP, Quant). (3) The accuracy of (FF, FT, Quant) and (BP, FT, Quant) is not measured. (BP, FT, Quant) should be some BP fixed-point training methods like Quantization-Aware Scaling (QAS) mentioned in related work.\n\n2. Lack of ablation studies. The effects of techniques proposed in Section 3.4 are not well-studied. (1) There is no ablation study for Section 4.1. (2) The effect of sharpness-aware and kernel-wise normalization is not measured separately in Section 4.2. (3) I want to know __which__ of these techniques work in __what__ experiment settings. I believe that, as a new algorithm with many enhancement techniques, the authors should inform the readers about which parts of the algorithm are useful under which circumstances.\n\n3. The model size (100K - 80M) is somewhat small compared to the concept of \"pretrained models\". How does the proposed method perform for larger models and how does the model size affect the effectiveness of the method?\n\n1. Although one can understand the meaning of these symbols after a careful reading, the notation in equation (6) is somewhat confusing because $1_q$ and $\\epsilon_q$ have the same subscript but different scaling factors. I think it would be better to add a notation related to the scaling factor above them.\n\n2. typo in line 271: extenteded"
},
{
"confidence": 3,
"rating": 5,
"review_id": "KfagMIs5Dq",
"review_text": "The authors investigate fixed-point forward gradients for quantized training. They conduct experiments across various deep learning tasks in vision and audio to assess if this method yields competitive models while conserving memory and computational resources.\nThey introduce algorithm enhancements to reduce memory usage and accuracy gaps compared to backpropagation, using fixed-point precision for forward gradients during training or adaptation.\nTheir findings demonstrate the feasibility of on-device training with fixed-point forward gradients across diverse model architectures (e.g., CNN, RNN, ViT-based) and parameter sizes (100K to 80M), offering practical solutions for model adaptation on edge devices.\nThe authors also visualize neural loss landscapes and training trajectories, providing insights into the dynamics of training with forward gradients for efficient on-device model adaptation.\n\n1 .They understand quantization and tried not to leave anything float\n\n2. Experimenting with SAM and ZO is nice \n\n3. The paper is well written\n\n1. Sadly no experiments on LLMs on which most fine tuning is done today\n\n2. Marginal novelty: generally they just added quantization to ZO-FF – is that enough?\n\n1. why you loop over w can’t you just do it vector wise?\n\n2. Can you specify m (the number of pertubations) used for each experiment?\n\n3. Can you calculate the memory consumption (MB/GB) and computation complexity (in FLops) compared to QLoRA with BP."
},
{
"confidence": 4,
"rating": 5,
"review_id": "WwAGaeMsnc",
"review_text": "The paper proposes a quantization approach for fine-tuning pretrained data to new local data on resource-constrained devices. In particular, the weights perturbation, gradients estimation, and weights updates are quantized to either 8-bit or 16-bit. This quantization approach is combined with Momentum Guided Sampling, Sharpness-aware Perturbation, Sparse Update, and Kernel-wise Normalization to enhance fine-tuning performance. The proposed approaches are evaluated on various AI benchmarks. The results of this study indicate that quantized forward gradients are a good candidate for a fine-tuning approach that can be deployed on edge devices.\n\n1- The paper is well-written and well-organized.\n\n2- The quantized approach is evaluated on a variety of tasks that show the generalizability of the new approach.\n\n3- The Sign-m-SPSA-SGD approach is interesting and novel.\n\n1- The author are recommended to discuss the accuracy degradation of quantized forward gradients compared to the backpropagation algorithm. In some cases, the accuracy degradation is high (more than 5%). A comparison of performance versus hardware complexity (FLOPs or another metric) is recommended, as seen in [1].\n\n2- Evaluating the efficacy of quantized forward gradients on fine-tuning LLM models such as LLaMA-3 is recommended. \n\n[1] Carmichael, Zachariah, et al. \"Performance-efficiency trade-off of low-precision numerical formats in deep neural networks.\" Proceedings of the conference for next generation arithmetic 2019. 2019.\n\nWhy is the random perturbation vector z sampled from a normal distribution with zero mean and standard deviation? Is it possible to sample from a log-normal distribution since activation gradients are shown to be distributed near log-normal [1]?\n\n[1] Chmiel, Brian, et al. \"Neural gradients are near-lognormal: improved quantized and sparse training.\" arXiv preprint arXiv:2006.08173 (2020)."
}
] | |
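Several of the questions in the reviews above revolve around the static, symmetric quantize-dequantize step (Eq. 6 of the paper). A minimal sketch of what such a step typically looks like follows; the rounding and clipping conventions, bit widths, and scale choices here are assumptions rather than the paper's exact definitions.

```python
import torch

def fake_quantize(x, delta, bits=16):
    # Round to an integer grid with a fixed scale delta, clip to the
    # representable signed range, then rescale (quantize-dequantize).
    qmax = 2 ** (bits - 1) - 1
    q = torch.clamp(torch.round(x / delta), -qmax - 1, qmax)
    return q * delta

# Perturbing quantized weights, with separate static scales for w and z.
w = torch.randn(4, 4)
delta_w = w.abs().max().item() / (2 ** 15 - 1)   # 16-bit weight scale (assumed)
delta_z = 3.0 / 127                              # 8-bit scale covering ~3 sigma of z
z = fake_quantize(torch.randn_like(w), delta_z, bits=8)
w_plus = fake_quantize(w + z, delta_w, bits=16)  # perturbed weights, re-quantized
```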
yBrxziByeG | Text-DiFuse: An Interactive Multi-Modal Image Fusion Framework based on Text-modulated Diffusion Model | Existing multi-modal image fusion methods fail to address the compound degradations present in source images, resulting in fusion images plagued by noise, color bias, improper exposure, etc. Additionally, these methods often overlook the specificity of foreground objects, weakening the salience of the objects of interest within the fused images. To address these challenges, this study proposes a novel interactive multi-modal image fusion framework based on the text-modulated diffusion model, called Text-DiFuse. First, this framework integrates feature-level information fusion into the diffusion process, allowing adaptive degradation removal and multi-modal information fusion. This is the first attempt to deeply and explicitly embed information fusion within the diffusion process, effectively addressing compound degradation in image fusion. Second, by embedding the combination of text and a zero-shot location model into the diffusion fusion process, a text-controlled fusion re-modulation strategy is developed. This enables user-customized text control to improve fusion performance and highlight foreground objects in the fused images. Extensive experiments on diverse public datasets show that our Text-DiFuse achieves state-of-the-art fusion performance across various scenarios with complex degradation. Moreover, the semantic segmentation experiment validates the significant enhancement in semantic performance achieved by our text-controlled fusion re-modulation strategy. The code is publicly available at https://github.com/Leiii-Cao/Text-DiFuse. | https://openreview.net/pdf/1870be1308452cfd34778a0947c89002562387ed.pdf | [
{
"confidence": 5,
"rating": 7,
"review_id": "k52PWwQjHg",
"review_text": "A new paradigm of multi-modal image fusion named Text-DiFuse is introduced, based on the diffusion model. The paradigm embeds a mechanism for aggregating feature-level multi-modal image information into the diffusion process of degrading multi-modal images, addressing the optimization gap between \"degradation removal\" and \"multi-modal information fusion\". Additionally, a zero-shot model is introduced to modulate the fusion strategy based on user-input target text, enhancing the saliency of the target of interest. The conducted experiments suggest significant improvements in both human visual perception and advanced computer vision tasks.\n\n1)\tEmbedding the mechanism of aggregating feature-level information into multiple diffusion processes to fuse multi-modal information is interesting. It is foreseeable that this diffusion paradigm produces fused images with better fidelity compared to methods based on likelihood-constrained diffusion models. \n2)\tThe coupled approach effectively resolves the issue of compound degradation in the process of multi-modal fusion, as evidenced by experimental results demonstrating significant advantages over the sequential approach.\n3)\tThe authors emphasize the importance of foreground targets in advanced visual tasks and propose enhancing target saliency through zero-shot assisted re-modulation. This approach diverges from traditional uniform fusion rules, demonstrating effectiveness.\n4)\tThis approach shows strong applicability. It demonstrates superior performance in multiple tasks including infrared and visible image fusion, medical image fusion, and polarization image fusion.\n\n1)\tAfter the diffusion model is effectively trained, the sampling process can follow different step intervals. The information fusion in this method is integrated into the diffusion process, but the article does not seem to specify the sampling interval at which the results are obtained. Also, this article does not discuss the impact of the sampling interval on the fusion performance.\n2)\tThe presentation is slightly unclear. For example, from Equation 2 to Equation 6, both the features and the images carry the condition N that represents the degradation. Why does equation 7 no longer include N? Why can it be considered that the degradation has been removed at this point?\n3)\tIn Table 2 and Figure 4, some existing image restoration methods are cascaded in front of the fusion method to promote fairness in comparison, such as low-light enhancement (CLIP-LIT), denoising (SDAP), and white balance (AWB) algorithms. \nPlease explain the choice of the order in which they are connected in series, i.e. why low light enhancement first, then denoising, and finally white balance. \n4)\tModulating the salience of targets of interest in the fusion process through language is novel. Intuitively, I think the improvement in semantic properties brought about by this modulation is widespread. Currently, the effectiveness of language modulation has only been verified in the semantic segmentation scenario. It is recommended to provide an evaluation in the object detection scenario to further verify its role.\n\nPlease refer to the weaknesses part."
},
{
"confidence": 5,
"rating": 7,
"review_id": "tXmEjcGCYh",
"review_text": "This work focuses on the topic of multi-modal image fusion. Two innovations enhance the performance of the fusion. One is the clever integration of information fusion into the diffusion process. This coupling way enables the fusion function to resist degradation. The other is the introduction of a text-based fusion remodulation strategy. This changes the limitation of previous fusion methods that could only use fixed mappings, allowing for the dynamic adjustment of the fused image based on specific requirements. This remodulation also enhances semantic attributes, improving the scores of the semantic segmentation task.\n\n1. Integrating information fusion into the diffusion process is novel. Especially, each sampling step triggers an information fusion, which enhances the sufficiency of information fusion. This coupling can ensure the robustness of information fusion, addressing challenges such as low light, noise, and color cast. \n2. The introduction of multi-modal large models is interesting, particularly the ability to remodulate fused images using textual commands. This capability could potentially facilitate the flexible deployment of the proposed method across different application requirements. The demonstration of enhanced semantic attributes and improved semantic segmentation performance is good. \n3. Overall, the experiments are relatively sufficient. The comparative experiments include both baseline comparisons and pre-enhancement comparisons, which are important for ensuring fairness. \n4. The code is provided, which helps in reproducing the performance.\n\n1. On page 5, line 174, the source data used for fusion contains degradation, [{Xb,Y}|N]. My question is, in Equations (9) and (10), where do the clean {Xb,Y} used to guide the fusion come from? Is there a multi-modal dataset that contains paired degraded and clean data? The paper seems to lack an explanation for this. \n2. The forward process of the diffusion model involves T steps of noise addition, while the reverse process consists of T steps of iterative sampling. Is the Z0 obtained in equation (8) a hypothetical Z0 derived from the diffusion relation at each sampling, or is it the Z0 after completing the full T steps of sampling? This determines the object of the constraints in the loss functions (9) and (10). It would be better to provide a detailed discussion on this. \n3. Only after the T steps of sampling can the data without degradation be obtained. So why can Z_{t-1}^b in equation (7) be considered free from degradation N? \n4. It's understandable that using textual modulation to control the desired targets of interest can enhance semantic attributes. My question is whether these enhanced semantic attributes can be generalized. In other words, can it also be effective in other high-level visual tasks besides semantic segmentation? \n5. Typo: The Zt on the left side of equation (8) seems to have a missing superscript b.\n\nPlease answer the question raised in Weaknesses."
},
{
"confidence": 5,
"rating": 6,
"review_id": "9bi7G96pt1",
"review_text": "This paper addresses two primary challenges in multimodal image fusion: the mixed degradation of modalities and the insufficient salience of target objects. It proposes two methods to tackle these challenges: feature-level fusion diffusion and the re-modulation of fusion rules in target areas using a zero-shot segmentation model. They implement adequate experiments for evaluation, and the results demonstrate this method's advanced performance across various aspects, including the visual and semantic.\n\n+ The mixed degradation of modalities and the insufficient salience of target objects are two interesting problems in multimodal image fusion. This paper’s discussion and solution of these two problems may promote the usability of fusion methods in real scenarios. \n+ The information fusion at the feature level is integrated into the diffusion process, which effectively realizes the degradation removal.\n+ The customized object highlighting strategy based on the zero-shot segmentation model is flexible. In particular, its gain in semantic attributes will increase the usability of the fused image in downstream tasks.\n+ This paper conducts lots of comparative experiments and ablation studies on the overall method. \n+ The narrative of this paper is comprehensive and clear. For me, it's easy to follow.\n\n- This paper mentioned that the diffusion model is pre-trained to enable the denoising network to have the degradation removal function. However, details about the construction of the data used to train the diffusion model are missing. They need to describe this process to make the overall approach clearer.\n- This paper focuses on multimodal image fusion, being reflected in the title. In the main text, the proposed method is evaluated in two scenarios: infrared and visible image fusion and medical image fusion. In the supplementary materials, they further provide experiments on polarization image fusion. I am curious whether the applicable scenarios of the proposed method can be further expanded, such as the typical fusion of near-infrared and visible bands.\n- The experiments on polarization image fusion only provide visual results, and it would be better to add a quantitative evaluation.\n- I noticed that the proposed method separates the chrominance component and the brightness component, and then performs de-degradation on them separately. An explanation of why this operation is needed should be given. Perhaps an ablation experiment could more intuitively show the effect of this operation.\n- There are some minor typos, such as potential misspellings of dataset names in Tables 1 and 2. In addition, there seems to be a lack of underline on AG's second place.\n\n1. How were the degradation condition data constructed, were paired supervised datasets used or synthetic datasets?\n2. Has there been an attempt to evaluate the fusion on the fusion of near-infrared and visible bands?\n3. Could you provide the quantitative results of polarization image fusion?\n4. The separation of chrominance and brightness requires more explanation."
},
{
"confidence": 2,
"rating": 6,
"review_id": "U0ZtZOJ5Pr",
"review_text": "This paper proposes an interactive framework that can exploit the intrinsic connection between image restoration and multi-modal image fusion.\nThe authors embed information fusion within the diffusion process and address the \"composite degradation challenge\" i.e., multi-modal information integration with\neffective information restoration from degradation like colour casts, noise, and improper lighting. Particularly, first, independent conditional diffusion models are applied\nto each modality with compound degradation -- the degradation removal priors are embedded into the encoder-decoder network. A fusion control module (FCM) sits in\nthe multi-step diffusion process to manage the integration of multi-modal features and remove degradation during T-step sampling. Next, to interactively enhance\nfocus on objects of interest during diffusion fusion, the authors designed a text-controlled fusion re-modulation strategy that incorporates a text and a zero-shot OWL-ViT to\nidentify the objects of interest. In other words, this step performs a secondary modulation with the built-in prior to enhance saliency.\n\n- It is interesting to see the effect of combining image restoration and multi-modal image fusion in a single framework.\n - The proposed method is well-motivated and the authors provide a clear explanation of the method.\n - The Text-controlled fusion re-modulation strategy could be useful in many applications.\n - The authors provide the code in the supplementary material (although I have only dry run the code and not tested it).\n - Extensive experiments are conducted to validate the proposed method.\n - The authors provide ablation studies to show the effectiveness of each component of the proposed method.\n\nFor now, I have minor concerns and mostly questions (as listed in the next section).\n - The authors should add a brief discussion on the competitors in supplementary material. For example, differences between TarDAL, DeFusion, LRRNet, DDFM, and MRFS.\n - Typo in Eq. 2: $\\Theta_{t}^{X^{B}}$ should be $\\Theta_{t}^{X^{b}}$.\n - Improve the caption of Figure 2. I had to read the entire paper to understand the figure (it should be self-explanatory).\n - Not much of a weakness, but the authors could improve the clarity of the paper if they added the tensor dimension of each variable in Figure 2.\n\n- In the proposed method, input visual image X is broken into brightness and chroma components. I wonder if this step is absolutely necessary -- or can we skip $\\eta^{c}_{\n theta}$ and directly combine both $X$ and $Y$ as three-channel images $\\mathbb{R}^{H \\times W \\times 3}$.\n - What if I use InstructIR (for image restoration) followed by MaxFusion (for multi-modal fusion) -- how would it compare with the proposed method?\n - (InstructIR) https://arxiv.org/pdf/2401.16468 | Github: https://github.com/mv-lab/InstructIR\n - (MaxFusion) https://arxiv.org/pdf/2404.09977 | Github: https://github.com/Nithin-GK/MaxFusion\n - In the Limitation and Future work section, will a no-training approach be possible? For example, \"MaxFusion\"-like approach but with the proposed deep integration of image restoration and multi-modal fusion."
}
] | |
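The reviewers' questions about Eq. (8) above hinge on a standard diffusion-model identity: at every sampling step $t$, a hypothetical clean sample can be predicted from the current noisy sample and the noise estimate. Under the usual DDPM notation (the superscript $b$ for the brightness component is carried over from the reviews; the exact symbols are an assumption about the paper), it reads:

```latex
\hat{z}_0^{\,b} \;=\; \frac{1}{\sqrt{\bar{\alpha}_t}}
\left( z_t^{\,b} \;-\; \sqrt{1-\bar{\alpha}_t}\;
\epsilon_\theta\!\left(z_t^{\,b},\, t\right) \right)
```

This $\hat{z}_0^{\,b}$ is an estimate available at each step $t$, not the result of completing all $T$ sampling steps, which is presumably why losses such as Eqs. (9) and (10) can constrain it throughout sampling.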
yBHbeSpwYS | In Pursuit of Causal Label Correlations for Multi-label Image Recognition | Multi-label image recognition aims to predict all objects present in an input image. A common belief is that modeling the correlations between objects is beneficial for multi-label recognition. However, this belief has been recently challenged as label correlations may mislead the classifier in testing, due to the possible contextual bias in training. Accordingly, a few recent works not only discarded label correlation modeling, but also advocated removing contextual information for multi-label image recognition. This work explicitly explores label correlations for multi-label image recognition based on a principled causal intervention approach. With causal intervention, we pursue causal label correlations and suppress spurious label correlations, as the former tend to convey useful contextual cues while the latter may mislead the classifier. Specifically, we decouple label-specific features with a Transformer decoder attached to the backbone network, and model the confounders which may give rise to spurious correlations by clustering spatial features of all training images. Based on label-specific features and confounders, we employ a cross-attention module to implement causal intervention, quantifying the causal correlations from all object categories to each predicted object category. Finally, we obtain image labels by combining the predictions from decoupled features and causal label correlations. Extensive experiments clearly validate the effectiveness of our approach for multi-label image recognition in both common and cross-dataset settings. | https://openreview.net/pdf/03a5e626cf9b315df0a1676f88fc6226ff69ec95.pdf | [
{
"confidence": 5,
"rating": 8,
"review_id": "xQGIgVoEyS",
"review_text": "This paper proposes a simple yet effective method based to address the issue of contextual bias for multi-label image recognition. It utilizes the casual intervention theory to pursue causal label correlations and suppress spurious label correlations. It utilizes the k-means to model the confounders, and employs the cross-attention mechanism to achieve the causal intervention. Experimental results demonstrate the efficacy of this approach.\n\nThe paper is well-written and easy to understand.\n\nThis method seems easy to implement.\n\nThe approach achieves good results.\n\nThe problem is interesting in multi-label recognition tasks.\n\nWhy the k-means algorithm is used to build confounders, the author should give further explanation.\n\nIn the paper, the number of cluster centers is only calculated to 100, and what will happen if it continues to increase?\n\nRegarding inference time, how many forward passes does the method require?\n\nIn L191, how to obtain P(c) from the data?\n\nThe authors should provide more detailed explanations and experiments about confounders.\n\nThe authors should provide a description of the inference process.\n\nThe authors should clarify how to obtain a prior of confounders."
},
{
"confidence": 3,
"rating": 4,
"review_id": "C2BA0lmWea",
"review_text": "This paper presents a novel approach to addressing label correlations in multi-label image recognition by using causal intervention. The method involves decoupling features, modeling confounders, and implementing causal interventions to capture useful contextual information while suppressing spurious label correlations. This approach is highly innovative and has significant potential applications.\n\n1. **Innovative Approach:**\n The paper introduces a novel method that applies causal intervention to model label correlations in multi-label image recognition. This innovative approach addresses the challenge of spurious label correlations and captures useful contextual information, which is a significant advancement in the field.\n\n2. **Comprehensive Methodology:**\n The proposed framework integrates several complementary techniques, including feature decoupling with a Transformer decoder, confounder modeling through clustering, and causal intervention using cross-attention mechanisms. This comprehensive methodology enhances the robustness and accuracy of multi-label image recognition models.\n\n3. **Thorough Experimental Validation:**\n The paper conducts extensive experiments across multiple datasets, demonstrating the effectiveness of the proposed method. The results consistently show improvements over existing approaches, particularly in scenarios with contextual biases, underscoring the practical value of the method.\n\n1. **Lack of Hyperparameter Analysis:**\n The paper does not provide a detailed analysis of the hyperparameters involved in the proposed method, such as the number of clusters for confounders or the parameters of the cross-attention module. A sensitivity analysis of these hyperparameters would be beneficial to understand their impact on model performance and to guide practitioners in tuning the model effectively.\n\n2. **Insufficient Discussion on Method Limitations:**\n The paper lacks a thorough discussion on the limitations of the proposed method. It would be valuable to include an analysis of scenarios where the method might not perform well, such as when the selection of confounders is inaccurate or when the causal relationships between labels are weak. Addressing these limitations can provide a more balanced view of the method's applicability and robustness.\n\n3. **Limited Ablation Studies:**\n Although the paper includes some ablation studies, the number and depth of these experiments are not comprehensive enough. More detailed ablation studies are needed to analyze the independent contribution of each module (e.g., feature decoupling, confounder modeling, and causal intervention) to the overall performance. This would help in understanding the importance and effectiveness of each component of the proposed method.\n\n1. I noticed another paper titled \"Counterfactual Reasoning for Multi-Label Image Classification via Patching-Based Training\" that also employs causal inference to address multi-label image classification. The methods in these papers differ in implementation and theoretical basis. Could you further elaborate on the main differences and advantages of your approach compared to this work?\n2. Your paper does not provide a detailed analysis of the hyperparameters involved in the proposed method. Could you explain the rationale behind the chosen hyperparameters and their impact on the model's performance?\n3. There is a lack of discussion on the limitations of your proposed method. 
In what scenarios might your method underperform, and how could future work address these limitations?\n4. Could you explain why certain confounders were selected for modeling in your approach? How does the choice of confounders impact the effectiveness of causal intervention in your model?\n5. How does your method handle cases where the causal relationships between labels are weak or not well-defined? Does this affect the model's accuracy, and if so, how?\n6. How does your approach ensure robustness against noise and variability in the data? Are there any specific strategies employed to handle noisy or incomplete labels?\n7. Could you provide more details on how the feature decoupling using the Transformer decoder specifically contributes to reducing contextual biases in multi-label image recognition?"
},
{
"confidence": 4,
"rating": 4,
"review_id": "xLjHlqwVai",
"review_text": "This paper proposes a causal intervention mechanism for multi-label image classification, where causal label correlations are pursued and spurious label correlations are suppressed. To achieve this, the authors frame a pipeline consisting of a branch for decoupling label-specific features and a branch for summarizing causal label correlations. The results from both branches are combined for final predictions on image labels. Comparative experiments and ablation studies demonstrate the effectiveness of the proposed causal intervention mechanism.\n\n- The paper is generally well written with clear motivation and objectives.\n- Causal intervention is technically novel and well motivated in terms of multi-label image classification. \n- Experimental results are impressive, outperforming sub-optimal methods by a considerable margin. Ablation studies are aslo well designed to showcase the contribution.\n\n- Line 175: 'Correaltions' -> 'Correlations'.\n- Line 237: 'Transformer encoder' -> 'Transformer decoder'.\n- $f_{fc}$ in Eq.6 and $f_{fc}$ in Eq.11 should be different if their parameters are not shared.\n- In Figure 4, in the causal label correlation branch, the confounder features are added into label-specific features. However, the outputs are not seen to be used in subsequent steps, and it seems that only the label-specific features are utilized for causal intervention. The diagram of this module needs to be improved.\n- More experimental evidence should be provided to verify the effectiveness of the confounder modeling. For example: using random vectors to replace cluster centers as confounders. Only the feature visualization and ablation study on clustering center number are unconvincing.\n- Although this paper is well motivated, the modeling process, especially Equation 11, is confusing.\n\n- How is the operation $f_{merge}$ removed from the second line of Eq.11? Why is the summation over all confounders $c$ also removed? Even if it can be removed, which confounder does $c$ in the last line of the formula refer to? The Eq.11 is confusing and needs further clarification. It would be best if the authors add a pseudocode to illustrate this process.z\n- Are the results in Table 1 and Table 2 reported by re-training these models on the relevant datasets? If so, the authors should clarify the experimental details for fair comparison.\n- According to the Table 8, in terms of the intra-dataset comparisons on MS-COCO, Q2L achieves the same performance in mAP as the proposed method. However, Q2L only requires a Transformer decoder for decoupling label-specific features. Therefore, we question the generalizability of the proposed causal intervention mechanism on multi-label image classification task, wondering whether it is only effective on specific datasets."
}
] | |
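The reviewers' questions above about $P(c)$ and the k-means confounders all point at the backdoor adjustment that underlies causal intervention. In its standard form, with the $K$ cluster centers $c_k$ acting as the confounder set, it is typically approximated as follows (the uniform prior $P(c_k)=1/K$ is a common assumption in this line of work, not necessarily the paper's exact choice):

```latex
P\big(Y \mid do(X)\big)
\;=\; \sum_{c} P\big(Y \mid X, c\big)\, P(c)
\;\approx\; \frac{1}{K} \sum_{k=1}^{K} P\big(Y \mid X, c_k\big)
```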
yAa5l92TtQ | Proving Theorems Recursively | Recent advances in automated theorem proving leverage language models to explore expanded search spaces by step-by-step proof generation. However, such approaches are usually based on short-sighted heuristics (e.g., log probability or value function scores) that potentially lead to suboptimal or even distracting subgoals, preventing us from finding longer proofs. To address this challenge, we propose POETRY (PrOvE Theorems RecursivelY), which proves theorems in a recursive, level-by-level manner in the Isabelle theorem prover. Unlike previous step-by-step methods, POETRY searches for a verifiable sketch of the proof at each level and focuses on solving the current level's theorem or conjecture. Detailed proofs of intermediate conjectures within the sketch are temporarily replaced by a placeholder tactic called sorry, deferring their proofs to subsequent levels. This approach allows the theorem to be tackled incrementally by outlining the overall theorem at the first level and then solving the intermediate conjectures at deeper levels. Experiments are conducted on the miniF2F and PISA datasets, and significant performance gains are observed in our POETRY approach over state-of-the-art methods. POETRY on miniF2F achieves an average proving success rate improvement of 5.1%. Moreover, we observe a substantial increase in the maximum proof length found by POETRY, from 10 to 26. | https://openreview.net/pdf/c496858b5797ffde1be425dcc94d5a7221a5dfb9.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "NQNlvZd6T6",
"review_text": "This paper designs a novel hierarchical search algorithm (POETRY) for generating formal proofs with large language models step-by-step. In particular, POETRY will first search for proof steps with proof level 0 (these steps typically correspond to subgoals in the proof), and check the correctness of the level 0 proofs by assuming that all the subgoals can be proved. If and only if the level 0 proofs are correct, POETRY will recursively search for proofs to each of the proposed subgoals. Compared with the baseline best-first search methods with the same compute, POETRY significantly improves the pass@1 succ rate on both miniF2F valid and test set, as well as the PISA test set.\n\n-\tThe POETRY algorithm is neat and novel by mimicking how human write mathematical proofs hierarchically. \n-\tThis paper is well written and easy to follow.\n-\tThe POETRY algorithm has potentials to be further improved by incorporating premise-selection techniques such as sledgehammer or Magnushammer.\n\n-\tFrom Table 1, it seems to me that the improvement from the search algorithm is less significant than the beam search. A drawback of the beam search method is that the algorithm becomes deterministic, meaning that generating more samples per theorem does not improve its performance. Since this paper only shows pass@1 results, it is unclear how the POETRY algorithm scales with more computing resources.\n\n-\tIn the example shown in Figure 4, it seems to me that Path 1 is quite similar to the first part of the proof found by POETRY. Can the authors elaborate on why is the GPT-f baseline tries to utilize a more complex way to prove the first property?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "V0azUuAuyE",
"review_text": "This paper proposes POETRY, a method for formal theorem proving using language models by training the model to iteratively decompose the problem into sketches, recursively. The authors focus on Isabelle. At each step, POETRY takes a proof state and goal and predicts either a formal sketch (a proof using sorry at each step), or a ground proof step (e.g. 'by ...') that requires no recursion. These intermediate states are visited within best-first search, where the score of a node is given by the log-probability of all predictions made so far to get to that node. Intuitively, POETRY works by recursively generating lower level steps / sketches, until finding a complete proof, getting feedback from the formal environment at each step. To train the LM, the authors introduce a simple method to decompose existing Isabelle proofs from the AFP as if they had been generated by this recursive proof generation process. Experiments on minif2f show improvements on top of GPT-f and a version of Thor without Sledgehammer.\n\nThe paper is well motivated and tackles a timely topic, using a standard, hard benchmark (minif2f) for methods in this space. The writing is mostly clear (though see some notes below).\n\nPOETRY is a simple, sound and novel method to structure the proof generation process. It should be adaptable to other interactive theorem provers with some work. POETRY allows the prover model to get more intermediate feedback from the environment compared to methods that try to produce the whole proof at once. It also uses this feedback in a way that is complementary to proof repair methods (generally what the standard when we consider using intermediate feedback).\n\nThe choice of baselines (for Table 1) seems a bit convoluted. In particular, I don't really understand why use Thor without sledgehammer. The main point of Thor is to learn when to use hammers in proofs. Removing this makes the model much more similar to the GPT-f baseline.\n\nAs for the choice of pass@1, even though POETRY only makes a single prediction at the end, it gets feedback from Isabelle at each node in its tree. So that doesn't seem like a fair comparison either, if POETRY makes many intermediate predictions and calls Isabelle during its search, whereas GPT-f and Thor w/o sledgehammer seem to only produce and test a single prediction. It might be more fair to match the methods based on some other metric, like number of tokens generated, or number of calls to Isabelle (whichever seems to be the most significant bottleneck).\n\nThe question of \"Can POETRY find longer proof?\" is a bit ill posed as is. It would be possible for a method to find very long proofs that do not *need* to be long, and do better on this analysis without really being able to prove more complex theorems. What I think the authors are trying to show here is that POETRY can solve harder problems, estimating hardness by looking at proof length. For this, you might want to compare success rate based on the length of the ground truth proof: perhaps the baselines perform very poorly on theorems where the human proof is longer, whereas POETRY might have a better success rate. Another option is to show that either POETRY generates proofs of similar length to the ground truth proof (so, when POETRY generates long proofs, you'd estimate that the ground truth proof would also be long), or that it generates proofs of similar length to the baselines in cases where they all manage to prove a theorem. 
Any of these would help show that this result is not trivial.\n\n* Is an intermediate sketch valid (accepted by Isabelle) as long as it's syntactically correct and the last step shows the thesis? Or do you manage to get richer feedback besides that the last step declares to show the thesis?\n* For problems that both POETRY and the GPT-f baseline solve, does POETRY tend to generate longer proofs?\n* As for the relationship with LEGO-Prover, you mention that in some cases it is impossible to decompose a proof into lemmas, but still possible to decompose it into sketches recursively. Do you have an example?\n* What exactly is the search algorithm used in the Thor w/o sledgehammer and GPT-f baselines? It is a one-shot prediction? Or do you use the (also best-first) search method described in the original GPT-f paper?\n* What is Thor without sledgehammer? It sounds like a different thing other than Thor.\n* I'm confused by what Figure 5 is trying to show. Fundamentally, reorganizing a proof into sketches shouldn't change its inherent complexity (e.g., the atomic proof steps). Is this just comparing the full proof length against the number of steps in the top-level sketch (without considering the deeper sketches recursively)? If you were to consider the steps in the deeper sketches recursively, I'm assuming you would not expect to see a reduction (if you do, where would that come from? do you have an example?)"
},
{
"confidence": 4,
"rating": 5,
"review_id": "RX4KMM1rUA",
"review_text": "The authors introduce a method called POETRY (proving theorems recursively) for constructing formal proofs in Isabelle/HOL. POETRY performs best-first search on proof sketches guided by a language model fine-tuned on proof sketches. POETRY outperforms other algorithms guided by language models that prove theorems step-by-step. POETRY also outperforms other methods that integrate automated theorem provers and language models.\n\nWhile the idea of a proof sketch is not novel, the combination of the data curation process to enable the construction of proof sketches is. This takes a step towards generating conjectures which would be crucial to making progress on neural theorem proving.\n\n1. It seems to me that the real reason for the success of POETRY is not the algorithm per say, but the data curation to construct proof sketches. In this vein, it would be instructive to have a before sorry and after having sorry to illustrate how the dataset is constructed. \n2. There should be more context explaining how to compare the Lean results against Isabelle/HOL. These are two different formal systems, with different proof methodologies.\n3. More details on success cases and failure cases would help understanding the pros and cons of the approach taken in POETRY. For instance, are there certain kinds of problems that POETRY performs well on, e.g., geometry problems? How does POETRY perform when existentials need to be instantiated? Is it the case that POETRY can prove the same theorems as previous step-by-step approaches and can additionally prove more theorems that are longer, or do the approaches prove different short theorems?\n\n1. The distinction between a proof sketch and the decomposition of a theorem into a tree of conjectures needs to be addressed. Is there any difference?\n2. In your training procedure, do you fine-tune on any theorems in the miniF2F dataset?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "bV1JfItnZR",
"review_text": "This paper introduces POETRY, a new method to prove theorems recursively. The key ideas are to use a modified best first search algorithm for the search part, and a *sorry* tactic for assumptions at the current level (to be proven later). The authors provide the intuition that this recursive structure allows POETRY to prove theorems in a top-down fashion similar to humans, getting into the details of proving a conjecture only if it is actually relevant to the best overall proof being explored. The authors conduct experiments with two standard benchmarks, showing notable improvements over baselines and SOTA search-based methods (but not LEGO-Prover etc. which rely on substantially larger general purpose LLMs).\n\nThe paper is very well structured and clearly written. Intuitions, method details, connection to existing methods, limitations, and take away messages from experiments are all very well articulated.\n\nThe idea seems simple but is apparently novel (see 'weaknesses' below, related to this). \n\nThe gains over several baselines are notable, of 5% or more (absolute).\n\nI am assuming the authors will publicly release the code of their POETRY system for further research on this topic.\n\nNot being very familiar with the area, I am surprised none of the existing SOTA methods use a similar recursive, top-down search of in theorem proving. I will have to defer to other, more knowledgeable reviewers for assessing novelty of the present work.\n\nI did not fully follow why a *novel* recursive best-first search strategy is needed here. The description of this section (3.2) can probably use some clarification. E.g., why could one not account for the conjecture's level in the utility of the conjecture, and thus implicity enforce level-by-level proof search? On the same note, could the authors comment on the relationship between their proposed recursive best-first search and a combination of standard breadth-first search (i.e., staying within a level) and best-first search (i.e., preferring to explore the most promising node first)?\n\nJust for completeness, it would have been good to know how well very large LLM based methods, such as LEGO-Prover, do on the considered benchmarks.\n\nPlease see weaknesses section above."
}
] | |
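To make the recursive, level-by-level proving that these reviews discuss concrete, here is a minimal Python sketch. `propose_sketches` and `check_sketch` are hypothetical stand-ins for the language model and the Isabelle interface; this illustrates the idea only, not POETRY's actual implementation, which uses best-first search scored by cumulative log-probability rather than the plain depth-first recursion shown here.

```python
def fill_in(sketch, subproofs):
    """Splice sub-proofs into the sketch in place of its `sorry` placeholders."""
    for proof in subproofs:
        sketch = sketch.replace("sorry", proof, 1)
    return sketch

def prove_recursively(goal, propose_sketches, check_sketch, depth=0, max_depth=8):
    """Find a verifiable sketch for the current goal, then recursively prove the
    conjectures that its `sorry` steps deferred to the next level."""
    if depth > max_depth:
        return None
    for sketch in propose_sketches(goal):          # LM candidates, best-scored first
        ok, subgoals = check_sketch(goal, sketch)  # accepted if Isabelle verifies it,
        if not ok:                                 # assuming each `sorry` closes its subgoal
            continue
        subproofs = []
        for g in subgoals:
            p = prove_recursively(g, propose_sketches, check_sketch, depth + 1, max_depth)
            if p is None:
                break                              # a deferred conjecture failed; try next sketch
            subproofs.append(p)
        else:                                      # all subgoals proved
            return fill_in(sketch, subproofs)
    return None
```

The depth-first structure makes visible the point several reviewers raise: the prover gets environment feedback at every level (each `check_sketch` call), rather than only on a single end-to-end proof attempt.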
yAKuSbIwR7 | Neural Synaptic Balance | For a given additive cost function $R$ (regularizer), a neuron is said to be in balance if the total cost of its input weights is equal to the total cost of its output weights. The basic example is provided by feedforward layered networks of ReLU units trained with $L_2$ regularizers, which exhibit balance after proper training. We develop a general theory that extends this phenomenon in three broad directions in terms of: (1) activation functions; (2) regularizers, including all $L_p$ ($p>0$) regularizers; and (3) architectures (non-layered, recurrent, convolutional, mixed activations). Gradient descent on the error function alone does not converge in general to a balanced state where every neuron is in balance, even when starting from a balanced state. However, gradient descent on the regularized error function must converge to a balanced state, and thus network balance can be used to assess learning progress. The theory is based on two local neuronal operations: scaling which is commutative, and balancing which is not commutative. Finally, and most importantly, given any initial set of weights, when local balancing operations are applied to each neuron in a stochastic manner, global order always emerges through the convergence of the stochastic algorithm to the same unique set of balanced weights. The reason for this convergence is the existence of an underlying strictly convex optimization problem where the relevant variables are constrained to a linear, only architecture-dependent, manifold. The theory is corroborated through simulations carried out on benchmark data sets. Balancing operations are entirely local and thus physically plausible in biological and neuromorphic networks. | https://openreview.net/pdf/90c70c96f1ab237e4f7621358dea12b51901014d.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "XGeMf26AaS",
"review_text": "This paper aims to study and explain the phenomenon of neural synaptic balance, where a balanced neuron means that the total norm of its input weights is equal to the total norm of its output weights. Particularly, the authors study the reasons why and when randomly initialized balanced models (so, models whose neurons are balanced) tend to be balanced at the end of training as well. The study takes into account many different components of neural networks (activations, layer kinds, regularisers).\n\nThe study is very comprehensive, and sheds light on some interesting properties of deep neural networks.\n\nWhile it is true that, as the authors state in the conclusion, neural synaptic balance is a theory that is interesting on its own, I would encourage the authors to expand the discussion on possible application domains of this theory. Why is it interesting? What are the advantages that a complete understanding of such phenomenons could bring to the table?\n\nBackpropagation is not biologically plausible, and hence does it really make sense to state that the methods proposed by the authors are, if they are then applied to backdrop-based models? I would suggest to either remove such a discussion, or to expand on it, showing even empirically on small models, that the results extend to different kinds of neural networks, where both neural activities and synapses are updated locally in a bio-plausible way (PC). A third way of addressing this would be to add a discussion on it, and avoid to do the experiments."
},
{
"confidence": 5,
"rating": 3,
"review_id": "RiCMcfyeLf",
"review_text": "The authors present a theory of neural synaptic balance, defined as the condition in which a total loss achieves the same value for the input weights to a neuron and its output weights. This is different from the well studied E/I balance in neuroscience and machine learning literature. The authors show mathematical derivations of how to balance a neuron without affecting the outcome of the network and show that balancing a network is a convex optimization process.\n\nThe paper is overall clear and detailed, the mathematical proofs are sound and the paper structured well moving from straightforward claims to less trivial points.\n\nThe paper is about neural synaptic balance, but the authors do not provide convincing motivation why we should care about such balancing. As they mentioned, adding a simple L2 regularizer will balance the network naturally (in a distribution sense, not necessarily each neuron individually) during training and have other well-known benefits, so the elaborate mathematical derivations on the general balancing process seem redundant. In addition, in the authors' own plots, unbalanced networks sometimes outperform the balanced networks (e.g., fig 3E), which just emphasizes the point. One of the mentioned motivations is biological neurons, but they claim that biological neural data about synapses do not exist. However, they could test their hypothesis against the currently available connectomes e.g., from or the Drosophila fly brain. They mention spiking networks, but the notion of input-output homogeneity is unclear in spiking networks. Finally, physical neurons' energy consumption is mentioned without details.\n\nWhy is the energy consumption of physical neurons lower when they are balanced? Why not just have a regularizer to keep the overall activation low and weights small? Why does each neuron need to be balanced separately?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "YAKBodWSHa",
"review_text": "This paper provides a thorough characterization of regularizers which lead to synaptic balance (when the \"cost\" of input weights to a neuron or pool of neurons is tied to the cost of output weights) in trained neural networks. Their results apply to many different activation functions and architectures.\n\nThe paper is very well-written and easy to follow. I was able to read everything, including the math, smoothly. The mathematical arguments themselves are crisp and correct, which I really appreciated.\n\nThe paper is strongly lacking in motivation. I never really understood *why* I should care about synaptic balance. Also, it is clear from the numerical experiments that synaptic balance only emerges in networks when it is enforced via a regularizer (expect in the case of infinitely small learning rate), but why is this surprising? It seems obvious that adding a regularizer for some property tends to result in that property. It would be shocking if synaptic balance occurred without some regularization towards the property. Thus, while the \"what\" and \"how\" of the paper are nicely addressed, I feel the paper is missing the \"why\". I believe if the authors could address this from the outset, it would make the paper much stronger, and I would of course be willing to increase my score.\n\n-It is claimed throughout the paper that \"network balance can be used to assess learning\" progress. I do not really understand how. If my total loss $\\mathcal{E}$ is the sum of a task loss $E$ and a regularizer $R$, then there is nothing preventing a situation where I get $E = 0$ and $\\mathcal{E},R > 0$, meaning that task loss is decoupled from the network balance loss. If the authors could clarify this point, that would be great. \n\nSmall typos:\n- Line 128: alpha is not rendered in latex\n- Figure 4 caption, subplot (D-F) \"CFAR10\" -> \"CIFAR10\""
},
{
"confidence": 3,
"rating": 5,
"review_id": "uRKIhWstnE",
"review_text": "The authors provide a theoretical approach to the analysis of balanced neurons and networks. Their theoretical work includes proof of the convergence of stochastic balancing. In addition, they investigate the effect of different regularizers and learning rates on balance, training loss, and network weights, including practical simulations for two classification problems.\n\nThe paper tries to reveal the inner structure of neural networks during the training phase. This is a very important but difficult problem; its solution could provide new insights for developing better training algorithms. The work proposed can ultimately be an important step toward more transparent networks as opposed to their current black box character.\n\nThe paper has some weaknesses, most notably how the material is presented and part of the evaluation.\n\nTheorem 5.1, dealing with the convergence of stochastic balancing, is arguably the central piece of the paper. However, its formulation is bulky and should be reduced to a shorter, more manageable size, potentially with the help of lemmata. This becomes apparent when seeing that its proof contains the proof of another proposition.\n\nIn Figure 4, the authors say that these panels are not meant for assessing the quality of learning. However, measuring not only the training loss but also the accuracy on a test set will give important insights. How does the classification performance relate to the degree of balancing? Why did the authors not include this analysis? It could give important insights into the relationships between overtraining, generalization capability, balance, and accuracy.\n\nThe author should discuss the consequences of their work on network training. They do not discuss the immediate practical consequences or any recommendations they can make based on their results.\n\nIt would help the paper's clarity if the authors answered their own questions in a brief summary at the end of the paper, as concise as possible:\n\nWhy does balance occur? Does it occur only with ReLU neurons? Does it occur only with L2 regularizers? Does it occur only in fully connected feedforward architectures? Does it occur only at the end of training? And what happens if we balance neurons at random in a large network?"
}
] | |
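As a concrete illustration of the local operation discussed in this row, here is a small numpy sketch of balancing a single hidden ReLU unit under an L2 cost. It relies only on the positive homogeneity of ReLU (relu(lam*x) = lam*relu(x) for lam > 0), so the network function is unchanged while the input and output costs become equal. This is a toy restatement of the idea for one neuron, not the paper's code or its general stochastic balancing algorithm.

```python
import numpy as np

def balance_relu_neuron(w_in, w_out):
    """Balance one hidden ReLU unit under an L2 cost: scale incoming weights by
    lam and outgoing weights by 1/lam. By positive homogeneity of ReLU this
    leaves the network function unchanged, and lam is chosen so that
    ||lam * w_in||^2 == ||w_out / lam||^2."""
    lam = (np.linalg.norm(w_out) / np.linalg.norm(w_in)) ** 0.5
    return lam * w_in, w_out / lam

rng = np.random.default_rng(0)
w_in, w_out = rng.standard_normal(5), rng.standard_normal(3)
w_in_b, w_out_b = balance_relu_neuron(w_in, w_out)
# input and output L2 costs now agree (both equal ||w_in|| * ||w_out||)
assert np.isclose(np.linalg.norm(w_in_b), np.linalg.norm(w_out_b))
```

Both rescaled norms equal sqrt(||w_in|| * ||w_out||), which also makes clear why repeated local balancing can only redistribute, never change, the product of costs along a path.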
yAAQWBMGiT | Sketchy Moment Matching: Toward Fast and Provable Data Selection for Finetuning | We revisit data selection in a modern context of finetuning from a fundamental perspective. Extending the classical wisdom of variance minimization in low dimensions to high-dimensional finetuning, our generalization analysis unveils the importance of additionally reducing bias induced by low-rank approximation. Inspired by the variance-bias tradeoff in high dimensions from the theory, we introduce Sketchy Moment Matching (SkMM), a scalable data selection scheme with two stages. (i) First, the bias is controlled using gradient sketching that explores the finetuning parameter space for an informative low-dimensional subspace $\mathcal{S}$; (ii) then the variance is reduced over $\mathcal{S}$ via moment matching between the original and selected datasets. Theoretically, we show that gradient sketching is fast and provably accurate: selecting $n$ samples by reducing variance over $\mathcal{S}$ preserves the fast-rate generalization $O(\dim(\mathcal{S})/n)$, independent of the parameter dimension. Empirically, we concretize the variance-bias balance via synthetic experiments and demonstrate the effectiveness of SkMM for finetuning in real vision tasks. | https://openreview.net/pdf/342630203974cf3966cb02c9c856602a6fdba381.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "fAPsAm13fq",
"review_text": "This paper addresses the problem of data selection for finetuning large pre-trained models. The key contributions are:\n\n1. A theoretical analysis of data selection for finetuning that reveals a variance-bias tradeoff in high dimensions.\n2. A provable result showing that gradient sketching can efficiently find a low-dimensional subspace that preserves fast-rate generalization.\n3. A practical two-stage algorithm called Sketchy Moment Matching (SkMM) that uses gradient sketching to explore the parameter space and moment matching to exploit the low-dimensional structure.\n4. Empirical validation on synthetic and real datasets demonstrating the effectiveness of the approach.\n\n1. The paper provides a rigorous generalization analysis for data selection in both low and high-dimensional settings. The proofs are detailed and appear sound.\n2. The proposed SkMM method is simple to implement and scalable to large models/datasets. Experiments on both synthetic and real data demonstrate the effectiveness of the approach.\n\n1. Some of the theoretical results rely on assumptions (e.g., low intrinsic dimensionality) that may not always hold in practice. More discussion of the implications when these assumptions are violated would be valuable.\n2. The method introduces new hyperparameters (e.g., sketching dimension, moment matching strength) without much guidance on how to set them optimally.\n\n1. Does the approach extend naturally to other finetuning scenarios beyond linear probing (e.g., adapters, full finetuning)?\n2. How does the computational cost of SkMM compare to other data selection methods as the dataset/model size increases?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "3e1oer2fvh",
"review_text": "The authors study the task of data selection. They extend the classical variance reduction to the high dimensional case and provide a variance-bias tradeoff analysis. Based on the theoretical results, they propose sketchy moment matching, which first utilizes gradient sketchy to form a low-dimensional space and then uses moment matching to reduce the variance.\n\nThe proposal is a reasonable improvement over the baselines which often only consider bias or variance reduction. The theoretical analysis is also a decent contribution of the paper.\n\nThe experiment focuses on linear probing, which already limits the scope of the evaluation. Furthermore, even under this limited scope, the setting does not seem to be challenging. For the synthetic setup, the sample count is 2000 while the rank is 2500, so it seems not to be a very high-dimension setup (the rank is not so much larger than the sample count). Also, the cluster count seems to be low for both tasks, 8 for synthetic, while the number of class is 10 for Cifar-10.\n\nFor the convenience of the readers, could you list the number of parameter fine-tuned for the Cifar-10 linear-probing task and the number of samples of Cifar-10? (I know this is something that can be looked up online, but these seems to be important numbers to show that the experiment setting is high dimensional)\n\nAlso, could you scale up the synthetic experiment, like increasing sample count/rank/number of clusters? Testing on Cifar-100 is also another way to evaluate the performance on a larger number of clusters."
},
{
"confidence": 3,
"rating": 5,
"review_id": "KqSMi2CHYm",
"review_text": "This paper concerns the data selection problem: given a collection of $N$ embeddings of dimension $r$ for $r\\gg N$, the goal is to pick a subset $S$ of points of size $n$ so that one could run any downstream algorithm on $S$ with a regularization term, so that the empirical risk is small even on the entire finetuning set. Assuming the model is $y=\\phi(X) \\theta_*+z$ where $\\phi: \\mathbb{R}^d\\rightarrow \\mathbb{R}^r$ and $z$ is an i.i.d. noise vector with zero mean and bounded variance, then there exists a subspace that one could project onto and decompose the empirical risk as a bias and a variance term. Further, under the assumption that the second moment matrix has low intrinsic dimension, then one could find a good subspace via gradient sketching: draw a JL matrix $\\Gamma\\in \\mathbb{R}^{r\\times m}$ for $m\\ll r$, then as long as one has $\\Gamma^\\top \\Sigma^{\\phi} \\Gamma \\preceq c_S \\cdot \\Gamma^\\top \\Sigma^{\\phi}_S \\Gamma$, then the error could be decomposed into a bias, variance and a sketching error term. A sketching gradient, moment-matching algorithm is proposed, involves applying sketching to the gradient, form the Jacobian and solve a quadratic relaxation. Experiments are performed on both synthetic datasets and CIFAR10.\n\nThe main theoretical contribution is that for over-parametrized setting where $r\\gg n$, one could provably show the existence of a subspace that one could project onto and perform data selection on that subspace. Moreover, if the second moment in addition has low intrinsic dimension, then one could use standard dimensionality reduction techniques (in $\\ell_2$ norm) to sketch the high-dimensional gradient. In the sketchy moment-matching algorithm proposed in the paper, the authors first sketch the gradient then use uniform sampling to construct $S$.\n\nThe core results of this paper are not technically very novel and surprising, the algorithm could be interpreted as a generalization of the leverage score sampling via JL trick due to Spielman and Srivastava, STOC'08. The analysis largely draws inspirations from the over-parametrization literature, which makes sense as finetuning is essentially training in an over-parametrized setting. Another point that is a bit unsatisfactory is the sketchy moment-matching algorithm utilizes quadratic relaxation to solve the program efficiently with projected gradient descent, but all analysis is based upon *not solving the quadratic programs*. The authors should try to provide some theoretical justifications of sketchy moment-matching, as that's one of the key contributions of this paper.\n\nWhat is the runtime efficiency of your proposed method? It seems the performance is slightly better than ridge leverage score sampling, but ridge leverage score sampling could be implemented in input sparsity time, see the algorithm due to Cohen, Musco and Musco, SODA'17. Their algorithm is based on recursive uniform sampling, so could be implemented efficiently in practice."
},
{
"confidence": 3,
"rating": 6,
"review_id": "njJrk7R99V",
"review_text": "This paper studies the problem of data selection in the over-parametrized fine-tuning regime, i.e. when the number of fine-tuning parameters $r$ is larger than the amount $N$ of available examples. We want to subsample $n\\ll N$ examples that form a representative set to train on, and hopefully achieve quality as close as possible to fine-tuning on the whole set.\n\nThe idea is to compute the gradients $G\\in \\mathbb{R}^{N\\times r}$ of all examples wrt the fine-tuning params and then select a subsample $S\\subseteq [N]$ such that the Gram matrix of the gradients is approximated: $c\\cdot \\Sigma_S := c \\cdot G^\\top I_S G \\approx G^\\top G := \\Sigma$. However, this is not possible to achieve since the model is over-parameterized. Fortunately, if the spectral approximation holds on a low-dimensional subspace of the parameter space, this is good enough, so the authors project the gradients on a random low-dimensional space. The proof goes through under the assumption that the singular values of the gradient matrix are well-concentrated on a small enough (<10%) support.\n\nThe experimental results include fine-tuning on a synthetic linear task, as well as fine-tuning a vision transformer on CIFAR-10 image classification.\n\n- The authors study the data selection for fine-tuning problem from first principles\n- The writing is overall good and math looks sound, even though I didn't check details.\n- The experimental results look promising since SkMM beats a variety of algorithms including leverage scores.\n- The idea of spectral approximation on a subspace of the parameter space is interesting.\n\n- Important details on the experimental setup are missing or unclear. Specifically, what is the optimization process after the data is subsampled? For the image classification experiments, what is being fine-tuned, is it all the ViT parameters? For how many epochs? \n- The algorithm requires computing the gradients of all samples, which can be computationally expensive. Besides, if we are computing all gradients, why can't we just train one epoch on all datapoints? Why is data selection useful in this case?\n- The literature review could be expanded, including relevant papers such as BADGE [1], Coreset-based sensitivity sampling [2].\n- In the experimental results, the authors should also compare with margin sampling (in addition to entropy sampling), as well as uniform sampling for the image classification task.\n- Computing the moment-matching subset in Algorithm 3.1 seems overly complicated, see questions\n\n[1]: Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds\n\n[2]: Data-Efficient Learning via Clustering-Based Sensitivity Sampling: Foundation Models and Beyond\n\nIn Remark 3.2 the authors write that their goal is to achieve the constraint $\\tilde{G}^\\top \\tilde{G} \\preceq c_S \\cdot \\tilde{G}^\\top I_S \\tilde{G}$ (1). They subsequently relax this problem and solve the resulting constrained convex optimization problem using projected gradient descent. However, it seems to me that (1) might be equivalent to $U^\\top I_S U \\succeq c_S^{-1} \\cdot I$, where $U\\Lambda^{1/2} V^\\top$ is the SVD of $\\tilde{G}$. Here $U\\in {N\\times \\bar{r}}$ is a tall and thin matrix. This is a spectral sparsification task which could be solved using leverage score sampling on the rows of $U$. Furthermore, this is the same as sampling examples proportional to the squared $\\ell_2$ norms of the rows of $U$. 
Maybe, I'm missing something, so please correct me if I'm wrong."
}
] | |
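To make the sketched spectral condition, and the leverage-score alternative raised in the last review above, concrete, here is a hedged numpy sketch. The toy sizes, the plain Gaussian JL matrix, and the jitter term are assumptions for illustration, not the paper's setup; `leverage_sample` implements the reviewer's suggestion, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
N, r, m, n = 500, 2000, 32, 100                   # toy sizes (assumed)
G = rng.standard_normal((N, r))                   # per-sample gradients as rows
Gamma = rng.standard_normal((r, m)) / np.sqrt(m)  # JL sketching matrix
Gt = G @ Gamma                                    # sketched gradients, N x m

def spectral_ratio(Gt, S):
    """Smallest c_S with Gamma^T Sigma Gamma <= c_S * Gamma^T Sigma_S Gamma,
    i.e. the top generalized eigenvalue of (Sigma, Sigma_S) in the sketch."""
    Sigma = Gt.T @ Gt / len(Gt)
    Sigma_S = Gt[S].T @ Gt[S] / len(S) + 1e-8 * np.eye(Gt.shape[1])  # jitter for safety
    L = np.linalg.cholesky(Sigma_S)
    Linv = np.linalg.inv(L)
    return np.linalg.eigvalsh(Linv @ Sigma @ Linv.T).max()

def leverage_sample(Gt, n, rng):
    """The reviewer's suggestion: sample rows proportionally to the squared l2
    norms of U's rows from the SVD Gt = U diag(s) V^T (the leverage scores)."""
    U, _, _ = np.linalg.svd(Gt, full_matrices=False)
    p = (U ** 2).sum(axis=1)
    return rng.choice(len(p), size=n, replace=False, p=p / p.sum())

uniform_S = rng.choice(N, size=n, replace=False)
print(spectral_ratio(Gt, uniform_S), spectral_ratio(Gt, leverage_sample(Gt, n, rng)))
```

A smaller `spectral_ratio` means a tighter spectral approximation of the sketched second moment by the subset; comparing uniform and leverage-based subsets this way is one simple empirical check of the reviewer's proposed equivalence.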
y9zIRxshzj | Causal Discovery from Event Sequences by Local Cause-Effect Attribution | Sequences of events, such as crashes in the stock market or outages in a network, contain strong temporal dependencies, whose understanding is crucial to react to and influence future events. In this paper, we study the problem of discovering the underlying causal structure from event sequences. To this end, we introduce a new causal model, where individual events of the cause trigger events of the effect with dynamic delays. We show that in contrast to existing methods based on Granger causality, our model is identifiable for both instant and delayed effects. We base our approach on the Algorithmic Markov Condition, by which we identify the true causal network as the one that minimizes the Kolmogorov complexity. As the Kolmogorov complexity is not computable, we instantiate our model using Minimum Description Length and show that the resulting score identifies the causal direction. To discover causal graphs, we introduce the Cascade algorithm, which adds edges in topological order. Extensive evaluation shows that Cascade outperforms existing methods in settings with instantaneous effects, noise, and multiple colliders, and discovers insightful causal graphs on real-world data. | https://openreview.net/pdf/58aefbdcb2bfb0c32b39ede7f68c3577797ad7e1.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "ylNPJNriQ8",
"review_text": "This paper introduces a new causal model in which individual events of the cause variable trigger events of the effect variable with dynamic delays. The authors propose a cause-effect matching approach to learn a fully directed acyclic graph, named the CASCADE algorithm. The algorithm performs a topological search on observational data.\n\nThis paper presents a comprehensive theory and algorithm, and conducts extensive experiments, particularly with real data, to validate the effectiveness of the proposed method. The analysis and the algorithm are presented in a logical way.\n\nThe proposed method is the direct matching between a cause event and an effect event, which precludes modeling a single event causing multiple other events, as well as multiple events jointly causing a single effect event. This limits the applicability of the algorithm.\n\n**1.** \nAssuming ``an individual event ... causes an individual event'' in Line 64, and in Line 88: Equation 2.\nDo these indicate that an effect event can only be caused by a single cause event? If so, the use of $pa(*)$ later in the manuscript is confusing. Can an effect event be caused by more than one event?\n\nIn the results of real experiments, there is an event that is caused by more than one event, which contradicts the assumption. How are the results obtained?\n\n\n**2.**\nAre timestamps in events erased during actual use?\nAs far as I can see, the algorithm proposed in the manuscript does not use timestamps."
},
{
"confidence": 3,
"rating": 5,
"review_id": "V2Gyb0GEFU",
"review_text": "The article employs the Algorithmic Markov Condition alongside Kolmogorov\ncomplexity for causal discovery from event sequences. It focuses on a specific scenario in\nwhich the sequence of events is divided into source and effect variables. The principal\ncontribution of this study is its innovative application of Pearl's causality model with\ncombination of AMC method, in contrast to the traditional Granger causality approach,\nenabling the identification of both instantaneous and delayed effects.\n\n1. Originality: The author employs Pearl's model of causality, diverging from\ntraditional Granger causality, to innovatively incorporate instantaneous effects\ninto the analysis of sequential events for causal relationship discovery.\n2. Quality: The article is with good quality and honest about its strength and\nlimitation on their work.\n3. Clarity: The article presents its algorithm with well-defined logic and\nsubstantiated proofs.\n4. Significance: The article offers an innovative approach to integrating\ninstantaneous effects into the causal discovery of sequential events, proposing a\npotential method to enhance causal discovery techniques under such conditions.\nHowever, it imposes strict limitations on the scenarios involving event sequences.\n\n1. Significance: As mentioned in the limitation section by the author, strict\nassumptions like direct matching between a cause event and an effect event leads\nto challenges and possible violations in practical application, and it lacks\nflexibility.\n2. Section 3.3, which discusses the connection to Hawkes Processes, might be better\nplaced in an appendix or in a section dedicated to comparing different\nmethodologies. Its current placement in the theoretical part of the paper is\nsomewhat abrupt, especially since there is no direct focus on these processes in\nyour model.\n3. The experimentation section lacks depth. It would be beneficial to evaluate and\nreport on the robustness of your model when its assumptions are challenged\nduring real-world applications.\n\nN/A"
},
{
"confidence": 4,
"rating": 6,
"review_id": "FqUTUFTqWi",
"review_text": "In their work, the authors are concerned with recovering causal relations, where cause and corresponding effects occur in varying temporal distances. The authors leverage information theoretic formulations and properties of the algorithmic Markov condition to recover the causal graph via minimum description length principled. To this end, the authors present the 'CASCADE' algorithm, which recovers the topological ordering of the causal structure and proof identifiability results. The algorithm is evaluated on multiple synthetic data setups to examine the algorithm's performance under different varying noise, event type, and collider settings. Lastly, the algorithm is tested on a banking and daily activity data set to demonstrate robust performance on real-world data.\n\nThe paper is well-written and introduces the problem setup and formalisms intuitively. The authors consider the challenging problem of modeling causal event sequences. The information-theoretic treatise and causal modeling of the event-generating process via minimum description length encodings are well described and follow common notation from related work. While I am not an expert on the topic of time series event causality, relevant related work seems to be sufficiently discussed and compared to.\n\nThe overall intuition on all proofs is well described. To the best of my knowledge, proofs of theorems 1, 3 and 4 seem to be correct. (Please see minor comments on Thm. 2 below). The presented CASCADE algorithm seems to be sound and its robustness is evaluated via multiple real-world and synthetic experiments, varying the noise and number of event types.\n\nWhile the authors present strong theoretical identifiability results, these guarantees are tied to a restrictive set of assumptions (faithfulness, sufficiency, low noise) and hold only for a specific type of event process (single excitation, no suppressing effects). While the authors state all assumptions explicitly, the paper could be improved by discussing the possible implications and reasonability of real-world applications.\n\n\nProof of Theorem 2 (Sec. A.2; second line of l. 496): As all other terms seem to be taken over from the line above, it is unclear to me where the canceled term on the left side of the inequality is coming from. (Since all terms are positive, I believe the transformation to be still correct.) Furthermore, it is not obvious to me how the equation following l.497 and the noise ratio of $n_{i,j}/n_j$ leads to the desired result. The paper could be improved by providing a more detailed explanation of this step.\n\n\nThe experiments seem to demonstrate consistently better results compared to related algorithms. However, from the experimental description in B.1, it seems that the experiment on the especially challenging identification of colliders --due to unclear parent assignment-- only considers a setting with a single collider. The authors might want to demonstrate algorithm performance for settings where multiple colliers exist, to better examine the algorithm's robustness regarding unclear EM assignments.\n\n\nMinor:\n* It would be helpful to mention the definition of H() in Sec. A.1 as the entropy, which is only mentioned afterward in A.2.\n* Typos in the Proof of Thm. 2 (sec. A.2 l.490): \"dealys\", \"ofset\"; and the Conclusion (l.340) \"discovers\" -> \"discover\".\n* In Sec. 4.1 l.201 text and formula disagree on the complexity: \"[...] 
leading to an overall quadratic complexity $O(p^3)$\".\n\nMy questions mainly concern the weaknesses mentioned above. I would kindly like to ask the authors to comment on the following:\n\n1) How realistic are the assumptions made in the paper (e.g., low noise in real-world settings)? How would one test for them to hold true? How robust would the algorithm be in the presence of other event types, such as suppressing events or multi-effect events?\n\n2) Proof Thm. 2: Could the authors provide further details regarding the proof of theorem 2 - in detail, the derivation of the final step?\n\n3) Regarding my comments above, could the authors give further insights on the algorithm's performance with an increased number of colliders?\n\n4) Figures. 4, 6 and 8 seem to feature few colliders. This seems unreasonable to me, especially for the global banking data set, which I assume to be highly interconnected (possibly violating the DAG assumptions). Could the authors comment on this possible bias? Is it a result of the assumptions made, and how could it be reduced?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "1EduMfmguG",
"review_text": "The paper introduces a method for identifying causal relationships in event sequences. The authors presents a causal model that handles both instantaneous and delayed effects, contrasting it with existing methods like Granger causality. This algorithm is evaluated on both synthetic and real-world datasets.\n\n1. The theoretical foundation based on the AMC and MDL principle is provided.\n\n2. The proposed CASCADE algorithm is evaluated through extensive experiments.\n\n3. The paper is well-organized, with clear explanations of the proposed model, theoretical underpinnings, and algorithmic steps. The use of illustrative examples and detailed proofs enhances understanding.\n\n1. The paper acknowledges assumptions such as the direct matching between cause and effect events and the focus on excitatory effects. However, it could provide more discussion on the impact of these assumptions and potential ways to address them.\n\n2. Scalability and computational complexity: The paper demonstrates the algorithm's performance on datasets with a moderate number of variables and events. An evaluation of its scalability to very large datasets, which are common in real-world applications, is less emphasized. The computational complexity of the algorithm, particularly for large datasets with many event types, is a concern. The quadratic complexity in the number of event types may limit its applicability to very large-scale problems.\n\n3. Parameter sensitivity is not provided: How sensitive is the CASCADE algorithm to the choice of parameters for the delay distribution and cause probability?\n\n1. How sensitive is the CASCADE algorithm to the choice of parameters for the delay distribution and cause probability?\n\n2. What are the practical limits of the CASCADE algorithm in terms of the number of event types and the size of the datasets?\n\n3. How does the algorithm handle high levels of noise in the data, and are there specific noise thresholds beyond which performance degrades significantly?"
}
] | |
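For readers unfamiliar with MDL-based edge scoring, here is a toy two-part-code comparison in the spirit of the approach described in this row. The nearest-preceding-event matching, the exponential delay code, and the BIC-style parameter cost are all simplifying assumptions of this sketch; it is not the actual Cascade score.

```python
import numpy as np

def nll_exponential(x):
    """Codelength (in nats) of positive values under an exponential distribution
    with MLE rate, plus a BIC-style 0.5*log(n) cost for the one parameter."""
    x = np.asarray(x, dtype=float)
    rate = 1.0 / x.mean()
    return -np.sum(np.log(rate) - rate * x) + 0.5 * np.log(len(x))

def mdl_prefers_edge(a_times, b_times):
    """Toy MDL test for A -> B: is it cheaper to encode B's events as delays
    after their nearest preceding A event than as an independent renewal process?"""
    a = np.sort(np.asarray(a_times, dtype=float))
    delays = []
    for b in np.sort(np.asarray(b_times, dtype=float)):
        prev = a[a <= b]
        if len(prev) == 0:
            return False                      # an effect event with no preceding cause
        delays.append(b - prev[-1] + 1e-9)
    cost_edge = nll_exponential(delays)       # code B given A's events
    gaps = np.diff(np.sort(b_times)) + 1e-9
    cost_indep = nll_exponential(gaps)        # code B on its own
    return cost_edge < cost_indep

rng = np.random.default_rng(0)
a = np.cumsum(rng.exponential(1.0, 200))      # cause events
b = a + rng.exponential(0.1, 200)             # effects shortly after each cause
print(mdl_prefers_edge(a, b))                 # expected: True on this toy data
```

On the toy data the delays are far more regular than B's own inter-arrival gaps, so the causal code is shorter; in the reverse direction the "delays" look like ordinary gaps, which is the intuition behind identifying direction from code lengths.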
y9sHKrdnRt | MC-DiT: Contextual Enhancement via Clean-to-Clean Reconstruction for Masked Diffusion Models | Diffusion Transformer (DiT) is emerging as a cutting-edge trend in the landscape of generative diffusion models for image generation. Recently, masked-reconstruction strategies have been considered to improve the efficiency and semantic consistency in training DiT but suffer from deficiency in contextual information extraction. In this paper, we provide a new insight to reveal that noisy-to-noisy masked-reconstruction harms sufficient utilization of contextual information. We further demonstrate the insight with theoretical analysis and empirical study on the mutual information between unmasked and masked patches. Guided by such insight, we propose a novel training paradigm named MC-DiT for fully learning contextual information via diffusion denoising at different noise variances with clean-to-clean mask-reconstruction. Moreover, to avoid model collapse, we design two complementary branches of DiT decoders for enhancing the use of noisy patches and mitigating excessive reliance on clean patches in reconstruction. Extensive experimental results on 256$\times$256 and 512$\times$512 image generation on the ImageNet dataset demonstrate that the proposed MC-DiT achieves state-of-the-art performance in unconditional and conditional image generation with enhanced convergence speed. | https://openreview.net/pdf/44798d431529adc7582ec95a03e0b069dec11d02.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "DW3loLK2qa",
"review_text": "The paper introduces MC-DiT, a training paradigm for Diffusion Transformers (DiT) in the field of generative diffusion models for image generation. By utilizing the proposed clean-to-clean mask-reconstruction approach, the model can better leverage contextual information at different noise variances.\n\n- The paper provides a perspective on the limitations of noisy-to-noisy masked reconstruction, supported by theoretical insight and empirical analysis.\n- The method is overall reasonable.\n- The performance seems good.\n\n- Will the additional two branches of DiT decoders increase the training overhead compared with other baseline methods? How about the training cost of each iteration compared with baselines?\n- Comparing with MDT-XL / 2-, the improvements of MC-DiT-XL / 2-G seem to be marginal. \n- How is the natural information measured in Fig. 1?\n- Will the code be released?\n\nSee the weakness part.\nBesides, why is the IS of MC-DiT-XL / 2 much higher than other competitors?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "2Ozcl8hIri",
"review_text": "This paper observes that reconstructing masked noisy patches from unmasked noisy patches harms contextual information extraction during the training of DiT and then proposes a novel training paradigm named MC-DiT with clean-to-clean mask-reconstruction. Two EMA branches of DiT decoders are designed to avoid model collapse.\n\n1. The manuscript adequately puts forth a number of propositions and commendably supports these with ample evidence and rigorous demonstrations, fostering a robust intellectual foundation for their arguments.\n2. The authors' perspective on applying a noisy-to-noisy mask reconstruction approach is convincingly articulated.\n\n1. The presentation of generated images for visualization is rather limited in quantity, necessitating an expansion to adequately illustrate the diversity and quality of the results. It is suggested to present generated results with the resolution of $512\\times 512$. This paper only provides visual results in Figure 5 with the resolution of $256\\times 256$ and it also claims superiority on $512\\times 512$ image generation.\n2. Lack of experiment details about training time, inference time and memory usage.\n\n1. Please clarify the reason why two extra EMA branches can address model collapse.\n2. It is suggested to provide visual comparisons compared with other SOTA methods.\n3. Investigating the impact of classifier-free guidance is recommended since it can improve the performance of many baselines such as ADM, DiT, and MaskDiT."
},
{
"confidence": 3,
"rating": 5,
"review_id": "n9PI8VtGXF",
"review_text": "This paper introduces MC-DiT, a novel training paradigm for Diffusion Transformers (DiT) in image generation. It addresses the limitations of current masked-reconstruction strategies, which fail to effectively extract **contextual information** due to noisy-to-noisy reconstruction. MC-DiT employs clean-to-clean reconstruction, allowing for better contextual information utilization during diffusion denoising. The authors also design dual decoder branches to prevent model collapse. Theoretical and empirical analyses validate their approach, and experiments on the ImageNet dataset show that MC-DiT achieves state-of-the-art performance in both unconditional and conditional image generation tasks.\n\n1.The introduction of the MC-DiT paradigm, which utilizes clean-to-clean mask-reconstruction, represents a novel approach that addresses the limitations of existing methods in extracting contextual information.\n\n2.The authors provide a thorough theoretical and empirical analysis, particularly focusing on mutual information, which strengthens the validity of their claims.\n\n3. The proposed MC-DiT achieves superior results in both unconditional and conditional image generation tasks, as demonstrated by the state-of-the-art FID scores on the ImageNet dataset.\n\n1. The paper primarily focuses on image generation using the ImageNet dataset. It remains to be seen how well the approach generalizes to other domains or datasets with different characteristics.\n\n2. The authors should clearly elaborate on the differences between MC-DiT and other masked diffusion transformers (such as MaskGiT,SD-DiT, and MaskDiT).\n\nThe proposed method may still require large computational resources due to the dual-branch decoder design and the clean-to-clean reconstruction process. How to accelerate it?"
},
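The reviews in this row lean heavily on the claim that added noise reduces the mutual information between patches. That qualitative direction can be checked in closed form for a toy scalar Gaussian pair; this is an illustration of the trend only (my assumption of unit-variance jointly Gaussian signals), not the paper's patch-level quantity.

```python
import numpy as np

# For unit-variance jointly Gaussian (x, y) with correlation rho, adding
# independent N(0, sigma^2) noise to each gives corr = rho / (1 + sigma^2),
# and the mutual information of a bivariate Gaussian is -0.5 * ln(1 - corr^2).
rho = 0.8                                 # correlation between clean x and y
for sigma in [0.0, 0.5, 1.0, 2.0]:        # noise std at increasing "timesteps"
    rho_t = rho / (1.0 + sigma ** 2)
    mi = -0.5 * np.log(1.0 - rho_t ** 2)  # in nats; strictly decreasing in sigma
    print(f"sigma={sigma:.1f}  I(x_t; y_t)={mi:.3f} nats")
```

The monotone decrease here is the scalar analogue of the argument that noisy-to-noisy reconstruction has less contextual signal to exploit than clean-to-clean reconstruction.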
{
"confidence": 4,
"rating": 5,
"review_id": "ZW0hkp70hm",
"review_text": "The paper introduces a novel training paradigm for Diffusion Transformers (DiT) in the context of generative diffusion models for image generation. The authors propose MC-DiT, which focuses on enhancing contextual information extraction by reconstructing clean unmasked patches from clean masked patches, as opposed to the traditional noisy-to-noisy reconstruction. The method employs two complementary branches of DiT decoders to balance the use of noisy and clean patches, preventing model collapse.\n\n1.\tThe paper presents a new insight into the use of clean-to-clean reconstruction for learning contextual information in masked diffusion models, which is a significant departure from traditional noisy-to-noisy reconstruction methods.\n2.\tThe authors provide a theoretical analysis of mutual information between unmasked and masked patches, demonstrating the limitations of existing methods and the benefits of their proposed approach.\n3.\tThe introduction of two complementary DiT decoder branches to prevent model collapse is a thoughtful addition that addresses a common issue in such models.\n4.\tThe paper reports state-of-the-art results in terms of FID scores and IS scores, indicating that the proposed MC-DiT is highly competitive with existing methods.\n\n1.\tThe proposed MC-DiT model may be more complex than necessary, which could potentially hinder its adoption and implementation in practical applications.\n2.\tThe paper acknowledges that the training and inference speed of MC-DiT needs to be improved, which suggests that the current approach may have efficiency issues. The authors should provide specific comparisons to demonstrate that these efficiency sacrifices are worth the performance gains.\n3.\tThe paper could benefit from a more detailed comparative analysis with other state-of-the-art methods, including feature visualization, to better understand the advantages of MC-DiT.\n4.\tCan the author explain whether this specific context information is pixel-wise information or semantic information? And their role in the overall framework?\n5.\tAblation experiments can be further supplemented and improved. For example, the hyperparameters in Tab.5 can be further observed to have an impact. The current scaling still has some ambiguity.\n\nPlease refer to the Weakness Section."
},
{
"confidence": 3,
"rating": 6,
"review_id": "rGidQtU6K5",
"review_text": "In this work, the authors reveal the issues of Diffusion transformers of having semantic inconsistency as they fail to learn the contextual information. Based on their theoretical analysis, they proposed a novel training paradigm to fully learn contextual information with clean-to-clean mask reconstruction. The paper is well organised and written.\n\nThe authors have a comprehensive understanding of issues and the state-of-the-art. In terms of originality and quality, the work is technically sound in general. The analysis and written are clear in general.\n\nPlease see the list of questions for improvement and clarification on some of the aspects.\n\n1. The authors gave a thorough analysis on the issues of diffusion transformers in section 3. However, the motivation for the proposed MC-DiT to solve the issues is not very clear. \n2. The steps mentioned in section 3.3 are not so clear and probably are not cohesive with Figure 2. For instance, it mentioned as ‘the unmasked noisy patches x_t^{1} are fed into the DiT encoder for extraction’, but it seems those unmasked noisy patches go to the DiT decoder (?). I might be better to put the denotations on Figure 2 as well to guide readers.\n3. In Table 1, the results on using classier-free guidance were reported for ImageNet-256x256 generation. However, when it comes to the ImageNet-512x512 generation, they are ignored and not reported. Any particular reason behind?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "Z7zT24HopD",
"review_text": "This paper proposes a training strategy for diffusion transformers that fully learns contextual information by introducing clean to clean mask reconstruction during training, and designs complementary DiT decoder branches as well as corresponding supervisory losses to avoid the problem of model collapse, giving theoretical and experimental validation.\n\n1.\tSufficient theoretical analysis\n2.\tThe overall writing of the paper is logically clear\n\n1.\tThere are errors in the description of parts of the paper, e.g., x1 in lines 107 and 109 of the introductory section of Masked AutoEncoders is described as masked and unmasked, respectively.\n2.\tVisualization of experimental results is indeed missing, and only quantitative experimental results exist in the body of the paper.\n3.\tUsing the training strategy in the paper, although it can improve the results, it is not possible to conclude the size of the contribution of the training strategy to the final experimental results, as parameter tuning is still required in the testing phase.\n\n1.\tWhy not release the qualitative results as proof of the effectiveness of the strategy?\n2.\tHow much does parameter tuning in the final testing phase affect the degree of merit of the final result? How can it be shown that it is the training strategy that is at work and not the parameter tuning?"
},
{
"confidence": 4,
"rating": 7,
"review_id": "XfuFpwgZjQ",
"review_text": "This paper critiques previous masked-reconstruction strategies in DiT training for their poor contextual information extraction, attributing this to noisy-to-noisy reconstruction. The authors theoretically and empirically validate that this approach limits mutual information between unmasked and masked patches. To address this, they propose a new training paradigm, MC-DiT, which uses clean-to-clean mask-reconstruction combined with diffusion denoising at varying noise levels. To prevent model collapse, they design two complementary DiT decoder branches to balance the reliance on noisy and clean patches. Model collapse would happen in this context due to excessive reliance on clean patches for reconstruction, leading to insufficient utilization of noisy patches and imbalanced training. Extensive experiments on the ImageNet dataset show that MC-DiT achieves state-of-the-art performance in both unconditional and conditional image generation, with faster convergence.\n\n- The paper motivates the need for their research in their introduction and is an interesting idea.\n- The paper adds to the mathematical discussion surrounding image generation using diffusion using well understood mutual information metric prevalent in other areas of computer vision.\n- Presents experimental evaluation, with section on reproducible details and supplementary materials.\n\n- While reading the article, there are many questions that arise which effect the reading experience of the article.\n- The main weakness of the paper is that at many occasions claims are made which are intuitive, but they are attributed to be implied from an equation / proposition which do not (at least not immediately) show the claim to be true. Look at questions for more details.\n- Some experiment details are unclear (in questions).\n- Table 1 is a bit difficult to read here with the number of methods and it is not obvious how the horizontal lines are drawn, i.e. what makes them different from other quadrants. I think there is enough space for a column or two to add a bit more detail instead of adding them all to the name of the method.\n- Figure 3 (a) is used to showcase speed of convergence. However, I think the distinction between convergence and a convergence to lower loss should be made. All 3 lines more or less flatten at the same time, you could actually argue the red and orange line are flattening faster. I agree the blue line is lower, but that does not mean it has converged faster, only converged to a lower loss. This also leads to a second point, a lower loss here does not necessarily mean a more performant model. As you notice in your own experiments, you require fine-tuning to make the output desirable. Therefore, I disagree that the model converges faster on the whole.\n\n- Where can I see: Line 38 \"Despite superior performance over vanilla DiT, they are deficient in exploiting contextual information by neglecting different noise scales in different steps of diffusion process.\" Is there a citation which discusses this is important, or this concluded from your Figure 1 and table of results Table 1?\n- It is not obvious from equation 5 and 6 that in Line 163: \"With the growth of $t$, the KL divergence terms in (5) and (6) increase due to larger noise perturbation on $x^1_0$ and $x^2_0$\" should be true. This can be understood intuitively, but there is no \"decay\" term with respect to $t$ in these equations to suggest that. Can this be formalised with respect to strength of the gaussian noise $n$. 
Also, I do realise that due to non negativity of KL divergence the two expectation terms subtract from it to make the mutual information smaller, but I so not see how this is true let's say between t and t+1.\n- Line 216: how are the 2 branches of DiT trained in the EMA fashion here (student teacher or are do they also collect gradients)?\n- Line 203: \"$\\mathcal{I}(x^1_0; x^2_0)$ is much higher\", the **much** part is not clear from Proposition 2.\n- Figure 3, is the training loss that is logged for all the models the same? i.e. $\\mathcal{L}_{\\text{clean}}$? or for your method is it the composite loss?\n- Related to Figure 3, when we talk about speed of convergence in terms of iterations it does not say anything about the wall clock time (or FLOPs or Memory) that an iteration takes. In this adapted method, we do x3 forward passes through the DiT decoder, therefore how do the wall clock times (or FLOPs or Memory) compare? From a practical standpoint, this should be clear. Hypothetically, do you also expect the other methods to make up the difference in performance if they were trained for a proportional time longer.\n- Figure 3 a and b, why is the plot only shown for different number of total iterations?\n- Figure 3b, are these metrics calculated before or after fine-tuning for your method?\n- Line 309 in Limitations. Why does the inference speed need to be improved? Is the model inferred differently and requires more steps?\n\n*Minor Typos*\n- Line 13: MDT mentioned before it is defined, although this clear from the citation\n- Equation 1: it was not clear $\\mathcal{L}_{\\text{asym}}$ was defined as the expectation term, this lead to some confusion in the discussion later in Proposition 3.\n- Proposition 2, Equation 6 should end with a full stop.\n- Equation 8 9, 10, 11: brackets are not matched, duplicate sencond closing brackets?"
}
] | |
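The review above asks whether the claimed decay of mutual information with the noise level can be formalised in terms of the strength of the Gaussian noise $n$. In the simplest setting this has a closed form: for independent Gaussians $x$ and $n$, $I(x; x+n) = \tfrac{1}{2}\log(1 + \sigma_x^2/\sigma_n^2)$, which is strictly decreasing in $\sigma_n$. A minimal numerical sketch of this fact (illustrative only; the function name and values are not from the paper under review):

```python
import numpy as np

def gaussian_mi(sigma_x, sigma_n):
    """Exact mutual information I(x; x + n), in nats, for independent
    x ~ N(0, sigma_x^2) and n ~ N(0, sigma_n^2)."""
    return 0.5 * np.log(1.0 + (sigma_x / sigma_n) ** 2)

# Larger noise (later diffusion steps) => strictly smaller mutual
# information between the clean patch and its noisy observation.
for sigma_n in [0.1, 0.5, 1.0, 2.0, 5.0]:
    print(f"sigma_n = {sigma_n:3.1f}  ->  I = {gaussian_mi(1.0, sigma_n):.3f} nats")
```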
y9huwsnGRJ | Continuously Learning, Adapting, and Improving: A Dual-Process Approach to Autonomous Driving | Autonomous driving has advanced significantly due to sensors, machine learning, and artificial intelligence improvements. However, prevailing methods struggle with intricate scenarios and causal relationships, hindering adaptability and interpretability in varied environments. To address the above problems, we introduce LeapAD, a novel paradigm for autonomous driving inspired by the human cognitive process. Specifically, LeapAD emulates human attention by selecting critical objects relevant to driving decisions, simplifying environmental interpretation, and mitigating decision-making complexities. Additionally, LeapAD incorporates an innovative dual-process decision-making module, which consists of an Analytic Process (System-II) for thorough analysis and reasoning, along with a Heuristic Process (System-I) for swift and empirical processing. The Analytic Process leverages its logical reasoning to accumulate linguistic driving experience, which is then transferred to the Heuristic Process by supervised fine-tuning. Through reflection mechanisms and a growing memory bank, LeapAD continuously improves itself from past mistakes in a closed-loop environment. Closed-loop testing in CARLA shows that LeapAD outperforms all methods relying solely on camera input, requiring 1-2 orders of magnitude less labeled data. Experiments also demonstrate that as the memory bank expands, the Heuristic Process with only 1.8B parameters can inherit the knowledge from a GPT-4 powered Analytic Process and achieve continuous performance improvement. Project page: https://pjlab-adg.github.io/LeapAD | https://openreview.net/pdf/b1babf61241a3da282fb13f4bf5bd64b4d8f7e45.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "mcWlz9XlRk",
"review_text": "The paper introduces **LeapAD**, an interesting paradigm for autonomous driving inspired by human cognitive processes, addressing the limitations of prevailing data-driven methods in complex scenarios. LeapAD incorporates a dual-process decision-making module consisting of an Analytic Process (System-II) for logical reasoning and experience accumulation, and a Heuristic Process (System-I) for quick, empirical decision-making based on the learned knowledge from System-II. By emulating human attention to focus on critical objects, LeapAD simplifies environmental interpretation and mitigates decision-making complexities. The system is tested in the CARLA simulator, demonstrating superior performance over camera-only methods with less labeled data. The Heuristic Process shows continuous improvement through a reflection mechanism and a growing memory bank, indicating the effectiveness of the dual-process approach.\n\nThe paper presents several notable strengths across dimensions of originality, quality, clarity, and significance: \n\n \n\n### Originality \n\n1. **Dual-Process Decision-Making**: The combination of an Analytic Process (System-II) and a Heuristic Process (System-I) emulates human cognitive functions, offering a biologically inspired framework for autonomous driving. \n\n \n\n### Quality \n\nI think the quality is good. \n\n1. **Continuous Learning**: The reflection mechanism and growing memory bank enable continuous learning and improvement, showcasing the adaptability of the proposed system. \n\n \n\n### Clarity \n\nThe paper is well-written, with clear and concise explanations of complex concepts. The dual-process framework and its components are described in detail, making the methodology accessible to a broad audience. \n\n \n\n### Significance \n\n**Advancing the Field**: By introducing a dual-process decision-making framework, the paper opens avenues for research in autonomous driving and artificial intelligence, potentially influencing future developments in the field.\n\nWhile the paper presents some interesting contributions, there are areas where improvements could be made: \n\n \n\n### Methodological Concerns \n\n \n\nWhile I appreciate the design of the Analytic Process and Heuristic Process, does the paper clearly distinguish between the two? My understanding is that the Analytic Process uses LLMs, while the Heuristic Process uses a lightweight language model. Why can it be called the Heuristic Process? It would be better to clearly state why can it be called the Heuristic Process and the Analytic Process. \n\n \n\n### Experimental Limitations \n\n \n\n1. **Quantitative Metrics**: \nThe paper's experimental results are primarily based on the CARLA simulator, lacking real-world experiments. CARLA scenarios are still too simple. It would be better to report results that can comprehensively evaluate the performance of LeapAD, such as using the real-world dataset nuScenes. \n\n \n\n### Clarity and Presentation \n\n \n\n1. **Technical Details**: \nThis paper is based on Qwen VLM. It is not clear whether the performance improvement is due to this Qwen VLM or the two-system design. It would be better to include more ablation studies to explore the influence of VLMs, such as LLaVa. 
\n\n \n\nBy addressing these weaknesses, the authors can provide a more thorough and robust evaluation of LeapAD.\n\nThere are areas where improvements could be made: \n\n \n\n### Methodological Concerns: \n\n \n\nThe paper should clearly distinguish between the Analytic Process and Heuristic Process? How are these processes defined and why is the Heuristic Process called such if it uses a lightweight language model? \n\n \n\n### Experimental Limitations: \n\n \n\nCan results be reported to evaluate the performance of LeapAD using a real-world dataset like nuScenes? \n\n \n\n### Clarity and Presentation: \n\n \n\nIt is not clear whether the performance improvement is due to Qwen VLM or the two-system design. Can you report ablation studies to explore the influence of VLMs, for example, also try LLaVa? \n\n \n\nBy addressing these questions, the authors can provide a more thorough and robust evaluation of LeapAD."
},
{
"confidence": 4,
"rating": 8,
"review_id": "ofWaRSxcCh",
"review_text": "This paper presents LeapAD, a dual-process closed-loop autonomous driving system.\n\nLeapAD first uses a VLM to analyze the scene by selecting and locating critical objects in the scene, and then it uses a dual-process learning approach to learn driving behaviors.\n\nThe dual-process learning system contains an Analytical Process and a Heuristic Process. The Analytical Process is strong but expensive to run. It is used to summarize the driving experience into the Memory Bank. The Heuristic Process is more lightweight and is used to generate controls to control the vehicle. The Heuristic Process is trained with data in the Memory Bank.\n\nThe Analytical Process can also reflect from collision events in previous simulation runs. It will analyze the cause of the collisions and save the knowledge in the Memory Bank.\n\nThe authors evaluated the LeapAD method in closed-loop simulation with the CARLA simulator. They used the Qwen models as the VLMs and GPT-4 for the Analytical Process.\n\nThe evaluation result shows that LeapAD surpasses the performance of the other camera-only models on the CARLA Town05 benchmark.\n\n* The dual-process idea is neat and thought-provoking. It equips the autonomous driving system with the ability to learn from past experiences.\n\n* The method achieves stronger performance than state-of-the-art methods in CARLA closed-loop simulation.\n\n* This paper is well-written and provides sufficient details for reproducing their approach.\n\n* The performance improvement is not very significant compared to the baseline.\n\nN/A"
},
{
"confidence": 3,
"rating": 6,
"review_id": "1qkVsvJU37",
"review_text": "This paper introduces a paradigm to design an annotation-efficient end-to-end autonomous driving system that harnesses the power and generalizability of open-source LLM models. It proves that critical frame/instance selection are critical to a decision-making module training. This method is evaluated by closed-loop testing in CARLA and achieves the SOTA performance among camera-based methods.\n\n1. The core idea is straightforward.\n2. Achieves the SOTA result.\n3. Complete adequate ablation studies to support its claim.\n\n1. No quantitative benchmark on its VLM module on simulation and the real world. Only some samples are listed in the paper.\n2. The paper only presents an overall benchmark on the system but no failure case analysis.\n3. The result relies on the foundation model performance and the paper does not show a way to fill the gap between the simulation and the real world, which limits its impact.\n\n1. Why decouple into 2 separated modules, scene understanding, and decision making?\n2. The scene understanding section mentions that the motion direction is one of the outputs. However, since the input sensor data is single-frame based, how does the model know the motion direction?\n 3. It is not clear how the interaction with GPT4 completes in the reflection mechanism. It would be better to provide more details."
},
{
"confidence": 3,
"rating": 5,
"review_id": "nDXBRpltDy",
"review_text": "The paper \"LeapAD\" introduces a new approach to autonomous driving that addresses key challenges in adaptability and interpretability. It draws inspiration from human cognition to enhance decision-making processes in complex environments.\nThe system incorporates two complementary processes:\n- Analytic Process: Provides thorough analysis and reasoning, accumulating driving experience through logical reasoning.\n- Heuristic Process: Employs swift, empirical processing and learns from the Analytic Process through supervised fine-tuning. This dual-process setup enhances adaptability and performance.\n\nClosed loop testing in the CARLA simulator demonstrates that LeapAD outperforms methods relying solely on camera input. The Heuristic Process can inherit knowledge from an Analytic Process powered by GPT-4, leading to continuous performance improvements as the memory bank expands.\n\n1.\tThe paper is generally well-written and but easy to follow. Good motivation for the model design in the introduction. \n2.\tI like the problem setup: how can we design AV systems that continually learn from its mistakes. \n3.\tThe experimental results seem to support the authors' claims.\n\nI overall liked the idea of closed-loop autonomous driving approach that could emulate the critical attention mechanisms required for smooth driving environment in safety critical scenarios. The notion of heuristic and analytical processes for executing actions in robotics seems a novel approach.\nHowever, my primary concern lies in the setup of data and models for generating scene descriptions into text to identify critical objects. Operating within the text domain, which requires subsequent interpretation and tokenization by the analytical and heuristic modules, seems less efficient than using a direct vectorized representation. For instance, representing an object with parameters such as {v = 0.2 m/s, s = 3m, class = Car} is likely more efficient and robust than the text output \"The car is 3 m away from the ego vehicle and is moving at 0.2 m/s.\" This textual method could lead to inefficiencies, especially in scenarios with multiple dynamic actors.\n\n- The authors should detail the data generation process for complex driving scenarios like intersections, lane changes, and overtaking. Based on my understanding, the current model primarily focuses on simpler scenarios involving a single-lane and limited interaction with other actors. \n- I recommend evaluations in more dynamic settings such as intersections and scenarios involving lane changes and overtaking, where multiple actors interact and cooperate for safety. \n- A comparison with traditional vectorized or feature-based planning systems, such as Wayformer or Precog, would be beneficial. These systems process scenes as images and convert data into vectors instead of text, which might offer insights into efficiency and performance. \n- I see DriveLM also has good performance in the case of VLM based driving. Is there any reason why that has not been put as a baseline, considering the similarity in dataset generation processes.\n\nI look forward to seeing how these suggestions might be incorporated to further enhance the robustness and applicability of the proposed approach in more complex driving scenarios.\n\nRef: \n- http://arxiv.org/abs/2207.05844 \n- https://openaccess.thecvf.com/content_ICCV_2019/papers/Rhinehart_PRECOG_PREdiction_Conditioned_on_Goals_in_Visual_Multi-Agent_Settings_ICCV_2019_paper.pdf\n- https://arxiv.org/abs/2312.14150"
}
] | |
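To make the contrast above between vectorized and textual scene representations concrete, here is a minimal sketch; the `SceneObject` fields are hypothetical and not LeapAD's actual schema:

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    # Compact record a planner or vectorized model can consume directly.
    cls: str         # object class
    dist_m: float    # distance from the ego vehicle, in meters
    speed_mps: float # speed, in meters per second

obj = SceneObject(cls="Car", dist_m=3.0, speed_mps=0.2)

# The equivalent text a VLM-based pipeline would emit and then have to
# re-tokenize and parse downstream -- the overhead the review points to,
# which grows with the number of dynamic actors in the scene.
text = (f"The {obj.cls.lower()} is {obj.dist_m:g} m away from the ego "
        f"vehicle and is moving at {obj.speed_mps:g} m/s.")
print(text)
```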
y929esCZNJ | MomentumSMoE: Integrating Momentum into Sparse Mixture of Experts | Sparse Mixture of Experts (SMoE) has become the key to unlocking unparalleled scalability in deep learning. SMoE has the potential to exponentially increase in parameter count while maintaining the efficiency of the model by only activating a small subset of these parameters for a given sample. However, it has been observed that SMoE suffers from unstable training and has difficulty adapting to new distributions, leading to the model's lack of robustness to data contamination. To overcome these limitations, we first establish a connection between the dynamics of the expert representations in SMoEs and gradient descent on a multi-objective optimization problem. Leveraging our framework, we then integrate momentum into SMoE and propose a new family of SMoEs, named MomentumSMoE. We theoretically prove and numerically validate that MomentumSMoE is more stable and robust than SMoE. In particular, we verify the advantages of MomentumSMoE over SMoE on a variety of practical tasks including ImageNet-1K object recognition and WikiText-103 language modeling. We demonstrate the applicability of MomentumSMoE to many types of SMoE models, including those in the Sparse MoE model for vision (V-MoE) and the Generalist Language Model (GLaM). We also show that other advanced momentum-based optimization methods, such as Adam, can be easily incorporated into the MomentumSMoE framework for designing new SMoE models with even better performance, almost negligible additional computation cost, and simple implementations. | https://openreview.net/pdf/72b117c375c15a0ef6ea9c489740b45ea2c3e8ed.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "OEZ0H0aYZ0",
"review_text": "The paper introduces MomentumSMoE, a novel integration of heavy-ball momentum into Sparse Mixture of Experts (SMoE) to enhance stability and robustness. It establishes a connection between SMoE and gradient descent on multi-objective optimization problems.\nThe paper demonstrates theoretical and empirical improvements of MomentumSMoE over standard SMoE across various tasks. The method is universally applicable to many SMoE models, including V-MoE and GLaM, with minimal additional computational cost.\n\nTo the best of my knowledge, attempting to accelerate the fixed point iteration in SMoE is an original idea.\n\nIt seems like there is comprehensive empirical evidence for the method, but I am not an expert on metrics for the SMoE, and will have to rely on other reviews to be confident in this strength.\n\nThe paper is fairly clear, with well-organized sections and figures.\n\nMy largest negative for this paper is the largely unfounded connection between the SMoE and gradient descent. If the authors had made a connection to accelerating fixed-point iterations in general, I would want to accept this paper. Essentially, the authors are assuming that $\\nabla_x f$ has strictly real eigenvalues when they should just work with truly, potentially complex, eigenvalues, ex., using tools as in Azizian et. al. For example, when performing this analysis, various other acceleration schemes are often better, like negative momentum (Gidel et. al.) or complex momentum (Lorraine et. al.). I would be curious to see some empirical investigation (or theoretical) or what the eigenvalues of $\\nabla_x f$ are – ex., as in Figure 7 of https://arxiv.org/pdf/2102.08431 -- to validate any theoretical claims about what acceleration schemes should be used.\n\nBut, of course, the spectrum is only known in small-scale problems, leading to the second weakness, which is that some of the methods – ex., RobustSMoE – seem to rely on knowing the spectrum to set various parameters, which we won’t have access in real settings. Th\n\nThe theoretical results are also largely just reproductions of known theoretical results for momentum once you assume that the update from the SMoE is a gradient. This makes them not much of a contribution from my point of view other than leveraging existing tools. I think these results could be easily substituted for analogous techniques from Azizian.\n\nAzizian, Waïss, et al. \"Accelerating smooth games by manipulating spectral shapes.\" International Conference on Artificial Intelligence and Statistics. PMLR, 2020.\nLorraine, Jonathan P., et al. \"Complex momentum for optimization in games.\" International Conference on Artificial Intelligence and Statistics. PMLR, 2022.\nGidel, Gauthier, et al. \"Negative momentum for improved game dynamics.\" The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019.\n\nHow can the assumptions about the fixed point operator's spectrum and the Jacobian's conservativeness be validated or relaxed in practical scenarios?\n\nAre there more general acceleration tools than momentum you might want to use for this problem?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "unwj5JN3td",
"review_text": "This paper proposes a variant of sparse mixture of experts, MomentumSMoE, by incorporating momentum into the traditional sparse mixture of experts framework. The authors provide both theoretical proofs and empirical evidence demonstrating that MomentumSMoE offers greater stability and robustness compared to the standard sparse mixture of experts. Experiments on language modeling and object recognition tasks are conducted to verify the effectiveness of the proposal.\n\n1. The idea of integrating momentum into sparse mixture of experts is interesting. \n2. Both the theoretical proof and extensive empirical results are provided to demonstrate that the proposed MomentumSMoE is more stable and robust than SmoE; the experimental results are appealing.\n3. The code is provided.\n\nThe pseudocode may be provided to better illustrate the implementation of the proposal.\n\n1. Why the MomentumV-MoE and Robust MomentumV-MoE have marginal gains on clean IN-1K data, is there any in-depth analysis available on this?\n2. In the ImageNet-1K Object Recognition experiment, why was the popular top-5 accuracy metric not used, as it was in the Soft Mixture of Experts experiment?\n3. As stated in the weaknesses, the authors could provide pseudocode to better clarify their proposal."
},
{
"confidence": 4,
"rating": 5,
"review_id": "x39QRlktc9",
"review_text": "The paper introduces a novel approach to enhancing the robustness and stability of Sparse Mixture of Experts (SMoE) models. Inspired by the analogy of gradient descent and SMoE, the authors develop a family of models by incorporating momentum into the training process. The key idea is that training SMoE is a multi-objective optimization problem where the monument-based gradient descent method is more stable and robust than the vanilla one. They proposed the AdamSMoE and Robust MomentumSMoE, which demonstrate improved performance across a variety of tasks, including language modeling and object recognition.\n\n(1) The integration of momentum into SMoE is a non-trivial innovation that addresses instability and inefficiency issues in existing models.\n\n(2) The paper provides convincing empirical evidence showing the effectiveness of MomentumSMoE across multiple benchmarks.\n\n(3) The proposed method's compatibility with other momentum-based optimizers, like Adam, suggests it can be broadly applied to various SMoE architectures.\n\n(1) Formulating SMoE as a multi-objective optimization problem is doubtful to me. Every expert network is continually changing during the model training, which makes each objective nonstatic, which violates the basic assumption of multi-objective optimization, whose objectives should be very clear and stable. \n\n(2) It is unconvincing to use ||f(x)|| as the key metrics to measure the efficacy of SMoE or MoE. This confuses me a lot. Please explain why the output norm represents the goodness/badness of the model.\n\n(3) There are some grammar issues. Please use `` instead of \" in the paper (line 665).\n\n(4) There is no sufficient discussion of computation overhead. Training efficiency is a critical issue for current foundation model training. Does computation significantly increase by applying momentum over the SMoE? Keeping an additional copy weight (p in Fig 1) would take additional memory and may decrease the throughput.\n\nI'd like to hear a more insightful discussion regarding all the points above from the authors.\n\n(1) Please explain more of line 140 (\"Thus, it is expected that these two terms learn to reduce ...)."
},
{
"confidence": 4,
"rating": 6,
"review_id": "kSw5XLnjw8",
"review_text": "This paper addresses the instability problem of training SMoE models. By establishing a relationship between SMoE and multi-objective optimization, the authors integrate momentum into SMoE and propose MomentumSMoE. Experimental results show that MomentumSMoE is more stable than SMoE during training.\n\n1. The paper tackles a critical issue in the training of SMoE models.\n\n2. The proposed method is generalizable and can be applied to various SMoE models such as V-MoE and GLaM.\n\n3. Experimental results demonstrate that this method is more stable than SMoE during the training process.\n\n1. This method has little effect on models with few layers.\n\n2. The largest models for evaluation only have 388M parameters, which are much smaller than mainstream MoE LLMs.\n\n3. From a theoretical standpoint, developing a framework to explain the enhanced robustness of MomentumSMoE would be interesting.\n\nplease refer to weaknesses"
}
] | |
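The reviews above debate whether heavy-ball momentum is the right accelerator for gradient-like layer updates, and under what spectral assumptions. The sketch below illustrates the textbook case where the analysis is exact — a quadratic with real eigenvalues in $[m, L]$ — comparing a plain update with heavy-ball momentum at the standard optimal parameters. It is a toy illustration of the acceleration argument, not MomentumSMoE's actual layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# An SPD matrix with eigenvalues in [m, L]; the "update field" is f(x) = A x.
n, m_eig, L_eig = 50, 0.1, 10.0
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(m_eig, L_eig, n)) @ Q.T

def iterate(gamma, mu, steps=300):
    """Heavy-ball iteration: x_{k+1} = x_k - gamma * A x_k + mu * (x_k - x_{k-1});
    mu = 0 recovers the plain (momentum-free) iteration."""
    x = x_prev = np.ones(n)
    for _ in range(steps):
        x, x_prev = x - gamma * (A @ x) + mu * (x - x_prev), x
    return np.linalg.norm(x)

# Textbook optimal parameters for a quadratic with spectrum in [m, L].
sL, sm = np.sqrt(L_eig), np.sqrt(m_eig)
print("plain iteration :", iterate(gamma=2.0 / (L_eig + m_eig), mu=0.0))
print("heavy-ball      :", iterate(gamma=4.0 / (sL + sm) ** 2,
                                   mu=((sL - sm) / (sL + sm)) ** 2))
```

For complex Jacobian spectra (the game-dynamics setting raised in the first review), these parameter choices are no longer optimal, which is precisely why schemes such as negative or complex momentum can be preferable there.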
y8Rm4VNRPH | Parallelizing Linear Transformers with the Delta Rule over Sequence Length | Transformers with linear attention (i.e., linear transformers) and state-space models have recently been suggested as a viable linear-time alternative to transformers with softmax attention. However, these models still underperform transformers especially on tasks that require in-context retrieval. While more expressive variants of linear transformers which replace the additive update in linear transformers with the delta rule (DeltaNet) have been found to be more effective at associative recall, existing algorithms for training such models do not parallelize over sequence length and are thus inefficient to train on modern hardware. This work describes a hardware-efficient algorithm for training linear transformers with the delta rule, which exploits a memory-efficient representation for computing products of Householder matrices. This algorithm allows us to scale up DeltaNet to standard language modeling settings. We train a 1.3B model for 100B tokens and find that it outperforms recent linear-time baselines such as Mamba and GLA in terms of perplexity and zero-shot performance on downstream tasks. We also experiment with two hybrid models which combine DeltaNet layers with (1) sliding-window attention layers every other layer or (2) two global attention layers, and find that these hybrids outperform strong transformer baselines. | https://openreview.net/pdf/397e5724c60c7bb0691c6436dc8a56f5a0336f4f.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "jq3aXcAGvg",
"review_text": "This paper proposes the Delta Rule method to construct the state updates for Linear Attention. Furthermore, the paper introduces a chunk-wise training approach, allowing the computational cost of training to grow subquadratically with the text length. Experimentally, the paper validates the effectiveness of the model architecture using three synthetic benchmarks: MQAR, MAD, and RegBench. Additionally, the paper uses Common Sense Reasoning and Retrieval tasks in LLM pre-training to verify the model's performance in real-world tasks. The model has been validated at scales ranging from 340M to 1.3B parameters. Furthermore, this paper explores the possibility of combining the Delta Rule with Sliding Window Attention and Global Attention, demonstrating the positive impact of the hybrid architecture on model performance.\n\n1. Solid work. The paper provides a good derivation, offering a more general method for state updates in Linear Models.\n2. The experiments are comprehensive and effectively demonstrate the validity of the model architecture.\n\n1. Have you conducted experiments on long context? For example, measuring extrapolation and scenarios akin to \"looking for a needle in a haystack\"? As a linear model, I would like you to further discuss its capability to generalize to long context.\n2. The algorithmic speed of Delta Net increases linearly, but it seems to be slower than GLA. Can you analyze the factors contributing to this?\n3. Could you further explain the insights of the Delta Net updates? I understand there are algorithmic differences compared to GLA operators, but what unique benefits do they bring? Is there any theoretical analysis?\n\nI would like to discuss the following questions with you:\n\nDo you think linear models can fundamentally bridge the gap with transformers in memory-based tasks?\n\nIs there an inherent conflict between the ability to handle long context and the performance of memory-based tasks?"
},
{
"confidence": 3,
"rating": 8,
"review_id": "o2xPg2Q6RX",
"review_text": "This paper introduces a novel algorithm for the efficient training of DeltaNet Linear Transformers. DeltaNet enhances contextual associative recall using a delta rule-like update but was previously limited by inefficient parallelization in its training algorithm. The work described in this paper presents a hardware-efficient algorithm that leverages the memory-efficient WY representation for computing products of Householder matrices, enabling the scaling of DeltaNet similar to other linear Transformer models. The authors trained a 1.3B parameter model on 100B tokens and found that it outperforms strong linear-time baselines such as Mamba and GLA in terms of perplexity and zero-shot performance on downstream tasks.\n\n- The paper introduces a novel hardware-efficient algorithm for training DeltaNet Linear Transformers, leveraging the WY representation of Householder matrices, which effectively addresses the parallelization limitations of previous algorithms.\n- Through large-scale experiments, the authors demonstrate that DeltaNet significantly outperforms existing models like Mamba and GLA in terms of perplexity and zero-shot performance on downstream tasks.\n- The new algorithm enables the scaling of DeltaNet to larger datasets and parameter sizes, which is crucial for large language models.\n\nThe algorithms presented in this paper are satisfactory in terms of efficiency and performance.\n\nI have no questions for this paper."
},
{
"confidence": 4,
"rating": 7,
"review_id": "hQNYHP50sF",
"review_text": "This paper proposes a hardware-efficient algorithm for training linear transformers with a delta update (DeltaNet; SMS21). This architecture has an attention formulation that prevents the direct application of chunk-wise parallel algorithms for computing its output. To address this issue, the authors introduce a re-parameterization of DeltaNet as a matrix-valued RNN whose recurrence is given by a generalized Householder transformation. This enables the use of WY representation which is memory efficient and eliminates the need to materialize the hidden state matrices. Experiments on synthetic benchmarks and language modeling tasks shows competitive performance compared to strong baselines (Mamba, GLA) and faster speed than the original Deltanet implementation.\n\n- The paper is well motivated and situated with respect to prior work. It provides sufficient background for linear transformers, demonstrates great scholarship in crediting prior work, and has a clear exposition of the proposed idea. In addition, it presents an informative overview that compares the formulations of recent linear transformers that highlights their differences. \n- Proposes an efficient algorithm for training linear transformers with the delta update which is a competitive variant. The re-parameterization is non-obvious and leverages WY representation for Householder matrices in a novel way. Previously, this architecture could not be easily scaled to larger models and datasets with a recurrent formulation. In addition, it introduces two competitive hybrid methods based on DeltaNet that leverage local and global full attention. \n - Demonstrates the effectiveness of the proposed approach on two synthetic benchmarks and eleven language modeling and understanding tasks compared to strong baselines such as Mamba and GLA. The results are consistent, have a good coverage, and are important for the researchers working on efficient transformers. \n- The experiments are thorough and have convincing settings, namely all the variants are trained from scratch with the same configurations, there are ablations to justify the design choices, and the experimental reporting is very detailed.\n\n- W1. In terms of scale, the model explores two different architectures of increasing size up to 1.3B parameters. Even though this size is considerable, it is still relatively small compared to the LLMs that are widely used such as Llama, Mistral (7B+ size). There is always the question of whether the quality is maintained with further model increase.\n- W2. The improved results compared to Mamba and GLA make use of additional architectural components: convolution and local/global attention, without them the results are comparable to the other models.\n\n- Q1: What is the effect of chunk size in the chunk-wise parallel algorithm for DeltaNet? Varying the chunk size $C$ and showing its effect in efficiency would be interesting to explore. \n- Q2: The chunk-level hidden states $S_{[t]}$'s are discarded to save memory. From Eq. 7, it seems that their computation depends on the previous hidden states $S_{[t-1]}$'s. Are these kept in memory for the re-computation in the backward pass? \n- Q3: GLA with convolution performs worse than w/o convolution with the larger model size. Do you expect this to be the case for DeltaNet as well? It would be good to add this result if possible. \n\nMinor:\n- In Table 2, is the L1/L2 norm referring to the normalization of queries and keys? Please specify. 
\n- In Eq.1, why is this equation showing the state $S_{[t+1]}$ instead of $S_{[t]}$? The latter is used in Eq. 2. Same for Eq. 7. \n- l172: stable -> stabilize\n- l213-214: we -> we follow\n- l321: vallina -> vanilla"
}
] | |
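For readers of the exchange above, the per-token recurrence under discussion can be written as a rank-one "delta rule" correction, which is what the generalized-Householder structure refers to. The loop below is a naive sequential reference sketch assumed from the paper's description — not the hardware-efficient WY/chunkwise algorithm that is the paper's contribution:

```python
import numpy as np

def deltanet_sequential(q, k, v, beta):
    """Naive sequential delta-rule linear attention (reference only).

    q, k: (T, d_k); v: (T, d_v); beta: (T,) write strengths in (0, 1).
    State update: S_t = S_{t-1} + beta_t * (v_t - S_{t-1} k_t) k_t^T,
    a rank-one correction moving the value stored under key k_t toward
    v_t; output: o_t = S_t q_t. The paper's contribution is to
    parallelize this inherently sequential loop over sequence length.
    """
    T, d_k = k.shape
    d_v = v.shape[1]
    S = np.zeros((d_v, d_k))
    out = np.empty((T, d_v))
    for t in range(T):
        S = S + beta[t] * np.outer(v[t] - S @ k[t], k[t])
        out[t] = S @ q[t]
    return out

rng = np.random.default_rng(1)
T, dk, dv = 8, 4, 4
o = deltanet_sequential(rng.standard_normal((T, dk)),
                        rng.standard_normal((T, dk)),
                        rng.standard_normal((T, dv)),
                        rng.uniform(0.0, 1.0, T))
print(o.shape)  # (8, 4)
```

Setting beta to zero recovers a frozen state, while the additive update of vanilla linear attention corresponds to dropping the `- S @ k[t]` correction term.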
y8P633E5HQ | Equivariant Machine Learning on Graphs with Nonlinear Spectral Filters | Equivariant machine learning is an approach for designing deep learning models that respect the symmetries of the problem, with the aim of reducing model complexity and improving generalization. In this paper, we focus on an extension of shift equivariance, which is the basis of convolution networks on images, to general graphs. Unlike images, graphs do not have a natural notion of domain translation. Therefore, we consider the graph functional shifts as the symmetry group: the unitary operators that commute with the graph shift operator. Notably, such symmetries operate in the signal space rather than directly in the spatial space. We remark that each linear filter layer of a standard spectral graph neural network (GNN) commutes with graph functional shifts, but the activation function breaks this symmetry. Instead, we propose nonlinear spectral filters (NLSFs) that are fully equivariant to graph functional shifts and show that they have universal approximation properties. The proposed NLSFs are based on a new form of spectral domain that is transferable between graphs. We demonstrate the superior performance of NLSFs over existing spectral GNNs in node and graph classification benchmarks. | https://openreview.net/pdf/7cf055b02c2ad0760653ed4c078ae8d3ffd7a0eb.pdf | [
{
"confidence": 3,
"rating": 7,
"review_id": "kRqVUHltVu",
"review_text": "This paper addresses the issue of inadequate modeling of graph equivariance in existing spectral GNNs due to nonlinear operations. The authors investigate the concept of domain translation in graph space as functional translations, drawing from the convolutional operations defined on images. Based on a series of in-depth analyses, they propose nonlinear spectral filters (NLSFs) that are fully equivariant to graph functional shifts and demonstrate universal approximation capabilities.\n\n1. The research problem is highly valuable, and the ideas presented are novel.\n2. The theoretical analysis is rigorous, thoroughly supporting the arguments and solutions proposed in the paper. It reflects the authors' deep understanding and insight in the field.\n2. Notable experimental improvements in classification tasks.\n\n1. The paper is missing important spectral GNN models such as [1,2,3,4].\n2. There is a lack of discussion on equivariant GNNs, such as [5,6,7,8,9,10,11,12]. While the focus is on spectral GNNs, it is also essential to discuss works in the spatial GNN domain, especially given that the paper’s title starts with \"Equivariant Machine Learning on Graphs.\" Positioning your work within the broader GNN field can further elucidate the significance and unique advantages of your contributions.\n3. The experiments are somewhat lacking in comprehensiveness. While you have demonstrated the effectiveness of your proposed model in common node and graph classification tasks, your core contribution is enhancing equivariant representation learning in spectral GNNs. I suggest adding experiments that specifically show how the improved performance is due to enhanced equivariant learning, such as graph isomorphism tests.\n\n[1] How powerful are spectral graph neural networks\n\n[2] Bernnet: Learning arbitrary graph spectral filters via bernstein approximation\n\n[3] Specformer: Spectral graph neural networks meet transformers\n\n[4] Graph neural networks with learnable and optimal polynomial bases\n\n[5] Universal Invariant and Equivariant Graph Neural Networks\n\n[6] E(n) Equivariant Graph Neural Networks\n\n[7] On the Generalization of Equivariant Graph Neural Networks\n\n[8] Expressive Power of Invariant and Equivariant Graph Neural Networks\n\n[9] Approximately Equivariant Graph Networks\n\n[10] Equivariant Polynomials for Graph Neural Networks\n\n[11] Graph Neural Networks for Learning Equivariant Representations of Neural Networks\n\n[12] Sign and Basis Invariant Networks for Spectral Graph Representation Learning\n\nsee weaknesses"
},
{
"confidence": 3,
"rating": 7,
"review_id": "wTwK9qLRyk",
"review_text": "This paper proposes a spectral GNN called non-linear spectral filters (NLSF), which aims to enhance GNNs with nonlinear functions. Since general GNNs with nonlinear functions do not commute with unitary operators, this paper defines Graph Functional Shifts, which is a set of unitary matrices commuting with a normal graph shift operator (GSO). It then formulates two functions for spectral index and filter bank, respectively, and concatenates these two functions as graph attention. In the experiment section, NLSF is compared with GAT, SAGE, and other spectral-like GNNs. In the node-classification task, att-Node-level NLSF shows outstanding performance among these models. In the graph classification task, att-Graph-level NLSF achieves comparable results with these models, and att-Pooling-NLSF performs better than other models in the graph classification task.\n\n1. NLSFs have a solid mathematical foundation and proof, especially on Universal Approximation and graph expressivity.\n2. The experimental results validate the effectiveness of the theory.\n\n1. The method proposed in this paper requires the use of eigenvalues, hence it necessitates eigen decomposition of the GSO. The time complexity of eigendecomposition is relatively high, especially for very large graphs.\n\nNone"
},
{
"confidence": 3,
"rating": 7,
"review_id": "v892UQ7PDW",
"review_text": "The authors introduce spectral GNNs that are equivariant to functional symmetries. Specifically, they introduce node-level, graph-level and pooling non-linear spectral filters and show that these are able to outperform standard convolutional GNNs on (semi-supervised) node classification and graph classification tasks.\n\n- The experimental results are compelling.\n- To the best of my understanding, the theory is sound\n- The idea being proposed is novel and worth being investigated\n- The paper is clearly written, even though it doesn't seem to be very accessible to readers unfamiliar with graphs signals processing\n\n- While the authors did a good job in trying to introduce all the relevant concepts, the paper is quite dense with mathematical details and notions that will likely be unfamiliar to many GNN researchers and may therefore hinder the accessibility of the manuscript.\n\n- While it's very intuitive to understand what is meant by \"shift\" in the context of images and CNNs, this doesn't come across very clear in the paper in the context of graphs: what is the rationale behind the decision to \"model the group of translations on graphs as the group of all unitary operators on signals that commute with the graph shift operator\"? If more space in the paper can be used to make the underlying concepts more accessible (perhaps moving some of the material on the theoretical properties of the NLSFs to the appendix) I think the paper would greatly gain in accessibility, potentially increasing its impact beyond the graph signal processing community."
},
{
"confidence": 5,
"rating": 5,
"review_id": "Q7U31Fv2ce",
"review_text": "The authors propose nonlinear spectral filters (NLSFs) that achieve full equivariance to graph functional shifts, demonstrating that these filters have universal approximation properties. These NLSFs are designed based on transferable spectral domain, potentially improving GNN performance in node and graph classification tasks across diverse graph structures.\n\n1- The paper is well-written and self-contained, offering clear, didactic insights. The experiments provide valuable conclusions that future practitioners will find useful. However, a synthesis of the information could further enhance readability and understanding for the reader.\n\n2- The use of the nonlinear spectral filters for graphs to achieve full equivariance to graph functional shifts may be a promising avenue to explore.\n\nDespite these merits, I have the following concerns about the paper.\n\n1- While the paper presents a compelling method with potential applications in graph analysis, one significant limitation is its scalability, particularly concerning large-scale graphs. The reliance on specific spectral properties, such as the leading eigenvectors, may not only limit the method's capacity to capture diverse graph dynamics but also result in computational inefficiencies when applied to extensive graph datasets.\n\n2- The datasets used in the paper predominantly consist of mid-sized, homophilic graphs, which may not fully represent the diverse range of real-world applications, particularly in contexts involving heterophilic graphs.\n\n3- The efficiency of the proposed models in terms of computation and resource utilization is not adequately discussed.\n\n(i) Does your theory adapt differently when applied to heterophilous graphs compared to homophilous graphs, and if so, how are these differences addressed within your methodology?\n\n(ii) Given that your Nonlinear Spectral Filters (NLSFs) are motivated by respecting graph functional shift symmetries, similar to Euclidean CNNs, do you have any claims or observations regarding how NLSFs fit within or potentially extend the Weisfeiler-Lehman hierarchy of expressivity? Additionally, could you elaborate on how the expressivity of NLSFs, as informed by metrics from the Euclidean vector space, compares to traditional graph neural network models?"
},
{
"confidence": 2,
"rating": 7,
"review_id": "w6Rk6FtZdT",
"review_text": "The paper tackles the task of Network design for graph neural networks. The suggested approach is based on spectral properties of graphs. So far in the literature spectral methods were limited in assuming that the graph domain is fixed. To address this, a relaxed version of symmetry is proposed based on band-limited projections. In addition, a nonlinear spectral filter design is suggested, suggesting node-level, graph-level, and pooling operations. The method is evaluated on several graph learning tasks, demonstrating improvement in generalization over existing spectral methods.\n\nThe paper makes a valuable contribution to the literature on Graph Neural Networks (GNNs), particularly by addressing the challenge of transferability in spectral methods, which is highlighted as a significant issue.\n\nclaims are supported by theoretical analysis.\n\nThe paper is self-contained, providing both background information and a short overview on spectral graph learning.\n\nWriting Quality: Some sections of the manuscript could benefit from revision. For instance, reordering the paragraphs in the introduction could improve readability. Specifically, mentioning what was missing from previous works earlier rather than at the end would help.\n\nMore examples can be found in the method section: i) The discussion on problem the activation functions is missing some details, e.g., what rho is exactly? ii) The paper states that \"It is important to note that functional shifts, in general, are not induced from node permutations. In stead, functional shifts are related to the notion of functional maps...\". This sentence is too vague. Consider adding more details to make it clearer.\n\nNo qualitative results are provided. Is it possible to visualize learned features as in the illustration in figure 2? Is it possible to design a toy experiment showcasing the suggested notion of relaxed symmetry, for which the suggested network design generalizes adequately?\n\nNo question other than the weakeness stated above."
}
] | |
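Several of the reviews above touch on why a linear spectral filter commutes with graph functional shifts while a pointwise activation does not. A small self-contained numerical check of this fact, using a reflection about a Laplacian eigenvector as an example of a unitary that commutes with the graph shift operator (a toy illustration, not the paper's NLSF construction):

```python
import numpy as np

# Laplacian of a path graph on 5 nodes.
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)

def spectral_filter(x, g):
    """Linear spectral filter y = U g(Lambda) U^T x, i.e. g(L) x."""
    return U @ (g(lam) * (U.T @ x))

# A unitary that commutes with L: reflection about one eigenvector.
u = U[:, 2]
V = np.eye(n) - 2.0 * np.outer(u, u)
assert np.allclose(V @ L, L @ V)

rng = np.random.default_rng(0)
x = rng.standard_normal(n)
g = lambda s: np.exp(-s)  # heat-kernel-style frequency response

# The linear filter commutes with the functional shift V ...
err_lin = np.linalg.norm(spectral_filter(V @ x, g) - V @ spectral_filter(x, g))
# ... but composing with a pointwise ReLU breaks the symmetry.
err_relu = np.linalg.norm(np.maximum(spectral_filter(V @ x, g), 0)
                          - V @ np.maximum(spectral_filter(x, g), 0))
print(f"linear filter equivariance error: {err_lin:.2e}")   # ~1e-16
print(f"with pointwise ReLU:              {err_relu:.2e}")  # clearly > 0
```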
y8HUXkwAOg | ChronoEpilogi: Scalable Time Series Selection with Multiple Solutions | We consider the problem of selecting all the minimal-size subsets of multivariate time-series (TS) variables whose past leads to an optimal predictive model for the future (forecasting) of a given target variable (multiple feature selection problem for times-series). Identifying these subsets leads to gaining insights, domain intuition,and a better understanding of the data-generating mechanism; it is often the first step in causal modeling. While identifying a single solution to the feature selection problem suffices for forecasting purposes, identifying all such minimal-size, optimally predictive subsets is necessary for knowledge discovery and important to avoid misleading a practitioner. We develop the theory of multiple feature selection for time-series data, propose the ChronoEpilogi algorithm, and prove its soundness and completeness under two mild, broad, non-parametric distributional assumptions, namely Compositionality of the distribution and Interchangeability of time-series variables in solutions. Experiments on synthetic and real datasets demonstrate the scalability of ChronoEpilogi to hundreds of TS variables and its efficacy in identifying multiple solutions. In the real datasets, ChronoEpilogi is shown to reduce the number of TS variables by 96% (on average) by conserving or even improving forecasting performance. Furthermore, it is on par with GroupLasso performance, with the added benefit of providing multiple solutions. | https://openreview.net/pdf/f177a5a2fd2abbbc575b39ef2994a06b9da31513.pdf | [
{
"confidence": 2,
"rating": 5,
"review_id": "untddUnYmW",
"review_text": "The authors consider the problem of feature selection when forecasting multivariate time series. They propose a novel algorithm called ChronoEpilogi based on identifying a Markov boundary of the time series variables. They experimentally and theoretically validate the findings.\n\n1. A significant problem to tackle, \n2. Good formalization of the problem,\n3. The paper is generally well-written.\n\n1. Not all limitations are discussed: for instance, the model assumes that the selected set of variables remains fixed over time. When we deploy time series models, it is important that the method should work with different train and test time segments. However, since the set of variables is selected, generally, it might not be applicable to other train/test splits and thus might cause issues in real-world use.\n\n2. In my view, the experiments could be more comprehensive. It would be beneficial to consider other forecasting models, baselines, and datasets to provide a more robust evaluation of the model's performance. \n\n3. Prior work discussion is incomplete: for instance, signature paths and feature selection should be discussed and compared to the method. Some specific examples include\n - Cross-correlation analysis,\n - Signature transforms https://arxiv.org/abs/1603.03788\n\n1. The abstract states, “Identifying these subsets leads to gaining insights, domain intuition, and a better understanding of the data-generating mechanism.” Is this claim supported by the experiments or otherwise in the main text?\n2. I have the same question about “identifying all such minimal-size, optimally predictive subsets is necessary for knowledge discovery and important to avoid misleading a practitioner.” I guess feature selection helps with interpretability, but I think the main text does not discuss it in detail.\n3. Please comment on weakness #1, which I have listed above.\n\n\n## Additional comments:\n\nAdditional comments\nI am really surprised that the primary area is interpretability and explainability, even though the paper almost does not discuss these aspects of the solution.\n\nWhat is V in tvs?\n\n“under Composition, Interchangeability, and other broad, non-parametric assumptions” – could you list all the assumptions here?\n\nL102 : conditional independence is not written well\n\nL205 RVS is not properly capitalized\n\nEq 2: what does the dot mean? Is it a misprint?\n\nTable 2:\nWhat do you mean by size?\nNo selection yields worse results. I wonder, what happens if you use a stronger base model for forecasting, e.g., LSTM?\n\nRe: Statistical significance:\nThe paper does not report statistical significance in Table 2."
},
{
"confidence": 3,
"rating": 7,
"review_id": "G5OG1DKMR2",
"review_text": "The authors propose a scalable algorithm called ChronoEpilogi that aims to select multiple subsets (Markov Boundaries) of time series (TS) features in order to better understand the underlying data generation process and to provide better explanations of downstream forecasting tasks. Through extensive experiments, the authors show that time series forecasting models perform better when fed with these subsets of TS features (individually) than when fed with all TS features.\n\n- Originality: Although the problem addressed in this paper is not new and the proposed solution is based on the combination of existing methods/models, the originality lies in the fact that, unlike previous work, the proposed algorithms provide a compact representation of mutually equivalent variables for multivariate time series (MTS). In addition, the redundant and irreplaceable variables in MTS can be relatively easy to identify.\n\n- Quality: The document is well structured and the assertions are fairly well supported.\n\n- Clarity: Overall, the document is well written and pleasant to read. However, the reviewer suggests improving definition 2. What does V stand for?\n\n- Significance: The reviewer believes the proposed algorithm can be used as an alternative solution to provide explainability in diverse and sensitive fields such as medicine and autonomous vehicles. Indeed, in these fields, when it comes to forecasting tasks, identifying the the time series features that influence the decision is as important as model accuracy.\n\n- The experiment is not conducted with multivariate time series that have a high rate of missing values. This is a crucial aspect that the study should have taken into account, as missing values are inherent in time series and may affect causal inference.;\n\n- The authors simply identify subsets of time series variables without providing concrete explanations that could have strengthened their claim. For example, the authors should have taken a few subsets (Markov Boundaries) in any task and shown how they are actually relevant to the target;\n\n- The reviewer understands the usefulness of greedy heuristics to speed up the algorithm. However, this heuristic has the disadvantage of providing suboptimal results. Although the authors demonstrate its effectiveness, it would be interesting in future work to test it on additional datasets covering different domains.\n\n- Why did you choose ARDL over other models like RNNs which are also autoregressive models and are certainly more efficient (Equation 2)?\n\n- Is the model redesigned at each iteration of the Forward phase, or does it remain unchanged? The reviewer asks this question because the size of the inputs $\\textbf{S}$ may vary at each iteration (see Algorithm 3 line 8)?\n\n- What can explain the outperformance of SVR over TFT and DeepAR with the Solar Dataset?\n\n- How do the authors expect their algorithm to perform when faced with sparse multivariate time series, given that missing value can affect causal inference?\n\n- The reviewer does not understand the relevance of reporting the standard deviation in columns MB size and Number of MB (Table 1). Could you please elaborate on this?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "ynEwtfT4ew",
"review_text": "The authors presents **ChronoEpilogi**, an algorithm for multiple feature selection in multivariate time-series (TS) forecasting. This approach aims to identify all minimal-size subsets of TS variables (Markov Boundaries) that optimally predict a given target variable's future. The key contributions are:\n\n1. **Theory Development**: Introduces the problem of multiple time-series variable selection (MTVS) and the concepts of informational equivalence and interchangeability in TS data.\n2. **Algorithm Design**: Proposes ChronoEpilogi, which identifies all Markov Boundaries under broad, non-parametric assumptions.\n3. **Experiments and Results**: Demonstrates ChronoEpilogi's scalability to hundreds of TS variables and its effectiveness in reducing the number of variables while maintaining or improving forecasting performance.\n\nSome of the key strengths of the paper are:\n\n1. The paper proposes a novel theoretical foundation for multiple feature selection in TS data including concepts like informational equivalence and interchangeability. Combining these concepts the authors have been able to propose an empirical method to detect Markov Boundaries\n2. Furthermore, the proposed algorithm ChronoEpilogi is shown to handle large datasets effectively, making it suitable for real-world applications with numerous TS variables. This scalability is crucial to real world usage of the algorithm\n3. Another key contribution for the paper is that ChronoEpilogi aims to identify all minimal-size subsets, offering multiple valid forecasting models and insights.\n\nWhile being a very interesting paper, there are some avenues for improvement: \n1. While the authors discuss the scalability, the algorithm's computational complexity seems to be high for very large datasets, potentially limiting its practicality.\n2. Some of at the assumptions that the algorithm relies on such as Compositionality and Interchangeability may not hold in all real-world scenarios, potentially affecting its generalizability. The authors should consider discussing the limitations in their papers and how well the assumptions hold in practice\n3.The authors have provided thorough experimentations. However, to justify the practicality of the algorithm, it would be interesting to report additional validation on more diverse and complex real-world datasets\n\nIt would be great if the authors can discuss about the computational complexity and the limitations stemming from their assumptions"
},
{
"confidence": 3,
"rating": 7,
"review_id": "ulcFsLRQAa",
"review_text": "This paper handles the problem of selecting all the minimal-size subsets of multivariate time series variables such that the past leads to an optimal predictive model for the forecast of a given target variable, which is essentially a time series feature selection problem. Past algorithms have worked to select a single such subset. The proposed algorithm is relatively efficient, in that it does not take as much longer than finding a single subset as one would think, but leading to more insight and better \"Markov blankets.\"\n\n1. The paper handles an important problem in a clever way and is explained quite clearly.\n2. The experimental results are convincing and actually include running time, which is often omitted.\n3. The theoretical results look correct, although admittedly I did not comb through the proofs in great detail.\n\n1. The proposed algorithm was only compared against GroupLasso and not against any other among the related work mentioned in the paper.\n\n1. Line 176 should say \"forward\" rather than \"backward.\"\n2. I suspect that algorithm 3 step 9 and algorithm 4 step 3 should have $\\geq$ in place of $\\leq$.\n3. In line 260, how is TSS defined?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "fKrIh6VE5d",
"review_text": "This paper considers the problem of finding all minimal subsets of variables for optimal prediction of time series data, coining the term \"Markov Boundaries\" for those minimal subsets constituting Markov Blankets for the target time series variables in question. \nThe paper then proposes novel algorithms for this problem,FBE and FE, and prove the soundness and completeness of FBE (FE is a faster approximate algorithm) and empirically evaluate their performance. \nThe experiments are conducted using both synthetic data (with ground truth causal structure) and real world data, and compare the performance of the proposed algorithms against baselines of Group Lasso (GL) and No variable selection, with respect metrics including predictive accuracy, accuracy of causal structure learning (for synthetic data), computation time and solution size. \nThe results presented validate a number of claims about the proposed methods, the notable ones being that they are more accurate at uncovering the ground truth causal structure on synthetic data and FE roughly comparable to GL on real world data sets in terms of accuracy and computation time, sometimes significantly out-performing it in terms of solution size. \nThe problem formulating is apparently novel and interesting, and the proposed methods are also novel and theoretically sound (and complete). The empirical results show that they are at least competitive to the standard baselines. \nThis work would add some valuable knowledge and insights to the community with interest in causal modeling and interpretable learning in time series data.\n\nThe problem formulation is novel and interesting and well motivated practically. \nThe proposed solution is novel and sound and complete.\nThe empirical evaluation is reasonable.\n\nThe performance of the proposed methods against the baseline of Group Lasso on real world data sets is not exactly compelling. \nMore clarify on the relative advantage of the proposed method(s) would be valuable. \nThe optimal algorithm, FBE, is not evaluated on real world data sets, which I assume is due to computational complexity. It would be beneficial to know if any evaluation (even if partial) could be performed on FBE on the real world data.\n\nOne wonders if there are ways to use Group Lasso to obtain multiple solutions of the type obtained by the proposed methods, for example, by performing multiple randomized runs with perturbation and aggregating the outputs. A comparison with such heuristics would be of interest."
}
] | |
y7oxY5pq4j | RobIR: Robust Inverse Rendering for High-Illumination Scenes | Implicit representation has opened up new possibilities for inverse rendering. However, existing implicit neural inverse rendering methods struggle to handle strongly illuminated scenes with significant shadows and slight reflections. The existence of shadows and reflections can lead to an inaccurate understanding of the scene, making precise factorization difficult. To this end, we present RobIR, an implicit inverse rendering approach that uses ACES tone mapping and regularized visibility estimation to reconstruct accurate BRDF of the object. By accurately modeling the indirect radiance field, normal, visibility, and direct light simultaneously, we are able to accurately decouple environment lighting and the object's PBR materials without imposing strict constraints on the scene. Even in high-illumination scenes with shadows and specular reflections, our method can recover high-quality albedo and roughness with no shadow interference. RobIR outperforms existing methods in both quantitative and qualitative evaluations. | https://openreview.net/pdf/92d9baadccb6c0d8ed17f5cd5f1fc5980a06e590.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "hgxArhFlht",
"review_text": "This paper addresses inverse rendering in high-illumination scenes with strong shadows where past methods bake shadows and highlights into estimation results. This paper proposes to use ACES tone mapping and makes it scene-dependent for inverse rendering in high-illumination scenes. This paper also proposes to directly estimate the visibility of each spherical Gaussian of direct illumination instead of a visibility field, which enables an accurate representation of shadows at the edge. The experimental results on the synthetic and real-world datasets show that the proposed method can estimate accurate albedos, surface roughness, and illumination without artifacts in the high-illumination scenes.\n\n+ This paper proposes a novel regularized visibility estimation that enables an accurate representation of shadows at the edge.\n+ Experimental results show that the proposed method successfully estimates BRDF and illumination while existing methods suffer from artifacts. They also indicate the effectiveness of ACES tone mapping compared with log tone mapping methods.\n\n- It is unclear why the ACES tone mapping, of which usage is the key contribution, enables the robust inverse rendering of high-illumination scenes with strong shadows.\n- The proposed method loses the albedo of the detailed texture due to smoothness loss in Eq. 10. For example, Bear in Fig. 7 and truck in Fig. 4.\n\n- A more detailed explanation of the effects of ACES tone mapping is expected. Why is it more suitable for high-illumination scenes than other tone mapping methods such as sRGB and log tone mapping? How does it affect the loss and the optimization?"
},
{
"confidence": 5,
"rating": 5,
"review_id": "OcPdwkNjFc",
"review_text": "This paper proposes a method for the inverse rendering of high-illumination and highly reflective scenes. There are two training phases, in the first phase, it trains by Neus, to get geometry and compute visibility by octrees. In the second phase, it decomposes lighting as SGs and material by MLPs.\n\nDirect and indirect lighting, visibilities are presented by SGs, which is compact. \nFrom the results, shadows and specularities are decomposed well. \nTone mapping is used, as NeRF in the Dark, which improves results for high-illumination scenes.\n\nFigure 2 gives comparisons with and without smooth loss, however, the one w/o smooth loss is better, while the loss may over-smooth the details. \nFigure 11 shows results where the lighting consists of the original colors.\n\nHow does the relighting work? Do you use original indirect lighting? \nWhy is visibility divided into two stages, and why not directly use the results calculated by the Neus octree as the ground truth to supervise SG? Instead, why is the Neus octree result used as the ground truth to supervise the MLP, and then the MLP's distribution used to supervise the SG results? What are the differences?"
},
{
"confidence": 5,
"rating": 5,
"review_id": "SONb9Q5T41",
"review_text": "This paper introduces RobIR, an inverse rendering approach that can better tackle “high-illumination” scenes. RobIR first leverages the existing neural field model (NeuS) to represent 3D geometry information including normal, visibility, and indirect illumination. It then utilizes these geometry priors to decompose rendering attributes of the scene through the approximated rendering equation with spherical Gaussian. This work introduces an optimizable ACES tone-mapping and regularized visibility estimation model to better handle HDR color and shadow occlusions, respectively. Their experiment demonstrates some impressive results on shadow disentanglement.\n\n1. This work has some further thoughts on color tone mapping for the typical multi-view inverse rendering. The proposed optimizable ACES tone mapping looks very effective in improving inverse rendering results.\n2. The proposed visibility representation (RVE) also plays an important role in the final results. RVE with its neural net accumulates and denoises Monte Carlo samples to achieve more accurate and stable visibility evaluation. Their efforts to refine the visibility should be appreciated.\n3. I am glad the authors clearly point out they use the original NeRF rendering of Hotdog and Lego, instead of the NeRFactor’s.\n\n1. This paper still follows a commonly used multi-stage inverse rendering strategy with neural fields. The geometry representation is based on NeuS; the rendering formulation (SG rendering, visibility, and indirect illumination) is mainly based on InvRender; The proposed optimizable ACES tone and REV are more like incremental improvements over InvRender. The key rendering formulation and optimization remain the same as the prior works. Therefore, the novelty of this work is moderate.\n2. The description of RVE in Sec. 3.4 is not very clear. It seems that MLP $Q(x, \\tau)$ directly outputs N visibility ratios, thus $\\eta(x)$ in Eq. 12 should also be an N-dim vector instead of a scalar value.\n3. The proposed method is quite time-consuming. Training time even without NeuS is around 5 hours.\n4. The proposed regularization terms may hurt the high-frequency details in real-world scenes (Fig. 7).\n5. This method is limited to dielectric materials, without the consideration of metallic and glossy objects, as already pointed out by the authors.\n6. The paper should include some inverse rendering methods with differentiable path tracing, as these methods can explicitly handle visibility, for example, NvDiffRecMC, Mitsuba, etc.\n7. Minor errors:\n * L275 accuurate -> accurate\n * Figure 8: Hotdog label is wrong.\n\n1. What are the differences between the final optimized ACES tone mapping and sRGB gamma tone mapping? How does this optimized tone mapping vary from scene to scene? It would be better if tone-mapping curves were included in the paper.\n2. I tested this ACES tone mapping mentioned in the paper, it seems that the proposed learnable ACES tonemapping (an S-curve) cannot well approximate the existing concave tone-mapping curves that are potentially used for rendering Blender objects (e.g., sRGB curve, Filmic curve, AgX curve, etc.). Given this limitation, how does the paper address the color mismatch in the low-illumination color space?\n2. The paper does not show the metrics for rendering (NVS) with decomposed attributes. I’d like to see the rendering quality metrics for those NeRF scenes.\n3. 
It would be better if the author could show some video examples of moving shadows while rotating envmaps in the final release.\n4. Why does the learnable parameter $\\gamma$ have an exponent 0.2?"
},
{
"confidence": 5,
"rating": 6,
"review_id": "u4eU7dmdht",
"review_text": "This paper introduces RobIR, an inverse rendering approach designed to handle strong or directional illumination scenes with strong shadows and specular reflections. \nThe proposed method aims to decouple environment lighting and object materials, with the goal of producing high-quality albedo without baked shadow.\n\nBuilding on top of prior inverse rendering methods such as InvRender, RobIR introduces two components that further boost the reconstruction quality: (1) ACES tone mapping with an optimizable gamma parameter to better capture the image formation process; (2) regularization for visibility estimation. \n\nRobIR demonstrates better performance over existing methods in quantitative and qualitative evaluations.\n\n- The model design choices are valid and sensible. \n- The experiments and ablation study are thorough. \n- The paper is well written and easy to follow.\n\n[W1] The benefit of ACES tone-mapping is a bit surprising. Despite a more accurate formulation for the image formation process, the task of inverse rendering is inherently still an ill-posed problem. It’s a surprising conclusion that a tone-mapping formulation can robustly and significantly benefit shadow removal. \n\nWith many other regularization terms entangled, it’s a bit hard to evaluate the correctness of this specific component. \nI’d hope to know more details in the following aspects: \n\n\n[W1.1] Missing visualization of the tone-mapping curve. Despite an important contribution, the estimation results of the tone-mapping curve are missing. What does the default ACES tonemapping look like, and what does the final tonemapping look like with the optimizable gamma? \n\nIn the revised version, the curves should be qualitatively visualized and included in main paper. From the existing experiments (such as the PSNR metrics), the audience cannot intuitively understand how well the tonemapping is estimated. \n\n[W1.2] Missing evaluation of the tone-mapping curve. \n\nAs many datasets do not have GT tonemapping, it’s unclear how accurately the tone-mapping approximate the GT tonemapping. \n\nOne way to evaluate is to add additional tone adjustment to the input dataset. Assume with the original dataset images $\\{ I \\}$ and the method originally reconstructs a tonemapping curve $f$. Given a new tone adjustment function, e.g. $g(x) = x^\\kappa$, the adjusted dataset images become $\\{g(I)\\}$. Re-running the method can get a newly reconstructed tonemapping curve $f_\\kappa $. The consistency between $g \\circ f$ and $f_\\kappa$ can indicate how well the model can approximate the additional introduced tone adjustment function. $\\kappa$ can be set to values like 0.5 or 2. \n\n[W1.3] The evaluation metric on the Albedo is flawed. As albedo estimation/optimization often involve an unknown scale, PSNR alone is not a proper evaluation for Albedo. Check out [1] for more analysis and more appropriate scale-invariant metrics.\n\n[W1.4] Most of the results are from synthetic datasets, where the GT tonemapping could potentially be close to ACES. For real-world results in Fig.7, the albedo estimation looks strongly regularized and over-smoothed. \n\n[W1.5] Are the radiance values (before tonemapping) and indirect illumination in HDR? If so, what is the activation function? \n\n[W2] The proposed method involves a complicated training pipeline (two stages with each stage have its own loss scheduling), and the novelty is relatively limited. 
\n\n\n**References** \n\n[1] Grosse et al., Ground truth dataset and baseline evaluations for intrinsic image algorithms, ICCV 2009.\n\nThis paper addresses the shadow baking issue in inverse rendering by proposing two straightforward but effective techniques: optimizable tone-mapping and visibility regularization.\n\nHowever, I am not fully convinced that optimizable tone-mapping significantly benefits shadow removal, and without further clarification it’s challenging to evaluate the technical correctness of this component. In the rebuttal, please prioritize addressing Weaknesses 1.1-1.4. \n\n**[Post rebuttal update]**\n\nThe rebuttal and related discussion address my concerns. I update my rating to Weak Accept."
}
] | |
y6qhVtFG77 | NeuroBOLT: Resting-state EEG-to-fMRI Synthesis with Multi-dimensional Feature Mapping | Functional magnetic resonance imaging (fMRI) is an indispensable tool in modern neuroscience, providing a non-invasive window into whole-brain dynamics at millimeter-scale spatial resolution. However, fMRI is constrained by issues such as high operation costs and immobility. With the rapid advancements in cross-modality synthesis and brain decoding, the use of deep neural networks has emerged as a promising solution for inferring whole-brain, high-resolution fMRI features directly from electroencephalography (EEG), a more widely accessible and portable neuroimaging modality. Nonetheless, the complex projection from neural activity to fMRI hemodynamic responses and the spatial ambiguity of EEG pose substantial challenges both in modeling and interpretability. Relatively few studies to date have developed approaches for EEG-fMRI translation, and although they have made significant strides, the inference of fMRI signals in a given study has been limited to a small set of brain areas and to a single condition (i.e., either resting-state or a specific task). The capability to predict fMRI signals in other brain areas, as well as to generalize across conditions, remain critical gaps in the field. To tackle these challenges, we introduce a novel and generalizable framework: NeuroBOLT, i.e., Neuro-to-BOLD Transformer, which leverages multi-dimensional representation learning from temporal, spatial, and spectral domains to translate raw EEG data to the corresponding fMRI activity signals across the brain. Our experiments demonstrate that NeuroBOLT effectively reconstructs unseen resting-state fMRI signals from primary sensory, high-level cognitive areas, and deep subcortical brain regions, achieving state-of-the-art accuracy with the potential to generalize across varying conditions and sites, which significantly advances the integration of these two modalities. | https://openreview.net/pdf/bdea46a673736c763526ec0b31ddac30ab558611.pdf | [
{
"confidence": 5,
"rating": 4,
"review_id": "koHATLtwug",
"review_text": "This paper introduces NeuroBOLT, a transformer-based model. NeuroBOLT utilizes multi-dimensional representation learning across temporal, spatial, and spectral domains to translate raw EEG data into comprehensive fMRI activity signals across the entire brain. Experimental results showcase NeuroBOLT's ability to effectively reconstruct resting-state fMRI signals across primary sensory, high-level cognitive areas, and deep subcortical regions.\n\n1. The paper tackles one of the most challenging and competitive topics in neuroscience.\n2. The motivation behind the paper is quite clear, and the experimental section is logically sound.\n3. The figures and tables in the article are clear and well-organized, making it highly readable.\n\n1. The method abbreviation and the title are not closely related. It is unclear where 'BOLT' comes from in the title, and even after reading the abstract, it remains confusing.\n2. In fact, there has been a lot of work on fMRI-EEG in recent years, especially in 2023 and 2024, but the author's related work lacks a significant amount of relevant literature.\n3. In the abstract and introduction, the author's description of the method is inconsistent with the organization in the methodology section, resulting in a need for improved readability.\n4. The writing of the article needs to be further standardized. For example, 'FMRI data' is sometimes written with a capital 'F' and other times as 'fMRI data'.\n5. Although the layout and presentation of the tables are aesthetically pleasing, the font size is too small, making them difficult to read even when enlarged.\n6. The paper does not provide code or data to support the reproducibility of results. \n7. This paper lacks details on the parameter selection for the baseline methods. Although the authors state, 'The baseline models are from [44] and [16], where we choose the models with the best downstream classification task performance,' the datasets and tasks in references [16] and [44] are not entirely consistent with those in this paper. Therefore, the authors should specify the exact process of parameter selection.\n8. The equations and symbols in the article are not very standardized. The authors should provide notation to help readers understand.\n9. The authors should conduct statistical tests to validate the significance of their methods.\n10. For readers in the NeurIPS community, the theoretical contribution of this paper appears to be weak.\n11. I don't quite understand what the author means by the third point of contribution: 'Successful resting-state fMRI reconstruction To our knowledge, this is the first study to successfully reconstruct the resting-state fMRI signal from raw EEG data, with only 26 electrodes.' What is the significance of 26 electrodes?\n12. Equation 2 does not appear to be a complete equation.\n\nPlease see the twelve weaknesses above."
},
{
"confidence": 4,
"rating": 8,
"review_id": "jZX99NmPZ9",
"review_text": "The manuscript proposes an EGG-to-fMRI synthesis model. The framework implements a transformer architecture and uses a multi-channel feature combination expanded across the temporal axis. To evaluate the proposed model, EGG and fMRI data from 22 participants were recorded while they were in the resting state with eyes closed.\n\nThe manuscript addresses an interesting problem and can open opportunities for multimodal neuroimaging analysis. Overall, this line of investigation is little explored, therefore, the present manuscript is novel and of interest to the community.\n\nThe present manuscript is quite complete, (1) the model and rationale behind are sound; (2) a dataset is collected which allows a faithful evaluation of the proposed translation (from EEG to fMRI); (3) it's rather easy to read and follow the manuscript, (4) the reported results are promising.\n\nThe biggest weakness is that the framework and the dataset are only addressing the resting state. While this is an important baseline to investigate, it would have been great to explore the fidelity of the proposed framework when participants are presented with some stimuli.\n\nIt is unclear whether the source code and dataset will be released publicly.\n\nThe stability of the results is fully guaranteed given no statistical analyses are performed.\n\nWhat is \"In-scan\" in Table 1? This is not explained in the manuscript.\n\nIt is unclear to me why all methods are not evaluated for both inter-subject and in-scan. For example, isn't it possible to evaluate BIOT [44] for inter-subject data?"
},
{
"confidence": 5,
"rating": 5,
"review_id": "3CFFonOIT6",
"review_text": "In this work, the authors present a deep learning architecture for inferring functional magnetic resonance imaging (fMRI) signal from electroencephalography (EEG) data. The proposed model, named NeuroBOLT, utilizes transformer backbones to provide spatial, temporal, and frequency-based features from the EEG are utilized for reconstruction. The authors demonstrate the performance of their architecture on a small (N=22) data set using a propriatary data set of simultaneously measured EEG-fMRI.\n\nfMRI reconstruction from simultaneous EEG is a fascinating topic, and a difficult problem to tackle. The approach taken by the authors in this work is novel for the task at hand, i.e. using a multi-scale spectral feature embedding. Although the decision to use multi-scale spectral embeddings is not new in MRI analysis, as far as I could find the approach has not been utilized for this particular problem and the authors address novel problems for their application to simultaneous EEG-fMRI data in a deep learning architecture. At best this paper is a novel methodological tweak applied with state of the art architectures to see improvements over other deep learning baselines.\n\nThe breadth of the experiments attempted by the authors is promising; however, see my discussion below for more of a discussion of the limitations of the experiments performed. \n\nThe authors also perform an ablation study to explore how the inclusion of Multi-Scale Spectral features improves model performance, thus demonstrating the benefit of combining the multi-scale spectral features with the spatiotemporal. This is well appreciated.\n\nThe major weaknesses of this work come down to weaknesses in the empirical evaluation. I am afraid that in its current state, the evaluation does not lead to a convincing demonstration of this method for fMRI reconstruciton, and the claims in the introduction about novelty coming from the application to multiple brain regions and resting-state fMRI seem somewhat overemphasized. Currently this brings it to a full reject as the paper is otherwise sound but the limitations in the evaluation are significant enough to bring it well below the threshold, and cannot be easily addressed in the rebuttle I believe. \n\nFirst, I will highlight the lack of reported standard deviations or error bars in any of the results. No standard deviations are provided in tables 1 or 2, or in any of the figures providing results. In the checklist, the authors state \"Error bars are not reported at the current stage because it would be too computationally expensive to compute over all the brain regions and for all participants,also there is limited space in the paper to put all the statistics. But we could always add this information if reviewers think it’s important to know.\"\n\nWhile I appreciate the authors' acknowledgement of this exclusion, I do think error bars are absolutely necessary to demonstrate the efficacy of the proposed method. The demonstrated improvements are often quite small (e.g. improvement from 0.540 to 0.588 in table 1), and it is not clear if the purported improvements can be explained away from model noise. I could not find any information about controlling model initialization or seeds as well to ensure that random initializations played a less significant role between experiments even with the same architecture on different regions. 
I absolutely think error bars are necessary for this work, and the reasoning provided by the authors is not mitigated elsewhere or behind a more significant barriers other than training and evaluation time. Additionally, the authors could have mentioned this omission in the limitation section of their main paper since I had to go to the checklist to be sure the authors were aware of the issue.\n\nSecond, in the abstract the authors highlight the ability to \"generalize to other brain areas\" and \"other conditions (such as resting state)\". Unless I am missing something, I cannot find any experiments by the authors that address these particular gaps. The authors do provide inter- and intra-subject predictions which is interesting; however, their model is still only trained on individual ROIs, and they don't include any experiments demonstrating transfer learning between models trained on other regions, and they do not include any experiments studying other tasks BEYOND resting-state fMRI. Thus, the paper falls into the same limitation as past works which were only focused on task, just in the other direction. This work would have been much more compelling if they could demonstrate a model which trained well both on task and rest related data, or even better, which could reconstruct task-related data despite only being trained on resting-state fMRI. The acknowledgement of the limitations in the literature is thus misleading as the proposed method still suffers from these same limitations. The choice to only demonstrate the results for several ROIs highlights this limitation - it would be okay if the authors did not seem to imply elsewhere that their model gets around the single-ROI training approach from past methods. \n\nClearly, the N in this study is quite small. This is to be expected as simultaneous EEG-fMRI is still quite rare as a sequence to collect; however, the authors seem to gloss over all of the myriad issues which will come with training their data over such a small data set. I am not penalizing this work for the small N in and of itself, but as I cannot find any mention of common obstacles such as overfitting, bias towards particular kinds of reconstruction errors, and other limitations that would inevitably arise. I am extremely surprised I could find no mention of pretraining anywhere in this work, which I almost imagine would be hugely necessary for these kinds of studies with very small data sets. Again, it's not necessarily a limitation in and of itself to not do these things, with how the paper currently reads these seem to be touted as benefits of the new approach which are not backed up by evidence.\n\nHow well does the model trained on other ROIs transfer to reconstruction of completely different ROIs?\n\nHow might scanner model and particular parameters of the resting state sequence affect reconstruction?\n\nWhy do you not compare anywhere with Source Localization? Source Localization is only mentioned once offhand, and its limitations and efficacy as a reconstruction technique are not gone into in detail. I am surprised it was not included as a baseline method in fact."
}
] | |
y6JotynERr | Towards Diverse Device Heterogeneous Federated Learning via Task Arithmetic Knowledge Integration | Federated Learning (FL) has emerged as a promising paradigm for collaborative machine learning, while preserving user data privacy. Despite its potential, standard FL algorithms lack support for diverse heterogeneous device prototypes, which vary significantly in model and dataset sizes---from small IoT devices to large workstations. This limitation is only partially addressed by existing knowledge distillation (KD) techniques, which often fail to transfer knowledge effectively across a broad spectrum of device prototypes with varied capabilities. This failure primarily stems from two issues: the dilution of informative logits from more capable devices by those from less capable ones, and the use of a single integrated logits as the distillation target across all devices, which neglects their individual learning capacities and the unique contributions of each device. To address these challenges, we introduce TAKFL, a novel KD-based framework that treats the knowledge transfer from each device prototype's ensemble as a separate task, independently distilling each to preserve its unique contributions and avoid dilution. TAKFL also incorporates a KD-based self-regularization technique to mitigate the issues related to the noisy and unsupervised ensemble distillation process. To integrate the separately distilled knowledge, we introduce an adaptive task arithmetic knowledge integration process, allowing each student model to customize the knowledge integration for optimal performance. Additionally, we present theoretical results demonstrating the effectiveness of task arithmetic in transferring knowledge across heterogeneous device prototypes with varying capacities. Comprehensive evaluations of our method across both computer vision (CV) and natural language processing (NLP) tasks demonstrate that TAKFL achieves state-of-the-art results in a variety of datasets and settings, significantly outperforming existing KD-based methods. Our code is released at https://github.com/MMorafah/TAKFL and the project website is available at https://mmorafah.github.io/takflpage . | https://openreview.net/pdf/775a6d52478bc6934cf412e2704981a99343583b.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "LQYPvstoHB",
"review_text": "The authors aim to develop a knowledge distillation method that addresses the challenges posed by heterogeneous device prototypes in federated learning. By capturing the knowledge transfer among device prototypes, the proposed TAKFL tries to preserve each device's unique contribution and prevent knowledge dilution during the learning procedure. The method incorporates a self-regularization technique to address issues of the noisy and unsupervised ensemble distillation. Evaluation of some CV and NLP tasks shows the performance of model accuracy and scalability.\n\nThe authors focus on knowledge distillation in heterogeneous settings, which is meaningful to real-world tasks. The overall presentation is clear and easy to understand. The proposed TAKFL shows some primary theoretical analysis of the learning efficiency. The evaluation provides a quantitative analysis to compare model accuracy and scalability with previous methods.\n\nWhile the paper presents a clear presentation and evaluation, there are a few aspects to be strengthened. \n\n1. The concept of \"task arithmetic\" in the context of federated learning is vague. It would be beneficial to explain how this concept enhances the design of the federated learning process. \n2. While the knowledge distillation process is described, it appears to be largely based on existing methodologies. It would be interesting to explore any novel design elements introduced in TAKFL. Additionally, what is the theoretical convergence order of the proposed method? \n3. Although TAKFL demonstrates higher model accuracy in performance comparisons, the baselines used, such as FedAvg and FedDF, seem outdated. Incorporating more recent baselines could provide a more rigorous evaluation of TAKFL's effectiveness.\n\nPlease refer to the concerns and questions mentioned in the weakness part."
},
{
"confidence": 4,
"rating": 5,
"review_id": "USFRZZJb0q",
"review_text": "The paper focus on a problem that traditional federated learning methods fail to effectively handle scenarios where devices have widely varying capabilities. It improve existing Knowledge Distillation (KD) methods that are inadequate in these heterogeneous environments. Experimental results show the validity of their proposed method.\n\n1. TAKFL treats knowledge transfer from each device prototype’s ensembles as separate tasks and distills them independently.\n2. The paper is well written with comprehensive experiments\n\n1. Assumption on Weight Disentanglement Property: The theorem 2 reply on the assumption of the weight disentanglement property (line 679-681 in appendix) is too strong. In practice, achieving weight disentanglement is challenging. Studies [1,2] demonstrate that there are interferences among task vectors, making disentanglement difficult. [3] achieves disentanglement using Neural Tangent Kernel (NTK) and shows without disentanglement the performance is dropped. Consequently, asserting the disentanglement property is problematic, thereby limiting the theoretical impact. \n\n2. Similar to weight conflicts when doing merge, averge logits also have conflicts, the paper only considers vanilla average logit as the KD loss. However, there are studies to resolve the issue, like [4,5] have proposed better ensemble KD loss designs, more studies online. Therefore, incorporating these methods for comparison is important.\n\n3. The computation cost of the method is high as the number of distillation process is O(M^2) which exponentially increases with the prototype number.\n\n4. The changes compared to [6] are minor. Initially, I believed this method could be simple and efficient. However, upon reviewing the weight disentanglement property, I have concerns about its practical validity, which limits the novelty of the approach.\n\n5. A minor issue is only apply to data with prototype label, thus may limited its impact\n \n[1] TIES-Merging: Resolving Interference When Merging Models\n\n[2] Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch\n\n[3] Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models\n\n[4] Adaptive Multi-Teacher Multi-level Knowledge Distillation\n\n[5] Agree to Disagree: Adaptive Ensemble Knowledge Distillation in Gradient Space\n\n[6] Ensemble distillation for robust model fusion in federated learning.\n\nplease refer to Weakness."
},
{
"confidence": 3,
"rating": 5,
"review_id": "v1u8g5oJDF",
"review_text": "The paper presents a novel framework called TAKFL, which addresses the challenge of transferring knowledge in federated learning across heterogeneous devices, ranging from small IoT devices to large workstations. TAKFL uniquely handles the knowledge distillation by treating the transfer from each device prototype as a separate task, allowing for tailored integration of knowledge to optimize performance. The approach is validated theoretically and through extensive experiments, demonstrating superior results over existing methods across both computer vision and natural language processing tasks.\n\n(1) Practical Problem and Innovative Approach\nThe authors address a significant, real-world challenge in federated learning—knowledge transfer across heterogeneous devices. Their novel framework, TAKFL, innovatively treats each device's knowledge transfer as a separate task, allowing for customized integration. This tailored approach is both practical and theoretically sound, making it a substantial contribution to the field.\n\n(2) Clarity and Organization\nThe paper is well-structured, facilitating easy understanding of complex concepts and methods. The clear presentation enhances the accessibility of the content, making it easier for readers to grasp the significance of the proposed solution and its impact on the field.\n\n(3) Strong Experimental and Theoretical Support\nThe authors back their claims with extensive experimental results across multiple tasks and datasets, demonstrating the effectiveness of TAKFL in diverse scenarios. Moreover, the inclusion of theoretical analysis adds depth to the validation, reinforcing the reliability and scalability of their approach. This combination of empirical and theoretical evidence strongly supports the paper's contributions and conclusions.\n\n(1) [main concern] Strong Dependence on Hyperparameters\nThe TAKFL framework introduced in the article significantly relies on the setting of hyperparameters, especially during the Task Arithmetic Knowledge Integration process, where the weights for different task vectors are set as hyperparameters and adjusted on a validation set. This might limit the method's generalizability across different real-world applications. To enhance the practicality and robustness of the method, it is recommended that the authors explore more automated hyperparameter optimization strategies to reduce the need for manual tuning and improve the adaptability of the model.\n\n(2) [main concern] Strong Assumptions in the Selection of Public Datasets\nThe experimental design involves the use of public datasets, such as CIFAR-100 and ImageNet-100, for knowledge distillation. This choice seems to be based on two key assumptions: that the public datasets must exhibit high diversity and that the training data distribution can be approximately considered a subset of the public dataset. These assumptions may not always hold in practical applications, so it is advisable for the authors to thoroughly investigate the actual impact of these choices on model performance in future work. Additional experiments could validate the effectiveness of these assumptions, and considerations of these potential limitations should be explicitly stated in the manuscript.\n\n(3) [minor concern] Quantification of Data Heterogeneity and Hyperparameter Selection\nThe authors utilize a Dirichlet distribution to quantify Data Heterogeneity, setting $Dir(\\alpha)$ at 0.3 and 0.1 to simulate varying degrees of data heterogeneity. 
However, there is insufficient explanation for the choice of these specific values. To enhance the transparency and reproducibility of the research, it is recommended that the authors provide a detailed rationale behind these parameter choices, based on logic and references to previous studies. Moreover, to give readers a more intuitive understanding of the data distribution differences under different $\\alpha$ settings, descriptive statistics or visualizations, such as the distribution of samples across categories, would be helpful.\n\nSee weaknesses"
},
{
"confidence": 4,
"rating": 5,
"review_id": "6lbARfO6VE",
"review_text": "This paper introduced a KD-based framework (TAKFL) to address the dilution and diversity issues in heterogeneous FL knowledge transfer learning. The TAKFL distills knowledge from prototypes of varying sizes and incorporates a self-regularization to mitigate noise simultaneously, then integrates these separately distilled knowledge by task arithmetic. Empirical evaluations across various CV and NLP datasets demonstrate the framework's effectiveness.\n\n1. The paper is well-organized and easy to follow.\n2. The paper novelty introduced a theoretical model to illustrate the efficacy of knowledge distillation in heterogeneous FL.\n3. The paper proposed a new framework, TAKFL, for considering varying sizes of prototypes with different contributed information, and experiments on different CV and NLP datasets show its effectiveness.\n\n1. Some baselines are lacking. For example, FedProto [1], which also employs prototypes within device heterogeneous FL, should be included for a more comprehensive comparison. \n2. It seems that the proposed method incurs higher time and storage costs, as it requires the independent learning of multiple student models compared to the vanilla methods. The paper should provide an efficiency analysis that compares the proposed method with existing baselines, highlighting both time and storage metrics.\n3. It would be better to provide a visualization study for a better understanding of the effectiveness of transfer learning from different prototypes.\n\n[1] Tan, Yue, et al. \"Fedproto: Federated prototype learning across heterogeneous clients.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 8. 2022.\n\nsee weakness"
}
] | |
y2fAmldTIf | HEPrune: Fast Private Training of Deep Neural Networks With Encrypted Data Pruning | Non-interactive cryptographic computing, Fully Homomorphic Encryption (FHE), provides a promising solution for private neural network training on encrypted data. One challenge of FHE-based private training is its large computational overhead, especially the multiple rounds of forward and backward execution on each encrypted data sample. Considering the existence of largely redundant data samples, pruning them will significantly speed up the training, as proven in plain non-FHE training. Executing the data pruning of encrypted data on the server side is not trivial since the knowledge calculation of data pruning needs complex and expensive executions on encrypted data. There is a lack of FHE-based data pruning protocol for efficient, private training. In this paper, we propose, \textit{HEPrune}, to construct a FHE data-pruning protocol and then design an FHE-friendly data-pruning algorithm under client-aided or non-client-aided settings, respectively. We also observed that data sample pruning may not always remove ciphertexts, leaving large empty slots and limiting the effects of data pruning. Thus, in HEPrune, we further propose ciphertext-wise pruning to reduce ciphertext computation numbers without hurting accuracy. Experimental results show that our work can achieve a $16\times$ speedup with only a $0.6\%$ accuracy drop over prior work. The code is publicly available at \href{https://github.com/UCF-Lou-Lab-PET/Private-Data-Prune}. | https://openreview.net/pdf/0446457111dbee1b34d15e6bd303f6f1da78ef41.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "TaJgDqeCs8",
"review_text": "This paper proposes a data pruning algorithm for the training of Homomorphic Encryption (HE)-based neural networks. The authors introduce an HE-friendly importance score and client-aided masking to prune samples in the dataset. The authors further propose ciphertext-wise pruning to merge ciphertexts with empty slots, thereby reducing computational costs during training. Finally, the paper presents empirical studies to validate the effectiveness of the proposed data pruning method.\n\nThe main advantages can be listed as follows:\n\n1.\tThe paper provides a new data pruning method for data encrypted by HE scheme. The authors propose the HEL2N score, which substitutes the $\\ell_2$-norm in the EL2N score with $\\ell_1$-norm, and the client will select important samples based on the score computed by server.\n2.\tThe paper proposes the ciphertext-wise pruning, enabling the server to merge ciphertexts with empty slots with the communication of the client.\n3.\tThe paper conducted experiments on five datasets and compares the proposed method with the HETAL method to demonstrate its effectiveness.\n\nDespite many strengths, there are some issues with the paper as follows:\n\n1.\tThe submission requires further revisions for clarity and consistency. At line 43 “methds” should be “methods”; At line 74 and Figure 1 “CIFAR10” should be “CIFAR-10” as in Section 4; At line 326 “Table2” should be “Table 2”; At line 340 “Figure 4(4)” should be “Figure 4(a)”; Figure 1 and 4 should have sub-captions denoting which subgraph is (a) or (b).\n2.\tThe computation costs associated with data pruning raise concerns. As given by Eqn. (1), computing the HEL2N score involves multiple gradient computations for each sample, which is computationally intensive. Moreover, ciphertext-wise pruning seems to require a large number of rotations, which is also a very slow HE operation. If the data pruning process is time-consuming, it may negate the benefits, making it more efficient to train directly without pruning.\n3.\tThe novelty of the paper is questionable. The HEL2N score primarily modifies the $\\ell_2$-norm in the EL2N score to an $\\ell_1$-norm, and directly computing the square of EL2N score seems to be faster than HEL2N score. The trick of masking is also common place in HE literature. Moreover, one advantage of HE-based method is that they do not require any communications between server and client. The requirement for client-server communication in client-aided masking and ciphertext-wise pruning could diminish the significance of the proposed method.\n\n1.\tWhy not directly compute the square of the EL2N score, which would avoid the need for computing the square root or the maximum value and is a simpler process?\n2.\tIs the running time for HE-based data pruning included in the total running time reported in the Tables in Section 4.2? If so, what proportion of the total running time does the HE-based data pruning constitute? \n3.\tHow does the proposed method compare to a baseline that directly randomly samples a subset of ciphertext at each epoch, instead of computing importance score and merging ciphertext? I think this baseline is simpler and more efficient."
},
{
"confidence": 3,
"rating": 7,
"review_id": "ZOq1NO0ZdX",
"review_text": "This paper focuses on the scenario where the client encrypts the model and dataset with homomorphic encryption and outsources them to the server for training. It accelerates the training process through dynamic data pruning. This paper makes the following three contributions: \nFirst, this paper is the first to use dynamic data pruning to accelerate model training in homomorphic encryption (HE) scenarios. Second, because using the plaintext data pruning method in the HE scenario incurs significant overhead, this paper proposes an HE-friendly method for evaluating the importance of data samples. Lastly, because of the high cost of sorting in HE, this paper proposes that the client undertake this part of the computation. Additionally, since a single SIMD ciphertext can contain multiple data samples, pruning may not reduce the number of ciphertexts, even though the samples within each ciphertext become more sparse. To address this issue, the paper proposes to combine several sparse ciphertexts to reduce HE computation.\n\n1) This paper is the first to apply dynamic data pruning to accelerate model training in HE scenarios.\n2) It introduces a HE-friendly important score to make data pruning more efficient.\n3) This paper uses ciphertext-wise pruning to reduce the number of ciphertexts while keeping detailed information.\n\n1) The HE-friendly score needs more explanation. In this paper, the score is directly introduced without any theoretic proof of its effectiveness. \n2) The work is a bit incremental. Applying data pruning in the HE scenario doesn't seem very challenging, and there is no significant difference between data pruning in plaintext and ciphertext.\n\n1) Is there any theoretic proof of the effectiveness of the proposed HE-friendly score?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "CsAyNselKx",
"review_text": "1. The paper introduces a Homomorphic Encryption (HE)-based confidential training framework that enhances training efficiency through encrypted data pruning.\n2. The paper proposes HE-Friendly Score (HEFS), an enhancement over the existing EL2N score, to efficiently assess the importance of encrypted data samples.\n3. Due to the high complexity of sorting scores and calculating the pruning mask on the server, the paper introduces a method that generates data pruning masks with the assistance of the client, enabling the server to perform pruning.\n4. The paper proposes a method for pruning at the ciphertext level to reduce sparsity in the encrypted data, thereby accelerating the training process.\n5. The performance of HEFS, CAM, and CWP is evaluated on diverse datasets such as MNIST, CIFAR-10, Face Mask Detection, DermaMNIST, and SNIPS. The results are compared with the previous method, HETAL (ICML2023), to demonstrate improvements in training speed and accuracy.\n6. The experimental results indicate that the proposed methods can accelerate confidential training by up to 16 times with minimal loss in accuracy.\n\n1. This paper tackles the novel problem of accelerating confidential training through encrypted data pruning, a topic that appears to have not been previously explored in existing research.\n2. The methodologies and experimental procedures are clearly explained, ensuring the reproducibility of the results by providing their code (although I have not tested the code yet).\n3. Considering that HE training is a very challenging subject due to the computational complexity of operations on homomorphically encrypted data, it is noteworthy that the authors have implemented detailed techniques such as pruning within the homomorphic encryption framework for the first time. However, it is crucial that the design carefully considers both security and performance limitations.\n\n1. There are concerns regarding the privacy threat setting in this paper. The focus is solely on the importance of the client's data privacy, without explicitly considering the server's model privacy. In other words, the server's model is assumed to be publicly available information. This assumption is reflected in the final step, where the client recovers the weights of the trained model and sends them back to the server.\n2. If the server's model is publicly available, it would be more efficient for the client to process the data in plaintext after receiving the pretrained model. If the scenario involves a massive pretrained model, such as a large language model (LLM), which individual clients cannot train, then training such an LLM in an encrypted state would require at least 1000 times more computation on the server due to the difference in computational overhead between homomorphic encryption and plaintext operations. This level of computation may be unmanageable, and the final decryption of the model by the client would also be infeasible due to the enormous size (100 trillion parameters) of the model.\n3. If the primary concern is the client's privacy, it would be significantly more efficient to have the client train on a pretrained model in plaintext, use federated learning, or adopt other methods rather than struggling with encrypted training on the server, as proposed in this paper. The bottom line is that if the server's model privacy is not considered in the security threat model, it is questionable whether this approach is practical or appropriate.\n4. 
The HE.cmp operation does not yield a precise 1 or 0; rather, it produces a fractional value when the two compared values are similar. During the sorting process in Algorithm 1, swapping based on HE.cmp might introduce noise if the score differences are not substantial. This could affect performance. To avoid this, HE.cmp would need a high-degree approximation. The paper does not provide sufficient information on HE.cmp, and additional explanation would be beneficial.\n5. Due to the overhead of homomorphic encryption operations, involving the client in the training process because of the complexity of sorting diminishes the method's utility. While the paper does not consider the server's model privacy, allowing the client to access intermediate values during training does not pose an additional security risk. However, in scenarios where the server's model privacy is a concern, this method could enable the client to gain critical information about the model. Increasing the client's role in plaintext processing during training could significantly reduce the server's burden and enhance overall performance. In extreme cases, the client could potentially handle the entire training process in plaintext. The paper needs to explain why the client should only assist with sorting.\n\nIs it true that the pruning mask is also encrypted? The distinction between encrypted and non-encrypted elements in the algorithm is not entirely clear. While an overline appears to indicate encrypted values, there are instances where this is not consistently applied.\nIn scenarios where the server does not send scores to the client, the mask would presumably remain encrypted. If this is the case, how does the server determine which parts to prune, and how does it use rotation to merge ciphertexts and create pruned ciphertexts? Is it assumed that the client must decrypt the data and create the mask in plaintext? If so, the paper should explicitly state this assumption."
},
{
"confidence": 4,
"rating": 5,
"review_id": "jP3pHSAtPF",
"review_text": "This paper presents a method for pruning data in a utility-preserving way under homomorphic encryption, evaluating the method to demonstrate that the savings from training on pruned data outweighs the costs of encrypted data pruning computations. The methods for determining how relevant data items are to improving training performance are similar in spirit to those in the active/few-shot learning literature, but the paper does not explicitly draw this parallel.\n\nAlthough the paper offers a concrete threat model related to \"private training\" (where model training is outsourced to a third party), several aspects of the threat model seem not to achieve the stated goal of limiting the outsourcing party/vendor's ability to learn useful facts about the data. As the current state-of-the-art is to bind third-party infrastructure providers contractually, I'd like to see a map from how the approach in this paper, in a threat model that is weak, could be strengthened to a threat model that would obviate these purely contractual limitations. For example, there are standard constructions to move from honest-but-curious models to models where the third party is more adversarial (although some reliance must always be assumed in the case of outsourced computation). More difficult are the problems of separating data flows from inferences about the content of training data through various sorts of indirect leakage (training time, ciphertext size, dependence of the presence/absence of the allowed \"early stop\" signal, etc). Although this problem is difficult in general, I suspect there are ways to organize the threat model here so that it can make stronger claims around solving them. At a minimum the threat model should declare indirect data flow out of scope while recognizing that it can leak data items to the outsourcing partner.\n\nA few structural aspects of the paper confuse an otherwise solid presentation: a core assumption in the setup of the model in which the protocol is used is that the outsourcing partner will compare the sorted data items (under encryption) to a threshold importance score determined by the utility loss of pruning, but how this threshold is set/computed is left open (an experiment uses an ex vacuo value of 0.9 for this parameter, but why is not explained even in this concrete context!); although it is clear that the goal is only to outsource training, it is not clear that an organization without the infrastructure capability for training will have infrastructure capability for things like serving - this should be declared out of scope or unpacked/discussed a bit; approximations made to simplify computation under encryption, such as the replacement of $\\ell_2$ with $\\ell_1$ norm at 204-205, are not directly evaluated or justified; in general, the grouping of samples into batch ciphertexts is assumed but not explained - the paper should explain its necessity and the benefits it provides vs. the simple solution of putting each sample in its own ciphertext.\n\nAlthough the evaluation is valuable in supporting the core claims of the paper, there are some structural issues there as well: although the experiments characterize the tradeoff between utility and pruning ratio, this tradeoff is very different for the two example datasets. How general ought a tradeoff curve for this be? How data dependent might it be? Relatedly, might pruning affect performance for different classes differentially, especially in situations where class balance is poor? 
Much of the \"fairness\" literature focuses on ways that aggregate analysis breaks down when distinct classes might or ought to be treated differently by the model. Does this affect the analysis? Could it in some cases?\n\nLast, I observe that 4MB of communication overhead for a tiny database (CIFAR-10, 43750 samples) is manageable but there is no discussion of scaling here. Are the proposed applications small like this? If not, at what point is the overhead too much? Does scale cause this to break down? I here recognize that the paper inherits the inherent inefficiency of FHE constructions.\n\n* The research question is well motivated and the solution a useful tool to making private outsourced training a more realistic option. I am not well versed enough in the FHE training literature to evaluate the novelty of this specific approach, but the core idea is sound.\n* Evaluations justify the theoretical claims nicely, even where improvements are available.\n* The overall argumentation is strong, even if some details are never defined or explained as noted in the summary.\n\n* The summary notes several places where definitions can be sharpened or details can be re-ordered to improve the presentation.\n* There are a handful of places where some copyediting would improve the presentation, although the high-level argument structure is in general strong.\n* The key question of how the pruning ratio is determined must be explained, since it is an input to most of the provided private training algorithms and also a key determinant of the model in which the protocol is meant to be used.\n\n* How does the pruning ratio get determined in practice? Is it necessary to do many private training runs and measure utility on the resultant models? How does this cost compare to the cost of non-private training?\n* Is the scale problem being swept under the rug at 237-240 an issue for larger data sets, or is the idea that this should only apply to data sets at the scale demonstrated? How does communication overhead scale as the training runs become larger?\n* Can the issue of indirect data leakage be managed somehow, or must it simply be assumed away in the security model? It's a hard problem, so solving it may well be out of scope here, but also past computation-on-encrypted-data methods have failed catastrophically due to indirect leakage problems so it's necessary to say something about it. What can or should the paper say?"
}
] | |
y10avdRFNK | Learning diffusion at lightspeed | Diffusion regulates numerous natural processes and the dynamics of many successful generative models. Existing models to learn the diffusion terms from observational data rely on complex bilevel optimization problems and model only the drift of the system. We propose a new simple model, JKOnet*, which bypasses the complexity of existing architectures while presenting significantly enhanced representational capabilities: JKOnet* recovers the potential, interaction, and internal energy components of the underlying diffusion process. JKOnet* minimizes a simple quadratic loss and outperforms other baselines in terms of sample efficiency, computational complexity, and accuracy. Additionally, JKOnet* provides a closed-form optimal solution for linearly parametrized functionals, and, when applied to predict the evolution of cellular processes from real-world data, it achieves state-of-the-art accuracy at a fraction of the computational cost of all existing methods. Our methodology is based on the interpretation of diffusion processes as energy-minimizing trajectories in the probability space via the so-called JKO scheme, which we study via its first-order optimality conditions. | https://openreview.net/pdf/71e85a95e3f40ebd277c5df65f9dff3c748e2ddb.pdf | [
{
"confidence": 3,
"rating": 7,
"review_id": "YxSGqQR3XZ",
"review_text": "This paper considers learning diffusion dynamics from observational data of populations over time, identified as learning the energy functional in Equation 3. Past research has confronted this inverse problem via complex bilevel optimization, limited to potential energies. This paper proposes an alternative model JKOnet* that can work with potential, internal, and interaction energies, efficiently minimizes a quadratic loss instead of a complex bilevel optimization, has much lower computational complexity, and out-performs baselines in simulations. A variant for linearly parameterized functionals has a closed form solution. The paper's new method reconsiders the JKO scheme using first-order optimality conditions, resulting in decompose the problem into first computing optimal transport plans between adjacent populations and then optimizing a loss for fixed plans.\n\n- Inferring diffusion dynamics from observational data is a difficult and significant problem for which this paper appears to provide a solid contribution. The paper substantially improves upon JKOnet in terms of multiple directions: better performance (Figure 3), simpler optimization objective (Equation 11), better scalability and efficiency (e.g. Table 1, Section 4.2), and improved generality (Table 1, Section 4.3). These dimensions are analyzed in experiments across a range of different energy functionals, where the gains are shown in log-scale displaying orders of magnitude improvement. The paper makes a convincing argument for using JKOnet* over JKOnet.\n- The methodology appears quite strong, well-motivated, and original, with solid intuition given by the authors throughout the paper.\n\nMinor weaknesses:\n- While the results are strong, occasionally the language feels too imprecise. For example, \"runs at lightspeed\" seems inaccurate compared to \"runs very efficiently\". The authors also mention that they rely upon weeks-old advancements in optimization in the abstract which seems unneeded.\n- The paper is generally very well-written except for the introduction which could use editing. It introduces a lot of terminology and details from past research. Similarly, Figure 1 is referenced multiple times including in the introduction but it was hard to understand until after reading Section 3. \n- The construction of the optimal transport plans does not seem to be included in the computational complexity comparisons. While this is computed once for JKOnet*, it is additional expense over JKOnet.\n\n1. What is JKOnet_l in Table 1?\n\n2. In Section 4.2, the authors conclude that JKOnet* is well-suited for high-dimensional tasks. Does this include computing the optimal transport maps?\n\n3. The discussion in Figure 3 in the text focuses primarily on the speed improvement, yet the performance gains are also quite large, including seemingly between JKOnet* and JKOnet*_l. Can the authors comment on why the linear parameterization was useful in their experiments?"
},
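The two-step recipe this review describes, first computing optimal transport plans between adjacent populations and then optimizing a loss for fixed plans, can be sketched with the POT library. The sketch below makes strong simplifying assumptions: the energy is restricted to a quadratic potential V(x) = 0.5 x^T A x, the gradient is matched at the later snapshot, and tau is an assumed JKO step size. JKOnet* itself handles general potential, interaction, and internal energies via the scheme's exact first-order optimality conditions, which this sketch does not reproduce.

```python
import numpy as np
import ot  # Python Optimal Transport (POT)

def ot_plan(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Exact OT plan between two equal-weight point clouds (squared Euclidean cost)."""
    a = np.full(len(x), 1.0 / len(x))
    b = np.full(len(y), 1.0 / len(y))
    return ot.emd(a, b, ot.dist(x, y))

def fit_quadratic_potential(snapshots, tau=0.1):
    """Weighted least-squares fit of V(x) = 0.5 * x^T A x from the JKO heuristic
    grad V(y) ~ (x - y) / tau, with pairs (x, y) weighted by the fixed OT plans."""
    d = snapshots[0].shape[1]
    lhs, rhs = np.zeros((d, d)), np.zeros((d, d))
    for x, y in zip(snapshots[:-1], snapshots[1:]):
        plan = ot_plan(x, y)
        for i, j in zip(*np.nonzero(plan)):
            w = plan[i, j]
            lhs += w * np.outer(y[j], y[j])                 # sum of w * y y^T
            rhs += w * np.outer((x[i] - y[j]) / tau, y[j])  # sum of w * z y^T, z = (x - y)/tau
    return rhs @ np.linalg.pinv(lhs)  # A solving A y ~ z in the weighted least-squares sense
```

For this linear parametrization the weighted normal equations give the estimate in closed form, loosely mirroring the closed-form solution the review mentions for linearly parameterized functionals.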
{
"confidence": 4,
"rating": 8,
"review_id": "0i18x4FbZP",
"review_text": "The authors study diffusion processes from the perspective of Wasserstein gradient flows. Based on the recent fixed-point characterisation for Wasserstein proximal operator methods, they introduce Jordan-Kinderlehrer-Otto (JKO) type methods for learning potential and interaction energies that govern the diffusion process. Such methods are assuming that a sample of the population distribution at each time step is at hand (not necessearily obtained by tracking individual particles) implying important applications across various fields. While theoretical novelties are present (w.r.t. paper [26] that lies in the foundation of this work), the main contribution is the overall methodology for learning diffusion processes.\n\nPaper is, besides minor issues reported bellow, excellently written - very clear, precise and intuitive with well balances technical details between main text and the appendix. Existing ideas are neatly combined to obtain significant improvements of the JKO-type methods and extensive empirical evaluation is presented. The proofs seem correct and well-written.\n\nWhile I do not find important weaknesses, I feel that next several small issues can be addressed to further improve readability:\n\n1. When addressing content presented in the appendix it would be good to refer to the section, e.g. see Figure 6 in Appendix A.\n\n2. It would be good to say what $\\rho_t$ is in Example 2.1\n\n3. While Table 3.1 reports per-epoch complexity for all the methods, it would be important to note that JKOnet$^*$ have additional computational complexity for solving $T$ OT problems of size $N$ in $d$-dimensions. Detailed remark on the initial computational complexity, depending of the algorithm used, should be reported. \n\n4. In Section 4 it would be helpful to introduce the problems, that is to better explain the task of each experiment and the role of functionals ($V(x)$ ?!) appearing in Appendix F. Maybe giving an example on Styblinski-Tang functional appearing in Figures 2, 3 and 4, and then referring to other ones by their names and/or reference equations.\n\n1. In the implementation of the method, a priori computed optimal transport plans are obtained by solving entropy-regularised OT via Sinkhorn-type algorithms or some other methods?\n\n2. What do you think about the applications and/or limitations of the JKOnet$^*$ for the setting of long-trajectories to infer the behaviour in equilibrium, e.g. detection of meta-stable states of Langevin dynamics?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "SZp39GZ3R5",
"review_text": "This paper introduces JKOnet*, a new method for learning diffusion processes from data. It uses first-order optimality conditions of the JKO scheme instead of complex bilevel optimization. JKOnet* can recover potential, interaction, and internal energy components of diffusion processes. The authors provide theoretical analysis and experiments showing JKOnet* outperforms baselines in accuracy, speed, and ability to handle high-dimensional data. They also derive a closed-form solution for linearly parameterized functionals. JKOnet* offers improved computational efficiency and representational capacity compared to existing approaches for modeling diffusion dynamics from population data.\n\n- Develops JKOnet*, a method using first-order optimality conditions of the JKO scheme to learn diffusion processes, avoiding bilevel optimization and improving computational efficiency.\n- Provides theoretical analysis and proofs for JKOnet*, including a closed-form solution for linearly parameterized functionals, backed by comprehensive experiments across various test functions.\n- Demonstrates improved performance in terms of Wasserstein error and computation time compared to existing methods like JKOnet, especially in high-dimensional settings.\n- Enables recovery of potential, interaction, and internal energy components of diffusion processes, expanding the model's applicability to more complex systems and improving interpretability.\n\n- The experimental evaluation is limited to synthetic datasets. Real-world data applications would strengthen the practical relevance of the method.\n- While the paper discusses limitations, it does not thoroughly explore potential failure cases or boundary conditions where JKOnet* might underperform.\n- The paper does not provide a comprehensive comparison with other recent approaches in learning diffusion processes beyond JKOnet, which could provide broader context for the method's improvements.\n\n- The authors demonstrate JKOnet*'s performance on synthetic datasets. How well does the method perform on real-world diffusion processes? Additional evaluations on empirical data would help understand the method's practical applicability.\n- The paper focuses comparison mainly with JKOnet. How does JKOnet* compare to other recent approaches in learning diffusion processes? \n- In Section 3.4, the authors discuss different parameterizations. How sensitive is JKOnet* to the choice of neural network architecture for the non-linear parameterization case?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "p2Q3Rh1Jup",
"review_text": "This paper studies the problem of learning a diffusion process from samples. It proposes a new scheme based on learning the \"causes mismatch\" of the process, rather than the \"effects mismatch\" as in previous works. The new method is significantly more efficient than the schemes from prior works, and works well in practice.\n\nThe paper is well-written, and the scheme proposed seems to work well in practice on the examples it was tested on. The loss is intuitive, and resembles the score-matching loss from diffusion models, but is the analogous version for arbitrary diffusion processes. Overall, this seems like a paper that people at NeurIPS would be interested in.\n\nI am not familiar enough with the literature, but it seems surprising to me that this scheme has never been proposed before. In particular, the loss is exactly the score-matching in the case of diffusion models, and there are works [1], [2] that have proposed a similar loss for arbitrary diffusion processes. \n\n[1]: https://arxiv.org/abs/2208.09392\n[2]: https://arxiv.org/abs/2209.05442\n\n1) Can you provide a more thorough comparison with prior literature, especially the works I have linked above?"
}
] | |
xzCuBjHQbS | Random Function Descent | Classical worst-case optimization theory neither explains the success of optimization in machine learning, nor does it help with step size selection. In this paper we demonstrate the viability and advantages of replacing the classical 'convex function' framework with a 'random function' framework. With complexity $\mathcal{O}(n^3d^3)$, where $n$ is the number of steps and $d$ the number of dimensions, Bayesian optimization with gradients has not been viable in large dimension so far. By bridging the gap between Bayesian optimization (i.e. random function optimization theory) and classical optimization we establish viability. Specifically, we use a 'stochastic Taylor approximation' to rediscover gradient descent, which is scalable in high dimension due to $\mathcal{O}(nd)$ complexity. This rediscovery yields a specific step size schedule we call Random Function Descent (RFD). The advantage of this random function framework is that RFD is scale invariant and that it provides a theoretical foundation for common step size heuristics such as gradient clipping and gradual learning rate warmup. | https://openreview.net/pdf/1ec86b65423b53f88a8ae3ca99f4895a30b0617d.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "VRMrMCjeg1",
"review_text": "The authors derive a novel gradient descent step schedule from a Bayesian point of view, establishing a connection between Bayesian optimization and classical optimization. The theory gives support to some commonly chosen step schedules and is validated on MNIST dataset.\n\n1. The paper is well written, with clearly explained and carefully chosen notations. The figures are very pretty. It's a pleasure to read.\n2. The paper has a good motivation. Worse-case theory, in general, can mislead people. Average-case studies are desired. The disparity between Bayesian optimization and classical optimization is quite obvious, and one can imagine there can be many optimization algorithms with mixed characteristics of both genres. The direction the paper explored is promising.\n3. The research is very detailed and solid. The authors give sound proofs to their theorems and organise the results in a clear manner. The experiments are very extensive and well displayed.\n\n1. The so-called \"average case study\" is not fully justified. The expectation of $J(w_n)$ is not in general equal to the expectation of $J(\\theta)$ with $\\theta$ fixed and then replaced by $w_n$. This is because $w_n$ is by itself a random variable. More concretely, suppose that $J$ is sampled randomly from $\\mathcal{N}(\\mu, C)$ with $\\mu$ being a constant, say $\\mu_0$. Then the expectation of $J(w_0)$ would be $\\mu_0$ but the expectation of $J(w_n)$ for $n$ large would be much smaller than $\\mu_0$. The method in this paper can only be thought of as average case study in the initial stage of optimization. The authors mention \"forgetful\" but I believe the problem is more serious than it looks. The authors also mention \"risk-affine\", but I don't necessarily agree with it. The claim \"Since RFD is defined as the minimizer of an average instead of an upper bound – making it more risk affine\" feels weak, because I don't think it's well justified yet that RFD is the minimizer of an average.\n2. Incomplete story and lack of depth. Overall, there are lots of results but none of them are highlighted enough to be a gem. On the theory side, it's not clear whether there is any nontrivial key technical contribution in the proofs. It's not obvious that the derivation of the step schedule from a Bayesian viewpoint involves more than straightforward calculation. It needs more to stand as a strong theoretical paper. Furthermore, it would be better if there was a clear table presenting a convergence rate comparison of this new method and classical ones. On the empirical side, only MNIST is not enough, although the authors did a lot of experiments on MNIST. So as a new methodology paper, we need stronger empirical evidence. It's understood that the authors are studying a very hard problem, but excuses cannot serve as strengths of the paper.\n\nIn the introduction, the authors mention that classic BO is limited to relatively small dimensions. Does RFD improve upon that?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "6FIbPu3JHy",
"review_text": "The current paper studies random function descent, draws connection between RFD and SGD, and derives an adaptive step size scheduler. More specifically, the authors study minimizing a stochastic first Taylor approximation of random functions, which has similar form of gradient descent when the random function is Gaussian process. This connection also hints a step size scheduler for standard GD method. The authors then explore this step size scheduling scheme and study its asymptotic performance, which helps explain some recent step size scheduling tricks such as gradient clipping and warmup. Finally, the authors propose a practical way to evaluate necessary statistics required for the newly found step size scheduler with current ML mini-batch loss. The authors show simulation results for MNIST data to exemplify the effectiveness of the drawn step size scheduler.\n\nThis paper is very well-written, theoretically sound, and the findings seem new and pretty insightful, thus I feel it makes good contribution to research on optimizer learning rate scheduling. \n\nThe main topic is about random function descent, and the minimization of stochastic first Taylor approximation of random function results in a gradient-type method (when Gaussian random function is considered) is surprising and impressive. \n\nThe writing is well-organized, with all terms being properly defined and all theorems (Theorem 2.3, 4.2, 5.2, 6.2) well-formulated and capture core ideas. Theorems that are more representative is presented in main text for better digestion with more complete/general versions listed in Appendix. Theorems and Definitions are followed by simple and efficient explanations (i.e., discussion after Definition 2.1, Definition 2.2, around Theorem 2.3, and many others). Plots and tables are provided and are clean and easy to interpret.\n\nThe math is clean, sound, and rigorous, with very complete proofs (i.e., D.1.1 and D.1.2). Extensions are well-explored (Section E) and more general cases are discussed (Section E.3 for example). From first derivation of step size (Theorem 4.2), to its asymptotic version A-RFD (Definition 5.2) and its stochastic version S-RFD (Theorem 6.2), all are interesting and important findings.\n\nPracticality of the proposed method has been considered. Though the proposed step size scheduler looks complicated, the authors figure out ways to evaluate necessary statistics required to put the step size scheduler into use (Section 6), and the effectiveness of proposed method applied to current ML tasks is also exemplified by examples (Section 7).\n\nThe research topic is valuable. Learning rate scheduling has been an open research area for a long time in optimization field. Currently in machine learning/deep learning research, a great deal of pressure comes from comparing with baseline methods which involves arduous hyperparameter tuning, among which learning rate is often the core. Thus studying learning rate scheduling is of great importance and this paper provides a novel connection between RFD and GD (with also extended comparison to Adam in Section E.1) which is very encouraging. 
Moreover, classical convergence result for optimization algorithms are mainly with worst case bound, RFD is instead for average case performance, the authors try hard and derive partial result for convergence (Corollary 5.3), and we expect there would be more study of difference between worst case performance and average case performance.\n\nThough I appreciate the presentation quality, theoretical soundness, and novelty of the work. The main drawback of current paper boils down to three parts: lacking comparison with prior work, potential concerns with the practicality and effectivenss of the proposed method, and the (relatively) strict assumption of the theory.\n\n1. The current paper doesn't involve literature review section, though it draws connection to prior work dispersedly, no systematic review has been intended. I currently make my evaluation of the novelty of the work based on my own (might be poor) understanding. I feel adding a related work section is desirable and then a more fair evaluation of value of current work can be made.\n\n2. Still about prior work but for baseline method comparison. The simulation results (mainly Figure 3) only compares the proposed method with SGD/Adam with tuned fixed learning rates. More recent work such as D-Adaptation [1] also studies tuning-free learning rate scheduler for SGD/Adam, from not RFD perspective but more classical optimization angle, hasn't be mentioned/compared against. Moreover, the experiment in current paper seems much simpler and less thorough than the setting considered in D-Adaptation. \n\n3. With respect to practicality, though the authors provide empirical ways to evaluate covariance in mini-batch training, the recipe still looks a bit complex, i.e., one should go evaluate $C$ and $C'$ from the observation first. Unlike current adaptive gradient method such as Adam/AdamW, or even D-Adaptation, which only depends on some statistics involving current/past gradient/function values. Moreover, since RFD is measuring average case performance, it's more risk-affine and tends to predict larger learning rate, which may be harmful for convergence in some cases.\n\n4. Despite that I feel minimizing stochastic Taylor approximation of random function is interesting and worth exploring, the derived GD-type algorithm is for Gaussian random function (Theorem 4.2), though the authors mention this assumption was also used in [2], it might be desirable to more demonstrate to which extent one should expect this assumption to be close to real settings.\n\n\n[1] Learning-Rate-Free Learning by D-Adaptation (Aaron Defazio and Konstantin Mishchenko).\n\n[2] Yann N Dauphin et al. “Identifying and Attacking the Saddle Point Problem in High Dimensional Non-Convex Optimization”.\n\n1. Could the author please add a literature review section to discuss related works (and probably comparisons with current work)? \n\n2. The derived explicit RFD is for Gaussian random function, and I see that there is some relaxation in Section E.3. Could the authors please demonstrate more on to which extent this RFD method is close to real problem setting confronted in machine learning?\n\n3. I feel the part that discusses connection to Adam (Section E.1) can be partly moved to main text since Adam and its variant is pretty dominant in current ML (especially DL) training. Moreover, do the authors think RFD with component-wise estimated variance can match Adam performance with tuned learning rates in DL training?\n\n4. 
In line 300, it says \"on CIFAR-100, the step sizes given by RFD were too large\". I don't see these experiment results, could the authors please add this part of result for completeness (even if the result is not ideal).\n\n5. How do the authors expect the performance of proposed method compared with D-Adaptation, will they coincide in certain settings? It seems D-Adaptation is applicable for more general (larger model/more recent dataset) cases, and RDF is more limited since it involves more steps for variance estimation and its risk-affine property might be harmful.\n\n(potential) writing issues:\n\n1. In line 244, it seems \"Since all $Z_b$ have the same underlying of cost $J$\" should be \"Since all $Z_b$ have the same underlying cost $J$\"?\n\n2. In line 271, there seems missing a comma between loss $J$ and stochastic errors $\\epsilon_i$"
},
{
"confidence": 2,
"rating": 6,
"review_id": "oX7e8t4Ify",
"review_text": "Many machine learning model have parameters that are optimized by some form of gradient descent. Given a parameters $\\omega$ in a space $\\Omega$ and a loss function $\\textbf{J}: \\Omega \\to \\mathbb{R}$, typical gradient descent proceeds by picking a starting point $\\omega_0$ and iteratively taking steps in the direction of steepest descent\n\n$$\n\\omega_{n+1} = \\omega_n - h \\nabla J(\\omega_n) = \\omega_n - \\eta \\frac{\\nabla J(\\omega_n)}{||\\nabla J(\\omega_n)||}\n$$\nwhere $h$ is the learning rate, or similarly $\\eta = $ is the step size. The learning rate/step size is an exogenous pre-determined user hyper parameter.\n\nThis paper proposes a method to automatically determine the steps size parameter. (I may have misunderstood and I welcome any correction by the authors) this method, called Random Function Descent (RFD), take a point $\\omega$, computes the function value and gradient $J(\\omega)$, $\\nabla J(\\omega)$, which is then used to fit a Gaussian process model. The GP model has a constant prior mean mean and a stationary, isotropic kernel, and by fitting one data point and it's gradient vector, the constant prior mean is updated to a still mostly constant surface however with a single local deformation at $\\omega$ resulting in a peak in the uphill direction from $\\omega$ and a trough on the direct opposite downhill side, . the RFD method jumps straight to the bottom of the trough, mathematically\n$$\n\\omega_{n+1} = \\text{arg min}_{\\omega'} \\mathbb{E}[J(\\omega') | J(\\omega), \\nabla J(\\omega') ]\n$$\nwhere the expectation is the posterior mean of the GP having been fit to the one data point. As the GP kernel is isotropic, there is no prior bias in any direction and the direction of the trough is exactly the direction of the gradient, consistent with normal gradient descent.\n\nThe paper considers many of the technical and theoretical hurdles and provides solutions in each case. Finally experiments with MNIST are provided. \n\nI somewhat struggled with the paper and have set my confidence score to low accordingly.\n\n- tuning the baselines in the numerical experiments\n\nUnfortunately for me, I struggled to understand much of the paper, I believe this could be partially be due to writing style, I have tried to keep my technical and writing comments separate\n\nI apologize if my understanding is incorrect, and look forward to the authors response to correct any such errors.\n\n- all the parameter updates are using euclidean distance in parameter space. In contrast, Natural gradient descent makes parameter updates that have equal distance in output distribution space. In practice, I believe an approximation is implemented by using inverse squared gradients for each parameter similar to ADAM/RMSprop that use root mean squared gradients. Obviously, \n\n- the numerical experiments seem a little lacking, RFD doesn't appear to show a significant improvement on Figures 3, 6, 7. MNIST and FashinMNIST are very small and perhaps too easy, any optimizer will \"max out\" any model pretty quickly I assume.\n\n\nThe below points are my personal subjective comments on the writing.\n- I am a little reluctant to agree that this paper has much to do with Bayesian optimisation as suggested by the abstract and introduction. RFD fits a GP model to a single data point and only uses the posterior mean, it is the same as kernel ridge regression.\n- I felt the terminology of \"stochastic Taylor Expansion\" was rather unhelpful and somewhat counterproductive. 
In my mind, zeroth/first/second order Taylor expansion refer to constant/linear/quadratic local polynomial approximations to a function, however the given function approximations are non-linear (lemma 4.12) this description unfortunately rather mis-directed my thoughts.\n- L69: as above, \"it naturally incorporates covariance based trust\" assumes a lot of context that has not been introduced in the paper at this point, upon first reading I was rather lost, upon second reading it makes sense but felt out of place.\n- (there are many topics and details covered the main paper, would it be possible to focus on a few big ideas?)\n- Table 1, Figure 2, what is the scale \"s\", I assume the length scale in the covariance $C()$ function? This appears not to be introduced in the paper.\n- L62, should the final term of the equation be $\\frac{L}{2}||\\omega - \\Theta||^2$?\n\n- is it possible to extend to use individual rescaling for each parameter whilst keeping the isometric assumption? Sacrificing the isotropy assumption would require fitting a GP model to 1 + d values which has O(d**3) complexity hence would be impossible for network models. Preserving isotropy avoids this issue."
},
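The procedure the third review describes, fitting a GP to a single value-and-gradient observation and jumping to the minimizer of the posterior mean, reduces to a one-dimensional problem along the gradient direction, since for a stationary kernel the value and gradient at the same point are uncorrelated. A minimal sketch follows, assuming a squared-exponential kernel C(r) = sigma^2 exp(-r^2 / (2 s^2)) with constant prior mean m (sigma^2 cancels); this illustrates the reviewer's description rather than the paper's exact RFD schedule.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rfd_step(w, J_w, grad_w, m=0.0, s=1.0):
    """Minimize the GP posterior mean along u = g/||g||; for this kernel,
    E[J(w - t u)] = m + exp(-t^2 / (2 s^2)) * ((J_w - m) - t * ||g||)."""
    g_norm = np.linalg.norm(grad_w)
    post_mean = lambda t: m + np.exp(-t**2 / (2 * s**2)) * ((J_w - m) - t * g_norm)
    t_star = minimize_scalar(post_mean, bounds=(0.0, 10.0 * s), method="bounded").x
    return w - t_star * grad_w / g_norm

# toy usage on J(w) = ||w||^2 / 2, whose gradient at w is w itself
w = np.array([3.0, -4.0])
print(rfd_step(w, J_w=float(w @ w) / 2, grad_w=w, m=0.0, s=2.0))
```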
{
"confidence": 4,
"rating": 5,
"review_id": "bUABArmE2c",
"review_text": "### Summary\n\nThe paper \"Random Function Descent\" explores the limitations of classical worst-case optimization theory in explaining the success of optimization in machine learning and selecting appropriate step sizes. It establishes a connection between Bayesian Optimization and classical optimization through a \"stochastic Taylor approximation,\" rediscovering gradient descent. This rediscovery introduces a new step size schedule called Random Function Descent (RFD), which is scale-invariant. The analysis provides a theoretical foundation for common step size heuristics such as gradient clipping and gradual learning rate warmup. The paper also proposes a statistical procedure for estimating the RFD step size schedule and validates this theory with a case study on the MNIST dataset.\n\nIn the introduction, the paper emphasizes the importance of cost function minimization in machine learning, typically performed using gradient-based methods that require step sizes chosen by established heuristics. The paper aims to enhance the theoretical understanding of these heuristics and proposes RFD as a new algorithm based on this deeper insight. The authors highlight that classical optimization theory, which relies on \\(L\\)-smoothness, provides conservative learning rates unsuitable for average cases, necessitating the reliance on step size heuristics in machine learning.\n\nThe authors bridge the gap between Bayesian Optimization (BO) and gradient-based methods by introducing a stochastic Taylor approximation based on a forgetful BO posterior. This results in the RFD optimization method, which combines the properties of gradient descent with scale invariance and a complete step size schedule derived from BO. The contributions include proving the scale invariance of RFD, discussing common distributional assumptions in BO, establishing the connection between RFD and gradient descent, and investigating the step size schedule suggested by RFD.\n\nThe paper further develops a non-parametric variance estimation method robust to covariance kernel choices and extends RFD to mini-batch losses. The case study on the MNIST dataset demonstrates the practical application and effectiveness of the proposed RFD algorithm compared to traditional methods like Adam and stochastic gradient descent (SGD). The discussion includes limitations and potential extensions of the proposed method, emphasizing the need for new mathematical theory to address the risk-affine nature of RFD and its larger step sizes.\n\n### Strengths\n\n1. **Innovative Approach**: The paper introduces a novel connection between Bayesian Optimization and gradient descent through the stochastic Taylor approximation, leading to the development of Random Function Descent (RFD). This approach provides a new perspective on step size selection and optimization in machine learning.\n \n2. **Theoretical Foundation**: The analysis of RFD step sizes offers a solid theoretical foundation for commonly used heuristics such as gradient clipping and learning rate warmup. This bridges the gap between empirical practices and theoretical understanding.\n \n3. **Scale Invariance**: RFD's scale invariance is a significant advantage, making it robust to different scales of input parameters and cost functions. This property is stronger than the affine invariance offered by the Newton method.\n \n4. 
**Practical Validation**: The statistical procedure for estimating the RFD step size schedule and its validation on the MNIST dataset demonstrate the practical applicability and effectiveness of the proposed method. The case study shows that RFD can outperform traditional optimization methods like Adam and SGD.\n \n5. **Comprehensive Analysis**: The paper provides a thorough investigation of the step size schedule suggested by RFD, including explicit formulas, asymptotic behavior, and explanations for gradient clipping and learning rate warmup. This comprehensive analysis enhances the understanding of RFD's behavior and potential benefits.\n\n### Weaknesses\n\n1. **Complexity and Accessibility**: The theoretical development and mathematical derivations in the paper are complex, which might limit the accessibility and understanding for practitioners who are not well-versed in advanced optimization theory and Bayesian methods.\n \n2. **Assumptions and Simplifications**: The paper relies on certain assumptions, such as isotropic Gaussian random functions, which might not hold in all practical scenarios. The need for these assumptions could limit the generalizability of the proposed method.\n \n3. **Risk-Affine Nature**: RFD's risk-affine nature, resulting in comparatively larger step sizes, might lead to instability in certain cases. The paper acknowledges this limitation and suggests that further work is needed to address this issue and develop new mathematical theories for convergence guarantees.\n \n4. **Empirical Validation Scope**: While the MNIST case study is a valuable demonstration, the empirical validation is limited to a single dataset and a specific neural network architecture. Additional experiments on diverse datasets and models would strengthen the evidence for RFD's effectiveness.\n \n5. **Variance Estimation Procedure**: The non-parametric variance estimation method, while robust, involves a bootstrapping procedure that could be computationally intensive. This might pose challenges for large-scale applications and require further optimization for practical use.\n\nMy main issue with this paper is understanding its final message. It feels more like a collection of relevant results rather than a cohesive argument, and I would appreciate a comment on this. If you provided two paragraphs explaining what you proved, why it is important, and what you aim to prove in the future, I still could not grasp the overall vision of the paper."
}
] | |
xymhWyiZOp | On the Use of Anchoring for Training Vision Models | Anchoring is a recent, architecture-agnostic principle for training deep neural networks that has been shown to significantly improve uncertainty estimation, calibration, and extrapolation capabilities. In this paper, we systematically explore anchoring as a general protocol for training vision models, providing fundamental insights into its training and inference processes and their implications for generalization and safety. Despite its promise, we identify a critical problem in anchored training that can lead to an increased risk of learning undesirable shortcuts, thereby limiting its generalization capabilities. To address this, we introduce a new anchored training protocol that employs a simple regularizer to mitigate this issue and significantly enhances generalization. We empirically evaluate our proposed approach across datasets and architectures of varying scales and complexities, demonstrating substantial performance gains in generalization and safety metrics compared to the standard training protocol. The open-source code is available at https://software.llnl.gov/anchoring. | https://openreview.net/pdf/31b62781b7295d311f43718b5f5a178ec72948c1.pdf | [
{
"confidence": 2,
"rating": 7,
"review_id": "T9jOZdfGXJ",
"review_text": "This paper identifies a major problem with anchored training, that the performance of anchored training does not increase with increasing reference set size, and proposes a simple regularization approach to overcome this problem. This approach is evaluated on OOD generalization, calibration and anomaly rejection, and task adaptation, and various facets of anchored training are analyzed.\n\nThe paper makes the interesting finding that the performance of anchored training does not increase with increasing reference set size, and that this problem is not alleviated by more sophisticated inference strategies. The paper also proposes a simple reference-masking regularization technique to help alleviate this problem. The experiments show the effectiveness of the proposed approach, and there is also analysis of how the method interacts with data augmentation and noisy labels. An ablation study of the $\\alpha$ parameter is also performed. Training recipes are also provided, making the paper easier to reproduce.\n\nOne weakness is that the reference set selection strategy and reference set sizes are not explained for the experiments. \n\nThe impact/novelty is a bit limited because of the lack of comparisons to non-anchored training works.\n\nMinor points: in the tables, decreases in performance could be colored in a color other than pink. Figure 1 could be improved with error bars. One highlighting was missed in Table 3. The abbreviation LP is not defined.\n\n1. Is there any explanation for why the accuracy stays relatively constant (Figure 1) regardless of reference set size?\n2. Does training for more epochs help alleviate the reference set size problem?"
},
{
"confidence": 4,
"rating": 7,
"review_id": "GXj3uKGrWW",
"review_text": "In this paper, the authors propose a new strategy to train anchoring-based models, significantly improving performance, training efficiency, and model generalization compared to previous approaches. The key to the method is the added masking strategy that allows the model to better profit from anchoring-based training. The authors demonstrate that modifications only in inference (using several samples or searching for the best references) or the number of used references do not improve model performance, while the application of the masking procedure significantly improves it, as shown on various image classification datasets, specifically CIFAR-10, CIFAR-100, and ImageNet, using different architectures (both CNN and attention-based). The experiments demonstrate the effectiveness of the proposed method and the significant benefit of using it for improved generalization.\n\n* The paper is clearly written and easy to follow. The idea is intuitive and easy to grasp. The related work section provides an adequate discussion of existing approaches to anchoring-based training. The analysis narrative, with the presented drawbacks of existing methods, is very clear and easy to understand.\n\n* The idea of masking the reference input argument is very clear and logical. The intuition behind why the problem could occur: 1) argument size grows combinatorially, and therefore 2) the model could learn to ignore the reference argument; seems correct, which is further clearly supported by the experiments.\n\n* The authors provided an extensive evaluation of their approach, spanning different datasets and architectures, which provides a solid grounding to support the proposed method.\n\n* It seems that the evaluation could benefit from an additional comparison with other existing state-of-the-art OOD/uncertainty methods to better represent the quality of the results (not just in comparison with former anchoring-based approaches, but overall).\n\n* From the perspective of the experimental evaluation, I would be curious to see evidence that the behavior demonstrated in the paper would hold in other domains, such as texts, graphs, more complicated vision tasks (e.g. segmentation), not limiting to image classification task.\n\n* Why do the authors focus on vision models when the method seems to be very generic and applicable to other domains as well?\n\n* One of the claims the authors make is that the proposed masking procedure helps with the problem of the model ignoring reference input. They support this claim with, for example, Figure 2—an experiment showing that without this masking, we do not observe improvements in terms of performance, which is only a proxy for the claim. Is it possible to measure the sensitivity of the model with regard to reference inputs (for example, by adding noise to it and measuring the change in the outputs)?\n\n* As far as I understand from the method description, the final method in Section 4 uses only one reference image for inference. How does the performance change with an increased number of references? The lack of improvements in performance (e.g., as in the right plot in Figure 2) seems strange to me since we would observe the opposite behavior in all existing ensembling approaches (e.g., [1, 2, 3, 4]). How would one explain such behavior? Additionally, it would be good to see some comparisons with these methods or at least include them in the discussion. \n\n[1] Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. 
\"Simple and scalable predictive uncertainty estimation using deep ensembles.\" NeurIPS 2017\n\n[2] Wen, Yeming, Dustin Tran, and Jimmy Ba. \"Batchensemble: an alternative approach to efficient ensemble and lifelong learning.\" ICLR 2020\n\n[3] Durasov, Nikita, et al. \"Masksembles for uncertainty estimation.\" CVPR 2021\n\n[4] Laurent, Olivier, et al. \"Packed-ensembles for efficient uncertainty estimation.\" ICLR 2023"
},
{
"confidence": 3,
"rating": 8,
"review_id": "1lSyh74jXd",
"review_text": "This paper presents a thorough discussion on the use of anchoring for training vision models. In particular, the paper tackles 1) the problem of reference diversity when training with anchoring to explain how superior generalization can be achieved 2) addresses the problem of spurious correlations learnt between the residual and 3) how different inference-time strategies can enable greater out-of-support generalization. Overall, this comprehensive study of anchoring provides useful guidelines for how anchoring should be applied to extract maximum performance. The paper empirically confirms this via the proposed anchoring scheme outperforming prior work noticeably.\n\n1) Clarity: The paper is very clearly written and easy to follow. Readers unfamiliar with the literature like myself are able to understand what anchoring is, how it can be useful for (out of support) generalization and how current methods fail to apply anchoring in the most effective way. \n\n2) Thoroughness of Evaluation: The paper conducts thorough ablations on several components of the anchoring pipeline. Reference diversity, reference masking, inference procedure etc. More\n\nNo obvious weaknesses.\n\nHave the authors compared the out of support generalization of anchoring procedures to other methods for domain generalization (which tackles a similar problem)? Considering datasets and baselines from *In Search of Lost Domain Generalization* (https://arxiv.org/abs/2007.01434) can further broaden the impact of this paper."
},
{
"confidence": 4,
"rating": 8,
"review_id": "7wqzz6f0K0",
"review_text": "The authors analyze the effect of anchored training through a series of small experiments and find that, contrary to claims in prior works, increasing the size of the reference set is not beneficial and that this shortcoming cannot be mitigated through existing inference strategies. The authors provide a simple yet efficient fix by randomly masking out the reference during training, and forcing the model to make high entropy predictions in those cases. This solution does not incur any training overhead, and the authors demonstrate in extensive experiments that the fix is applicable to different models and datasets, yields improvements for OOD performance over various distribution shifts, and improves calibration and anomaly resilience.\n\n1. The paper is very well written and structured and is overall easy to follow. The initial experiments highlight the studied problem well.\n2. The authors showcase an important limitation to existing anchoring techniques that was unknown to the community.\n3. The proposed solution is simple and is demonstrated to consistently improve performance across models and datasets.\n4. The experiment section is extensive and covers both OOD performance as well as safety-relevant metrics. The results convincingly demonstrate the effectiveness of the proposed method.\n\nThe paper is very well written, I don't see any major weaknesses that would prevent an accept.\n\nMinor weakness: The optimal $\\alpha$ is determined when using the entire dataset as a reference set. However, as is clear from the motivation, risk of spurious shortcuts is larger with a smaller reference set. Wouldn't this imply that the optimal $\\alpha$ would be larger for smaller reference sets? How should this value be chosen in practice and for datasets larger than ImageNet-1k?\n\n1. Tab. 2 Do you have any insights why the improvements on ImageNet-S and ImageNet-R are drastically different for SWIN transformers and ViT?\n2. (minor) The formatting of paragraph headers in the introduction is weird and inconsistently using underline.\n3. (minor) Erroneous comma in L.165\n4. (minor) It is hard to visually assess from Fig. 4 whether the optima are significantly different."
}
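For context on the fix these reviews discuss: anchored training feeds the network a (reference, query - reference) pair, and the reported failure mode is that the network learns to ignore the reference. A minimal PyTorch sketch of one plausible masking regularizer follows; the function name, the channel-stacked input convention, the mask probability alpha, and the KL-to-uniform penalty are all assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def anchored_step_loss(model, x, y, ref_bank, alpha=0.2):
    """Anchored training step with reference masking (assumes NCHW image batches,
    a tensor ref_bank of candidate references, and at least one unmasked sample)."""
    idx = torch.randint(0, len(ref_bank), (x.size(0),))
    ref = ref_bank[idx]                                 # one random reference per sample
    masked = torch.rand(x.size(0)) < alpha              # samples whose reference is masked
    ref = torch.where(masked.view(-1, 1, 1, 1), torch.zeros_like(ref), ref)

    logits = model(torch.cat([ref, x - ref], dim=1))    # (reference, residual) stacked on channels

    loss = F.cross_entropy(logits[~masked], y[~masked])  # usual objective on unmasked samples
    if masked.any():                                     # high-entropy target where reference is hidden
        log_probs = F.log_softmax(logits[masked], dim=1)
        uniform = torch.full_like(log_probs, 1.0 / logits.size(1))
        loss = loss + F.kl_div(log_probs, uniform, reduction="batchmean")
    return loss
```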
] | |
xxY8d4rnSb | ManiPose: Manifold-Constrained Multi-Hypothesis 3D Human Pose Estimation | We propose ManiPose, a manifold-constrained multi-hypothesis model for human-pose 2D-to-3D lifting. We provide theoretical and empirical evidence that, due to the depth ambiguity inherent to monocular 3D human pose estimation, traditional regression models suffer from pose-topology consistency issues, which standard evaluation metrics (MPJPE, P-MPJPE and PCK) fail to assess. ManiPose addresses depth ambiguity by proposing multiple candidate 3D poses for each 2D input, each with its estimated plausibility. Unlike previous multi-hypothesis approaches, ManiPose forgoes generative models, greatly facilitating its training and usage. By constraining the outputs to lie on the human pose manifold, ManiPose guarantees the consistency of all hypothetical poses, in contrast to previous works. We showcase the performance of ManiPose on real-world datasets, where it outperforms state-of-the-art models in pose consistency by a large margin while being very competitive on the MPJPE metric. | https://openreview.net/pdf/aa9e7681c86797dad7f2bb93ba5e0c36ea2de62a.pdf | [
{
"confidence": 4,
"rating": 3,
"review_id": "5uQkjgVeRT",
"review_text": "This paper presents a method to estimate 3D human keypoints from a sequence of monocular 2D keypoints observations. It builds upon an existing sequence-to-sequence architecture (MixSTE), with a different output parameterization exploiting a kinematic skeletton prior, and different training losses. Lengths of the skeletton bones are predicted for the whole sequence to ensure consistency across frames (and maybe also left/right symmetry of the skeletton), and five 3D pose hypotheses with associated scores are predicted for each frame, parameterized as a list of 3D relative orientation for each bone with respect to its parent in the kinematic tree.\n\nThe authors develop theoretical arguments regarding the benefits of enforcing such structural priors in the predictions, and illustrate with a toy example the interest of having multiple predictions in case of ambiguous multimodal output. They validate their approach on Human3.6M and MPI-INF-3DHP datasets.\n\nThe motivation for exploiting bone lengths constraints is well expressed, with a clear and detailed discussion provided in Section 4. The discussion of experimental and ablation results is insightful and shows – in a setting dependent on an oracle – benefits of the proposed approach.\n\nThe idea of enforcing body priors (constant bone length here) is not novel and has actually been heavily exploited in a whole line of work relying on more advanced parametric models such as SMPL [100]. This line of work would deserve being considered in the paper, as it encompass approaches suitable for 2D-to-3D sequence lifting such as e.g. [101].\n\nThe authors present a pose space consisting in 3D coordinates of joints linked by some rigid segments. Based on this definition, a natural pose parameterization would consist in the 3D direction of each segment, yet the authors chose to overparameterize poses by using relative 3D bone orientation instead. I understand that such choice can have practical benefits in term of biomechanical constraints and additional supervision signal when ground truth data is available, but such choice should be properly motivated, discussed and ablated in the paper.\n\n\nThe authors describe two ways of aggregating results L247 but do not state which one they use for MPI-INF-3DHP, and they only report oracle results on Human3.6M and for the ablations.\n\nIn my understanding, pose hypotheses are selected independently for each frame and there are no temporal terms in the training objectives or aggregation method. Since the proposed approach deals with temporal sequences, it would be worth evaluating the temporal consistency of the predictions, through qualitative video examples and quantitatively e.g. using joint acceleration metrics. Having multiple hypotheses for each frame brings combinatorial questions worth discussing in my opinion.\n\nReferences:\n- [100] Loper at al., “SMPL: A Skinned Multi-Person Linear Model”, at SIGGRAPH Asia 2015.\n- [101] Baradel et al., “PoseBERT: A Generic Transformer Module for Temporal 3D Human Modeling”, in TPAMI 2022.\n\nSee the weaknesses section for a list of suggestions."
},
{
"confidence": 4,
"rating": 8,
"review_id": "BGrsQODrp8",
"review_text": "This paper proposes a MCL-based framework for multi-hypothesis 3D human pose estimation. This framework predicts skeletal parameters so that the predicted 3D poses in a sequence are constrained to one smooth manifold. To prove the superiority of such a framework, the paper presents detailed theoretical analysis on the drawback of unconstrained single-hypothesis HPE and why MPJPE alone is not enough for pose evaluation. The experiments show the proposed framework is capable of keeping the consistency of predicted poses and achieving state-of-the-art MPJPE in the meantime.\n\n* Simple and reasonable manifold representation. The proposed framework keeps the predicted human pose on the target manifold by representing the human pose with bone lengths and orientations, and the 3D pose is a direct inference from forward kinematics. The manifold is represented by the kinematics itself.\n \n* Inspiring theoretical analysis on basic problems in 3D HPE. The paper arrives at some theoretical conclusions (line178-183), along with detailed proofs. They can provide some refreshing ideas on the innate drawbacks of traditional loss functions and MPJPE metrics.\n \n* Good performance under both MPJPE and consistency measures, as validated in Table 2 and 3.\n\n* Theoretical analysis on the advantage of multi-hypothesis methods over single-hypothesis ones could be added. Specifically, why a **constrained multi-hypothesis** method performs better than an **unconstrained single-hypothesis** method in MPJPE? Though this is already validated by the experiments, I personally believe it would make the paper more solid if the authors could make this analysis.\n\nMinor problem:\n* In Fig.4 (C) and (D), it is not quite clear how the estimations (crosses and triangles) correspond with the inputs (black dots). There might be some unexpected shifts, as the projections of the predicitons do not strictly align with the inputs (like in B).\n\nWhat is the quality of the score for each hypothesis? If the multiple hypotheses are fused to one (e.g. by taking the one with the largest confidence or taking the weighted average), then how will the MPJPE, MPSCE, and MPSSE change?"
},
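Both reviews note that ManiPose keeps every hypothesis on the pose manifold by construction: joints are recovered from predicted bone lengths (shared across the sequence) and per-frame relative rotations via forward kinematics. A minimal numpy sketch of that recovery step follows; the parent array, the rest-pose bone direction, and the toy chain are illustrative assumptions rather than the paper's skeleton definition.

```python
import numpy as np

def forward_kinematics(lengths, rotations, parents):
    """Joints from bone lengths and per-bone rotation matrices.

    lengths:   (J,) bone lengths (lengths[0] unused for the root)
    rotations: (J, 3, 3) rotation of each bone relative to its parent
    parents:   (J,) parent index per joint, -1 for the root
    """
    J = len(parents)
    joints = np.zeros((J, 3))
    global_rot = np.zeros((J, 3, 3))
    rest_dir = np.array([0.0, 0.0, 1.0])        # bones point along +z at rest
    for j in range(J):
        p = parents[j]
        if p < 0:
            global_rot[j] = rotations[j]        # root orientation; root joint at the origin
            continue
        global_rot[j] = global_rot[p] @ rotations[j]
        joints[j] = joints[p] + lengths[j] * (global_rot[j] @ rest_dir)
    return joints  # bone lengths are exact by construction, for every hypothesis

# toy 4-joint chain with identity rotations: a straight limb along +z
parents = np.array([-1, 0, 1, 2])
lengths = np.array([0.0, 0.3, 0.25, 0.2])
rotations = np.tile(np.eye(3), (4, 1, 1))
print(forward_kinematics(lengths, rotations, parents))
```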
{
"confidence": 3,
"rating": 7,
"review_id": "BTVnbkAVRh",
"review_text": "This paper presents a new method to estimate 3D human pose from 2D observations (lifting). To ensure the body symmetry and temporal consistency, the authors disentangle human skeleton to two parts: temporally consistency bone scales and temporally variable bone rotations. The authors use fancy formulas to prove that, minimizing MSE loss could not gurantee manifold consistency. The quantitative and qualitative results on Human3.6m and MPI-INF-3DHP datasets show the superiority of the proposed method.\n\n1. The evalution results in this paper is quite impressive, especially the newly proposed consistency metric. Figure 1 clearly shows the superiority of the proposed method. \n\n2. The authors try to prove the theoretical optimal of the proposed method, which is worth encouraging.\n\nI am not an expert in manifold theory, therefore my questions only relate to human pose estimation. \n\n1. How to constrain the rotation space during training? \n\n2. The pose lifting method is quite similar to Anatomy3D (bone length + rotations). Can I view this paper an multi-hypothesis extension of Anatomy3D? Why? \n\n3. Previous paper \"POSE-NDF: MODELING HUMAN POSE MANIFOLDS WITH NEURAL DISTANCE FIELDS\" is similar to this paper in concepts. SMPL naturally guarantees bone length symmetry, and the learnable parameters (rotations and shape parameters) are similar to this paper in its functionality. It would be better to cite it. \n\n4. Suppose that, there is a virtual dataset, all 2D human joints are rendered (projected) from strictly symmetric 3D joints, then, could learning the lifting function on this virtual dataset using MSE loss guarantee the results all lie on manifold? \n\n5. (An optional question) The ground truth 3D joints of Human3.6M datasets come from the marker tracking on body surface, which naturally could not guarantee skeleton length consistency. Why learning symmetric bones yields better results (both Anatomy3D and the proposed methods)?\n\n1. The citation style is weird. They are not NeurIPS style, please correct them."
},
{
"confidence": 4,
"rating": 5,
"review_id": "GMaY77IJXe",
"review_text": "This paper propose ManiPose, a manifold-constrained multi-hypothesis model for 3D human pose lifting. The authors provide empirical and experimental evidence to show that joint position regression leads to inconsistent skeleton lengths. And they propose to predict globally consistent pose scale and individual joint rotations per frame (rather than joint positions) to constrain the predictions to the pose manifold. Empirical results demonstrates that the proposed ManiPose framework improves the pose consistency.\n\n* The paper provides valuable theoretical analysis to support their arguments and provides intuitive toy examples to illustrate the ambiguity in pose lifting.\n* The paper conducts extensive experiments on H36M and MPI-INF-3DHP datasets.\n\n* The paper uses a multi-head design to predict multiple hypotheses. This design loses the flexibility of sampling different numbers of hypotheses and limits the maximum number of hypotheses to a small number. This often results in limited hypothesis diversity. In the experimental section, the authors do not provide numerical of visual measurements of hypothesis diversity.\n* According to the comparison in Table 4, the manifold constraint proposed in this paper sacrifices MPJPE to improve pose consistency, serving as a trade-off approach between accuracy and consistency. Although the consistency is improved, it lags behind the traditional position regression or manifold regularization in accuracy, and does not bring essential improvement (improve both in accuracy and consistency) compared with these two methods.\n* Missing comparison with two recent multi-hypothesis methods. [1] GFPose: Learning 3D Human Pose Prior with Gradient Fields. [2] DiffPose: Toward More Reliable 3D Pose Estimation.\n\nPlease review the Weaknesses Section. If the author can address or respond to the above issues well in the rebuttal stage, I will consider increasing my score."
}
] | |
xvYI7TCiU6 | Measuring Mutual Policy Divergence for Multi-Agent Sequential Exploration | Despite the success of Multi-Agent Reinforcement Learning (MARL) algorithms in cooperative tasks, previous works, unfortunately, face challenges in heterogeneous scenarios since they simply disable parameter sharing for agent specialization. Sequential updating scheme was thus proposed, naturally diversifying agents by encouraging agents to learn from preceding ones. However, the exploration strategy in sequential scheme has not been investigated. Benefiting from updating one-by-one, agents have the access to the information from preceding agents. Thus, in this work, we propose to exploit the preceding information to enhance exploration and heterogeneity sequentially. We present Multi-Agent Divergence Policy Optimization (MADPO), equipped with mutual policy divergence maximization framework. We quantify the policy discrepancies between episodes to enhance exploration and between agents to heterogenize agents, termed intra-agent and inter-agent policy divergence. To address the issue that traditional divergence measurements lack stability and directionality, we propose to employ the conditional Cauchy-Schwarz divergence to provide entropy-guided exploration incentives. Extensive experiments show that the proposed method outperforms state-of-the-art sequential updating approaches in two challenging multi-agent tasks with various heterogeneous scenarios. Source code is available at \url{https://github.com/hwdou6677/MADPO}. | https://openreview.net/pdf/b7229947dbdbbf04ca5c8c83d49e4cd55a3a0c39.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "ZRJJUP3zkJ",
"review_text": "The authors study MARL in heterogeneous settings, where agents are not allowed to share their parameters, and make use of the sequential updating scheme under the CTDE schema. They propose a method which exploits the preceding information to improve exploration and heterogeneity sequentially. This method is equipped with a mutual policy divergence maximization framework, which utilizes the discrepancies between episodes to enhance exploration and between agents to heterogenize agents. Interestingly, the authors propose the conditional Cauchy-Schwarz divergence to provide entropy-guided exploration incentives.\n\n- The problem of exploration in settings with heterogeneous agents is important in MARL and not well-explored in literature. \n\n- The paper is the first to study the effectiveness of policy divergence maximization in the sequential updating schema, upon which important related work has been built.\n\n- The paper proposed the conditional Cauchy-Schwarz (CS) divergence as an alternative to the popular KL-divergence in MARL. Such an alternation may be interesting to the broader RL community. Interestingly, unlike KL-divergence which can expload for small values of the denominator, CS divergence has a provable good lower bound ($-\\log(n)$) only dependent on the number of finite actions.\n\n- The proposed method displays good performance, in comparison to strong SOTA methods (including MAPPO, HAPPO), on benchmarks with heterogeneous agents.\n\n- The proposed framework is simple and easy-to-implement.\n\n- The paper is generally well-written and easy-to-follow.\n\n- The improvement over the baselines (standard KL-divergence, entropy term, no incentive) does not seem to be quite consistent in the ablation study, due to (a) high variance in the results of the no incentive, and (b) very close improvement over the KL divergence baseline in terms of best episodic reward in 2 out 3 tasks.\n\n- Since the CS divergence is new in MARL and RL, a table containing the running times of the evaluated algorithms is missing. How costly is the CS divergence?\n\n- The authors mention: \"To the best of our knowledge, there is no exploration method that can adapt to both heterogeneous scenarios with sequential updating and homogeneous scenarios with simultaneous updating\". But can the proposed method adapt to homogeneous scenarios with simultaneous updating? No experiments in such settings have been provided. Could the proposed intrinsic rewards be used to improve exploration in MARL settings with homogeneous agents?\n\n- Why do the authors use $\\lambda$ and $1 - \\lambda$ for weighting the intrinsic rewards, instead of arbitrary weights (not in a convex combination)? How important is it to the performance?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "hWFW28UsTG",
"review_text": "The paper proposes a novel training objective where it encourages the policies to diverge from each other and from the previous policy under heterogeneous multi-agent tasks based on sequential recently proposed sequential policy update. It utilizes CS divergence for calculation of \"distance\" between policies for tractable and stable optimization compared to KL divergence. The evaluation is done in high-dimensional multi-agent mujoco and and bi-dexhands environments, outperforming existing state-of-the-art sequential algorithms.\n\n- The paper is well written and easy to understand; Fig. 1 is very informative.\n- The problem of exploration under agent heterogeneity is an important problem in multi-agent learning\n- The proposed method is sound and is backed by theory\n\n- The evaluation is hard to judge whether the proposed method is actually performs better than the baselines, this is a deal breaker. I suggest the authors also incorporate aggregate quantities from https://agarwl.github.io/rliable/\n\nI'm willing to increase the score if the authors show that the improvement is statistically significant\n\n- Is it possible to have a \"cyclic\" problem where the 1st, 3rd, 5th, ... (and also 2nd, 4th, 6th, ...) policies have the same behavior despite optimizing the proposed training objective?\n- Can the authors explain why CS is chosen over Jensen–Shannon divergence (JSD)?\n- Is there a guideline for tuning the coefficients for the intrinsic rewards?"
},
{
"confidence": 4,
"rating": 7,
"review_id": "Bgh8GeHSl3",
"review_text": "This paper is situated in the problem setting of heterogeneous cooperative agents, under the sequential update framework. The paper introduces the novel MADPO algorithm, in which agents maximize the Cauchy Schwarz divergence between agents and between episodes of data gathered by the same agent, to improve exploration. Empirical validation is performed on the Multi-Agent Mujoco and Bi-DexHands benchmark suites, demonstrating that the MADPO outperforms baselines.\n\nOverall, the paper is clear, succinct, and the main idea is clear and easy to understand. The format, and figures are good, with all expected components included. The idea of maximizing the inter/intra agent divergences is intuitively appealing. Further, the authors address the pitfalls of naively maximizing intra-agent divergences by adopting the Cauchy Schwarz divergence. It's especially nice that maximizing the CS divergence implies maximizing the policy entropy as well. Experiments are done on a large number of tasks, with comparisons against expected baselines and parameter sensitivity analyses all present.\n\n1. The motivation of the paper is not altogether clear to me. The paper seems to suggest that exploration is more challenging in the sequential update setting, necessitating devoted algorithms. Why would this be the case? \n2. In many of the presented domains, the improvement of MADPO over the next best method is not very large. Sometimes, confidence intervals of MADPO overlap those of the next best method. Can the authors provide statistical significance tests for the main results in Figures 2 and 3, comparing MADPO to the next best method?\n3. Some minor suggestions:\n- Please check your paper carefully for typos, as there are quite a few: \n - Line 89: \"connecting link dimension curse\"? Not sure what this is\n - No period after Figure 4\n - Trust interval -> confidence interval \n - Lacking 'and' at line 174\n - Line 204: conditoned -> conditioned\n - Line 216: extra \"of\" \n- Please be sure to state the number of trials in the main text. It is mentioned in the Neurips checklist, but I could not find it in the main text\n- Please make the colors of the methods the same for both domains (i.e. pick 1 color for MADPO and be consistent with it)\n\n1. How sensitive is the algorithm to the scale of the divergence rewards? Have you done a study on this? \n2. On the intra-policy divergence: is the policy updated every episode? If not, then wouldn't the intra-policy divergence reward often be 0? \n3. Line 180 states that it would be challenging to define an inter-agent divergence in the simultaneous update scheme. Why not consider the divergence between $\\pi^i_k$ and $\\pi^j_{k-1}$? But this does not seem any more challenging to compute, and can be computed under CTDE assumptions.\n4. Would it be possible to implement this exploration scheme in the CTDE setting? If so, it would be interesting to see how well the method performs. \n5. Proposition 2 states that the CS divergence has a lower bound. Does it also have an upper bound?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "tULqsMfWlJ",
"review_text": "This paper introduces a novel multi-agent reinforcement learning (MARL) method called Multi-Agent Divergence Policy Optimization (MADPO), which enhances exploration and heterogeneity through a mutual policy divergence maximization framework. MADPO leverages a sequential updating scheme and quantifies discrepancies between episodes and agents, termed intra-agent divergence and inter-agent divergence, respectively. To address the instability and lack of directionality in traditional divergence measurements, the paper proposes using conditional Cauchy-Schwarz divergence to provide entropy-guided exploration incentives. Experiments demonstrate that the proposed method outperforms state-of-the-art sequential updating approaches in two challenging multi-agent tasks with various heterogeneous scenarios.\n\n1. **Innovation**: The paper introduces MADPO, a novel MARL method that enhances agent exploration and heterogeneity through mutual policy divergence maximization. \n \n2. **Theoretical Foundation**: The use of conditional Cauchy-Schwarz divergence to address instability and directionality in traditional divergence measurements is a contribution.\n \n3. **Experimental Validation**: The experiments conducted on two challenging multi-agent tasks with different heterogeneous scenarios convincingly demonstrate the effectiveness and superiority of MADPO in enhancing exploration and heterogeneity.\n\n1. The paper lacks analysis and comparison with relevant literature on sequential decision-making, such as:\n - Liu J, Zhong Y, Hu S, et al. Maximum Entropy Heterogeneous-Agent Reinforcement Learning[C]//The Twelfth International Conference on Learning Representations. (This paper extends SAC to heterogeneous sequential decision-making scenarios, and the relationship between this work and the current paper remains unclear.)\n\n2. It is unclear whether the intrinsic reward method proposed in this paper can ensure that the resulting trained policies are consistent with the original policies.\n\n1. Could you provide a detailed comparison between your proposed MADPO method and the approach presented in \"Maximum Entropy Heterogeneous-Agent Reinforcement Learning\" by Liu et al.? Specifically, how does MADPO improve upon or differ from this method in terms of handling heterogeneous sequential decision-making scenarios?\n\n2. I may have missed some details, but could you clarify whether the intrinsic reward method in MADPO ensures that the trained policies remain consistent with those optimized solely based on the original rewards?"
}
] | |
xvVeSZoVJO | RCDN: Towards Robust Camera-Insensitivity Collaborative Perception via Dynamic Feature-based 3D Neural Modeling | Collaborative perception is dedicated to tackling the constraints of single-agent perception, such as occlusions, based on the multiple agents' multi-view sensor inputs. However, most existing works assume an ideal condition that all agents' multi-view cameras are continuously available. In reality, cameras may be highly noisy, obscured, or may even fail during collaboration. In this work, we introduce a new robust camera-insensitivity problem: how to overcome the issues caused by failed camera perspectives while stabilizing high collaborative performance with low calibration cost? To address the above problems, we propose RCDN, a Robust Camera-insensitivity collaborative perception method with a novel Dynamic feature-based 3D Neural modeling mechanism. The key intuition of RCDN is to construct collaborative neural rendering field representations to recover failed perceptual messages sent by multiple agents. To better model the collaborative neural rendering field, RCDN first establishes a time-invariant static field based on geometry BEV features with other agents via fast hash-grid modeling. Based on the static background field, the proposed time-varying dynamic field can model the corresponding motion vectors for foregrounds at appropriate positions. To validate RCDN, we create OPV2V-N, a new large-scale dataset with manual labelling under different camera failure scenarios. Extensive experiments conducted on OPV2V-N show that RCDN can be ported to other baselines and improve their robustness in extreme camera-insensitivity settings. Our code and datasets will be available soon. | https://openreview.net/pdf/50300dd9e9a38a5720ea27edc28c35276e11d4c3.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "77fB79DyUY",
"review_text": "In this paper, the authors proposed an essential problems: how to overcome the issues caused by the failed camera perspectives, while stabilizing high collaborative performance with low calibration cost? The authors presented a robust camera-insensitivity collaborative perception with a novel dynamic feature-based 3d neural modeling mechanism to address the issue. Moreover, to verify the effectiveness of the model, the authors also provided a new large-scale dataset, OPV2V-N for this field. The experiments result showcase the model’s robustness in proposed dataset.\n\nStrength:\n1.\tThe paper presents an interesting viewpoint that is to recover noisy camera perceptual information from other agents’ views by modeling the collaborative neural rendering field representation, in which the model is divided into two stages: a time-invariant static background and time-varying dynamic foreground.s\n2.\tThe paper develops a new dataset to fill the gap of the lack of a comprehensive collaborative perception dataset that accounts for different camera noise scenarios.\n3.\tThe paper is well-organized and interesting to read.\n\n1.\tFrom my perspective, the paper lacks the theory analysis for the proposed method. Moreover, the authors fail to introduce the motivation of each sub-module in the presented model. For example, can the authors showcase the motivation of using Nerf for the static and dynamic fields, are there any dominant advantages of nerf, compared to other 3d reconstruction methods in this method?\n2.\tIt is necessary to give more rigorous mathematic analysis of equations in this paper. Furthermore, the authors are required to introduce the details of each networks, including the training parameters, learning rate, weight values in eq. 12.\n\nsee weakness part"
},
{
"confidence": 3,
"rating": 5,
"review_id": "iiMlZ8w2Q7",
"review_text": "The paper introduces a new problem: how to overcome the issues caused by the failed camera perspectives, while stabilizing high collaborative performance with low calibration cost? Therefore, RCDN, a Robust Camera-insensitivity collaborative perception with a novel Dynamic feature-based 3D Neural modeling mechanism is introduced. To validate the new method, the authors also provide a new dataset: OPV2V-N. RCDN serves as baseline here. Ablation Study shows for 5 models (F-cooper, Att-Fuse, Disco-Net, V2VNet, CoBEVT a significant improvement over their baselines, w/o RCDN.\n\nThe paper builds up on three pillars: single perception, collaborative perception and neural rendering. The base idea is novel to the best of my knowledge. The problem formulation is clear and well sounded, easy to follow. The System architecture is strong. The authors also focus on the differentiation between static and dynamic scenarios, especially for the neural fields both based on the BEV volume feature space. This differention is very important, not very often in detail discussed. The ablation study especially table 5.1 shows very accurate an increase of performance for different tasks static (lanes, free space) and dynamic perception. The experimentsl part introduces a new dataset, which is necessray for the investigation.\n\nThe overall system architecture sounds good. However, there are some open points for me, the impact of section 4.3 and 4.4, i.e. the neural fields part, seems open in terms of clarification. Example: What is difference between sf w , sbw in equation (7)?\nThe experimental section is a bit too short. I feel its not finished yet. However, there is limited space. The overall approach is not usable for realtime.\n\nWhat is difference between sf w , sbw in equation (7)?\nHow many message exchange tasks could be used overall (Figure 2.) Will baseline code be publishe in combination with the dataset? When?"
},
{
"confidence": 2,
"rating": 5,
"review_id": "5j8eWuHRGU",
"review_text": "The paper presents RCDN, a method to aggregate multi-sensor perception signals in dynamic environment.\nThe key idea, is to improve the aggregated multi-agent feature with the multi-view rendering loss.\nAt its core, RCDN gathers input streams at varying timesteps of multiple agents. The gathered images are fused into Birds Eye view (BEV) then further decoded into volume. \nThe volumetric features are learned into static scene and dynamic scene components with NGP based representation.\nOverall procedure is supervised with rendering loss, (cyclic) optical flow consistency.\n\n\nThe method is evaluated on new dataset, OPV2V-N, which is an updated version of OPV2V, with additional masking and optical flow. \nThe results show that RCDN helps BEV segmentation with various backbones, compared to the model used without RCDN.\n\nThe main benefit of the RCDN, is that it is fairly easy to apply into different existing feature backbones, as it is the post-processing step built on top of BEV features. \nExperimentally, the usage of RCDN significantly improves the segmentations which implies that the features are better aligned throughout the noisy signals. \nThis makes the work to be a great off-line data augmentation / preparation pipeline for generating BEV segmentation features. \nThe paper additionally proposes OPV2V-N dataset, which may be somewhat valuable addition to the community.\n\nAside from technical perspective, the paper is easy to follow and well-written.\n\nThe paper's main weaknesses are two folds. \n1. The paper does not evaluate on tasks other than BEV segmentation. \nWhile I believe that the pixel-aligned features from NGP would give benefits over various vision tasks, the paper only demonstrates on smaller domain of work which undermines its actual potential. It would have been more interesting to compare how it impacts in different downstreaming tasks, such as detection / tracking.\n\n2. Technical contribution seems to lack novelty. \nThe paper is a mix of two known-to-work solutions; BEV feature decoding for segmentation (used with various baselines in the experiments), and NGP (or radiance field based) multi-view pixel / density alignment through rendering loss. Usage of rendering loss to improve segmentation map is well-investigated in different literatures in the NeRF community (e.g, semantic-nerf).\n\nThese are few questions that I would like the authors to answer in the rebuttal.\n1. How real is the synthetic OPV2V-N dataset? In other words, how can features learned in OPV2V-N dataset be translated to real-world usage? Moreover, are there any real-world quantitative results on model trained on synthetic data?\n\n2. Have authors evaluated the method on different down streaming task other than segmentation? How does one verify that the volumetric features are geometrically correct? (how accurate is the Geometric BEV features?)\n\n3. How is BEV segmentation evaluation differ on non-flat surfaces like hills or bridges?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "bAIV9dxvmP",
"review_text": "The paper proposed Bird Eye View (BEV) semantic segmentation pipeline from collaborative perception, robust to motion blur, sensor noise, occlusion and even failure. The proposed a pipeline that adapts neural rendering techniques to overcome the noise/malfunction in camera capture and occlusion. With the proposed method combined with prior methods, performances on OPV2V-N (the proposed BEV semantic segmentation dataset) are improved.\n\nThe paper proposed to apply neural rendering concept for ‘robust’ collaborative-perception BEV segmentation. It is natural way of thinking to overcome noise/malfunction in the caption system but the way the paper adapts neural rendering to BEV segmentation is novel. And, the performance is verified with OPV2V-N dataset.\n\nEvaluation is only performed with OPV2V-N dataset which may result in overfitting. More evaluation with different dataset is required. The author may need to compare methodologies on other dataset although the existing dataset do not have noise. The author also may add random noise to the prior dataset and run experiments.\n\nThe manuscript was uneasy to read and understand. The paper should re-written. The comments below are without understanding supplemental materials fully.\n- The way proposed algorithm is combined with prior method is unclear. The reviewer guessed that the MCP module can be replaced with prior methods, but it is not stated.\n- Many abbreviations are not explained sufficiently and terminologies the author defined are ambiguous and may be incorrect. \n- MCP is short for the multi-agents collaborative perception process but the paper did not explain MCP module in details with no reference\n- BEV, no full name, no reference.\n- “Camera-insensitivity” can be understood terminologies related to camera sensor sensitivity (how much the camera sensor accept photon…).\n- Robust Camera-Insensitivity: Robust == Camera-sensitivity? The latter one may be redundant\n- Line 6. introduce a new robust camera-insensitivity problem: cam be replaced “introduce BEV segmentation when the camera capture are unreliable (or noisy)?” Should be more concrete without ambiguous words\n- Line19 “Ported to” mean?\n- There are more unclear sentences.\n\n."
},
{
"confidence": 3,
"rating": 5,
"review_id": "onAGybRG08",
"review_text": "The paper introduces RCDN, a novel method for robust camera-insensitivity collaborative perception. This method aims to overcome challenges associated with noisy, obscured, or failed camera perspectives by using dynamic feature-based 3D neural modeling. RCDN constructs collaborative neural rendering field representations to recover failed perceptual messages sent by multiple agents. The proposed system consists of two collaborative field phases: a time-invariant static background field and a time-varying dynamic field. To validate RCDN, a new dataset called OPV2V-N was created. The paper demonstrates that RCDN improves the robustness of baseline methods in extreme camera-insensitivity settings.\n\n*Innovative Problem Addressing*: The paper tackles a significant real-world problem of camera insensitivity in multi-agent collaborative perception, which is crucial for autonomous systems.\n\n*Novel Methodology*: The introduction of dynamic feature-based 3D neural modeling and the construction of collaborative neural rendering field representations are innovative approaches.\n\n*Comprehensive Dataset*: The creation of the OPV2V-N dataset, which includes various camera failure scenarios, provides a robust platform for testing and validating the proposed method.\n\n*Performance Improvement*: The extensive experiments and quantitative evaluations show significant improvements in robustness and performance over baseline methods.\n\n*Detailed Evaluation*: The paper includes both quantitative and qualitative evaluations, along with ablation studies, which thoroughly demonstrate the effectiveness of RCDN.\n\n*Complexity and Computation*: The proposed method involves complex modeling and multiple steps. The author should provide the latency.\n\nGeneralizability: The performance of RCDN is primarily validated on the OPV2V-N dataset, which may limit the generalizability of the results to other datasets or real-world scenarios.\n\n\n*Failure Cases*: It would be nice if the authors provide failure cases, which is important.\n\n*Dataset Diversity*: Have you tested RCDN on any datasets other than OPV2V-N? How does it perform on real-world data?\n\n*Real-Time Feasibility*: What are the computational requirements of RCDN, and how feasible is it for real-time applications in autonomous systems?\n\n*Scalability*: How well does the method scale with an increasing number of agents and cameras? Are there any performance bottlenecks?"
}
] | |
xvTMc9Ovx3 | On-Road Object Importance Estimation: A New Dataset and A Model with Multi-Fold Top-Down Guidance | This paper addresses the problem of on-road object importance estimation, which utilizes video sequences captured from the driver's perspective as the input. Although this problem is significant for safer and smarter driving systems, the exploration of this problem remains limited. On one hand, publicly-available large-scale datasets are scarce in the community. To address this dilemma, this paper contributes a new large-scale dataset named Traffic Object Importance (TOI). On the other hand, existing methods often only consider either bottom-up feature or single-fold guidance, leading to limitations in handling highly dynamic and diverse traffic scenarios. Different from existing methods, this paper proposes a model that integrates multi-fold top-down guidance with the bottom-up feature. Specifically, three kinds of top-down guidance factors (i.e., driver intention, semantic context, and traffic rule) are integrated into our model. These factors are important for object importance estimation, but none of the existing methods simultaneously consider them. To our knowledge, this paper proposes the first on-road object importance estimation model that fuses multi-fold top-down guidance factors with bottom-up feature. Extensive experiments demonstrate that our model outperforms state-of-the-art methods by large margins, achieving 23.1% Average Precision (AP) improvement compared with the recently proposed model (i.e., Goal). | https://openreview.net/pdf/87bd5670b38ec3e2f101018aef40eb48c6c26a89.pdf | [
{
"confidence": 3,
"rating": 7,
"review_id": "V6EMXkI2kv",
"review_text": "1.This paper this contributes a new large-scale dataset named Traffic Object Importance (TOI) to addresses the problem of on-road object importance estimation, which utilizes video sequences captured from the driver’s perspective as the input.\n2.The author also proposes a model that integrates multi-fold top-down guidance with the bottom-up feature.\n\n1.This paper describes in great detail the specialized methodology and the structure of the models.\n\n2.The scarcity of large-scale publicly available datasets hinder the development of on-road object importance estimation.\n\n3. This paper considers the effect of traffic rule on object importance and successfully models this abstract concept by proposing an adaptive object-lane interaction mechanism.\n\n1.In page 3 the author mentions that the traffic rule is crucial for object importance and focus on the traffic line rules , but the influence of traffic rules is varied, such as signalization. Therefore, in page 4 of table 1, the author is able to provide statistics on the scenario categories of TOI dataset and the traffic rule constraints within the dataset in experiment. \n\n2.In page 6, the author uses three common intention behaviors in driving to reflect the driver intention (i.e., turning left, going straight, and turning right). Since the video clip length is set at 16 frames, it is important to clarify if each of the three intentions corresponds to individual frames with the 16-frame clip cut during the training and testing phases, or if multiple intentions are present within the 16 frames. The authors should further elaborate and provide the proportion of each intention in the dataset.\n\n3.Insufficient evaluation of indicators in the experimental section. The author may add another evaluation indicator.\n\n4.The section three can include a schematic diagram of the annotation process for the dataset.\n\n1.I consider whether 16 frames constitute interval sampling or continuous sampling, and how many types of intentional behaviors can be expressed using 16 frames in the paper.\n\n2.The author may can add another evaluation metric for the experiment."
},
{
"confidence": 4,
"rating": 6,
"review_id": "BtKYEriAyA",
"review_text": "This paper collects a new large-scale dataset and proposes a novel method that integrates multi-fold top-down guidance with the bottom feature to address the problem of on-road object importance estimation. Specifically, the dataset is almost three times larger than the current publicly dataset for on-road object importance. In addition, this paper considers an adaptive mechanism for object-lane interaction, effectively modeling the impact of traffic rules on object importance. Experiments on several benchmarks validate the effectiveness of the proposed method.\n\nThis paper makes several key contributions and demonstrates strengths for on-road object importance estimation\n\n(1) This paper introduces a novel, extensive dataset, set to be released to the public, which is nearly three times the size of the current largest public dataset. \n\n(2) The method is well-motivated and straightforward. It estimates the importance of objects on the road, integrating various top-down guidance factors with bottom-up features, marking the first of its kind.\n\n(3) The proposed method addresses the pivotal role of traffic rules in estimating object importance, an aspect previously overlooked by existing methods. It successfully encapsulates this concept through an innovative, adaptive mechanism for object-lane interaction.\n\nThis paper has also two weaknesses: \n\n(1) The paper does not provide a detailed discussion on the computational efficiency of the proposed method, which is crucial for real driving scenarios. Moreover, it is recommended to compare the model parameters and latency with other methods.\n\n(2) Another concern lies in the practicality of the method. This method and the proposed dataset are both for single-camera scenarios, but in real autonomous driving scenarios, surrounding view is a more widely used type and a safer option. Will the proposed method also work well in the surrounding view?\n\n(1) Can the proposed method be applied to surrounding view images? I suggest that the authors should consider the application on the current perception pipeline for vision-based autonomous driving pipeline.\n\n(2) I suggest that the authors should analyze the latency of the proposed method, which determines whether the method can be integrated into the practical driving scenarios."
},
{
"confidence": 3,
"rating": 6,
"review_id": "WwItFq6FX2",
"review_text": "This paper presents a novel dataset for on-road object importance estimation. More data about which objects are important for self-driving is included and is promised to be released. Moreover, a novel method that integrates driven intention, semantic context, and traffic rule is devised to tackle the related problem. The paper is well-written.\n\nA new dataset is introduced with rich data and labels. The presented method is novel and shown to be effective for the studied problem. Details about the dataset and the method are comprehensive and technically sound. Results are also promising.\n\nSome of the concepts lack sufficient details to explain. See questions below.\n\n(1) Regarding the task, my major concern is the definition of importance. It is shown that surrounding objects that follow the traffic rules are not considered as important. Only the objects ahead of the car or have an intersection with the ego-car's direction are important. I doubt whether this is strictly appropriate. For example, if a pedestrian walking along the road, he/she will not be considered as important. However, what if this pedestrian suddenly steps into the road ahead, potential collisions would happen. Therefore, I think a nearby walking pedestrian should be considered as important or at least recognized into a third category like \"needs care\". I wonder how the authors solve this problem in the dataset.\n(2) Regarding the driver's intention, it is indeed difficult to define appropriately. The authors have mentioned this in the paper, but the strategy introduced to accommodate this is still not clear to me. The authors mentioned learning the intention values based on driving behaviors, but how do we know the driving behaviors? Are these behaviors (e.g. turning left) already provided in the dataset? \n(3) More visualization about the labels and method comparisons are better to be presented for more clarity."
},
{
"confidence": 3,
"rating": 6,
"review_id": "RvO3e7ptG9",
"review_text": "This work addresses the issue of estimating the importance of on-road objects using video sequences from a driver’s perspective, a critical task for enhancing driving safety. The authors introduce the Traffic Object Importance (TOI) dataset, which is significantly larger and more diverse than existing datasets, and propose a novel model that integrates multi-fold top-down guidance factors—driver intention, semantic context, and traffic rules—with bottom-up features for more accurate importance estimation. Experimental results demonstrate that the proposed model significantly outperforms state-of-the-art methods in on-road object importance estimation.\n\n1. The introduction of the Traffic Object Importance (TOI) dataset, which is significantly larger and more diverse than existing datasets, provides a robust foundation for training and evaluating models in on-road object importance estimation, thereby addressing a major limitation in the field.\n\n2. The proposed model effectively integrates multi-fold top-down guidance factors—driver intention, semantic context, and traffic rules—with bottom-up features, which showed good performance for the TOI task.\n\n1. Lack of description of the annotation details. \nHow many annotators are involved in the annotation procedure? It would be good if the authors can provide some annotation procedure samples regarding the double-checking annotation mechanism and the triple-discussing annotation mechanism.\n\n2. It seems this annotation will be varied according to different traffic rules. Since KITTI is collected in Germany, the annotators should be familiar to germany traffic rules. However the authors did not mention this information in their submission, thereby the label quality is doubtful.\n\n3. The authors are encouraged to build up the first benchmark based on the proposed dataset by using various existing object detection methods, e.g., Yolo, with the proposed head or simpler head. It is interesting to see how the existing object detectors work on this new task.\n\n4. More statistics of the dataset are encouraged to be given, e.g., the number of important object of different categories, etc.\n\n1. How many annotators were involved in the annotation procedure for the dataset? Can the authors provide detailed examples of their double-checking and triple-discussing annotation mechanisms?\n\n2. Were the annotators familiar with German traffic rules, given that the dataset was collected in Germany (KITTI dataset)? How was the expertise of the annotators in relation to German traffic laws ensured and validated?\n\n3. Have the authors considered building the first benchmark using their dataset with existing object detection methods, such as YOLO? What were the performance outcomes of these existing methods when applied to the new task?\n\n4. Can the authors provide more detailed statistics about the dataset, such as the number of important objects in different categories? How do these statistics compare to other datasets in the same domain?"
}
] | |
xutrKezbPF | CIFD: Controlled Information Flow to Enhance Knowledge Distillation | Knowledge Distillation is the mechanism by which the insights gained from a larger teacher model are transferred to a smaller student model. However, the transfer suffers when the teacher model is significantly larger than the student. To overcome this, prior works have proposed training intermediate-sized models, Teacher Assistants (TAs), to help the transfer process. However, training TAs is expensive, as training these models is a knowledge transfer task in itself. Further, these TAs are larger than the student model, and training them, especially in large data settings, can be computationally intensive. In this paper, we propose a novel framework called Controlled Information Flow for Knowledge Distillation (CIFD) consisting of two components. First, we propose a significantly smaller alternative to TAs, the Rate-Distortion Module (RDM), which uses the teacher's penultimate layer embedding and an information rate-constrained bottleneck layer to replace the Teacher Assistant model. RDMs are smaller and easier to train than TAs, especially in large data regimes, since they operate on the teacher embeddings and do not need to relearn low-level input feature extractors. Also, by varying the information rate across the bottleneck, RDMs can replace TAs of different sizes. Secondly, we propose the use of an Information Bottleneck Module in the student model, which is crucial for regularization in the presence of a large number of RDMs. We show comprehensive state-of-the-art results of the proposed method on large datasets like ImageNet. Further, we show the significant improvement in distilling CLIP-like models on a huge 12M image-text dataset. It outperforms CLIP-specialized distillation methods across five zero-shot classification datasets and two zero-shot image-text retrieval datasets. | https://openreview.net/pdf/dd4b28772c38804f39a2eef11f4f97a9f8bf0f5a.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "K2R4Fbzpma",
"review_text": "Some existing methods alleviate the capacity gap between the teacher and student by setting up Teacher Assistants (TAs), introducing a large number of additional parameters and computational costs. Based on this, this paper proposes to train multiple RDM modules and connect multiple independent classification heads to generate branches with different performance to simulate the TA model. The authors think that this hierarchical model of extracting teacher knowledge can help alleviate the capacity gap between the teacher and student.\n\n* Assistants and teachers sharing shallow modules are more efficient in terms of parameter quantity compared to multiple independent Assistants models.\n* Although a large number of fully connected layers have been introduced, the proposed method hardly introduces any additional training overhead.\n\n* I noticed that there is a significant difference in baseline performance between Table 1 and the original text, and Table 3 only uses a single Assistant for TAKD and DGKD, while the author's proposed method uses three RDM headers, which is not a fair comparison.\n* There are significant differences in the value of R across different datasets, R=1 for CIFAR but R=10^4 for ImageNet. This means that the selection of hyperparameters on unfamiliar datasets is challenging, and the parameter tuning process may introduce multiple computational costs, which limits the versatility of the method.\n* I noticed that the method proposed by the author introduces and trains at least $3N$ additional layers of MLP, and the forward process and loss calculation cost in distillation is also increased several times. However, the training cost is even the same as the method NormKD based solely on Logits Distillation without additional modules (Figure 1 (b)). Can you present the results with specific numerical values? What is the key to introducing so many parameters without introducing additional computational overhead?\n\n* The experiment was conducted when there was not much difference between the teacher and student models (common settings in current KD tasks). It cannot prove that the proposed method can alleviate capacity differences, as many methods have proven that such settings (e.g., Res34 $\\to$ Res18) do not require the additional assistant model.\n* Why use the penultimate layer feature of the teacher? Is this from theoretical analysis or empirical summary?\n* For this paper, I think it would be better to place Related Works at the front."
},
{
"confidence": 4,
"rating": 5,
"review_id": "Z31ejFHovR",
"review_text": "Inspired by Shannon’s rate-distortion theory, this paper proposes two modules, namely the Rate-Distortion Module and the Information Bottleneck Module, to construct intermediate representations for knowledge distillation. Extensive experiments on various datasets demonstrate the effectiveness of this method.\n\n1. This paper is well-presented and easy to understand.\n2. This method not only works for traditional CNN networks but also performs well on modern CLIP models.\n3. Extensive experiments demonstrate the effectiveness of this method.\n\n1. The author's motivation and explanation for TA distillation are not very convincing. In my view, the RDM and IBM proposed in this work can be interpreted as two adapters connected to the teacher and the student respectively for distillation, and the principle is similar to the FT method.\n2. In Table 2, 9 and 10, most of the compared methods were published in 2022 or before. The authors are encouraged to compare your method with recent state-of-the-art methods such as MLKD[1], CTKD[2], and LSKD[3].\n3. In Eq. 5, I am a little confused about the author's formula representation. Generally speaking, the left side of the comma is the network to be trained, and the right side is the learning target. But the author seems to have it reversed here.\n\nReferences: \n[1]. Multi-Level Logit Distillation. CVPR 23. \n[2]. Curriculum Temperature for Knowledge Distillation. AAAI 23. \n[3]. Logit Standardization in Knowledge Distillation. CVPR 24.\n\n1. $q(\\hat{Y})$ is not clearly marked in Fig. 2. Which part of the network produces it? \n2. It would be more beneficial if the author could add a figure on how to perform CIFD distillation on the CLIP model."
},
{
"confidence": 4,
"rating": 5,
"review_id": "47i3IOWiak",
"review_text": "The paper presents a new distillation method, CIFD, designed based on *Shannon’s Rate-Distortion theory* and **Information Bottleneck Principle (IBP)*. CIFD contains Rate-Distortion Modules (RDM) for the teacher to substitute heavy Teacher Assistant (TA) and Information BottelNeck Module (IBM) for the student to mimic the features from several RDMs. Experiments demonstrate the effectiveness of the method.\n\n1. The paper is organized well.\n2. The experiments on CLIPs are good, verifying the broader effectiveness of the method.\n\nMy main concerns are from three aspects: **i) the story of the paper; ii) the reason why CIFD works; iii) insufficient experiments and comparisons.** Some concerns are mixed among the three aspects. And I will list them one by one.\n\n1. **Insufficient experiments on verifying the basic settings of the paper.**\nThe story starts with *\"When the teacher model is significantly larger than the student, previous works that utilize TAs induce high training costs.\"* I would believe the basic settings of this work as the teacher-student network pairs are in large parameter scale differences. From this point of view, the paper should contain more systematic experiments to verify the efficacy under this setting. Specifically, CIFD should be compared with previous methods on ImageNet with teacher-student network pairs with large different parameter scales, not just traditional ResNet-34 -> ResNet-18 and ResNet-50 -> MobileNet-V1.\n\n2. **The trade-off between the story and the empirical solutions.** In my opinion, the paper is a little bit overdecorated and overclaimed. The author proposes many concepts, such as *Shannon’s Rate-Distortion theory* and *Information Bottleneck Principle (IBP)*, and claims **\"This is the first application of Shannon’s Rate-Distortion theory to aid knowledge distillation\"**. I don't mean that the aforementioned statement is misleading. But, if we go deeper into the design, the reason why the method works may come from the **noise-adding and noise-removing process**. Many previous works have verified that the above process could benefit the learning process in computer vision, like MIM and diffusion models, which have been empirical solutions. In KD, there also exist distillation methods following MIM and diffusion models, like MGD and DiffKD. From this point of view, the authors should not claim ***\"the first\"*** only, but make a deeper analysis of related methods and make detailed comparisons. ***I strongly encourage the authors to make a good balance between the story and the verified empirical solutions.*** Even though it seems not as novel as this version, it would provide the readers with more useful knowledge and insights.\n\n3. **The design may alter the network architecture of the student.** It seems that the IBM module would also be included in the validation stage. If my judgment is true, the added module (though lightweight) would also benefit the performance. Under such circumstances, the comparisons with previous methods, especially for light-weight models, are unfair.\n\nSee Weaknesses."
}
] | |
xtpY1kQmW9 | Double-Bayesian Learning | Contemporary machine learning methods will try to approach the Bayes error, as it is the lowest possible error any model can achieve. This paper postulates that any decision is composed of not one but two Bayesian decisions and that decision-making is, therefore, a double-Bayesian process. The paper shows how this duality implies intrinsic uncertainty in decisions and how it incorporates explainability. The proposed approach understands that Bayesian learning is tantamount to finding a base for a logarithmic function measuring uncertainty, with solutions being fixed points. Furthermore, following this approach, the golden ratio describes possible solutions satisfying Bayes' theorem. The double-Bayesian framework suggests using a learning rate and momentum weight with values similar to those used in the literature to train neural networks with stochastic gradient descent. | https://openreview.net/pdf/aa91502b48a844a98dd2159618a01f2c71c921bc.pdf | [
{
"confidence": 2,
"rating": 1,
"review_id": "FyhbfNR2bX",
"review_text": "This paper appears to suggest that any decision is composed of two Bayesian decisions and it trys to evaluate the implications of this idea. \n\nI am very confused by this paper and really don't know what to make out of it. For example, the conclusion seems to be only a brainstorming session of random ideas and the rest of the paper does not appear to be much better.\n\nAt the very least, it is not well written, at worst the proposed approach does not make any sense.\n\nGiven that I don't properly understand what exactly the authors want to achieve, I am unable to formulate the strengths of this paper.\n\nThe presentation is very messy. The paper jumps from topic to topic without me understanding their relations to each other.\n\nsee above"
},
{
"confidence": 4,
"rating": 2,
"review_id": "8eArP1I16A",
"review_text": "The paper discusses the implications of Bayes' theorem, making assumptions inspired by a thought experiment of communicating a message. Prior (and model) elicitation by solving a fixed point equation is discussed.\n\n* The paper takes a fresh look at decision marking under uncertainty, which is at the center of machine learning.\n* The generality of the setting makes the discussion applicable to virtually all of ML.\n\nWhile I am sensible to the topic of prior and model elicitation from coherence arguments, I believe the paper needs a thorough revision focussing on clarity. While I have some intuition now, it is still not crystal clear to me what the exact goal or claims of the paper are. See bullets below for constructive comments.\n\n## Major\n1. Section 4: what is the probability $P$? What is the underlying space and sigma algebra? What are they supposed to represent? \n2. Section 4 introduces several very strong assumptions, like $1-P(A\\vert B) = P(B\\vert A)$ (is it for all $A,B$ in some sigma-algebra or for a specific pair of events?), that are motivated by an analogy about communicating a message. It is not clear why I should be prepared to make these strong assumptions. The fact that I don't know what $P$ is supposed to model or serve as does not help. Is it a joint probability over the variables describing a decision problem, as in decision theory? In that case, will it be used in conjunction to a loss function to make decisions? Will it be judged by some measure of decision accuracy? Or are we in a de Finetti framework, coming up with a personal probability $P$ which we will use to make predictions about unobserved variables? My intuition is that we are dealing with the latter kind, but this should be explained. And the strong assumptions need to be motivated by more than an analogy about communication.\n3. The information analogy which motivates imposing the fixed point equation (9) is unclear, as well to what probability and what events it should apply.\n4. p5 L179: the sentence about the parameter being a dynamic parameter for a learning system is unclear. We haven't discussed any learning algorithm yet.\n5. I am not sure I see where Eqn (11) comes from. $\\lambda$ has been chosen to derive (10) from Bayes' theorem, but it doesn't have to be the right base to write (11), right? Same remark for (18).\n\n## Minor\n1. p7 L248: Although neural networks have been a popular class of models and algorithms, supervised learning is not synonymous with neural network training.\n2. p7 L252: the meaning of \"the $\\lambda$ expression\" is unclear.\n\n* Can you formally rephrase the goal and claims of the paper?\n* Can you explain what $P$ is representing? Is it a personal probability in the spirit of de Finetti, or a model of the data generating process? Or maybe something else?\n* Can you formalize and list the assumptions you make on $P$, and justify them in the context of predicting a categorical variable?"
},
{
"confidence": 4,
"rating": 4,
"review_id": "587cCFgOED",
"review_text": "The purpose of this paper is to investigate the optimality of a classifier. It is known that the Bayes classifier is optimal, and it is likewise known that an explicit computation of the Bayes classifier is often very challenging if not impossible. This paper offers an analysis of the Bayes classifier as a sequential solution of two problems. An analysis and interpretation of a vase / faces example is presented and some theory is developed to further understand it. The paper concludes with an application.\n\nThe authors are exploring an idea which is novel, and the whole thinking about Bayes classifiers as comprising two sub-problems seems novel and worth pursuing.\n\nI did not really understand the discussion with the vase, the sender and receiver. Perhaps the authors should somehow connect the Bayesian ideas to the description of the problem earlier? I think the paper would really benefit from rewriting Section 4 with the vase as a running example, because it is hard to connect the various decisions with the probabilities. Maybe it's worth to add more illustrations / diagrams for this? The authors are presenting novel ideas and it's hard to understand them as they are currently presented.\n\nFor the theoretical implications, I think it would be better to illustrate the approach on a simpler model like a linear one. \n\nThe paper started by mentioning the Bayes classifier but does not come back to it as an example. \n\nThe paper states that the Bayes classifier is broken up into two decisions, but those are just briefly mentioned in the vase / faces example. The authors should carry this thread of reasoning through the whole paper.\n\nIn line 134, you say that \"...if the message is known, then whether the foreground needs to be swapped is unknown.\" But isn't knowing the message \"vase\" or \"faces\" enough? How will swapping the background help?\n\nHow are the fixpoint solutions connected to the whole vase / faces example?"
}
] | |
xtK3gZjQDC | Towards Human-AI Complementarity with Prediction Sets | Decision support systems based on prediction sets have proven to be effective at helping human experts solve classification tasks. Rather than providing single-label predictions, these systems provide sets of label predictions constructed using conformal prediction, namely prediction sets, and ask human experts to predict label values from these sets. In this paper, we first show that the prediction sets constructed using conformal prediction are, in general, suboptimal in terms of average accuracy. Then, we show that the problem of finding the optimal prediction sets under which the human experts achieve the highest average accuracy is NP-hard. More strongly, unless P = NP, we show that the problem is hard to approximate to any factor less than the size of the label set. However, we introduce a simple and efficient greedy algorithm that, for a large class of expert models and non-conformity scores, is guaranteed to find prediction sets that provably offer equal or greater performance than those constructed using conformal prediction. Further, using a simulation study with both synthetic and real expert predictions, we demonstrate that, in practice, our greedy algorithm finds near-optimal prediction sets offering greater performance than conformal prediction. | https://openreview.net/pdf/2b252bb64aa670e885a730dbaf8f392c032ec70b.pdf | [
{
"confidence": 3,
"rating": 7,
"review_id": "hnM7bUiFtD",
"review_text": "The paper analyzes decision support systems based on prediction set algorithms. The authors show that: (i) the usage of conformal prediction techniques is generally sub-optimal in terms of accuracy; (ii) the problem of finding the optimal prediction sets under human assistance is NP-hard. Moreover, they provide (iii) a greedy algorithm that is guaranteed to find prediction sets that are better than those provided by conformal predictors. Experimental evaluation on synthetic and real data show the effectiveness of the considered approach.\n\nThe main strengths of the paper are:\n\n1. the actual paper contribution is well framed;\n2. the theoretical analysis is sound;\n3. the proposed algorithm improves over existing approaches.\n\nI think this work is a good paper, without major weaknesses, as it provides solid theoretical insights.\nThe concerns I have are mainly due to typos/details missing. I will point out here these and a few remarks that might be considered for the final version of the paper.\n\n1. It seems to me that Table 2 and Figure 3 are missing the BRUTE FORCE baseline.\n2. regarding the style of the paper, I found lines 135-146 very dense. Maybe providing a more concrete example (e.g., what could 1,2,3 represent?) might help the reader getting through it.\n3. In Algorihm 1, I think adding a comment to the pseudo-code (from lines 4 to 13) could be useful\n4. regarding the limitation section (evaluation) a useful reference might be [Stutz et al., 2023], where the authors evaluate the possibility that human experts might not be approximating the true probability distribution \n5. the experimental analysis (on real data) could be enriched with other popular Learning-to-Defer datasets, such as Cifar10H or hatespeech.\n\n[Stutz et al., 2023] - Stutz, David, Abhijit Guha Roy, Tatiana Matejovicova, Patricia Strachan, Ali Taylan Cemgil, and Arnaud Doucet. \"Conformal prediction under ambiguous ground truth.\" Transactions on Machine Learning Research (2023).\n\nI have a couple of questions/remarks:\n\n1. Can you elaborate a bit more on lines 91-93? I am not fully sure I understand the point there.\n2. Can you add the results for BRUTE FORCE SEARCH in Table 2 and Figure 3?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "w4PJnXKOnJ",
"review_text": "The authors first show that conformal prediction sets may not lead to human decision optimality. The authors then introduce a greedy algorithm to generate candidate prediction sets that improve human decisions regarding the accuracy metric.\n\nThe authors find the sub-optimality of conformal prediction sets on providing candidates for human decisions. Thereby, they propose a novel method to produce prediction sets that helps to improve human prediction.\n\n* The presentation somewhere is unclear: \n * Line 86. Please break the sentence properly.\t\n * Line 40/43/48: It is unclear for readers when the authors mention “optimal” multiple times but delay its explicit definition later.\n * Line 197: It is confusing when the authors refer to the role of $a$. What is the value of $a$?\n\n* The authors claim they propose an efficient algorithm. However, I am not sure which part is efficient. Are there any numerical metrics, e.g., running time, supporting this contribution? Additionally, how should we understand this restriction of “for a large class of non-conformity scores and expert models” in line 51?\n\n* Line 90: But you also miss the possibility outside the prediction set, especially when the prediction set is not that good. I think the authors need to discuss the exploitation-exploration dilemma.\n\n* The authors use the scores related to softmax and APS. Other papers propose alternative scores like RAPS and SAPS. I think they should be included.\n\n* Typo in the title: Predictions --> Prediction\n\n* Did you include the results from the conformal prediction sets by varying the value of $\\alpha$?\n\n* Why choose those values of $\\omega$? I think their magnitudes are close. The authors may consider even smaller and larger values of $\\omega$ to show its sensitivity."
},
{
"confidence": 4,
"rating": 6,
"review_id": "1TvSIBwYpt",
"review_text": "The paper shows the conformal prediction set may not be the optimal set recommendation to humans if humans follow certain choice models. The authors then propose a greedy algorithm by modeling $P(y|x)$ and the choice model of humans assuming it follows MNL model. Authors compare the proposed method against the standard conformal prediction set under synthetic human experts and the proposed method has a slightly better performance compared to traditional conformal sets.\n\nThe authors consider conformal prediction in the human-in-the-loop setting, which is an important problem. The first part of the paper shows the conformal prediction set may not be the best recommendation set for humans, which is easy to understand since most conformal sets arrange the set in a ranked order and we can play with the human choice models to create an example that conformal sets may not be the best recommendation set.\n\nThe problem setting is not realistic: The authors do not allow humans to select outside the conformal prediction set. However, in the setups of most empirical successes of human-AI collaboration with conformal prediction, this is allowed. Similarly, if the authors do not allow humans to select outside the conformal prediction set, humans' value is greatly reduced and the optimal thing to do may be just to use fully automated AI prediction and in all the toy examples the authors provided, kicking humans out of the loop is the optimal system (humans only make things worse). \n\nThe theoretical analysis seems useless: I think the theoretical analysis is useless for two reasons: 1) while identifying the optimal set is NP-hard, in practice the metric we care about is $\\mathbb{E} g(S|x)$, not identifying the optimal set. If an algorithm can get a good rate of convergence for this regret, then this problem is not hopeless, so I think authors need to show for all conformal prediction algorithms, what is the regret lower bound for $\\mathbb{E} g(S|x)$; 2) while I can see that sometimes the label set can be large. In practice, the theoretical results may not be a big issue for many problems since most problems have small label set (binary or three classes). This negative results may not seem that severe as the authors presented in the paper. \n\nThe solution is disconnected and not useful in human-AI collaboration: 1) The proposed solution does not enjoy the distributionally-free guarantee, which is the main reason why people use conformal prediction. I would expect authors to provide a conformal prediction algorithm that is human-centered, rather than directly switch lanes to traditional prediction methods. 2) The proposed solution requires $P(y|x)$ and the true human choice model, which is too strong to be realistic. If I know $P(y|x)$, why should I involve humans in the loop anymore (recall that authors can restrict humans only select from prediction set so humans are not necessary in the system). The optimal strategy would be directly use $P(y|x)$ to select actions. \n\nBaselines: For human-AI collaboration tasks, I expect to see the proposed solution is better than human working alone or AI working alone. The authors should compare with AI only baseline using $P(y|x)$. Based on the toy example and my current understanding of the paper, the proposed solution cannot beat AI only baseline.\n\nSee weakness."
},
{
"confidence": 4,
"rating": 6,
"review_id": "PuBhlvUJD1",
"review_text": "This paper aims to construct optimal prediction sets under which experts can achieve the highest accuracy. The authors claim that human experts cannot attain maximum accuracy with the prediction sets generated by conformal predictors. To address this issue, the paper proposes an efficient greedy algorithm based on maximum marginal gain to find prediction sets that outperform those generated by conformal predictors. The paper offers two main theoretical contributions: the first proves that finding the optimal prediction set is an NP-hard problem, while the second demonstrates that the proposed method enables experts to achieve higher accuracy than conformal predictors. Empirical results further validate the effectiveness of the proposed approach.\n\n1. The paper is well-motivated and easy to follow.\n \n2. The authors provide a theoretical analysis for their motivation and offer a theoretical guarantee for the superior performance of the proposed greedy algorithm.\n \n3. The paper presents an extensive set of experiments, including both synthetic and real data.\n\n1. Further validation on more realistic datasets, such as ImageNet and CIFAR100, could strengthen the main points of the paper.\n \n2. The experiments lack comparison with other classical score functions, such as Regularized Adaptive Prediction Sets.\n\n1. In Figure 3, how is the Empirical Success Probability for each image calculated?\n \n2. In line 210, why does the score function of APS discard the random variable? In other words, does the random variable affect the performance of the empirical average test accuracy?\n \n3. Can you report the empirical coverage of the Greedy algorithm, since valid coverage is the fundamental guarantee for conformal prediction?"
}
] | |
xse8QMGnyM | Toward Approaches to Scalability in 3D Human Pose Estimation | In the field of 3D Human Pose Estimation (HPE), scalability and generalization across diverse real-world scenarios remain significant challenges. This paper addresses two key bottlenecks to scalability: limited data diversity caused by 'popularity bias' and increased 'one-to-many' depth ambiguity arising from greater pose diversity. We introduce the Biomechanical Pose Generator (BPG), which leverages biomechanical principles, specifically the normal range of motion, to autonomously generate a wide array of plausible 3D poses without relying on a source dataset, thus overcoming the restrictions of popularity bias. To address depth ambiguity, we propose the Binary Depth Coordinates (BDC), which simplifies depth estimation into a binary classification of joint positions (front or back). This method decomposes a 3D pose into three core elements—2D pose, bone length, and binary depth decision—substantially reducing depth ambiguity and enhancing model robustness and accuracy, particularly in complex poses. Our results demonstrate that these approaches increase the diversity and volume of pose data while consistently achieving performance gains, even amid the complexities introduced by increased pose diversity. | https://openreview.net/pdf/f77e33cf32c4c31dd7e4130f762aac8101938e30.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "sHXkfoc40i",
"review_text": "Existing data in 3D human pose estimation are typically collected indoors with human actors. To address this scalability issue, the authors propose to synthesize 3D human pose data via an Osteo-kinematic model and introduce biochemical constraints for better physical plausibility. Additionally, to deal with the inherent ambiguity in single-view depth estimation, the authors introduce Binary Depth Coordinates to explicitly model the relative spatial relation between adjacent joints. Extensive experiments verify the effectiveness of the proposed approach.\n\n1. Leveraging biomechanical prior knowledge to synthesize physically plausible human data is $\\textbf{intuitive}$ and $\\textbf{interesting}$.\n2. Comprehensive experiments verify the effectiveness of the proposed data augmentation approach (BPG) and Binary Depth Coordinates (BDC). Specifically, BDC can be applied to different methods, e.g., image-based and lifting-based, showing superior generalization ability.\n\n1. $\\textbf{Repeated text}$: The first paragraph of Sec.2 appears to be a copy-paste from the abstract, which is highly discouraged.\n2. $\\textbf{Requirement of camera intrinsics}$: While BDC shows notable performance gains to baselines, solving depth requires camera intrinsics (principal point and focal length), typically not required by current 3D HPE methods. This requirement may introduce additional constraints for in-the-wild inference.\n\n1. In Fig. 2 and Fig.4, adding synthesized data consistently decreased performance for some baseline methods, e.g., GFpose. This seems counterintuitive to me. As the authors mentioned, overfitting might be a reason; do the authors have any other insights regarding this? Does this phenomenon indicate there is still a gap between the real data and synthesized data?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "aegUTJu6me",
"review_text": "This paper introduces two components aimed at addressing challenges in 3D human pose estimation, specifically in terms of scalability and generalization. The authors propose a Biomechanical Pose Generator (BPG), which incorporates biomechanical principles to generate plausible new 3D poses. They also introduce Binary Depth Coordinates (BDC), a component designed to mitigate the depth ambiguity encountered when lifting a 2D pose to 3D. The paper includes ablation studies to demonstrate the impact of each component, and compares these new approaches to existing pose augmentation methods.\n\nThe paper’s focus on addressing the challenge of limited datasets and enhancing the generalizability of the method is interesting and to the best of my knowledge the idea of biomechanical pose generator which does not rely on a source dataset is novel. Also, the authors’ attention to the depth ambiguity in 3D pose estimation from a single image adds a value to the field. The authors have conducted comprehensive experiments and ablation studies, which provide valuable insights into the effectiveness of the proposed components. The inclusion of cross-dataset evaluation is crucial, as it allows for a robust assessment of the Biomechanical Pose Generator (BPG) component’s effectiveness.\n\n1- The paper is generally well-written, but some parts could be clearer. Including a figure to illustrate the entire system could significantly help reader comprehension. For example, a diagram showing the VPose (or any baseline) architecture and the integration of the BDC component might be more effective than a text-only description. Additionally, including some implementation details about the BDC component in the main paper could improve the flow of information.\n\n2- There are some ambiguities in the experiment section that need clarification. When referring to the “source-dataset”, it would be helpful to specify whether this refers to the Human 3.6M dataset or the newly synthesized poses. Similarly, when discussing evaluations on 3DHP and 3DPW, it would be beneficial to mention the specific subset used, such as the test set.\n\n3- There appears to be some confusion between Table 1 and the results in Figure 4 (left). While Table 1 shows improvements in the Human 3.6M results when adding new poses generated from BPG, Figure 4 (left) indicates that adding more data increases the MPJPE error (without integrating BDC). This seems contradictory and could benefit from further explanation.\n\n4- Typos: There are a couple of typographical errors that need correction. On Line 177, (xi) is repeated twice instead of yi. On Line 287, BDC should be corrected to BPG.\n\nAs I mentioned in the weaknesses section, I would like to learn more about the effect of adding more synthesized poses to Human 3.6M and evaluating on the same source of data as currently the results in the Table 1 and Figure 4 are a bit confusing to me."
},
{
"confidence": 4,
"rating": 5,
"review_id": "4ivtAGZGqM",
"review_text": "The authors propose a 3D human pose estimation framework that incorporates data augmentation and depth ordering information. The main contributions are two-fold: First, the proposed Biomechanical Pose Generator (BPG) generates plausible body poses based on kinematic constraints, which is used for data augmentation. Second, the Binary Depth Coordinates (BDC) disambiguate the projective depth of each joint by classifying whether the joints are positioned towards or away from the camera. The proposed framework achieved state-of-the-art performance in single-frame 3D human pose estimation settings.\n\n- The proposed method achieves state-of-the-art results in various 3D HPE datasets.\n- The effect of data augmentation is validated in cross-domain learning settings.\n\nMy major concern lies on the novelty of the contribution.\n\n- There are numerous research papers that regularize 3D human pose based on kinematic constraints. The authors did not clarify the distinctiveness of BPG from these conventional works, except for stating that BPG achieved better performance. An analysis showing how the proposed BPG generates more plausible poses compared to previous augmentation methods is required, either by displaying the generated poses or by showing qualitative estimation results.\n- The concept of BDC is similar to [1] which learns ordinal depth information. The authors should cite the paper and discuss the difference.\n\nThe paper also contains\tambiguously explained parts or lacks details about their methods. Please refer to Questions section.\n\n[1] G. Pavlakos et al., \"Ordinal depth supervision for 3d human pose estimation\", CVPR 2018\n\nMethod\n- In line 155, the focal length of the camera matrix is set to 1 for BPG, is it also the case for the datasets used? Or the camera matrix provided in the datasets are used?\n- In line 179, what is the meaning of \"depth relative to the plane of the image\". I guess $s_i$ is the depth relative to the preceding joint not the image plane.\n\nExperiments\n- Why did the authors use different baseline architectures in Sec. 5.1 and 5.2?\n- How much portion of augmented data from BPG used for experiments in Sec. 5.1?\n- Given that using only BPG increases the error in Fig. 4 left, how could it be possible to achieve better performance in Table 1 and 2 when only BPG is used?\n- Why didn't the authors use BPG in Table 6?\n- What is the difference between Variant E and BPG in Table 8?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "1xmTXkfYvC",
"review_text": "This paper address the task of 3D Human Pose Estimation from monocular RGB. The authors make two main contributions: The Biomechanical Pose Generator (BPG) and the Binary Depth Coordinates (BDC). BPG is a 3D human pose generator that leverages the \"Normal Range of Motion\" (NROM) that is used in the medical field to describe standard biomechanical limitations. With it, BPG is capable of generating biomechanically sound 3D human poses by randomly sampling joint angles and bones that lie within a certain ratio to each other.\nBDC is a coordinate system that decompose a 3D pose into constituents. Specifically, it decomposes it into the 2D coordinate, bone length, a binary depth parameter indicating the closeness to the image plane as well as the 3D coordinates of the parent joint. This decomposition, so the authors claim, allows models to better deal with depth ambiguity. \nExperimental results demonstrate that the proposed approach achieves better performance over the compared related work on a variety of datasets (cf. Tbl 1-4). Ablative studies demonstrate that BDC helps keep performance steady even in the face of larger depth ambiguity (Tbl. 5) and that related work can benefit as well from switching to the proposed coordinates (Tbl 6.)\n\n- The authors properly motivate and evaluate their approach. Depth ambiguity in monocular RGB is a challenging problem to address. I particularly liked Tbl. 5 that demonstrated that BDC is capable of handling even larger depth ambiguities.\n- The paper was easy to digest and understand.\n- One of the main strength of this paper is that BDC can be combined with other related work, yielding improvements (Tbl. 6)\n\n- My biggest concern about the paper is that BDC is very similar conceptually to \"Hand Pose Estimation via Latent 2.5D Heatmap\nRegression\", Iqbal et al., ECCV'18. Yet there is no mention of the paper, let alone any comparisons. The mentioned paper also addresses with depth ambiguity by decomposing the 3D pose into 2D pose and a root-relative depth vector. Addressing the differences, performing comparisons with this approach would better contextualize as well as strengthen the contribution of the paper.\n- BPG shows to improve performance by improving the 2D to 3D lifting component. Yet, it's contribution is rather sparse, as it essentially amount to performing forward kinematics on bounded joint angle and bone lengths. It does not take into consideration statistics on poses. Certain poses are more common, due to them corresponding to actual human movement patterns (such as walking) that are affected by gravity. Randomly sampling poses without taking such statistics into consideration may generate a range of synthetic poses that are unrealistic, leading to non-optimal improvements.\n\n- How would BPG compare to randomly sampling SMPL poses?"
}
] | |
xrbgXJomJp | Inverse Factorized Soft Q-Learning for Cooperative Multi-agent Imitation Learning | This paper concerns imitation learning (IL) in cooperative multi-agent systems. The learning problem under consideration poses several challenges, characterized by high-dimensional state and action spaces and intricate inter-agent dependencies. In a single-agent setting, IL was shown to be done efficiently via an inverse soft-Q learning process. However, extending this framework to a multi-agent context introduces the need to simultaneously learn both local value functions to capture local observations and individual actions, and a joint value function for exploiting centralized learning. In this work, we introduce a new multi-agent IL algorithm designed to address these challenges. Our approach enables centralized learning by leveraging mixing networks to aggregate decentralized Q functions. We further establish conditions for the mixing networks under which the multi-agent IL objective function exhibits convexity within the Q function space. We present extensive experiments conducted on several challenging multi-agent game environments, including an advanced version of the StarCraft multi-agent challenge (SMACv2), which demonstrate the effectiveness of our algorithm. | https://openreview.net/pdf/a8143c20b40d30d3986378da10c5654405072f65.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "E0xMJaXQkc",
"review_text": "This paper extends the IQ-Learn method to cooperative multi-agent settings. The main insight is to use mixing networks to enable centralized training via decentralized Q functions.\n\n- The paper is quite relevant to NeurIPS and it is indeed important to extend IQ-Learn (or similar inverse learning algorithms) to multi-agent systems.\n\n- The major concern that I have is that, if my understanding is correct, the paper assumes access to the global state information. This is not realistic. In real application, this will never be the case. So the algorithm does not seem useful in practice.\n- Typo: In line 62, it should be \"generalization\" instead of \"generation\",\n- In line 72, \\citet should be used instead of \\cite or \\citep so that the author names will become a part of the sentence.\n- In line 162, \\eqref should be used instead of \\ref so that the parenthesis will appear around the equation number.\n- The architecture figure is in page 7. It would significantly increase the readability if it came earlier. \n- By the time the reader reads line 191, the IGC principle is still undefined. This makes reading very difficult.\n- The same thing is true at line 203, too.\n- Typo: In line 241, it should be \"makes\" instead of \"make\".\n- Typo: In line 242, it should be \"yields\" instead of \"yield\".\n\n- How do the agent have access to the global state information. If this is the case, why does the paper even define observations? Is the global state information available only in training or after deployment, too? In what settings is this applicable?\n- How could one adapt this algorithm for non-cooperative settings? Is there a straightforward way or does it require completely new approaches?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "nZDCtmISHx",
"review_text": "This paper addresses the problem of extending a single-agent imitation learning algorithm, inverse soft-Q learning (IQ-learn, Garg et al. Neurips 21) to the multi-agent cooperative setting. The proposed algorithm, MIFQ, leverages the ideas of mixing networks and the individual-global-max (IGM) principle, to perform the extension. Experimental evaluations of MIFQ are conducted on SMAC-v2, MPE, and Gold Miner, and demonstrate that MIFQ improves over baselines across various domains and with varying numbers of demonstrations.\n\nThe paper addresses the challenge of generalizing a key imitation learning (IL) algorithm from single-agent to multi-agent settings, offering a novel approach with MIFQ. The problem is clearly specified and represents an important contribution to the MARL literature. \n\nThe empirical results are robust:\n- MIFQ outperforms most baselines with various demonstrations.\n- Extensive experiments across multiple domains and tasks confirm MIFQ's superior performance.\n- Comprehensive comparisons with baselines (BC, independent IQ learning, alternative Q-factorization methods, etc.) highlight MIFQ's advantages.\n\n1. Some aspects of the method do not seem fully justified to me: \n - The authors claim in lines 143-148 that a shortcoming of the IQ learn method is that the objective depends on the centralized state and joint action. However, Section 5.4 of the IQ Learn paper presents a state-only objective (independent from the actions). I wonder if the authors could discuss whether a simple state-only extension of IQ Learn, where critic depends on the centralized state as usual, but the actor depends on the observations, would be sufficient to sidestep many of the concerns addressed by IQ Learn? \n - The authors also claim in Section 4.1.2 that the straightforward Independent Inverse Q-learning is not a satisfactory solution because the method \"…has limitations in addressing the interdependence between agents and the global information available during the training process.\". Can the authors more explicitly discuss what the shortcomings of an independent version of IQ-learn is not satisfactory? Does it suffer from convergence problems? \n\t\t\t\t\t\n2. The current experimental analysis is somewhat shallow, and essentially amounts to a description of the plots. The authors could improve the analysis of MIFQ by considering the following additional questions: \n - The original IQ learn paper plots the rewards to validate that their method recovers the ground truth reward. Can the same be done here? \n - Why does MIFQ perform worse than BC on MPE, particularly the reference and spread tasks?\n3. There are some issues with how the experimental results have been reported. \n - What is the number of trials for each of the results? Please include this in the main paper. \n - The caption of Figure 2 is missing key information to understand the figure. What is the number of demonstrations used to train each of the methods? What does the shaded region mean? Based on the std devs reported in the Appendix, I assume it is the standard deviation; please see the note below and instead compute 95% confidence intervals. \n - No measurements of uncertainty are provided in Table 2, and standard deviations are provided only in the Appendix. Standard deviations reflect the underlying variance in models learned by the algorithm, rather than providing a measure of statistical significance. 
Please also compute 95% confidence intervals to enable readers to judge the statistical significance of the gaps in mean test returns -- ideally, bootstrapped confidence intervals. See this paper for a reference on best practices: https://arxiv.org/abs/2304.01315\n3. There are also some minor clarity issues: \n - IGC is used in line 192, but is only explained in the following Section 4.2.2\n - Definition 4.2 - this definition is not specific enough to be useful. It handwaves by only requiring that the joint policy be 'equivalent' to the collection of individual optimal policies. Equivalent in what sense?\n\n1. Questions about experiments: \n - What are some reasons why MIFQ does not achieve expert level performance? While the other methods also do not achieve expert level performance, the original IQ learn algorithm does have this ability. \n - How does the method perform with demonstrations not sourced from MAPPO (an algorithm that learns gaussian policies)? For example, demonstrations sourced from QMIX, which learns 'hard max' policies? \n - Why does the method need an order of magnitude more demonstrations than IQ Learn needs on complex single-agent tasks? \n \n2. Method: \n - Why is it necessary to maintain Q and V networks separately? Why not derive the global V function by computing the softmax of the Q functions as described in line 163-164? \n - Why is it necessary to compute Q^tot via Q^tot = -M (-Q)? What is the purpose of the double negation? The stated justification is that this enable the method \"to achieve the IGC principle and the convexity\", but why exactly is this? Requiring the networks to be multi-layer feedforward w/nonnegative weights and convex activation functions (lines 194-195) is enough to ensure that Q^tot is monotonic w.r.t. the local Q functions, thus ensuring the IGC principle and convexity. \n - Would major changes be necessary to enable this algorithm to operate on continuous action spaces? Did the authors consider continuous action space settings?"
},
{
"confidence": 2,
"rating": 7,
"review_id": "oDWjapxaMW",
"review_text": "This paper presents a novel algorithm, Multi-agent Inverse Factorized Q-learning (MIFQ), for cooperative multi-agent imitation learning (IL). It extends the inverse soft-Q learning framework to multi-agent settings by introducing a mixing network architecture for centralized training with decentralized execution. This enables learning local and joint value functions effectively. The authors conducted extensive experiments across multiple challenging environments, demonstrating that their approach outperforms existing methods.\n\n- The introduction of a multi-agent extension of inverse soft-Q learning using factorized networks is a significant and novel contribution to the field of IL. \n- This paper is well-written and organized, and provides a sound theoretical analysis.\n- The empirical results across three different environments, including a complex version of the StarCraft multi-agent challenge, are impressive. The proposed method outperforms existing baselines.\n\nAs someone who is not an expert in the field of imitation learning, I perceive no significant weaknesses in this paper from my perspective.\n\n- In Figure 2, the semi-transparent curves are not standardly explained. If these do not represent standard deviations, what statistical measure do they depict?\n- Minor Error: On Line 62, the term \"generation\" is used where \"generalization\" might be intended. Could the authors clarify or correct this in the context?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "isfjEH7dis",
"review_text": "The paper addresses the imitation problem in cooperative Multi-Agent Reinforcement Learning (MARL). It extends inverse soft-Q learning to the multi-agent domain by leveraging value factorizations under the Centralized Training with Decentralized Execution (CTDE) paradigm. Experimental results demonstrate the effectiveness of the proposed approach across several environments.\n\n- The study of imitation learning in MARL is a valuable and relevant research problem, and the paper provides promising solutions.\n- The experimental results are robust and convincingly support the proposed method's effectiveness.\n\n- The paper's organization could be improved. The current structure alternates between theory and architecture without a clear flow.\n\n- The similarity between IGC and IGO[1] requires further clarification.\n\n- The objective function (6) introduces sub-optimality compared to the original objective (3) due to the restriction that $Q^{tot}$ and $V^{tot}$ must be monotonic. Additionally, since $Q^{tot}$ and $V^{tot}$ use different mixing networks, the relationship between them violates Equation (2). This indicates that Equation (6) does not represent the same objective as Equation (3), even without considering the sub-optimality introduced by factorization. These issues need further theoretical exploration and discussion.\n\n- Although the experimental results are promising, the superior performance seems to stem from the QMIX algorithm's advantage over other MARL algorithms. An important missing baseline is the soft actor-critic version of IQ-Learn, which uses a centralized Q function with decentralized critics and does not seem to violate the original objective.\n\n[1] Zhang, et al., FOP: Factorizing Optimal Joint Policy of Maximum-Entropy Multi-Agent Reinforcement Learning, ICML 2021.\n\n1.\tIs BC trained online, given that it shows learning curves with environment steps? If so, why not use DAGGER?\n2.\tCould the authors explain why MIFQ significantly outperforms IQVND? Is it solely due to the factorization structure?\n3.\tWhy does the paper state that QPLEX is unsuitable for the proposed method? QPLEX also has $\\partial Q/\\partial Q_i>0$."
}
] | |
xqrlhsbcwN | Approximated Orthogonal Projection Unit: Stabilizing Regression Network Training Using Natural Gradient | Neural networks (NN) are extensively studied in cutting-edge soft sensor models due to their feature extraction and function approximation capabilities. Current research into network-based methods primarily focuses on models' offline accuracy. Notably, in the industrial soft sensor context, online optimization stability and interpretability are prioritized, followed by accuracy. This requires a clearer understanding of the network's training process. To bridge this gap, we propose a novel NN named the Approximated Orthogonal Projection Unit (AOPU), which has a solid mathematical basis and presents superior training stability. AOPU truncates the gradient backpropagation at dual parameters, optimizes the trackable parameter updates, and enhances the robustness of training. We further prove that AOPU attains minimum variance estimation in NN, wherein the truncated gradient approximates the natural gradient. Empirical results on two chemical process datasets clearly show that AOPU outperforms other models in achieving stable convergence, marking a significant advancement in the soft sensor field. | https://openreview.net/pdf/b37fda1b4909f85ae3e295ff8e8ca5de81f5920d.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "0wOzk0Ncjx",
"review_text": "The paper proposes a novel training framework for regression tasks called the Approximated Orthogonal Projection Unit (AOPU), optimized using truncated natural gradients. The authors utilize the Rank Rate (RR) of the augmented data covariance matrix as a metric. They demonstrate that their method offers more stable training than existing architectures and optimizers, which is crucial for industrial applications requiring online training during production. Additionally, the authors provide a comprehensive analysis of their setup's convergence.\n\n1. Detailed introduction on the background and intuition.\n2. The method is very simple.\n3. A thorough theoretical analysis of the method was provided.\n\n1. Poorly arranged paper; conclusions are at the end of the appendix.\n2. In the introduction, the authors claim that their methods improve interpretability, but they do not explain later why that matters. Also, many existing works explain the behavior at a neuron level; it is not clear why one has to track the parameter itself.\n3. Experimental qualities are not good; no hyperparameter search is mentioned in the paper, which is essential when the authors claim that their method improves training stability.\n\n1. Did the authors try to tune the learning rate or other hyperparameters carefully for all other methods? Those bad-performing/unstable training curves in Fig 5 might be because the authors used only one specific learning rate across all those settings.\n2. Could the authors offer more insights on the augmented data $\\tilde{x}$? Why did the authors choose a random Gaussian matrix as the togo augmentation?\n3. Could the authors explain how the data was fed into the model? Suppose we have sensors U1-U7 in Debutanizer, each with 2394 records. I assume those data are the inputs to the network, so what are the targets? Also, for sequence models, the way those data were fitted into sequence matters, so how did the authors implement that?\n\nI am willing to increase my rating if the authors could address my concerns."
},
{
"confidence": 3,
"rating": 6,
"review_id": "qZUAzeQEBS",
"review_text": "The paper introduces the Approximated Orthogonal Projection Unit, the basis for a new neural network, designed to enhance the stability and interpretability of regression models, particularly in industrial soft sensor applications. The primary aim is to address the need for stable and immediate optimization in online settings, where traditional NN training techniques fall short. The paper introduces the theoretical background and demonstrates the effectiveness on two tasks, while also introducing ablations and comparisons to several other techniques.\n\n- The proposed method appears novel and straightforward.\n- The paper provides a solid theoretical foundation.\n- The paper imrpoves interpretability of the neural network's behavior and training dynamics by differentiatiating between trackable and untrackable parameters, enhancing the interpretability.\n- The authors demonstrate superior performance of AOPU in experiments with two chemical process datasets, showcasing its practical effectiveness in achieving stable convergence compared to existing models.\n- Practical Relevance: Tailors the AOPU framework specifically for industrial soft sensor applications, addressing the need for immediate optimization and stability in online settings.\n- Limitations, such as numerical stability issues during matrix inversion in the training process, are discussed.\n\n- Code not published. The justification provided is somewhat questionable, since easy reproducibility should also enable the authors to provide code (possibly mirrored from the code implemented at the company).\n- While the page limit is formally met, the authors make extensive use of the Appendix, including core elements of the paper. The Conclusion and Limitations, for example, are in the appendix.\n- There is no mention of thorough hyperparametertuning and its results.\n\n- The resoning/phrasing of lines 25-29 is not clear to me. Could you please elaborate?\n- I am not sure, how the augmentation of Eq. (4) helps. Isnt it just a linear transformation, that introduces no new representation?\n- Could you please provide detaisl of how the hyperparameters (Section 4.3) were found and potentially how sensitive these methods are to changes? It appears that the true value of some of the algorithms might be obscured by improper hyperparameter settings. How many random seeds were used? It is unclear how robust the results are against random variations (see for example the second DNN plot of Figure 5, which appears to be somewhat of an outlier compared to the shorter and longer sequence length).\n- I noticed the following typos: Line 9: parameters'; Line 14: missing 'the', Line 106: integrated; Line 153; Line 156: a\n- Please keep heading capitalization consistent!"
},
{
"confidence": 3,
"rating": 5,
"review_id": "4ZG0CjEThU",
"review_text": "This paper introduces a new model for soft sensor tasks, the Approximated Orthogonal Projection Unit (AOPU), to enhance the stability and interpretability of regression networks. AOPU incorporates trackable and dual parameters, which are treated differently during the inference and training processes. AOPU truncates the gradient backpropagation at dual parameters, optimizes the trackable parameters updates, and enhances the robustness of training. The paper provides theoretical proof that AOPU is an approximation of both MVE and Natural Gradient Descent (NGD).Experimental results on two chemical process datasets demonstrat that AOPU outperforms other models in achieving stable convergence.\n\n1. The proposed method is novel and has a strong theoretical basis. The authors provide detailed proofs of theorems in the appendix.\n2. If the contents in the appendix are considered, this paper analyses the proposed AOPU from many aspects, and provide sufficient experimental results and ablation study to validate the advantage of AOPU.\n\n1. Due to the limitation of paper length, the contents in the formal paper is incomplete. Many important content like quantitative analysis and ablation study are put in the appendix. The formal contents also lacks a conclusion section. For the quality of publishing, I suggest submitting the paper to other platforms like a IEEE Transaction, where the paper length can be longer.\n\n2. The proposed method are not incorporated into DNN structures, therefore its expressive power is limited in more complicated tasks. Considering the requirements of industrial soft sensor tasks, this is not a critical flaw, but it still hinders AOPU from challenging AI applications.\n\nI do not have specific questions about the paper."
}
] | |
xqc8yyhScL | Is Programming by Example Solved by LLMs? | Programming-by-Examples (PBE) aims to generate an algorithm from input-output examples. Such systems are practically and theoretically important: from an end-user perspective, they are deployed to millions of people, and from an AI perspective, PBE corresponds to a very general form of few-shot inductive inference. Given the success of Large Language Models (LLMs) in code-generation tasks, we investigate here the extent to which LLMs can be said to have "solved" PBE. We experiment on classic domains such as lists and strings, and an uncommon graphics programming domain not well represented in typical pretraining data. We find that pretrained models are not effective at PBE, but that they can be fine-tuned for much higher performance, provided the test problems are in-distribution. We analyze empirically what causes these models to succeed and fail, and take steps toward understanding how to achieve better out-of-distribution generalization. Collectively these results suggest that LLMs make strong progress toward solving the typical suite of PBE tasks, potentially increasing the flexibility and applicability of PBE systems, while also identifying ways in which LLMs still fall short. | https://openreview.net/pdf/c6f0a06631003e14e1689d1d352541b2fc07a831.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "iZpqbEYUE9",
"review_text": "This paper investigates the effectiveness of Large Language Models (LLMs) in solving Programming-by-Example (PBE) tasks. Evaluations are conducted on three classic PBE domains including lists and strings, as well as a graphics programming domain. The findings suggest that while pretrained LLMs are not inherently effective for PBE, fine-tuning significantly enhances their performance on in-distribution tasks.\n\n- Thorough evaluation and detailed analysis.\n\n- Clear cases and illustrations.\n\n- Addressing the challenge of small datasets for fine-tuning LLMs.\n\n- In the experiments, there are no LLM competitors in the graphics domain. Any reasons?\n\n- Why are only FlashFill and LambdaBeam compared in the experiments of Figure 6?\n\n- The adaptation method used to improve out-of-distribution performance exposes the model to the test set content beforehand. Especially in string tasks, directly selecting the adaptation seed program from all test cases may be unfair. \n\n- The examples used in the experiments are relatively weak and do not closely resemble real-world programming tasks.\n\n- If the adaptation's seed program is not provided, even after fine-tuning, the out-of-distribution generalization ability of LLMs still appear to be quite weak.\n\nTypos:\nin abs: potentially increasingly the flexibility -> potentially increasing the flexibility\n\n- How does GPT-4 perform on the entire PROSE dataset?\n\n- Would the key factors that lead to the success or failure of LLMs differ across problems in three different domains?"
},
{
"confidence": 5,
"rating": 6,
"review_id": "cuz0XWgFoP",
"review_text": "The paper focuses on the classical task of Programming By Example (PBE): given some (input,output) pairs, the goal is to generate a program that \"fits\" these examples (producing the outputs when given the inputs), and also generalizes well to new inputs.\nThe paper evaluates mostly 7B and also 33B LLMs on three PBE tasks. The paper finds that finetuning these LLMs on these tasks further boosts their accuracy.\nThe paper also investigates out-of-distribution (OOD) generalization, and finds that OOD can be improved using a semi-supervised approach, where the model is given (input,output) pairs from the new domain (but not the desired program); then the LLM samples potential programs that solve the (input,output) pairs; if the program is correct (which can be validated) - it is added to the training set and the LLM is trained / finetuned again iteratively.\n\n1. The paper is very clear and easy to follow, and it contains many examples that are visualized nicely.\n1. The paper connects modern LLMs with the classical problem of Programming By Example (PBE)\n\n1. Undefined, non-scientific, message - the title of the paper is \"Is Programming by Example solved by LLMs?\". This title leads the paper (\"We investigate here the extent to which large language models pretrained on source code can solve PBE\"), but I think that it's an undefined question. What does \"solve\" mean? By construction, and according to the \"no free lunch\" theorem, PBE can never be \"solved\". So \"solving\" PBE just depends on the difficulty of the questions. Even if we could define \"solve PBE\", how would you measure it? Is 90% considered \"solved\"? Is 80% \"solved\"? This problem is further expressed in L214: \"absolute performance in LOGO remains poor\" - 16% accuracy is not \"poor\" when you do not compare it to anything. Any accuracy number below 100% is considered as \"unsolved\" as any other number, and 100% is not possible on a hard enough dataset (because of \"no free lunch\").\n\n1. Novelty - this is mostly an evaluation paper, that does not introduce any new approach or technique. Further, from the empirical evaluation, the answer to the question \"Is Programming by Example solved by LLMs?\" is, as expected, is \"somewhat, but not quite\": nothing in the empirical results was surprising or unusual: (a) finetuning LLMs on task-specific data works well; (b) semi-supervision on OOD data helps; (c) using Python as the output programming language works much better than the DSLs of classical work, because modern LLMs were trained on much more Python data than niche DSLs.\n\n1. The OOD claim is a bit weak, because only in the relevant section it is said that \"assuming we have access to problems drawn from the testing distribution\" (without their labels, but these labels can be sampled and validated).\n\n1. The paper compares its approach (a finetuned LLM) to classic symbolic, DSL-based (non-learning / non-neural) approaches several times throughout the paper, and speaks in favor of the LLM-based approach. 
This comparison to classic approaches is a bit of a strawman, since it is quite obvious that 33B LLMs are much more powerful than Flashfill (which is a paper from 2011) (Table 1).\nThe paper also mentions that:\n>We also find that the resulting system can cover a broader scope of problems than classic symbolic methods, owing to the use of a Turing-complete language, which, at least theoretically, allows learning any computable function.\n\nAnd I think that such claims completely miss the point: the reason that LLMs are better than classic symbolic methods is **not** the use Turing-complete languages. LLMs would have been better than classic symbolic methods even if the classic symbolic DSLs were turing-complete as well. The reason is that LLMs were trained on trillions of Python tokens.\n\n6. Another trivial claim: in Section 4.2, the authors find that \"posterior description length is more predictive than program size and prior description length\". Simplifying the paper's claim, without using words from probability, basically says: the perplexity of the desired output sequence is predictive of its accuracy on downstream tasks. I think that this claim is quite trivial, and is very common in practice in LLM training: measuring perplexity on a validation set is usually closely correlated with success on downstream tasks. Isn't this posterior iexactly what the model was *trained* to predict?\n\n1. Can the authors evaluate the baseline where the \"program\" is the LLM itself? That is, the LLM is trained/prompted to predict output to unseen inputs, without going through an explicit program. I am asking this specifically in light of Figure 4 - the examples there seem to be much more easy to solve directly with an LLM (in a few-shot prompting fashion, possibly with chain-of-thought), than to write an explicit program for.\n1. In L96 the authors write: \"we use samples from a generative model to train an inference network, but we do not further train the generative model itself\" - what does this exactly mean? What kind of model is each of the \"generative model\" and \"inference network\"? Which of them is a pretrained LLM? And why not further training the generative model itself?\n1. What exactly does the \"Search Budget (Num Samples)\" mean in the experimental section? Does that mean \"accuracy@k\" - sample $k$ different outputs, and consider the output as correct if *any* of these $k$ outputs is correct?\n1. In Figure 3 - What temperature was used, and what other temperatures did the authors explore, for their finetuned model and for the baselines such as GPT-4? Since evaluation depends on sampling of up to 200 outputs, the temperature might have a drastic effect on the success of each model. With a proper tuning of temperature, the order of curves in Figure 3 might be different.\n\n## Summary\nOverall, the paper is not wrong and is presented nicely, but its novelty is limited, I'm not sure about the validity of some of the results such as Figure 3, and most of its conclusions are expected. I am thus voting for a borderline reject."
},
{
"confidence": 3,
"rating": 5,
"review_id": "5Xm8kEDL7o",
"review_text": "The paper performs a relatively thorough study on using LLM for example-guided program synthesis tasks. The results presented in the paper suggest that LLMs make strong progress toward solving the typical suite of example-guided synthesis tasks, potentially increasingly the flexibility and applicability of PBE systems.\n\n- The PBE problem is interesting and well-motivated. Major papers in the field are well cited and referenced\n- Extensive amount of traditional datasets are being evaluated\n- The insights derived from experiments are somewhat valuable\n\n- CoT and other simple prompting methods are not evaluated\n- While there is an extensive amount of experiments and comparisons, we find that the outcome is relatively predictable.\n- While the writing is generally okay and easy to understand, multiple typos and mistakes found in the writing (also mentioned in questions). Please consider fixing them.\n- The LOGO visual examples are converted to an ASCII grid of characters (Fig. 8b). This might not be the most intuitive representation. Details about the transformation is not shown, such as how each number (0-9) is derived, the resolution of the ASCII grid, etc. With this design, it does not make sense for a non-fine-tuned LLM to solve the task. But technically you could still fine-tune GPT-3.5 with these inputs, but I guess it is okay to not include this experiment.\n\n- (Typo) line 150, there should be a space between (Tbl. 1,Fig. 3b)\n- (Typo) figure 6a “Sygus” -> “SyGuS” \n- (Grammar) last sentence of Figure 4 caption has grammar mistakes\n- (Grammar) last sentence of Table 1 caption has grammar mistakes\n- Appendix A.2 is empty\n- I see in the prompt the authors wrote “You are a CS professor”. As far as I know this might not be the perfect prompt for code generation (this is just a joke)."
},
{
"confidence": 3,
"rating": 7,
"review_id": "m3BuChX7kA",
"review_text": "This paper investigates whether the long-studied programming by example task is \"solved\" by large language models with Turing-complete languages like python.\nTheir evaluation is on three domains: lists, strings, and LOGO/Turtle graphics.\nThey evaluate three LLM-based approaches, including a self-instruct-like fine-tuning approach that tunes LLMs on synthetic labeled data, and an adaption approach assuming access to problems (not solutions) from the testing distribution.\nCompared to several symbolic, neurosymbolic, and LLM baselines, the proposed approaches perform better.\nThe analysis of the correlation between different aspects of the target program indicates that the fine-tuned model is beyond blind guess-and-check.\n\n1. The experiments are comprehensive, and the analysis of different predictors of model performance is helpful in understanding the extent to which LLMs solve PBE.\n2. The proposed methods make use of the fact that PBEs problems can be accurately synthesized using model-generated inputs and programs. The experiment results show that they are effective in solving in-domain problems and adapting out-of-distribution ones at test time.\n3. This paper answers some interesting questions regarding the role of LLMs for PBE and points out what researchers might work on in the future.\n\nContamination. As the authors acknowledged on Line 148, the problems could be in LLMs' pertaining data. I wonder if the authors have an idea of how much of a role such potential contamination plays in LLMs' superior performance. Is there anyway to rule out or minimize the impact of that confounder?\n\n1. How much does a turing-complete language help in solving PBE, excluding the fact that LLMs have seen lots of python code? Is the expressiveness of a turing-complete language itself helpful?\n2. How far can the adaption go? Right now the adaption discussed is still within the same category of problems (such as lists), I imagine a more general PBE system might be able to adapt to problems that are more different."
}
] | |
xpRUi8amtC | Scene Graph Generation with Role-Playing Large Language Models | Current approaches for open-vocabulary scene graph generation (OVSGG) use vision-language models such as CLIP and follow a standard zero-shot pipeline – computing similarity between the query image and the text embeddings for each category (i.e., text classifiers). In this work, we argue that the text classifiers adopted by existing OVSGG methods, i.e., category-/part-level prompts, are scene-agnostic as they remain unchanged across contexts. Using such fixed text classifiers not only struggles to model visual relations with high variance, but also falls short in adapting to distinct contexts. To plug these intrinsic shortcomings, we devise SDSGG, a scene-specific description based OVSGG framework where the weights of text classifiers are adaptively adjusted according to the visual content. In particular, to generate comprehensive and diverse descriptions oriented to the scene, an LLM is asked to play different roles (e.g., biologist and engineer) to analyze and discuss the descriptive features of a given scene from different views. Unlike previous efforts simply treating the generated descriptions as mutually equivalent text classifiers, SDSGG is equipped with an advanced renormalization mechanism to adjust the influence of each text classifier based on its relevance to the presented scene (this is what the term “specific” means). Furthermore, to capture the complicated interplay between subjects and objects, we propose a new lightweight module called mutual visual adapter. It refines CLIP’s ability to recognize relations by learning an interaction-aware semantic space. Extensive experiments on prevalent benchmarks show that SDSGG significantly outperforms top-leading methods. | https://openreview.net/pdf/a6ceccff861a11ddd07c003913a667e72407795a.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "LYf4k5zxgB",
"review_text": "This paper proposes SDSGG, a novel open vocabulary scene graph generation(OVSGG) algorithm that leverages the reasoning capability of a LLM to better determine the relations between objects in the scene. It achieves this goal by first prompting a LLM with multiple persona prompts to expand a simple relational predicative to a list of detailed visual descriptions, which are subsequently used to augment the classification process. It also introduces a novel mutual visual adapter, which better captures the interaction between subjects and objects. Experiments show that these proposed designs are effective.\n\n1. Incorporating a LLM to augment the predicate labels for scene graph generation is a novel idea. This paper provides meaningful insight to future works in this area.\n2. The experiment results (table 1-2) are strong, significantly outperforming previous methods.\n3. The authors conducted extensive ablation studies on various design elements.\n\n1. Prompting the LLM is a key element of the method, however some crucial details are missing. For example, how are the prompts constructed? While the author provided the prompt in Appendix Fig 5, it is unclear how the \"{scene content to be discussed}\" is generated. The author did show some examples throughout the paper, but they are not sufficient for the reader to understand the underlying process. In particular, in L167, the author showed example #1 \"Imagine there is an animal that is eating, \". In Fig 1c, there is example #2 \"Assuming that the scene has a man riding a horse.\" These two descriptions have two different granularity, as one only includes the generic concept of \"an animal that is eating\" while the other has specific class names \"man\" and \"horse\". The authors should clearly describe what information is included into the prompt, and discuss the scalability and cost of generating such prompts. I suppose if the prompts are like example #1, they can be generated offline based on predicative label sets. However, if the prompts are like example #2, they need to be generated for every possible triple of (subject, predicative, object) over the label space, or be generated online over possible objects in a scene. It is unclear which is the case.\n\n2. Additional discussions and experiments are required to justify some of the design choices. For example,\n\n 2.1 in eq 8, the loss of descriptions marked by possible coexistence is to make the prediction \"close to those of CLIP.\" (L255). If this is the case, why not directly use CLIP results for these possible coexistence descriptions at inference time (eq 2)?\n\n 2.2 some discussion is needed on if CLIP is good at classifying the generated descriptions. What are the nature of these descriptions and do they fit well with CLIP's pretraining pipeline (i.e. object-level image caption)? As a concrete example, can CLIP properly distinguish descriptions involving counting, such as \"with four legs\", and \"with two legs\", mentioned in the examples?\n\n 2.3 what happens if we discard \"possible coexistence\" descriptions and only use definite coexistence and contradiction? Table8 shows that it is ideal to have a low weight for \"possible coexistence\" loss. What happens if we set the weight to 0 and remove it at inference pipeline?\n\nSee weakness."
},
{
"confidence": 4,
"rating": 6,
"review_id": "3IH1ieRl2V",
"review_text": "This paper aims to solve the open-vocabulary scene graph generation problem. Previous methods mainly adopt scene-agnostic prompts as text classifiers. The authors argue that using the fixed text classifiers not only struggles to model visual relations with high variance, but also falls short in adapting to distinct contexts. Therefore, the authors propose the scene-specific description based OVSGG framework. They employ an LLM and ask it to play different roles. Besides, they design the mutual visual adapter to encode visual features. Extensive experiments show that the proposed method significantly outperforms top-leading methods.\n\nThe motivation and idea of this paper are innovative and interesting. Simply applying LLM to SGG cannot effectively reason the relationships. The authors consider employing the context and introducing multiple roles of LLM, which is shown to be effective for solving the OVSGG problem.\n\nBesides, the experiments are convincing. Plenty of ablation studies are provided.\n\nMy main concern is Computational Complexity: The proposed framework involves multiple stages, including generating descriptions, renormalizing them, and applying mutual visual adapters. This multi-step process could be computationally intensive, making it less practical for real-time applications or scenarios with limited computational resources.\n\nPlease read the weaknesses part."
},
{
"confidence": 5,
"rating": 6,
"review_id": "qro8MwccwF",
"review_text": "This paper starts by discussing methods for Open-vocabulary Scene Graph Generation (OVSGG) based on the CLIP model, highlighting the issue that current OVSGG methods do not differentiate between various scenes, which limits their effectiveness. The authors introduce SDSGG, a scene-specific description-based OVSGG framework that improves both the textual and visual parts, enhancing the model's open-vocabulary relationship prediction capabilities.\n\n1. The novelty of this paper lies in its analysis of the issues present in current OVSGG methods, leading to the conclusion that differentiating between scenes is necessary to enhance the performance of OVSGG. The proposed Scene-specific Descriptions are particularly insightful.\n2. The paper validates its findings on two datasets, VG and GQA, with experimental results showing significant performance improvements over previous state-of-the-art methods.\n\n1. The description in Sec3.1, Scene-specific Text Classifiers, of the paper is somewhat confusing. This confusion arises primarily because the text section includes multiple different naming conventions and several distinct modules. It is recommended that this section be rewritten to make it easier for readers to understand. Additionally, the terminology used in this section is inconsistent with that in lines 64~77, leading to comprehension difficulties.\n2. For the OVSGG method, it is suggested to also train the model on a full set of relations and compare its performance with conventional SGG methods to ensure that it achieves good performance under standard settings.\n3. Is the model robust to different base/novel splits? It is recommended to train and test the model on different base/novel dataset divisions to assess its robustness.\n4. It is advised to train and test the model on the PSG dataset as well.\n\n1. Regarding the selection of multiple personas, the ablation study shows that not using this approach results in a significant performance decrease. My question is, what exactly are the \"standard prompts\" referred to in line 329 of the document? What would be the effect if only one persona is used, and among the three personas mentioned in the document, which persona demonstrates the most significant performance?"
}
] | |
xojbzSYIVS | LLM-ESR: Large Language Models Enhancement for Long-tailed Sequential Recommendation | Sequential recommender systems (SRS) aim to predict users' subsequent choices based on their historical interactions and have found applications in diverse fields such as e-commerce and social media. However, in real-world systems, most users interact with only a handful of items, while the majority of items are seldom consumed. These two issues, known as the long-tail user and long-tail item challenges, often pose difficulties for existing SRS. These challenges can adversely affect user experience and seller benefits, making them crucial to address. Though a few works have addressed the challenges, they still struggle with the seesaw or noisy issues due to the intrinsic scarcity of interactions. The advancements in large language models (LLMs) present a promising solution to these problems from a semantic perspective. As one of the pioneers in this field, we propose the Large Language Models Enhancement framework for Sequential Recommendation (LLM-ESR). This framework utilizes semantic embeddings derived from LLMs to enhance SRS without adding extra inference load. To address the long-tail item challenge, we design a dual-view modeling framework that combines semantics from LLMs and collaborative signals from conventional SRS. For the long-tail user challenge, we propose a retrieval augmented self-distillation method to enhance user preference representation using more informative interactions from similar users. To verify the effectiveness and versatility of our proposed enhancement framework, we conduct extensive experiments on three real-world datasets using three popular SRS models. The results consistently show that our method surpasses existing baselines. The implementation code is available in Supplementary Material. | https://openreview.net/pdf/154f17c1f444becfea5d4859af7ffcf05d69ce31.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "pNStOGMM2r",
"review_text": "The paper presents a framework that integrates large language models (LLMs) into sequential recommendation systems (SRS) to tackle the long-tail challenges. The framework includes dual-view modeling, which combines semantic embeddings from LLMs with collaborative signals, and a retrieval-augmented self-distillation method to enhance user preference representation. The authors validate their approach through extensive experiments on three real-world datasets, demonstrating significant improvements over existing methods.\n\n1)\tThe dual-view modeling and retrieval-augmented self-distillation methods are novel contributions that enhance the performance of SRS.\n2)\tUtilizing LLMs to derive semantic embeddings for items and users adds a new dimension to the traditional collaborative filtering methods.\n3)\tThe extensive experimental evaluation, including comparisons with multiple baselines and ablation studies, strengthens the validity of the findings.\n4)\tThe paper provides comprehensive details on the methodology, including mathematical formulations and algorithmic steps, facilitating reproducibility.\n\n1) There is a risk that the semantic embeddings might overfit to the training data, especially if the textual descriptions are not diverse enough.\n2) The performance of the framework might be sensitive to the choice of hyper-parameters, which is not extensively explored in the paper.\n\n1) How do the authors mitigate the risk of overfitting with semantic embeddings, especially in scenarios with limited textual data?\n2) Can the authors elaborate on the hyper-parameter tuning process and its impact on the performance?"
},
{
"confidence": 4,
"rating": 7,
"review_id": "qjDnoRE8io",
"review_text": "This paper introduces a novel framework designed to address the long-tail challenges in sequential recommendation systems (SRS). By leveraging semantic embeddings from large language models (LLMs) and combining them with collaborative signals, the authors propose a dual-view modeling framework and a retrieval-augmented self-distillation method. This approach aims to enhance recommendations for both long-tail users and items without adding significant inference load. Extensive experiments on three real-world datasets demonstrate the effectiveness of the proposed framework.\n\n1.\tThe paper successfully integrates LLMs with SRS to address long-tail challenges, a novel approach that leverages the semantic understanding of LLMs while maintaining low inference costs.\n2.\tThe dual-view modeling framework effectively combines semantic and collaborative signals, providing a comprehensive enhancement for SRS.\n3.\tThis method innovatively uses interactions from similar users to enhance user preference representation, addressing the long-tail user challenge.\n4.\tThe proposed framework is model-agnostic and can be adapted to any sequential recommendation model, making it highly applicable in real-world scenarios.\n\n1.\tThe proposed dual-view and self-distillation methods add layers of complexity to the SRS, which may pose challenges in practical implementation.\n2.\tThe framework assumes a certain level of similarity in user interactions, which might not hold true for highly diverse user bases.\n3.\tImpact on Popular Items: While the focus is on long-tail items and users, the potential impact on recommendations for popular items is not thoroughly explored.\n\n1.\tCould the authors provide more details on the practical implementation challenges and how they can be mitigated?\n2.\tHow does the framework handle highly diverse user interactions where finding similar users may be challenging?\n3.\tBalanced Performance: What measures have been taken to ensure that the enhancement for long-tail users and items does not adversely affect recommendations for popular items?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "Czd8ymMRG3",
"review_text": "The paper addresses the challenges in sequential recommender systems (SRS), particularly the long-tail user and long-tail item issues, which complicate user experience and seller benefits in real-world applications. The authors propose the Large Language Models Enhancement framework for Sequential Recommendation (LLM-ESR) to mitigate these challenges. The LLM-ESR framework leverages semantic embeddings derived from large language models (LLMs) to enhance SRS without increasing inference load. To tackle the long-tail item problem, the framework employs a dual-view modeling approach that integrates semantics from LLMs with collaborative signals from traditional SRS. For the long-tail user issue, a retrieval augmented self-distillation method is introduced to improve user preference representation by utilizing more informative interactions from similar users.\n\n- The work includes extensive experiments, testing multiple aspects of the model's capabilities.\n \n- The approach is quite new. Recommender systems based on LLMs are a promising direction.\n\n- The paper does not sufficiently and deeply discuss existing work, making the motivation and core idea of the paper seem less convincing, and the innovation of the paper is also insufficient.\n \n- The baselines used in the experiments are limited.\n\n- In line 44, the authors mention that existing studies perform poorly due to \"ignorance of the true relationship between items.\" What is the true relationship between items, and how does it affect recommendations?\n \n- Although SASRec is a classic model, it is not reasonable to conclude that all SRSs perform poorly in long-tail scenarios solely based on SASRec. Have the authors analyzed why SASRec performs poorly in long-tail scenarios? Do models that use other techniques specifically for long-tail scenarios have this problem? What are their limitations?"
}
] | |
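
To make the dual-view design described in the LLM-ESR abstract above concrete, here is a minimal PyTorch sketch. The class name `DualViewItemEmbedding`, the 64-dimensional width, and the concatenation fusion are illustrative assumptions, not the paper's actual implementation; the key idea it instantiates is that LLM-derived semantic embeddings are cached and frozen, so no extra LLM inference is needed at serving time.

```python
import torch
import torch.nn as nn

class DualViewItemEmbedding(nn.Module):
    """Combine a frozen LLM-derived semantic view with a learned collaborative view."""
    def __init__(self, llm_emb: torch.Tensor, dim: int = 64):
        super().__init__()
        # Semantic view: cached LLM text embeddings, frozen so no LLM call at inference.
        self.semantic = nn.Embedding.from_pretrained(llm_emb, freeze=True)
        self.adapter = nn.Linear(llm_emb.size(1), dim)  # project to the SRS dimension
        # Collaborative view: standard trainable ID embeddings.
        self.collab = nn.Embedding(llm_emb.size(0), dim)

    def forward(self, item_ids: torch.Tensor) -> torch.Tensor:
        sem = self.adapter(self.semantic(item_ids))
        col = self.collab(item_ids)
        return torch.cat([sem, col], dim=-1)            # dual-view representation

# Toy usage: 100 items whose cached LLM embeddings are 768-d.
emb = DualViewItemEmbedding(torch.randn(100, 768))
print(emb(torch.tensor([3, 42])).shape)                 # torch.Size([2, 128])
```

The retrieval-augmented self-distillation component would sit on top of such embeddings, using sequences of retrieved similar users as additional supervision for long-tail users.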
xoc4QOvbDs | Evaluate then Cooperate: Shapley-based View Cooperation Enhancement for Multi-view Clustering | The fundamental goal of deep multi-view clustering is to achieve preferable task performance through inter-view cooperation. Although numerous DMVC approaches have been proposed, the collaboration role of individual views has not been well investigated in existing literature. Moreover, how to further enhance view cooperation for better fusion still needs to be explored. In this paper, we first consider DMVC as an unsupervised cooperative game where each view can be regarded as a participant. Then, we introduce the Shapley value and propose a novel MVC framework termed Shapley-based Cooperation Enhancing Multi-view Clustering (SCE-MVC), which evaluates view cooperation with game theory. Specifically, we employ the optimal transport distance between fused cluster distributions and single-view components as the utility function for computing Shapley values. Afterwards, we apply Shapley values to assess the contribution of each view and utilize these contributions to promote view cooperation. Comprehensive experimental results well support the effectiveness of our framework when adopted by existing DMVC frameworks, demonstrating the importance and necessity of enhancing the cooperation among views. | https://openreview.net/pdf/40a1d357eea0e19182fb452e305504eaa3502b19.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "mDV9kCmzAS",
"review_text": "This paper studies multi-view clustering and seeks to investigate the view cooperation issue. The authors consider DMVC as an unsupervised cooperative game and regard each view as a participant. Compared with the existing methods, this consideration is new and interesting. Based on the novel idea, the authors proposed SCE-MVC, a novel shapley-based cooperation enhancing multi-view clustering method. The paper is well-organized. The experiments are convincing.\n\n1. The paper proposes a new point also an interesting point for multi-view clustering tasks, i.e., considering the multi-view collaboration as a cooperative game. \n\n2. The experiments are sufficient and convincing. The authors validate the method from many aspects. The proposed SCE-MVC obtains much better performance on six diverse datasets.\n\n1. Figure 2 is confusing. The specific structure of View Cooperation Enhancing Module is not clearly presented.\n\n2. There are many formulas and symbols. It is suggested to add a notation table.\n\n3. Although the authors try to explain model (1), it is still difficult to understand Shapley Value from the model. In addition, many variables are not clearly explained. The authors should present more information about the model and explain all variables used in this model, such as S_i, {i}, s\\{i}, etc.\n\nThe article involves rich theoretical and mathematical knowledge. I have a question to the designation of the model: Which design is the key to improving model performance?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "9Y3UiFwJHL",
"review_text": "The author introduces a Shapley-based cooperation enhancement framework aimed at fostering collaboration among different views. The SCE-MVC method incorporates cooperative game theory, considering each view as a participant in the model and assessing their contributions using the Shapley Value.\n\nViewing each view as an individual player within game theory represents a fresh perspective in multi-view clustering. Also, enhancing clustering performance through balancing view contribution is both well-founded and innovative.\n\n1. Using the SCE module in an alignment-based framework only provides a marginal improvement to the model. Does this imply that the SCE module is ineffective in the alignment-based framework? \n\n2. The view contributions of alignment-based method is much balanced than view contributions of joint methods. Does this imply that the alignment-based method is much better than the joint method? It's not reasonable since the clustering performance of alignment-based methods may not necessarily be better than that of joint methods.\n\n3. Is the complexity of computing Shapley values truly O(n!)? When dealing with a larger number of views, can this evaluation framework still be utilized for computation?\n\n4. Are the loss functions L in Eqs (15) and (16) on page 6 the same? If so, there is a problem of inconsistent dependent variables. In addition, $D_ij$ in Eq. (9) is a scalar and should not be bolded.\n\nThe alignment-based method proposed in Theorem 1 will make the contribution values of several views the same. Combined with the experimental results in Table 2, does this mean that the end point of the view contribution optimization proposed in this paper is contrastive learning? If not, please explain in detail the difference between the method in this paper and the contrastive learning method?"
},
{
"confidence": 5,
"rating": 6,
"review_id": "x4fCuSG5ex",
"review_text": "The study centers on improving task performance via deep multi-view clustering (DMVC) and fostering cooperation among different views. Specifically, the study evaluates view contributions, emphasizing the significance of strengthening cooperation among views.\n\nConsidering multi-view tasks from a collaborative standpoint represents a novel approach, with the paper's motivation being notably fresh. Moreover, the paper elucidates potential contribution imbalances in the joint method and addresses them through the SCE method, thereby enhancing cooperation among views.\n\nWhen dealing with datasets comprising more than two views, such as three views, how can one assess whether the contribution of the views has become more evenly distributed after employing SCE? While the paper visually presents the contributions of the views, could a quantitative method be provided for this evaluation?\n\nIn the unsupervised multi-view scenario, what is the physical meaning of the contribution value of each view proposed in this paper? What is the relationship between the quantitative value of the view's contribution and the clustering performance of a single view ?"
},
{
"confidence": 5,
"rating": 7,
"review_id": "gQ7xYB7dex",
"review_text": "This research merges game theory with multi-view clustering by introducing the Shapley-based Cooperation Enhancing (SCE) approach. It features a module to systematically evaluate each view's contribution. The approach promotes view cooperation by adjusting the training convergence rate of view parameters based on their contributions. Extensive experiments on various datasets demonstrate the method's effectiveness when applied to different MVC frameworks.\n\n1) The paper integrates the Shapley value from game theory into DMVC, allowing for precise assessment of each view's contribution.\n2) Theoretical analysis is thorough, with clear and intuitive figures.\n3) The manuscript is well-organized and clearly written.\n\nThe article categorizes DMVC into alignment-based and joint methods. What criteria were used for this classification? Furthermore, only one DMJC method is used as a representative for joint methods.\n\n1) Figure 3(a) indicates that the method does not equalize the contribution value of each view. Why do the contribution values become identical after adding the SCE module to the comparison-based method in Table 2? Please provide a detailed discussion.\n2) What criteria were used to classify DMVC?\n3) Is DMJC representative of joint methods? Have other joint methods employed similar frameworks?"
},
{
"confidence": 5,
"rating": 6,
"review_id": "PuinxfyhQm",
"review_text": "This paper firstly considered DMVC as an unsupervised cooperative game where each view can be regarded as a participant. Then, the authors introduced the shapley value and propose a novel MVC framework termed Shapley-based Cooperation Enhancing Multi-view Clustering (SCE-MVC), which evaluates view cooperation with game theory. In summary, this paper was well written with obvious superiority.\n\n-- A MVC framework was designed that utilizeD game theory and Shapley values to evaluate and elevate inter-view cooperation. \n-- The experiments were sufficient, and the analysis of the experimental results was adequate.\n\n-- In this paper, why utilize $\\phi_i$ to measure the contribution of views instead of the view weight $w_i$? The article's explanation on this is not clear enough, and there is a lack of experiments to demonstrate the relationship between $\\phi_i$ and $w_i$.\n\nI have the following questions:\n-- What will happen if the view contribution is push away from each other? \n-- Are there scenarios where narrowing the contribution between views fails to enhance the effectiveness of multi-view clustering?"
}
] | |
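
As a concrete illustration of the Shapley-based view evaluation in the SCE-MVC abstract above, the sketch below computes exact Shapley values for a small set of views with a pluggable utility function; the toy lookup table stands in for the paper's optimal-transport-based utility, and all names are hypothetical. Note that the exact computation enumerates O(2^n) coalitions per view rather than n! permutations, which is cheap for the handful of views typical in MVC (this also speaks to one reviewer's complexity question).

```python
from itertools import combinations
from math import factorial

def shapley_values(views, utility):
    """Exact Shapley value of each view; utility maps a set of views to a score.
    Enumerates all coalitions, fine for a small number of views."""
    n = len(views)
    phi = {v: 0.0 for v in views}
    for v in views:
        others = [u for u in views if u != v]
        for k in range(n):
            for coal in combinations(others, k):
                s = frozenset(coal)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[v] += weight * (utility(s | {v}) - utility(s))
    return phi

# Toy utility standing in for the paper's OT-distance-based one: view "a" carries
# most of the clustering signal, "b" some, "c" none.
toy = {frozenset(): 0.0, frozenset("a"): 0.7, frozenset("b"): 0.3, frozenset("c"): 0.0,
       frozenset("ab"): 0.9, frozenset("ac"): 0.7, frozenset("bc"): 0.3, frozenset("abc"): 0.9}
print(shapley_values(["a", "b", "c"], lambda s: toy[frozenset(s)]))
```

The resulting contributions sum to the grand-coalition utility (0.9 here), so a redundant view such as "c" receives a Shapley value of zero.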
xoCFd1WKpf | Unified Lexical Representation for Interpretable Visual-Language Alignment | Visual-Language Alignment (VLA) has gained a lot of attention since CLIP's groundbreaking work.
Although CLIP performs well, the typical direct latent feature alignment lacks clarity in its representation and similarity scores.
On the other hand, lexical representation, a vector whose element represents the similarity between the sample and a word from the vocabulary, is a natural sparse representation and interpretable, providing exact matches for individual words.
However, lexical representations are difficult to learn due to the lack of ground-truth supervision and false-discovery issues, and thus require complex designs to train effectively.
In this paper, we introduce LexVLA, a more interpretable VLA framework by learning a unified lexical representation for both modalities without complex design.
We use DINOv2 as our visual model for its local-inclined features and Llama 2, a generative language model, to leverage its in-context lexical prediction ability.
To avoid false discovery, we propose an overuse penalty that prevents the lexical representation from falsely activating meaningless words too frequently.
We demonstrate that these two pre-trained uni-modal models can be well aligned by fine-tuning on a modest multi-modal dataset, avoiding intricate training configurations.
On cross-modal retrieval benchmarks, LexVLA, trained on the CC-12M multi-modal dataset, outperforms baselines fine-tuned on larger datasets (e.g., YFCC15M) and those trained from scratch on even bigger datasets (e.g., 1.1B data, including CC-12M).
We conduct extensive experiments to analyze LexVLA.
Codes are available at https://github.com/Clementine24/LexVLA. | https://openreview.net/pdf/3206cbc8f56e0bf6c85cccc342384845c2232940.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "JsgFCDf47f",
"review_text": "The authors propose a method based on lexical representation for Visual-Language Alignment (VLA). The method relies on aligning two strong unimodal models, namely DINOv2 for the visual modality and Llama 2 for the text modality. Each backbone is fine-tuned with a few adapters or additional layers. The two modalities use separate codebooks mapping to a joint vocabulary. The authors also propose an overuse penalty to limit the excessive activation of irrelevant tokens. Finally, the authors introduce the PatchDis metric to measure patch-level alignment. Evaluation on zero-shot cross-modal retrieval datasets shows state-of-the-art performance of the method with the compared baselines. Additional experiments on the patch-level representation and sparsity showing the effectiveness of the method are also reported.\n\n- The authors proposed an effective and interpretable Lexical Representation approach for Visual-Language Alignment\n- The proposed method is described clearly\n- The experimental results show state-of-the-art performance in comparison to the baseline selected\n\n- The vocabulary is based on the Llama tokenizer which, as stated in the limitations, may split words into meaningless sub-word tokens and may also lack longer relevant words.\n- The latent baselines for zero-shot cross-modal retrieval do not include recent methods such as BEiT-3 [Wang, Wenhui, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal et al. \"Image as a foreign language: Beit pretraining for vision and vision-language tasks.\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19175-19186. 2023.]\n- One main difference with the compared methods could be the use of the DINOv2 visual backbone and the Llama 2 textual backbone, it is possible the proposed method benefits from these strong backbones. All methods' visual and text backbones (and their potential pretraining) should be discussed in detail to enable the readers to properly judge the merit of the proposed method\n\n- Have the authors explored a simpler approach of just selecting the nouns, adjectives, and non-auxiliary verbs in a caption instead of the LLM-based lexical predictor? How many keywords are extracted by the LLM on average per caption? Does it vary with the length of the caption?\n- Eq (3), what is x with any index?\n- Table 1: it would be good to also indicate the amount of (unimodal) pretraining data (if any) used for each method e.g. the amount of data used for DINOv2 and LLama 2 for the proposed method. What are the test splits used for this experiment? Commonly, results are reported based on the splits in [Karpathy, Andrej, and Li Fei-Fei. \"Deep visual-semantic alignments for generating image descriptions.\" In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3128-3137. 2015.]. If these are not used it would be good to use them as well.\n- Figure 3: it would be good to provide the class <-> color mapping here.\n- Figure 4: bottom row, how were the local patches selected?\n- Figure 5: what does the vertical black dotted line represent? How were the different sparsity level selected?\n- The authors mention in the limitations that their “lexical vocabulary based on the Llama 2’s tokenizer (...) splits a word into several sub-word tokens.” does that also mean that some rather rare long words would not appear in the vocabulary? Have the authors studied what are these missing words? 
Further down the authors state “Given that random initialize a word-level vocabulary and additionally learn a projector from sub-word tokens to word-level tokens works poorly, we regard designing a word-level vocabulary that still benefit from the LLMs as a future work.”, it seems the author did conduct some experiments towards that. Even if the results were not conclusive it would be interesting to share what was tried and what was the performance.\n\nTypos etc:\n- p2-l52: missing space “LexVLAto”"
},
{
"confidence": 4,
"rating": 5,
"review_id": "OQXNj0iejA",
"review_text": "The paper proposes LexVLA, a more interpretable VLA framework that learns a unified lexical representation for both modalities without complex design.\nLexVLA uses DINOv2 as the visual model and Llama 2 as the language model, proposing an overuse penalty to avoid false discoveries.\nLexVLA outperforms baselines on cross-modal retrieval benchmarks, even when fine-tuned on a modest dataset.\nExtensive experiments were conducted to analyze LexVLA's performance.\n\n1. The paper is easy to follow.\n2. The framework does not require complex design or training configurations, making it more accessible and efficient.\n3. LexVLA outperforms baselines on cross-modal retrieval benchmarks, even when compared to models trained on larger datasets.\n4. Ablation demonstrates the decision choice and effectiveness of proposed components.\n\n1. I can't quite get the novelty of this work. The lexical representation mentioned in the paper is somehow a way to select important information and then map it to the code book. However, the codebook strategy was explored [1]. Especially the visual part, where does the concept of Lexical come in? Can the author elaborate more on this?\n2. In Table 1, the improvement is pretty limited in the bottom block compared to using CLIP in the last and first blocks. It makes readers question whether the performance was gained by the DINOv2 representation.\n3. The alignment was tested on only one task, it will be more interesting to test on other multimodal tasks such as zeroshot classification, or even grounding since it has DINOv2 representation.\n\n\n[1] Duan, Jiali, et al. \"Multi-modal alignment using representation codebook.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\nPlease address the questions in the weakness."
},
{
"confidence": 3,
"rating": 6,
"review_id": "RijXMyLjBF",
"review_text": "This paper presents LexVLA, a vision language alignment method integrating a pretrained vision model and a pretrained language model. To retain the original capabilities of pretrained single-modal models, it adopts a unified lexical representation with unique codebooks. Moreover, the vision model is tuned with a projector, and the text model is tuned with LoRA. A metric for patch-level alignment is proposed to evaluate interpretability. Experiments are conducted on retrieval benchmarks.\n\n- The paper is well-written and easy to follow.\n- The content is rich. An architecture, an objective, and a metric are proposed.\n- Inserting lightweight components to tune vision and language models to learn lexical representation while refraining from original capability degradation is intuitive. \n- The LexVLA can be applied to various architectures.\n- Experiments are conducted on multiple benchmarks.\n\n- Even though a new metric is proposed, the effectiveness of its reflection on interpretability is not verified quantitatively or qualitatively.\n\n- How accurately or reliably does the proposed PatchDis metric evaluate/reflect the interpretability of patch-level visual lexical representations?"
}
] | |
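
To illustrate what an overuse penalty over lexical activations can look like (the LexVLA abstract above names the penalty but not its exact form), here is a minimal PyTorch sketch in the style of the FLOPs regularizer used in sparse lexical retrieval; the function name and the squared-mean form are assumptions for illustration, not the paper's actual loss.

```python
import torch

def overuse_penalty(lex: torch.Tensor) -> torch.Tensor:
    """Penalize vocabulary tokens that activate for many samples in the batch.
    lex: (batch, vocab) tensor of non-negative lexical activations."""
    avg = lex.mean(dim=0)          # per-token mean activation over the batch
    return (avg ** 2).sum()        # the squared mean grows fast for over-used tokens

batch = torch.rand(8, 32000)       # e.g. activations over a Llama-2-sized sub-word vocabulary
loss = overuse_penalty(batch)
print(loss.item())
```

Because the penalty is quadratic in the per-token mean, a token that fires for every sample (a "meaningless" frequent activation) is punished far more than the same total activation spread across different tokens.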
xnmm1jThkv | Hybrid Top-Down Global Causal Discovery with Local Search for Linear and Nonlinear Additive Noise Models | Learning the unique directed acyclic graph corresponding to an unknown causal model is a challenging task. Methods based on functional causal models can identify a unique graph, but either suffer from the curse of dimensionality or impose strong parametric assumptions. To address these challenges, we propose a novel hybrid approach for global causal discovery in observational data that leverages local causal substructures. We first present a topological sorting algorithm that leverages ancestral relationships in linear structural causal models to establish a compact top-down hierarchical ordering, encoding more causal information than linear orderings produced by existing methods. We demonstrate that this approach generalizes to nonlinear settings with arbitrary noise. We then introduce a nonparametric constraint-based algorithm that prunes spurious edges by searching for local conditioning sets, achieving greater accuracy than current methods. We provide theoretical guarantees for correctness and worst-case polynomial time complexities, with empirical validation on synthetic data. | https://openreview.net/pdf/0bca5f701021ef49d152b7de7f75842cb0a63c54.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "33nLX89P0I",
"review_text": "This paper studies structure learning problem for additive noise model (ANM) in both linear and nonlinear settings. It proposes a hybrid constraint based approach to learn the DAG by leveraging the local ancestral relationships. The algorithm consists of ordering search and edge discovery these two steps. Correctness is shown and simulation is conducted to compare with other approaches.\n\n- Though ANM is shown to be identifiable for a long time, e.g. by RESIT, the high computational complexity and hardness in nonparametric regression and CI tests stand as roadblock. The finer analysis and exploitation of local structure in the proposal show potential to tackle this task efficiently;\n- The introducing of the proposed method is well-written and easy to follow for researchers working in relevant area.\n\n- The main contribution of this work is the exploitation of local structure to reduce the number of nonparametric regression and CI tests. However, despite of the quick discussion below thm 3.7 and 4.5, there is no explicit and formal statement on these to emphasize the contribution, and also comparison with others, e.g. RESIT. \n- The experiments are preliminary. More setups should be considered to demonstrate the superiority of proposal: e.g. different graph types like scale-free graphs, different number of edges, different noise, recovery criterion like F1 for linear setting, more benchmarks like CAM, GSGES, etc. \n- See Questions.\n\n- As the main contribution of the paper, why are the runtime results in the appendix? It is also spurious that the runtime for linear case is slower than the benchmarks in Figure 7; runtime for d=12 is faster than d=8 in Figure 8. There does not seem to be significant improvement empirically;\n- Since theoretically the number of nonparametric regression and CI tests are reduced, is it possible to establish some statistical guarantee and sample complexity dependence on the sparsity, e.g. in-degree?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "othmcmjSNW",
"review_text": "In this paper, the authors present a causal discovery method by firstly determining the order of the causal variables, then determining the existence of edges between any two variables. The experimental results demonstrate the superiority of the proposed method compared to relevant methods.\n\nI thank the authors for their detailed clarifications, which address most of my concerns. I increase my score to 5.\n\n------------------------------------\n\nDespite the theoretical results simple, the idea and the method are interesting and somewhat novel. \n\nThe theoretical results seem sensible.\n\n1. Lack of necessary discussions: I think there are some similar idea in the literature, such as [1], where they maintain the order of the variables. What is the advantage of this method on [1]? The proposed method should be compared to [1] as well. \n[1] L. Solus, Y. Wang, C. Uhler, and L. Matejovicova. Consistency guarantees for permutation based causal inference algorithms. ArXiv preprint arXiv: 1702.03530 (2017)\n\n2. Lemma 4.1 is confusing. In the condition, it is required that $x_i$ is one of the parents of $x_j$. Why is it possible that $x_i$ and $x_j$ are not causally related?\n\ntypo: \nLine 202: the\n\nCould the authors further elaborate on Line 205 - 207? It is not quite clear to me why Alg. 1 cannot be used for the non-linear case.\n\nI am happy to adjust my score according to the authors' rebuttal."
},
{
"confidence": 4,
"rating": 6,
"review_id": "g3Ypf1qVWQ",
"review_text": "The paper presents theoretical results about extensions of the partial order induced by a causal DAG and uses these results to propose new constraint-based algorithms for ANMs.\n\n**Edit**: increased rating from 3 to 5, soundness from 1 to 2, and contribution from 2 to 3.\n\n**Edit 2**: increase rating from 5 to 6, after the authors fixed $A_{top}$ calculation; solid paper, but I think the impact of a hybrid causal discovery method like this is limited\n\n- Takes a simple idea (which seems original but also somewhat obvious from an order theory perspective) and turns it (creatively, originally) into causal discovery algorithms with contisency guarantees, broad applicability (ANMs), and good identifiability (specific DAG instead of MEC)\n- Very clearly written, as far as grammar, organization, motivation (but importantly, not mathematical notation)\n- Based on the theoretical results, the algorithms have potential to be very significant to the field of causal discovery\n\n1. The main (and fatal) weakness is the claims of strong performance in the abstract combined with the inadequate experimental results:\n 1. the abstract makes a claim of \"achieving greater accuracy than current methods\", but then the limited experiments only compare on simulated data (rather than real) to a few closely related algorithms (as opposed to a selection of classic or state-of-the-art methods, such as PC or GRaSP) in settings the authors have already explained are challenging for existing algorithms (very sparse DAGs, rather than a range of sparsities), and even then the proposed algorithm doesn't seem to do especially well. It also seems the NHTS algorithm is missing from the experiments.\n2. A smaller but nonetheless important weekness is notation that contradicts mathematical conventions, making the writing unecessarily difficult:\n 1. consulting introductory texts on partial orders and order theory would help clear up some of the confusion. For example, a topological sort is conventionally a linear extension of a partial order, making the introduced terms \"linear topological sort\" and \"hierarchical topological sort\" confusing. Replacing the former introduced term with just \"topological sort\", \"linear order\", or \"total order\", and replacing the latter introduced term with something that more clearly indicates it is 'between' a partial order and a linear order (i.e., it extends the partial order, but not completely into linear order), would be more natural/conventional and easier to understand.\n 2. the authors seem to use $\\mapsto$ to indicate the domain and image of the ordering functions, but $\\mapsto$ conventionally denotes how a specific element in the domain is mapped to a specific element of the image, hindering precise and easy comprehension.\n 3. other notation in Definition 2.1, such as inconsistent/unexplained indexing of $\\pi$ make the definition harder to understand/not rigorous\n 4. it's unclear what the difference between $x_j \\dashrightarrow x_i$ (called a directed path) and $x_j \\dashrightarrow \\ldots \\dashrightarrow x_k$ (called a front door path) is.\n\n1. Aren't there just $d \\choose 2$ (i.e., number of entries above the diagonal of a corresponding adjacency matrix) possible edges in a DAG for a given linear order, rather than the $d^2$ claimed on line 303?\n2. Suggestion: Include more explicit theorem statements and proofs for the complexity results."
},
{
"confidence": 3,
"rating": 7,
"review_id": "DGgJ8Qi71v",
"review_text": "The paper mainly focuses on proposing efficient search algorithms for finding the hierarchical sort ordering (linear topological sort) of variables. As mentioned in Section 5, finding such hierarchical orders can significantly improve the efficiency of causal discovery of edges, making the algorithm tractable (traditional algorithms such as PC are exponential). The paper studies two cases: linear (LiNGAMs) and non-parametric, where a complete algorithm based only on path analysis is developed for the linear case, and a combination of path analysis and layer-wise search is developed for the non-parametric case. Both algorithms improve the discovery of hierarchical order.\n\nThe paper is well structured and clearly written. The theoretical contributions, including the causal path analysis and corresponding algorithms, are interesting and also important in practice as can be told from the analysis of computational complexity. All results are properly formulated as definitions and theorems and proofs are included in the appendix. Experiments are also conducted and their results are discussed in depth in Section 6. In general, I enjoyed reading it.\n\n- In general, I suggest adding more examples to demonstrate the procedure of algorithms, probably for NHTS (Algorithm 2) so that we can see a clear cut between the two stages (root-identification and layer identification).\n- While the authors touched a bit at the beginning of Section 4, non-experts may benefit more if the paper could include additional details about the difference between the linear and non-linear cases (especially how they affect conditional independencies if any). \n- For definition 2.1, it will be great to provide a hierarchical topological sort that cannot be trivially converted to a linear topological sort; that is, we cannot simply add more layers to a hierarchical sort to obtain a linear topological sort.\n- Lemma 4.1 is a little confusing to me: if $x_i$ is a parent of $x_j$, how are PP1 and PP4 possible? $x_i$ must be a direct cause of $x_j$, right? Also, when you say \"$x_i$ and $x_j$\" are not causally related, does it mean that there is no directed edge from $x_i$ to $x_j$ or no directed path? Does \"active path\" mean any unblocked dependency path (backdoor or frontdoor)?\n\n- I'm curious if it's all the results can also be explained using independencies (d-separations) instead of regressions? This allows us to think only in terms of graphs. I guess regressions in the non-parametric setting are equivalent to d-separations, how about the linear case? Are there any independencies that hold in the linear case but not in the non-parametric case?\n- For algorithm 1 (LHTS), is stage 2 really needed? It seems that stage 2 is a special case of stage 3 when mutual ancestors = $\\emptyset$.\n- For experiments, the paper mentions a tradeoff between accuracy and encoded causal information. Would it be more fair to restrict the ordering length (say limit it to some length $k$) and compare the ordering accuracy?"
}
] | |
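
The object that the ordering stage of the paper above aims to recover — a hierarchical (layered) topological sort, as opposed to a single linear order — can be made concrete with a small sketch. Here the DAG is given directly, so the sketch only shows the target structure; recovering these layers from data via regressions and conditional-independence tests is the paper's actual contribution, and the function name is hypothetical.

```python
def hierarchical_layers(adj):
    """Group nodes of a DAG into layers: layer k holds nodes whose parents all
    lie in earlier layers. A linear topological sort is any flattening of this,
    so the layered form encodes strictly more ancestral information."""
    parents = {v: {u for u in adj if v in adj[u]} for v in adj}
    placed, layers = set(), []
    while len(placed) < len(adj):          # terminates because the input is acyclic
        layer = sorted(v for v in adj if v not in placed and parents[v] <= placed)
        layers.append(layer)
        placed |= set(layer)
    return layers

# Toy DAG: x1 -> x2 -> x4 and x1 -> x3 -> x4 (adjacency maps node -> children).
dag = {"x1": {"x2", "x3"}, "x2": {"x4"}, "x3": {"x4"}, "x4": set()}
print(hierarchical_layers(dag))            # [['x1'], ['x2', 'x3'], ['x4']]
```

The middle layer [x2, x3] is exactly the kind of incomparable pair a linear sort would order arbitrarily, which is why the hierarchical form is the more compact target.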
xjyU6zmZD7 | Take A Shortcut Back: Mitigating the Gradient Vanishing for Training Spiking Neural Networks | The Spiking Neural Network (SNN) is a biologically inspired neural network infrastructure that has recently garnered significant attention. It utilizes binary spike activations to transmit information, thereby replacing multiplications with additions and resulting in high energy efficiency. However, training an SNN directly poses a challenge due to the undefined gradient of the firing spike process. Although prior works have employed various surrogate gradient training methods that use an alternative function to replace the firing process during back-propagation, these approaches ignore an intrinsic problem: gradient vanishing. To address this issue, we propose a shortcut back-propagation method in the paper, which advocates for transmitting the gradient directly from the loss to the shallow layers. This enables us to present the gradient to the shallow layers directly, thereby significantly mitigating the gradient vanishing problem. Additionally, this method does not introduce any burden during the inference phase.
To strike a balance between final accuracy and ease of training, we also propose an evolutionary training framework and implement it by introducing a balance coefficient that dynamically changes with the training epoch, which further improves the network's performance. Extensive experiments conducted over static and dynamic datasets using several popular network structures reveal that our method consistently outperforms state-of-the-art methods. | https://openreview.net/pdf/6dfff0aec6f93d33ffa638873f008d9ca6857190.pdf | [
{
"confidence": 5,
"rating": 6,
"review_id": "ofpAsTv15H",
"review_text": "The paper trains SNNs using surrogate gradient learning. In order to mitigate the gradient vanishing problem, the paper proposed the Shortcut Back-propagation method and utilizes an evolutionary algorithm framework to balance the training of shallow and deep layers. The effectiveness of the proposed method is demonstrated through many experiments.\n\n1)\tThe shortcut backpropagation method and the evolutionary training method are novel. \n2)\tThis paper can well handle the gradient vanishing problem.\n3)\tThe paper is well-written.\n4)\tThe paper shows the effectiveness of the proposed methods through many experiments.\n\n1)\tThe author should add more mathematical proof to demonstrate that the mentioned residual structure in SNN is not very effective? The introduction of shortcut branches might add complexity to the network architecture, which could affect the interpretability of the model.\n2)\tSome recent SOTA works should be compared with too. The authors can also compare with paper [1][2] which obtains really good results by MS-ResNet-18 backbone with 1 or 6 timesteps on large imageNet datasets.\n\n\n[1]Yao M, Zhao G, Zhang H, et al. Attention spiking neural networks[J]. IEEE transactions on pattern analysis and machine intelligence, 2023.\n\n[2] Qiu X, Zhu R J, Chou Y, et al. Gated attention coding for training high-performance and efficient spiking neural networks[C]. Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(1): 601-610.\n\n1)\tWhy are the bolded values not always the best values?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "EbK7UH74sT",
"review_text": "This paper proposes a simple method to mitigate the gradient vanishing problem in the training of SNNs. This method introduces some early classification heads (including a pooling layer and a fully connected layer) to the SNN. Because the gradients from the early classification heads pass fewer surrogate gradients, this method aids the SNN in addressing the gradient vanishing problem. The authors also suggest an evolutionary training framework that changes the loss function to gradually adjust how important early classification head outputs are during the training phase. The proposed methods are only alive in the training phase and will not affect the inference phase of SNN.\n\nThis proposed method partially alleviates the gradient vanishing problem in the training of SNN with surrogate gradients. Furthermore, the method has demonstrated excellent performance across multiple datasets. The Short-BP method can be easily integrated into the SNN training process without introducing excessive computational overhead. Furthermore, the evolutionary training framework effectively mitigates the short-BP problem, which may make the network pay more attention to early classification heads than the final SNN output. The writing in this paper is clear and concise.\n\n1. In this paper, the author only demonstrates a change in gradient distribution in the first layer. Presenting the changes in the men and variance of the absolute gradients for each layer would provide a more direct proof of their argument.\n2. The author should provide a more detailed mathematical proof to explain why the use of surrogate gradients in deep SNN would lead to gradient vanishing, as well as why direct use of residual learning will not address the problem.\n3. The author has not demonstrated their method on much deeper network architectures where the gradient vanishing problem is more severe.\n\n1. How is the network divided into multiple blocks? Are there any additional rules for the insertion position and number of early classification heads?\n2. The results of using short-BP to train ResNet 18 in Table 1 and Table 2 are quite different. There may be a transcription error here."
},
{
"confidence": 5,
"rating": 6,
"review_id": "Zzay3MPqxE",
"review_text": "This paper proposes shortcut connections between layers to mitigate the gradient vanishing problem in SNNs. Additionally, the authors present a way to phase out the shortcut connections over training so that inference can be done without these additional connections. The experiments show that this method improves training performance in several image classification tasks.\n\n1.The idea is small, but interesting and effective enough.\n\n2.The performance improvement over the existing SNN methods is noticeable.\n\n3.The paper is well-written.\n\n1.The proposed method will increase the training time.\n\n2.In the experimental section, some newer methods should be compared with this method.\n\n3.Figure 2 lacks horizontal and vertical coordinates, and the readability and comprehensibility of the picture need to be improved.\n\n1.Does the proposed method lead to an increase in the calculation of gradient backpropagation? How much is the increased training time."
}
] | |
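
As a concrete reading of the shortcut back-propagation idea in the abstract above, the sketch below attaches auxiliary classification losses whose weight is set by a balance coefficient that changes with the training epoch. The linear decay schedule, the equal split across heads, and the function name are assumptions for illustration; the paper's evolutionary framework may use a different schedule.

```python
import torch
import torch.nn.functional as F

def shortcut_bp_loss(final_logits, aux_logits_list, target, epoch, total_epochs):
    """Training loss with shortcut branches: auxiliary classification heads feed
    gradients straight to shallow layers; their weight alpha decays over training
    so the heads can be dropped entirely at inference time."""
    alpha = 1.0 - epoch / total_epochs                    # one simple decay schedule
    loss = (1 - alpha) * F.cross_entropy(final_logits, target)
    for aux in aux_logits_list:
        loss = loss + alpha / len(aux_logits_list) * F.cross_entropy(aux, target)
    return loss

# Toy check with two auxiliary heads over 10 classes.
t = torch.randint(0, 10, (4,))
print(shortcut_bp_loss(torch.randn(4, 10), [torch.randn(4, 10)] * 2, t,
                       epoch=0, total_epochs=100))
```

Early in training the shallow heads dominate the loss (large alpha), which is where surrogate-gradient attenuation hurts most; by the end the final output carries all the weight, so nothing extra is needed at inference.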
xjXYgdFM5M | Reasons and Solutions for the Decline in Model Performance after Editing | Knowledge editing technology has received widespread attention for low-cost updates of incorrect or outdated knowledge in large-scale language models. However, recent research has found that edited models often exhibit varying degrees of performance degradation. The reasons behind this phenomenon and potential solutions have not yet been provided. In order to investigate the reasons for the performance decline of the edited model and optimize the editing method, this work explores the underlying reasons from both data and model perspectives. Specifically, 1) from a data perspective, to clarify the impact of data on the performance of editing models, this paper first constructs a **M**ulti-**Q**uestion **D**ataset (**MQD**) to evaluate the impact of different types of editing data on model performance. The performance of the editing model is mainly affected by the diversity of editing targets and sequence length, as determined through experiments. 2) From a model perspective, this article explores the factors that affect the performance of editing models. The results indicate a strong correlation between the L1-norm of the editing model layer and the editing accuracy, and clarify that this is an important factor leading to the bottleneck of editing performance. Finally, in order to improve the performance of the editing model, this paper further proposes a **D**ump **for** **S**equence (**D4S**) method, which successfully overcomes the previous editing bottleneck by reducing the L1-norm of the editing layer, allowing users to perform multiple effective edits and minimizing model damage. Our code is available at https://github.com/nlpkeg/D4S. | https://openreview.net/pdf/29125d34caf7e2e65a3da6e297ad342a2583e2d3.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "XLj4sSSFvC",
"review_text": "This paper addresses the challenges associated with the decline in performance of LLMs after undergoing knowledge editing. The study identifies the primary factors contributing to performance degradation from both data and model perspectives. By constructing a Multi-Question Dataset (MQD) and analyzing the impact of editing objectives, token length, and diversity, the paper finds that perplexity associated with editing objectives significantly affects model performance. From the model perspective, a strong correlation was observed between the L1 norm of parameter layers and editing accuracy. The paper proposes a novel method called Dump for sequence (D4C), which effectively manages the parameter growth and improves model performance post-editing.\n\n- Innovative Methodological Approach: The study introduces a new method, D4C, which addresses the explosive growth in parameter norms and optimizes model performance post-editing. This approach is both innovative and practical for managing edited models.\n- Comprehensive Data Analysis: The construction of the Multi-Question Dataset and detailed analysis of how different types of data affect model performance provide valuable insights into the mechanics of model editing.\n- Clear Identification of Problems and Solutions: The paper clearly identifies specific problems associated with knowledge editing in LLMs, such as catastrophic forgetting and performance bottlenecks, and provides targeted solutions to these issues.\n- Empirical Validation: The experiments conducted in this paper offer empirical evidence supporting the proposed methods, enhancing the credibility and applicability of the findings.\n\n- Generalizability of Findings: The study focuses on specific scenarios and datasets, which may limit the generalizability of the findings across different types of LLMs or editing tasks.\n- Potential Overfitting to Edited Scenarios: There is a risk that the model may become overly optimized for the edited scenarios, potentially affecting its performance on unedited or unrelated tasks.\n- Complexity of Implementation: The proposed D4C method, while effective, may be complex to implement and integrate into existing systems due to its sophisticated handling of parameter layers.\n- Unsuitable Citation Format: The citations in this paper are in the format of “XXX et al. [YEAR]”, which are not suitable enough, and had better change into the format of [1], [2], [3], ……\n\n- Adaptability of D4C Method: How adaptable is the D4C method to different types of LLMs and knowledge editing tasks beyond those tested in your experiments?\n- Impact on Unedited Model Performance: How does the D4C method affect the performance of the model on tasks that have not been edited? Is there any evidence of performance trade-offs?\n- Handling of Diverse Editing Objectives: Could you elaborate on how the D4C method manages the complexity and diversity of editing objectives without compromising the model’s overall integrity and coherence?\n\n**Missing References**\n- Editing Large Language Models: Problems, Methods, and Opportunities (EMNLP 2023)\n- Knowledge Editing for Large Language Models: A Survey (2023)\n- A Survey on Knowledge Editing of Neural Networks (2023)\n- A Comprehensive Study of Knowledge Editing for Large Language Models (2024)"
},
{
"confidence": 4,
"rating": 8,
"review_id": "DnMIxMQWsG",
"review_text": "Recent research has shown varying degrees of decline in model performance following small changes made by certain model editing methods. This paper is the first to comprehensively analyze the reasons behind such performance declines. Through extensive experiments, it identifies two main factors: data and model. For data-specific factors, the paper finds that perplexity and token length significantly influence performance. For model-specific factors, the L1 norm of the edited layer is identified as a key influence. Building upon these insights, the paper proposes a method named Dump for sequence (D4C), which significantly improves model performance.\n\n- The paper is well-motivated: Exploring the reasons behind and impact of small changes made by model editing techniques on the performance of unedited samples is of great significance.\n- The analysis of the data-specific and model-specific factors is supported with diverse datasets and comprehensive experiments. The model-specific analysis, in particular, is evaluated rigorously, addressing the forgetting issue that prior works often overlooked\n\n- The observation of the influence of editing on the model norm is intriguing. High-norm parameters can be sensitive to noise and numerically unstable. It would be beneficial if the authors could also provide an L2-norm plot for comparison.\n\n- The experimental results are impressive, demonstrating significant improvements and validating the effectiveness of the proposed method.\n\n- My main concern with the data-specific analysis is whether the conclusion is about correlation or causation. Many variables can be changed about the input data. Plotting a single Figure 3 might be insufficient to justify that perplexity and token length are the main reasons for the decline in model performance after editing.\n\n- Unfortunately, the constructed dataset is not open-sourced. \n\n- Recent research [1] has shown that model editing methods (e.g. ROME, MEMIT) are not good at handling multi-hop questions, how would D4C perform in such more challenging scenarios?\n\n- Some theoretical analysis can be conducted to demonstrate that D4C does not lead to an increase in norms.\n\n[1] Mquake: Assessing knowledge editing in language models via multi-hop questions. EMNLP 2023\n\n- Can the authors add a section in the appendix to expand on the dataset mentioned in 3.1 (i.e., provide examples and details about the editing objectives) for better readability?\n\n- What dataset was employed in Section 5?\n\n- I encourage the authors to release the full code to enhance reproducibility.\n\n- (Minor) Consider reducing v-space in some parts of the paper (e.g., the bottom of page 2)."
},
{
"confidence": 3,
"rating": 7,
"review_id": "ziWaD3VJyK",
"review_text": "The paper investigates the reasons behind performance decline in sequential model editing approaches that selectively update parameters based on both data and model factors. To address the issues causing this decline, the authors propose a method to save editing history, thereby transforming sequential editing into batch editing with minimal computational overhead.\n\nExtensive experimentation is conducted to empirically demonstrate how factors such as dataset characteristics, editing objectives, and model-specific properties affect performance in sequential model editing.\n\nA simple matrix storage solution is introduced, which enables the conversion of sequential editing into batch editing.\n\nThe study is restricted to two closely related editing approaches.\n\nExperimentation is limited in demonstrating the efficacy of the D4C method. Different datasets and a larger number of edits for a more thorough evaluation are needed.\n\nN/A"
},
{
"confidence": 3,
"rating": 6,
"review_id": "NhykXHeTVo",
"review_text": "This paper investigates the reasons and solutions for the decline in model performance of model editing. The authors conduct experiments from two perspectives: data and model. Specifically, to clarify the impact of data on the performance of edited models, the authors first evaluate how editing different types of data affects model performance. Then, the authors construct a Multi-Question Dataset (MQD) and identified that the performance of the edited models is primarily influenced by the diversity of the editing objectives and the length of the tokens. Secondly, the authors explore the factors that affect model performance from a model perspective. Experiments revealed a strong correlation between the L1 norm of the edited model layers and the editing accuracy, and identified an editing quantity bottleneck. To enhance the performance of edited models, the authors propose a Dump for sequence (D4C) method that effectively improves the performance of edited models and overcomes the previous editing bottleneck issue. This method allows for multiple effective edits with minimal impact on model performance.\n\nThis paper investigates the impact of data on the performance of edited models. Evaluations are conducted across multiple tasks, revealing that the editing objective is the primary factor influencing model performance.\n\nThe authors found that the decline in edited model performance is correlated with the explosive growth of the L1 norm of parameter layers during the editing process.\n\nThis paper proposes a caching sequence edit method that leverages O(1) space complexity to retain past knowledge and regulate the explosive growth of the parameter layer norm.\n\nThe writing of this paper should be improved. There is no overview of this paper, which makes it hard to follow the details of Section 3 and 4.\n\nThe motivation of the proposed method is not clear.\n\nThere are many typos such as line 182.\n\nThere are many missing references such as: \n\nKnowledge Editing for Large Language Models: A Survey\n\nStable Knowledge Editing in Large Language Models\n\nA Comprehensive Study of Knowledge Editing for Large Language Models\n\nEditing Large Language Models: Problems, Methods, and Opportunities\n\nSee weaknesses."
}
] | |
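
To make the "dump" idea in the abstract above more tangible, here is a hypothetical sketch of how caching editing history in running sums can turn N sequential edits into one batch solve with O(1) storage. It assumes a ROME/MEMIT-style linear-layer edit computed in closed form from key/residual pairs; the class name `EditDump`, the ridge term `lam`, and the exact statistics are illustrative assumptions, not the paper's D4S implementation.

```python
import numpy as np

class EditDump:
    """Constant-size cache for sequential edits: keep running sums of key/residual
    statistics so N sequential edits reduce to a single least-squares update,
    instead of compounding N updates (the compounding is what inflates the norm)."""
    def __init__(self, d_key, d_val, lam=1e-2):
        self.KK = lam * np.eye(d_key)        # running sum of k k^T (ridge-regularized)
        self.RK = np.zeros((d_val, d_key))   # running sum of r k^T

    def add_edit(self, k, r):                # k: key vector, r: target residual
        self.KK += np.outer(k, k)
        self.RK += np.outer(r, k)

    def delta(self):                         # weight update solved once, at the end
        return self.RK @ np.linalg.inv(self.KK)

dump = EditDump(d_key=8, d_val=4)
for _ in range(100):                          # 100 sequential edits, constant memory
    dump.add_edit(np.random.randn(8), np.random.randn(4))
print(dump.delta().shape)                     # (4, 8)
```

Because only the two accumulators are stored, memory does not grow with the number of edits, and the single solve avoids the repeated additive updates associated with the L1-norm explosion the paper analyzes.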
xgiurUq0ss | DDK: Distilling Domain Knowledge for Efficient Large Language Models | Despite the advanced intelligence abilities of large language models (LLMs) in various applications, they still face significant computational and storage demands. Knowledge Distillation (KD) has emerged as an effective strategy to improve the performance of a smaller LLM (i.e., the student model) by transferring knowledge from a high-performing LLM (i.e., the teacher model). Prevailing techniques in LLM distillation typically use a black-box model API to generate high-quality pretrained and aligned datasets, or utilize white-box distillation by altering the loss function to better transfer knowledge from the teacher LLM. However, these methods ignore the knowledge differences between the student and teacher LLMs across domains. This results in excessive focus on domains with minimal performance gaps and insufficient attention to domains with large gaps, reducing overall performance. In this paper, we introduce a new LLM distillation framework called DDK, which dynamically adjusts the composition of the distillation dataset in a smooth manner according to the domain performance differences between the teacher and student models, making the distillation process more stable and effective. Extensive evaluations show that DDK significantly improves the performance of student models, outperforming both continuously pretrained baselines and existing knowledge distillation methods by a large margin. | https://openreview.net/pdf/9b8b61e97034e23e7b524afd2363f19fce21200a.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "s8Bq5I2zjj",
"review_text": "This paper proposes DDK, a knowledge distillation (KD) framework that distills large language models (LMs) into small LMs. Unlike previous KD methods, DDK dynamically adjusts the domain weights during distillation. Experiments show that DDK outperforms other KD baselines across various tasks.\n\n1. The paper is well written and the method is easy to follow.\n2. The experiments show that DDK outperforms other KD baselines on various tasks.\n\nThe extra computation introduced by KKD should be considered. It seems KKD requires the inference of a large LM during the training of the small LM. When the teacher model is much larger than the student model (QWen-1.5 14B v.s. QWen-1.5 1.8B), the inference cost of the teacher model would be even larger than training the student model. Therefore, it is more reasonable to compare the performance of the distilled model and the baselines given the same FLOPs.\n\n1. What are the training data for the baselines (CPT, TED, KD, and MiniLLM)? Is the data for DDK the same as that for the baseline methods?\n2. In lines 178-179, is the learning rate 3e-5 ($3\\times 10^{-5}$) rather than $3e^{-5}$?"
},
{
"confidence": 5,
"rating": 6,
"review_id": "WugHyZa5Cg",
"review_text": "The paper introduces a new framework called Dynamic Domain Knowledge Distillation (DDK) to enhance the efficiency of knowledge distillation for large language models (LLMs). Unlike traditional methods that overlook domain performance differences between student and teacher models, DDK dynamically adjusts the distillation dataset composition based on these differences, ensuring a more stable and effective knowledge transfer. This approach addresses the issue of excessive focus on domains with minimal performance gaps and enhances overall model performance. Extensive evaluations demonstrate that DDK significantly outperforms existing knowledge distillation methods and continuously pretrained baselines.\n\n- The proposed dynamic dataloader for KD is technically sound. \n- Numerical experiments well validate the efficacy of the method.\n\n- Dynamic dataloader requires knowing the training data distribution and category beforehand. \n- Missing references. Similar ideas have been explored in pruning LLMs, such as ShearedLLaMA, LoRAShear to recover the knowledge . The paper needs to discuss with them in the related work section due to the closed relation between pruning and KD. \n\nSheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning\n\nLoRAShear: Efficient Large Language Model Structured Pruning and Knowledge Recovery\n\n- How the methods perform on under other KD losses, such as reversed KLD, JSD, skew-KLD.\n\nOn-policy distillation of language models: Learning from self-generated mistakes.\n\nDistiLLM: Towards Streamlined Distillation for Large Language Models."
},
{
"confidence": 4,
"rating": 5,
"review_id": "rhBhVfF0Va",
"review_text": "The work introduces a novel framework for knowledge distillation (KD) for LLMs. The key innovation of DDK is its dynamic adjustment of the distillation dataset composition based on domain performance differences between the teacher and student models. The paper presents extensive evaluations demonstrating that DDK significantly improves performance in various KD settings, outperforming both continuous training baselines and existing KD methods.\n\n1. The authors provide extensive empirical evidence demonstrating the effectiveness of DDK in improving the performance of student models across various benchmarks.\n2. As the computational and storage demands of LLMs are significant barriers to their widespread deployment, KD is a promising solution. The proposed KD method is simple and easy to follow.\n\n1. Discuss the difference between DDK and the Dynamic Batch Loading proposed by Sheared LLaMA[1], which is also \n proposed to adjust domain proportions for dynamically training smaller models. They also identify discrepancies in loss between smaller and larger models across various domains, and accordingly, they sample more data from domains where the discrepancy is more pronounced. While they concentrate on structural pruning, it is akin to the DDK. Consequently, I perceive the novelty of DDK as being somewhat limited.\n2. The results of Qwen 1.5 in Table 1 are not significantly convincing. The MMLU/HumanEval of Qwen 1.5 1.8B in the Qwen official blog are 46.8/20.1 while the authors' report is 44.5/11.9. In addition, compared to the official results, we can see that the DDK fails to improve the model of the students on MMLU. The authors need to check this and provide **more robust results of baselines**.\n\n[1] SHEARED LLAMA: ACCELERATING LANGUAGE MODEL PRE-TRAINING VIA STRUCTURED PRUNING. Xia et al., 2023\n\n1. In the setting of paper, the domains are predefined. How to extend the DDK framework for new domain during the distillation training process? Could give more experiments on continual domain learning settings?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "IQFYwRUPY2",
"review_text": "This work proposed a KD strategy for LLMs. Specifically, with assess to the domain-specific performance of both the teacher and student LLMs, DDK uses domain knowledge guided sampling to dynamically update the data mixture. In addition the paper also conducts a statistical analysis of the domain distribution of the datasets involved. The training process is relatively straightforward and easy to generalize. The experimental results also show that DDK's training method improves the performance average of different data sets.\n\n1. A complete training algorithm is designed, and the process is explained clearly. The process of the DDK algorithm is easy to extend to the training process of other models.\n2. The authors conducted a comprehensive knowledge distillation experiment on two large model families and a comprehensive ablation study.\n\n1. Although the method proposed in this paper is easy to understand and effective, I doubt that the method in this paper is limited to LLMs. In other words, this paper does not mention (or needs to explain) how previous researchers (before LLMs) performed domain-enhanced distillation for domain-biased datasets, and why these previous methods cannot be applied to the distillation of LLMs to achieve similar results. The advantages and novelty of this paper's domain sampling method over previous work that may be transferable to LLMs need further explanation.\n2. In the experimental part, there is a lack of key comparison between DDK and other methods that focus on similar domain sampling. The baseline actually involves the work that focuses on domain in KD (cited as [60], etc.), but the subsequent analysis only compares the total average score of DDK and these works, which seems to lack comparison and analysis of similar works. As far as I know, other baselines are more general KDs, and do not focus on domain information.\nIt is certainly worth noting that DDK performs better than baselines such as MiniLLM, but I think what can better illustrate the effectiveness and novelty of this paper is the comparison with similar domain data sampling, including experimental analysis.\n3. In the experimental section, you can add experiments on the dataset and the scale property of the teacher model. This is a possible suggestion.\n\nThe questions I expect to ask would be similar to the above section."
}
] | |
xgP5ynlZWf | RestoreAgent: Autonomous Image Restoration Agent via Multimodal Large Language Models | Natural images captured by mobile devices often suffer from multiple types of degradation, such as noise, blur, and low light. Traditional image restoration methods require manual selection of specific tasks, algorithms, and execution sequences, which is time-consuming and may yield suboptimal results. All-in-one models, though capable of handling multiple tasks, typically support only a limited range and often produce overly smooth, low-fidelity outcomes due to their broad data distribution fitting. To address these challenges, we first define a new pipeline for restoring images with multiple degradations, and then introduce RestoreAgent, an intelligent image restoration system leveraging multimodal large language models. RestoreAgent autonomously assesses the type and extent of degradation in input images and performs restoration through (1) determining the appropriate restoration tasks, (2) optimizing the task sequence, (3) selecting the most suitable models, and (4) executing the restoration. Experimental results demonstrate the superior performance of RestoreAgent in handling complex degradation, surpassing human experts. Furthermore, the system’s modular design facilitates the fast integration of new tasks and models. | https://openreview.net/pdf/a7abb52bb68d236e32bce92953c8abf4bfa5f495.pdf | [
{
"confidence": 5,
"rating": 7,
"review_id": "KiPpDc5CQ6",
"review_text": "For real-world images corrupted by multiple simultaneous degradations, this paper first analyzes the limitations of using all-in-one restoration models and various task-specific models. The authors then introduce RestoreAgent, which automatically identifies the types of degradation in a degraded image, determines the sequence of restoration tasks, and selects suitable models from the model pool. RestoreAgent presents an automated restoration pipeline that requires only an input image and a general human instruction, without any prior knowledge of the involved degradation tasks or manually predefined task sequences.\n\n1. The paper comprehensively analyzes the challenges and limitations of employing all-in-one models and multiple task-specific expert models with fixed or random task sequences, as well as fixed or random models for each task.\n2. The authors evaluate various configurations of RestoreAgent using diverse objective image quality metrics (PSNR, SSIM, LPIPS, DISTS, and their combinations), all of which outperform the human expert model on the corresponding metric. \n3. RestoreAgent exhibits the scalability by extending to new tasks and models with minimal computational resource. \n4. The presentation, including writing, analysis, and visualization, is clear and easy to follow.\n\n1. Incomplete descriptions about data construction. \n\n- Authors randomly select up to four types of degradation from a degradation set (noise, blur, JPEG, rain, haze, and low-light) to construct paired training data. According to data synthesis strategies in [1,2], JPEG compression is typically performed after noise and blur, and in the final order. Is the degradation order of JPEG compression in this paper the same? If not, the authors should discuss the reasonableness of random sampling.\n\n- What are the components of 23k paired data? One degraded image for each high-quality image or many degraded versions for each high-quality image? \n\n- What is the configuration in ablation studies about training data amount? Simultaneously scaling up low & high-quality images or synthesizing more low-quality images for each high-quality image? If it’s the former, will increasing the number of degraded images while keeping the number of high-quality images unchanged improve performance?\n\n2. Inference time for input images with diverse resolution.\n- The authors are suggested to report the running time for input images of various resolutions. This should include the total time, the running time for the RestoreAgent, and the running time for the subsequent restoration models. The reviewer is curious whether the agent's response time exceeds that of the restoration models when processing high-resolution images, such as those with 4K resolution.\n\n3. Scalability for new tasks and models. \n- Section 4.5 demonstrates that the proposed RestoreAgent can extend to new tasks and models in just half an hour, surpassing human expert-level performance on the new task. However, it is unclear whether adaptation to the new task results in performance degradation on prior tasks, similar to the catastrophic forgetting problem in continual learning. The authors are encouraged to report the performance of the fine-tuned model on the previous tasks to address this concern.\n\n[1] Wang X, Xie L, Dong C, et al. Real-esrgan: Training real-world blind super-resolution with pure synthetic data[C]//Proceedings of the IEEE/CVF international conference on computer vision. 
2021: 1905-1914.\n\n[2] Zhang K, Liang J, Van Gool L, et al. Designing a practical degradation model for deep blind image super-resolution[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 4791-4800.\n\nAddressing concerns in the Weaknesses with thorough explanations and additional experiments would significantly enhance my confidence in this work. A satisfactory response to these points may lead to a reconsideration of the current evaluation."
},
{
"confidence": 4,
"rating": 5,
"review_id": "OJHKrxIv4K",
"review_text": "This paper proposes a new pipeline to address multiple degradation, like noise, blur and low light. Besides, a RestoreAgent with multimodal large language models is introduced to assess the type and extent of degradations in the input images and perform dynamic restorations.\n\n1. The paper is well-written and well-organised. \n2. The whole pipeline seems to be novel and reasonable. \n3. The method achieves SOTA performance on several benchmarks and different degradation tasks.\n\nThe overall motivation of this paper is commendable, but I have a few concerns:\n\n1. The author mentions that RestoreAgent can autonomously assess the type and extent of degradation in input images and perform restoration. This strategy is interesting. However, I am wondering how the order of different enhancement techniques is defined. For example, if the input has noise and rain streaks, how is the order of dehazing and denoising techniques determined? Will this affect performance?\n\n2. In contrast to other image enhancement techniques, the proposed RestoreAgent should first find a suitable restoration task and then select the most appropriate model to enhance the quality of the input. Therefore, I am concerned whether this process will increase the inference time. The authors should provide some computational analysis.\n\n3. The enhancement capabilities of this work rely heavily on existing enhancement frameworks. If existing frameworks cannot work well in some cases, such as extreme noise effects, I guess the proposed RestoreAgent may also fail. Is this true? If so, I suggest the authors mention this in the limitations section.\n\n4. The explanation of \"ranking\" and \"balanced\" in Table 1 is still unclear. The authors should clarify the definitions of these terms.\n\n5. It would be better to show more visual comparisons of the RestoreAgent.\n\nPlease see weaknesses."
},
{
"confidence": 5,
"rating": 6,
"review_id": "NIwuCn7b0A",
"review_text": "This paper presents an image restoration pipeline designed to handle various degradation types and levels by leveraging MLLM’s capabilities to select the appropriate model and determine the execution order. It begins with an analysis of why execution order and utilizing multiple models for different degradation levels are crucial for restoring complexly degraded images. The paper then constructs an instruction dataset and fine-tunes the MLLM. Experimental results demonstrate the effectiveness of the proposed restoration pipeline.\n\n1.\nThis work presents a compelling analysis of complex image restoration. This insight is valuable given that degraded images in real-world scenarios often involve multiple types of degradation.\n2.\nThis approach leverages the strengths of different models for handling specific noise levels, thereby eliminating the trade-off between generalization and performance.\n3.\nThis paper formally defines the problem of handling multiple degradations and model selection in image restoration.\n4.\nExtensive experiments demonstrate superiority of such pipeline in processing degraded images with multiple degradations.\n\n1.\nIn the introduction, it would be helpful to explain how the Multi-Level Learning Model (MLLM) excels at understanding different types and levels of image degradation. This will show why MLLM is well-suited for handling complex combinations of image degradation. Providing this clarity will make the benefits of using MLLM for image restoration more evident.\n2.\nWhen incorporating a new type of degradation, the cost extends beyond merely training the MLLM. Please also discuss the process of constructing training data for the newly added degradation and how it integrates with previously trained data.\n3.\nIn lines 211-212, please clarify what the mean and standard deviation are calculated over. The subscript \"i\" is already used for degradation type and it might be clearer to use another character.\n\n1.\nWhat if the degradation of the input image falls outside the predefined degradation scope? This could present a generalization issue, as the model might not perform well on unseen types or levels of degradation not covered in the predefined scope. Please discuss it.\n2.\nIn Table 2, it would be clearer to highlight the best method for each evaluation criterion. Additionally, please specify which methods the ranking improvement is compared against for better context."
},
{
"confidence": 4,
"rating": 8,
"review_id": "cj46TZGoFY",
"review_text": "This paper introduces RestoreAgent, an innovative image restoration system that leverages multimodal large language models to autonomously handle images with multiple types of degradation. The system addresses limitations of existing all-in-one models and fixed task sequences by dynamically adapting to each image's specific degradations. RestoreAgent can identify degradation types, determine appropriate restoration tasks, optimize the execution sequence, select the most suitable models, and execute the restoration process autonomously. The authors present a method for constructing training data and demonstrate that RestoreAgent outperforms existing methods and human experts in handling complex image degradations.\n\n1. This paper represents a innovation and a good contribution in image restoration and potentially opens up a new research direction for this area.\n2. The motivation is strong. The authors effectively demonstrate the importance of task execution order and model selection in multi-task scenarios. The designed system adeptly addresses these issues.\n3. Experimental results indicate that RestoreAgent's decision-making capabilities in handling complex degradations surpass those of human experts. This kind of pipeline also surpass all-in-one models.\n4. The paper is generally well written and clear to understand.\n\n1. The paper constructs a training dataset for training the multimodal large language model and a testing dataset as a benchmark for evaluating performance across multiple tasks. More details and explanations regarding the construction methods of these datasets would be beneficial.\n2. Table 1 presents performance rankings using both ordinal and percentage forms. The definitions and explanations for these ranking forms are somewhat lacking, which might require readers to spend extra time understanding them. Clearer explanations would facilitate better comprehension.\n3. The proposed Autonomous Restoration Agent represents a novel paradigm that is likely to encounter numerous new challenges. Beyond the issues already mentioned in the paper, the authors could consider discussing additional limitations and future research directions for this paradigm. This would help future researchers better follow and improve upon this work.\n\n1. The current method appears to predict all execution steps at once for a given input image. In Figure 3, each image has a dashed line pointing to the input. Does this imply that after each execution, the result can be fed back as input? (Based on my understanding, this system supports this) The paper seems to lack analysis and experiments related to this aspect. Could the authors provide more details on this part?\n2. The authors have proposed a testing dataset to evaluate multi-task processing capabilities. Will this dataset be made publicly available to facilitate further research by other researchers?"
}
] | |
xeviQPXTMU | FedGMark: Certifiably Robust Watermarking for Federated Graph Learning | Federated graph learning (FedGL) is an emerging learning paradigm to collaboratively train graph data from various clients. However, during the development and deployment of FedGL models, they are susceptible to illegal copying and model theft. Backdoor-based watermarking is a well-known method for mitigating these attacks, as it offers ownership verification to the model owner. We take the first step to protect the ownership of FedGL models via backdoor-based watermarking. Existing techniques have challenges in achieving the goal: 1) they either cannot be directly applied or yield unsatisfactory performance; 2) they are vulnerable to watermark removal attacks; and 3) they lack formal guarantees. To address all the challenges, we propose FedGMark, the first certified robust backdoor-based watermarking for FedGL. FedGMark leverages the unique graph structure and client information in FedGL to learn customized and diverse watermarks. It also designs a novel GL architecture that facilitates defending against both the empirical and theoretically worst-case watermark removal attacks. Extensive experiments validate the promising empirical and provable watermarking performance of FedGMark. Source code is available at: https://github.com/Yuxin104/FedGMark. | https://openreview.net/pdf/75848fdd795ff86e8eff2d9277a1b8057ad9f7d9.pdf | [
{
"confidence": 5,
"rating": 5,
"review_id": "He8o2uczmH",
"review_text": "This paper investigated the problem of watermarking the Federated Graph Learning (FGL) models. This paper proposed the first backdoor-based FGL watermarking framework, called FedGMark. Specifically, to tackle the issues of ineffectiveness and vulnerability of existing methods, FedGMark designed two modules respectively. One is a Customized Watermark Generator (CWG). CWG aimed to generate the watermarked trigger samples (graphs) using each client's secret key. The other is the Robust Model Loader (RML). RML guaranteed that the watermarked models were certifiably robust against layer perturbation attacks.\n\n- The first attempt to watermark federated graph learning models.\n- The watermarked models are certifiably robust against attacks.\n- Experiments on various datasets and models validate the effectiveness of FedGMark.\n\nMy major concerns are as follows.\n1. Unclear threat model: The threat model and the problem formulation of this paper is unclear. What's the capability of the adversary and the defender? And more importantly, who is the adversary to steal the FGL model? This paper proposed to watermark the FGL model from the client side, which means the clients should be trustworthy. Is the central server an adversary in this paper? To my best knowledge, the typical threat model of various attacks in FL (e.g., backdoor attacks or Byzantine attacks) assumes that some of the clients may be malicious. The author should add a section on the threat model or problem formulation and clarify why they make these assumptions. This may be helpful to better understand the problem the authors tried to solve.\n2. Privacy concern: I also worry that utilizing FedGMark may raise privacy concerns. In Section 3.4, the watermarked client needs to use a subset of its training graphs as the watermarked graphs. However, in FL, the client's graphs are privacy-sensitive, and using them to verify ownership may lead to privacy leakage. This is contrary to the original purpose (preserve privacy) of FL.\n3. Missing experiments on the robustness against backdoor defense: This paper considers three different watermark removal attacks. However, since FedGMark utilizes backdoor-based watermarking methods, it is important to validate whether FedGMark is robust against backdoor defenses.\n4. Missing introduction to ownership verification: This paper lacks an important section to introduce the ownership verification procedure of FedGMark.\n\n1. Clarify the threat model.\n2. Address the privacy concern.\n3. Analysis or experiments on the robustness against backdoor defenses.\n4. Clarify the procedure of ownership verification in FedGMark."
},
{
"confidence": 3,
"rating": 6,
"review_id": "C9sGZdxe04",
"review_text": "This manuscript introduces FedGMark, a backdoor-based watermarking method specifically designed to protect Federated Graph Learning (FedGL) models from illegal copying and model theft. They claim that the proposed FedGMark is the first method to safeguard the intellectual property of FedGL models, offering certified robustness against watermark removal attacks, leveraging unique graph structures and client information to create customized and diverse watermarks. Experiments demonstrate its effectiveness and robustness.\n\nThe paper introduces FedGMark to address the overlooked vulnerability of FedGL model ownership and identifies three main challenges in current watermarking techniques: inapplicability to graph data, vulnerability to removal attacks, and lack of formal guarantees. The proposed method, including CWG and RML, is clear and intuitive, and the authors have provided comprehensive experiments to support their approach.\n\n1.\tI strongly recommend setting a \"Threat Model\" subsection to clarify the potential security threats to FedGL. In my opinion, since the authors consider watermark removal attacks like distillation and finetuning, FedGL operates under a white-box setting.\n2.\tThe paper assumes attackers know the internal information of the target watermarked model, enabling distillation, finetuning, and layer-perturbation attacks. However, I find the white-box setting narrow and trivial. The authors should consider black-box attacks, which are more challenging and meaningful. Many studies on black-box attacks can be found.\n3.\tIn watermarking-related literature, robustness and fidelity are more frequently used terms than watermark accuracy and task accuracy.\n4.\tIn the \"Inapplicable or Ineffective\" item, the authors state, \"For instance, they require input data to have the same size, while graphs can have varying sizes,\" which is not entirely accurate. For example, some Wavelet and DCT-based watermarking methods can be scalable.\n\nPlease refer to Weaknesses part"
},
{
"confidence": 4,
"rating": 6,
"review_id": "QVY9VqBmm2",
"review_text": "This paper addresses the problem of protecting model ownership in the emerging domain of Federated Graph Learning (FedGL) by proposing FedGMark, a backdoor-based watermarking technique. The authors argue that existing watermarking approaches are either inapplicable to graph data or exhibit weaknesses in terms of robustness against removal attacks and lack of formal guarantees. FedGMark aims to overcome these limitations by leveraging graph structure and client information to learn customized watermarks, employing a novel graph learning (GL) architecture that enhances robustness, and providing certified robustness guarantees against layer-perturbation attacks.\n\n- The paper clearly outlines the limitations of existing watermarking techniques and presents a well-motivated approach to address them. The design of FedGMark, with its CWG and RML modules, is tailored to the specific challenges of watermarking in FedGL.\n- FedGMark demonstrates promising empirical performance in terms of both main task accuracy and watermark accuracy. It outperforms the baseline approach (random graph-based watermarking) significantly, especially under watermark removal attacks.\n- The paper provides theoretical guarantees for the robustness of FedGMark against layer-perturbation attacks, a unique and valuable contribution in the watermarking literature.\n\n1. The reliance on pre-defined private keys for watermark generation may not be practical in all scenarios, and alternative key management methods should be explored.\n2. The assumption of limited attacker knowledge about the watermarked model may not hold in practice. Evaluating FedGMark against more knowledgeable adversaries would provide a more realistic assessment.\n3. The focus on FedAvg for model aggregation limits the exploration of other aggregation methods and their impact on watermark robustness.\n\n1. Could you quantify the communication overhead of FedGMark during federated training, especially compared to random graph-based watermarking (in terms of local training time, size of watermarked data, etc.)?\n2. How do you envision FedGMark being deployed in a real-world FedGL system? What practical challenges might arise during implementation and watermark verification?\n3. How would the certified robustness guarantees be affected by more advanced watermark removal attacks beyond layer perturbation (e.g., those involving trigger reverse engineering)?\n4. How would the effectiveness of FedGMark be affected if the attacker had more knowledge about the watermarking process, such as access to the CWG architecture or the private key generation method?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "Q8jpZIpcgr",
"review_text": "This work studies watermarking for federated graph learning (FGL) to protect the ownership of participants. It proposes a customized watermark generator for local clients that can capture the local graph structure and private client information, and a robust model loader consisting of multiple GL submodels and a majority-voting-based ensemble classifier, which can defend against the proposed layer-perturbation attack.\n\n1. This work claims to be the first to study watermarking for FGL models.\n\n2. The method can leverage local graph and client information to generate customized watermarks.\n\n3. The paper introduces a layer-perturbation attack to further demonstrate the certifiably robustness of the proposed backdoor-based watermarking for FGL.\n\n4. The work is well-motivated with preliminary studies.\n\n1. The concept of ownership in FGL can be confusing and is not well-defined in this paper. For example, can every client claim ownership of the federated trained model? Since the watermarks from different clients are different, can any single client claim entire ownership? Additionally, for clients who participate in the FL but do not have watermarks, how can they claim ownership?\n\n2. The motivation for using local customized watermarks is not clear. The following problems arise: (1) It is unclear how to conduct ownership verification. Should it use the global watermark or the local watermarks? (2) If using a global watermark, what is the necessity of employing customized watermarks, or what is the adequate way to aggregate the global watermark from customized watermarks? If using local watermarks, how can the customized watermarks be used across clients?\n\n3. The method requires specific GL models (to be split to multiple submodels), which can be hard to adapt to existing FGL methods, especially for advanced FGL methods.\n\n4. The motivation for incorporating submodels for GL is missing. Why is this design necessary?\n\n5. (1) What does “layer indexes” for splitting GL models mean? From section 3.3, it is not clear how the submodels are split and how the split submodels are decoupled from each other regarding cascaded structures. (2) Additionally, structural information can be important for graph learning. How would discarding such structural information impact in this setting?\n\n6. The global model is obtained by simply averaging uploaded clients’ models (not weighted by data size, or applying proxy terms for regularization). Can this method address the potential heterogeneity issue when local watermarks are highly disparate from each other?\n\n7. The proposed method can introduce efficiency issues, as it significantly increases the number of parameters and computation time.\n\n1. When the set of selected clients for aggregation is different from the set of watermarked clients, can the method achieve stable convergence?\n\n2. Is the layer-perturbation attack applied before or after submodel splitting? If it is applied after, does it perturb all submodels or not?\n\n3. Out of curiosity, is it possible to federated learn the local watermarking? How do you expect this would perform?"
}
] | |
xeXRhTUmcf | Combining Statistical Depth and Fermat Distance for Uncertainty Quantification | We measure the out-of-domain uncertainty in the prediction of Neural Networks using a statistical notion called "Lens Depth'' (LD) combined with Fermat Distance, which is able to capture precisely the "depth'' of a point with respect to a distribution in feature space, without any distributional assumption. Our method also has no trainable parameter. The method is applied directly in the feature space at test time and does not intervene in training process. As such, it does not impact the performance of the original model. The proposed method gives excellent qualitative results on toy datasets and can give competitive or better uncertainty estimation on standard deep learning datasets compared to strong baseline methods. | https://openreview.net/pdf/c4068669caf9d9c9527416b6771bb88358891592.pdf | [
{
"confidence": 3,
"rating": 7,
"review_id": "nXbQVdXpKs",
"review_text": "This paper introduces a new method for Out-of-Distribution detection based on the concepts of Lens Depth and Fermat distance. This method is used to see whether a sample has a similar representation in the penultimate layer of a Neural Network as the samples in the training data. The method is subjected to various tests of Out-of-Distribution detection and is shown to be on-par or exceeding alternative methods. However, the proposed method does not intrude on the training process of the model, and therefore cannot have a negative impact on the classification performance. Alternative methods assume a Gaussian Distribution in the hidden representation, but the use of (a modification of) Lens Depth allows estimating the “similarity” of the sample without assuming a certain distribution.\n\n- The application of Femat Distance and Lens Depth introduces mathematical concepts that are not common knowledge and not obvious to a Machine Learning audience. The application of these methods in OoD detection is new (originality)\n- Previous literature is well cited, and the mathematical concepts are clearly and intuitively introduced, with clearly stated relevance (clarity)\nThe claims made follow naturally from the evidence and are not overstated. The evaluation is in line with common practice in the field of OoD detection (quality)\n- The paper is well written and consistently builds a clear argumentation (clarity)\n- Mathematical concepts are introduced with both formalism, and an intuitive explanation (clarity).\n- The proposed method is competitive with other methods, and is minimally invasive to the training process. This could be helpful when then training process is outside of the control, for example for large pre-trained models (significance)\n\n- Small claims are not entirely accurate. Line 4 says there are “no assumptions” about the form of the distribution, but there are only minimal assumptions (see question 3). Line 262 claims that the proposed measure is a good measure of “uncertainty estimation”, but it’s only evaluated for OoD detection, so it may be wildly over/underconfident and behave poorly on aleatoric uncertainty. Line 323 conjects that OoD detection may ensure fairness, but I see no reason why. Line 5 claims that the proposed method is applicable to any classification model, but the performance is only tested for Neural Networks (quality/clarity)\n- The explanation of Lens Depth may be made more intuitive with a visualisation to support Lines 94-99 (clarity)\n- Presented results are not substantially better than previous methods. Authors argue that the main benefit is that the proposed method is minimally invasive to the training process, but the authors do not make a strong case on why this is necessary (significance)\n\n1. How computationally expensive is LD after the improvements discussed in Section 4.5? Is it substantially faster/slower to do inference than e.g. DDU?\n2. In Figure 4.2 you show that the LD still works with 200 samples to claim that the method also works for small datasets. At what dataset size does the method start to fail, and how catastrophic is that? A plot like Figure 4.2.B with decreasing sizes of the dataset may give this insight. \n3. Consider Figure D.1. What if two of the “blobs” belong to cluster A and the last to cluster B, so that there are two classes (C=2) but in three clusters. Would LD then still behave as desired? 
If LD then gives undesirable results, wouldn’t you say that there is at least some assumption about the shape of the distributions?\n4. How would the model perform if the two moons have more spread, to the point that the two classes might touch/overlap? Is there “uncertainty” between the two classes? I understand this is not the point of OOD-detection, but it can be a point of UQ. This might be a ‘limitation’ worth mentioning. LD is good at OOD-detection, but not for the general task of uncertainty estimation. Specifically Line 262 says that LD is a good measure for uncertainty estimation, but only OOD-detection and being monotically decreasing with accuracy are demonstrated. Estimating heteroscedastic aleatoric uncertainty and uncertainty calibration are not tested, but are properties of good uncertainty estimation. On Line 264 “uncertainty quantification” and is said, while OOD-detection is investigated, though I think they are not exactly the same. \n5. In Figures 5.2b-5.2d the accuracy seems to plateau. Do the authors have any suggestions on what might be causing this, and how this might impact applications using LD? \n6. One important use case I’d consider for minimally invading the training process is OoD detection with pre-trained models. Can you elaborate on whether this would be a good use case for your method? If it is, consider stating this in the paper as well, to argue clearly for why minimally invasive OoD detection is desirable."
},
{
"confidence": 3,
"rating": 6,
"review_id": "4kmJn3kryd",
"review_text": "The paper presents a non-parametric approach to out-of-distribution (OOD) detection. Given a trained neural network classifier, it is proposed to combine the Lens Depth (LD) with the Fermat distance (in an improved form) to capture the geometry and density of the data in feature space. Without assuming any prior distribution, the paper classifies OOD samples for toys and small scale benchmarks.\n\n- The combination of the Lens Depth with the sample Fermat distance for the out-of-distribution problem is a solid and interesting contribution. \n- The paper is well written and easy to follow. In general, the approach is clearly described.\n- The results on small scale experiments are convincing. \n- The approach presented does not include the training process of the model.\n\n- An extension of the related work to include papers on OOD would be necessary for the content of the paper. \n- An additional evaluation metric would be helpful, e.g. FPR-95, ECE. This point should be addressed. \n- A large-scale evaluation, e.g. ImageNet, is also missing. This is the main limitation of the paper.\n\n- What is the reason for not performing the ImageNet evaluation, given that it is quite common in the topic?"
},
{
"confidence": 4,
"rating": 8,
"review_id": "8Fdo37X8p8",
"review_text": "This paper proposes a new method for OOD detection/scoring based on the lens depth and Fermat distance, arguing that it has advantages over prior methods by being non-parametric, non-invasive, (almost) tuning-parameter-free, and quite effective in adapting to the unknown structure of the data to identify OOD points.\n\n1. Subject matter is important\n2. I found the paper really easy and fun to read.\n3. 4.2 is a nice, simple, and practical modification—very natural and clearly successful!\n4. Both the Lens Depth and Fermat Distance are nice, intuitive notions, and it is natural and fun to think about their combination!\n5. I raise a number of conceptual issues below, but at the end of the day the demonstration of the method on standard data sets, comparing it to state-of-the-art methods, is fairly compelling, hence my high score.\n\n1. LD is interesting and intuitive but what happens when the data falls into two disjoint clusters? Then won’t LD (with basically any distance I can think of, including Fermat distance) consider points in between those two clusters to be extremely central, despite the fact that, since they lie in neither distribution, they could reasonably be considered very OOD? Related: it seems the FD is infinite (whenever \\beta>0) between two points separated by a region of zero density, suggesting that the sample version will be highly unstable in this setting, as it is should not converge at all but instead diverge to infinity. I see this is addressed in 4.4 by computing sample FD separately per cluster, but how were the clusters computed? Clustering is no trivial task, and given that things go wrong without clustering, I imagine S(x) in eq (4.2) depends rather heavily on the clustering. This (seems to me important) aspect of the proposed method seems underexplored/underexplained in the paper.\n2. How does the convergence of the sample FD to the population FD depend on dimension? It’s a bit hard to believe it doesn’t suffer from some sort of curse of dimensionality, since it depends on a density and density estimation very much suffers from the curse of dimensionality. It seems many of the nice demonstrations of it in this paper occur in 2 dimensions (with the data lying nearly on a set of dimension 1), which doesn’t seem very representative of NN feature spaces.\n3. Claim of “no trainable parameter” in the abstract is rather misleading, given the need for choosing both \\alpha (ok there is a case made that maybe this isn’t too important) and the clustering.\n4. Lit review is well-organized, but very focused on methods for NN OOD detection. The paper makes a big deal out of the method being non-intrusive, but another way of saying this is just that the proposed method is a way of scoring a point being OOD with respect to a distribution, which is a problem that, in general, has nothing to do with NNs or their feature representations. Surely there is a large body of work on outlier detection in statistics that could be considered in a similar light to this method, where one takes an off-the-shelf outlier detection method’s score and just applies it to the data transformed to be in the feature space of the NN? That is essentially what this paper is doing (though for a novel method, and I am not questioning its novelty). I just wonder what other existing methods are out there that could be doing something similar, even if they haven’t been explicitly applied to NNs.\n5. 
Section 4.5 and Appendix E: choices II and III seem like they would rather seriously break the connection between the estimated LD and the true LD, since the k-means clustering will in general (and in typical circumstances) have clusters with very different numbers of points in them, so by reducing to the cluster centers (or center+’s), you are representing very different numbers of points with different centers. Another way to say it is that the density of the n points via methods II and III is quite different from that of the original N points (or via method I), and hence using them to compute the LD will be quite different in nature from using method I or the original N points. I would expect these methods (II and III) to not even have any kind of consistency property to the true LD of the original points, given their change in the density. \n6. I appreciated the authors’ honesty in reporting LL ratio results as being better than their method (of course, it comes with a more complex process), but it seems worth noting that it is substantially better. Since all the AUROC scores are close to 1, it is natural to look at. 1-AUROC (so smaller is better), in which case the LL ratio gets 0.006 and LD gets 0.029, almost 5x higher. I don’t think the authors were misleading in presenting these results, but I found the two sentences (lines 252-254) highlighting the challenges associated with the LL ratio to be a bit vague, and the results might be more convincing if those challenges were made more explicit (possibly in an appendix if there isn’t room in the main paper).\n7. I don’t find Fig 5.2 very convincing, since the monotonicity here is a pretty weak property and no comparison is made with other methods—my guess would be that many methods satisfy monotonicity. Is that not the case?\n\n1. What is \\alpha in Fig 4.1? Is it the same for all panels?\n2. Nothing about the proposed method seems to have anything to do with NNs or their feature space, and in particular, it is never mentioned why the method is applied to data points in the feature space, as opposed to the raw data points. I can imagine the reason is that the method works better with relatively “nice” densities, with fewer clusters and continuous densities supported on smooth manifolds, but there is no mention of this in the paper, and it seems like it merits discussion. I did see the last sentence mentions the method can be applied to any model with a feature space, but again, why is a feature space (or a classification model) even needed?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "nNB38bq8Z1",
"review_text": "The authors address the problem of out of distribution detection in supervised learning with particular focus on neural networks models. The developed method worj in some feature (embedding) space by measuring the statistical depth of the query point with respect to some reference set of points. The particular implementation combines lens depth function with Fermat distance. The authors validate the proposed approach in a series of experiments on simulated and real-world data.\n\n* The paper is very well-written and easy to follow.\n* The considered problem is relevant for practice as there is a significant demand in efficient and non-intrusive methods for uncertainty quantification. \n* The proposed approach is solid with all the steps being properly motivated.\n* The authors did a significant effort to do a comprehensive literature review, experimental evaluation and analysis, though all the steps were not fully successful (see Weaknesses and Questions below).\n\n[After rebuttal comment] I appreciate the answer by the authors and increase my score to 6. My main concerns were addressed.\n\n* While usage of statistical depth functions and distribution/manifold related distances looks logical, it is not clear why the particular choices of Lens Depth and Fermat distance were made.\n\n* The baselines considered are not comprehensive enough and some of the baselines are not interpreted correctly by the authors of the present paper. In particular:\na. Non-Gaussianity of embedding distribution was directly considered in [1] aiming to improve over GDA. I think that is worth comparing with this method as the present paper target the same issue though with the completely different approach.\nb. I believe that the authors incorrectly say that the difference between papers [2] and [3] is only in usage of spectral normalization. In my opinion, even more important is that [2] uses Mahalanobis distance as uncertainty measure while [3] considers the density of Gaussian mixture instead.\n\n* The experiments are done with relatively simple datasets like CIFAR-10 for in-distribution data and SVHN/CIFAR-100/TinyImageNet as OOD. With the proposed approach being relatively lightweight, it is not clear why not to consider CIFAR-100/ImageNet as in-distribution with corresponding OOD choices (like ImageNet-R or ImageNet-O as OOD for ImageNet).\n\nReferences \n[1] Kotelevskii, Nikita, et al. Nonparametric uncertainty quantification for single deterministic neural network. Advances in Neural Information Processing Systems 35 (2022): 36308-36323.\n[2] K. Lee, K. Lee, H. Lee, and J. Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. Advances in neural information processing systems, 31, 2018.\n[3] J. Mukhoti, 362 A. Kirsch, J. van Amersfoort, P. H. Torr, and Y. Gal. Deep deterministic uncertainty: A new simple baseline. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24384–24394, 2023.\n\n1. Why Lens Depth was chosen and not other statistical depth functions like Half-space depth, Simplicial depth, ...\n2. Why Fermat distance was chosen? One can consider many alternatives. For example, following manifold learning literature one can consider kNN graph constructed with Euclidean distance over embeddings and then computing shortest paths over the resulting graph.\n3. Can you clarify how you implemented \"GDA\"-based methods? Did you use Mahalanobis distance or GMM-density?\n4. 
Why didn't you do the experiments with more complex datasets? Is it due to high computational of LD + Fermat distance approach?\n5. Have you tested effectiveness of reduced LD on more complex dataset than MNIST? Apparently, more complex models may lead to more complex embedding structure and require more points for approximation."
}
] | |
xcqSOfHt4g | Simplified and Generalized Masked Diffusion for Discrete Data | Masked (or absorbing) diffusion is actively explored as an alternative to autoregressive models for generative modeling of discrete data. However, existing work in this area has been hindered by unnecessarily complex model formulations and unclear relationships between different perspectives, leading to suboptimal parameterization, training objectives, and ad hoc adjustments to counteract these issues. In this work, we aim to provide a simple and general framework that unlocks the full potential of masked diffusion models. We show that the continuous-time variational objective of masked diffusion models is a simple weighted integral of cross-entropy losses. Our framework also enables training generalized masked diffusion models with state-dependent masking schedules. When evaluated by perplexity, our models trained on OpenWebText surpass prior diffusion language models at GPT-2 scale and demonstrate superior performance on 4 out of 5 zero-shot language modeling tasks. Furthermore, our models vastly outperform previous discrete diffusion models on pixel-level image modeling, achieving 2.75 (CIFAR-10) and 3.40 (ImageNet 64x64) bits per dimension that are better than autoregressive models of similar sizes. | https://openreview.net/pdf/af16c0e21b31a4aa92236ff91bd4af0bfda1a2c9.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "HMh3lS7uGG",
"review_text": "This paper proposes a new framework for masked diffusion models for generative modeling of discrete data. Masked diffusion models offer an alternative to autoregressive models for discrete data but have faced challenges due to complex formulations and unclear relationships between different approaches. This paper presents a simplified and generalized framework to address these issues, enhancing the performance and training of masked diffusion models.\n\nThe key contributions includes:\n\n1. Simplification of Model Formulation: The paper establishes properties for the forward process and its time reversal using elementary arguments, provides a simple expression for the Evidence Lower Bound (ELBO), demonstrating it as a weighted integral over time of cross-entropy losses, and shows invariance properties similar to continuous space diffusions.\n\n2. Re-derivation of Training Objectives: The paper demonstrates how various previously proposed discrete diffusion training objectives can be derived from the ELBO objective by altering parameterization, relaxing constraints, or modifying loss weighting.\n\n3. Performance Improvements: The paper demonstrates state-of-the-art likelihood and zero-shot transfer results on text and image tasks using the proposed ELBO objective.\n\n4. Generalized Masked Diffusion Model: The paper proposes a generalized masked diffusion model that allows state-dependent masking schedules, further improving predictive performance on test likelihoods.\n\n1. The paper makes a notable contribution to the field of generative modeling for discrete data by introducing a simplified and generalized framework for masked diffusion models.\n2. The quality of the paper is reflected in the thoroughness of its methodology and the robustness of its experimental validation.\n3. The paper is well-written and clearly structured, making it accessible to both experts and those new to the field.\n4. The significance of the paper lies in its potential to impact a wide range of applications in generative modeling for discrete data.\n\n1. While the paper provides a robust theoretical foundation, there could be more emphasis on practical applicability. The paper could benefit from additional practical guidelines for implementing the proposed framework, such as more detailed pseudocode and specific implementation challenges.\n2. The experimental results presented are strong, but the range of tasks and datasets could be expanded, such as VQ-Diffusion [1] for token-based text-to-image.\n3. I am unfamiliar with diffusion models for text generation. For image generation, the paper has reported likelihood results, missing some other common metrics, such as FID and IS.\n\n[1] Vector Quantized Diffusion Model for Text-to-Image Synthesis\n\nPlease see weaknesses."
},
{
"confidence": 5,
"rating": 6,
"review_id": "KTr6AwozjU",
"review_text": "The paper simplifies the mathematical formula for the absorbing state diffusion process. By doing so, the authors derive a continuous-time ELBO for masked diffusion models. Their method, MD4, achieves better perplexity scores than SEDD on text8 and zero-shot perplexity on numerous datasets.\n\nSimplifies the complex mathematical formulations for the absorbing state diffusion for D3PM.\n\nWeaknesses:\n\n1. Weak empirical results\n 1. The zero shot numbers for D3PM in Table 1 look fishy. There are only 2 differences between Md4 and Absorbing State D3PM:\n 1. Mathematical simplification. In the discrete case (Eqn. 6), even though MD4 features a Simplified functional form for the ELBO, it shouldn't give it any performance benefits in terms of perplexity since it is mathematically equivalent to D3PM.\n 2. The improvement in ELBO could be because of the continuous time formulation. However, VDM [1] has shown that for gaussian diffusion, improvement from discrete (T=1000) to continuous time (T = $\\infty$) barely improves the likelihood by less than 1%. For this reason, I request the authors to perform eval on an already trained model and report the perplexity numbers on text8 or OWT using Eqn (6) with T=100, 1000, 10000. If the numbers reported for D3PM in Table 1 are indeed correct, and if the entire improvement is coming from the continuous time formulation, then the discrete time MD4 should get a number that's comparable to D3PM's zero shot ppl numbers. \n \n Questions: How did they retrain D3PM? Did they use the same transformer backbone as MD4? Did they use the same model size and data pre-processing scheme? Did they use uniform state or absorbing state diffusion process? The authors need to clarify this.\n \n 2. CIFAR10 Experiments. The AR baselines use old transformer models hence the comparison isn't quite fair. Current SOTA diffusion models on Imagenet 32 achieve a NLL of 2.55 [2] which is far better than the absorbing state diffusion models. So, I'm unsure about the takeaway from Table 3. In the conclusion section, the authors claim that \"… on text and image data, the resulting masked diffusions outperform existing discrete and continuous diffusion models …\" which is factually incorrect given that their method largely underperforms against gaussian diffusion [1, 2].\n2. Limited evaluation of GenMD4. The authors mention that GenMD4 performs poorly on zero-shot tasks. I request the authors to quantify this poor performance by providing \n 1. Validation ppl numbers on OWT\n 2. zero-shot ppl numbers.\n\n[1] Kingma, D., Salimans, T., Poole, B. and Ho, J., 2021. Variational diffusion models. *Advances in neural information processing systems*, *34*, pp.21696-21707.\n\n[2] Sahoo, S., Gokaslan, A., Sa, C., Kuleshov, V., 2024. Diffusion Models With Learned Adaptive Noise. arXiv:2406.07524\n\n1. Clarification on D3PM experiments in Table 1 as mentioned in the \"weaknesses\" section in the reviews.\n2. Why did the authors decrease the dropout to 0.02 for OWT experiments and not set it to 0? Diffusion models are heavily regularized due to the randomness in the input to the model and oftentimes don't require additional regularization such as dropout. Hence, an intuitive or an empirical explanation would be helpful."
},
{
"confidence": 4,
"rating": 7,
"review_id": "4Lcb9YecYh",
"review_text": "The paper proposes a streamlined and generalized framework for masked diffusion models, addressing the complexities and inefficiencies of existing models, including those based on Score Entropy Discrete Diffusion (SEDD). It introduces a continuous-time variational objective for masked diffusion models, simplifying the evidence lower bound (ELBO) to a weighted integral of cross-entropy losses. Additionally, the paper presents state-dependent masking schedules, enhancing the flexibility and performance of these models. The proposed methods demonstrate state-of-the-art results in text and image tasks, significantly improving likelihood and zero-shot transfer performance.\n\n- The paper offers a novel theoretical formulation of the continuous-time variational objective for masked diffusion models, simplifying the training process and ensuring consistency between forward and reverse processes.\n- The introduction of state-dependent masking schedules provides a more adaptable approach, catering to the specific characteristics of the data and improving model performance.\n- The proposed methods achieve state-of-the-art performance in both text and image generative tasks, significantly enhancing likelihood and zero-shot transfer capabilities.\n- By reducing the ELBO to a weighted integral of cross-entropy losses, the paper makes the training and understanding of masked diffusion models more accessible and potentially more stable.\n- The paper includes comprehensive experimental validation on various datasets, demonstrating the robustness and superiority of the proposed methods.\n\n- Despite the theoretical simplifications, the practical implementation of state-dependent masking schedules can still be complex and computationally demanding. Specifically, obtaining the starting x_T is challenging, and since the sampling process lacks stochasticity, sampling cannot be done from the completely masked state.\n- The state-dependent models have a tendency to overfit to dataset statistics, which can limit their effectiveness in zero-shot transfer tasks.\n- While the paper demonstrates superior performance, a more detailed comparative analysis with other state-of-the-art methods, particularly regarding computational efficiency and training times, would provide a clearer picture of the advantages.\n\n- Could the authors provide more insights into the practical challenges faced during the implementation of the state-dependent masking schedules?\n- How does the proposed model ensure consistency between the forward and reverse processes, and how does this impact training stability compared to SEDD? \n- Could the authors provide a detailed and separate description of the training and sampling algorithms, similar to what is provided in the Appendix of the SEDD paper, to better and more easily understand the proposed method?\n- How sensitive is the proposed method to hyperparameter choices? Do multiple runs with the same hyperparameters yield consistent performance?"
},
{
"confidence": 5,
"rating": 7,
"review_id": "MJWNOiHah6",
"review_text": "Summary: This paper introduces a framework for masked diffusions that consolidates previous research on the topic and organizes it into a cohesive structure. The authors also present a generalized model within this framework, which enables the use of state-dependent masking schedules and optimization of scheduler parameters.\n\n1. The GenMD4 framework offers a valuable approach to optimize the forward process. In earlier studies, forward processes were typically manually designed and set within the model. However, GenMD4 adjusts the forward distribution to align with the estimated distribution, thereby improving the forward process. This innovation may serve as a source of inspiration for developing more effective forward processes.\n\n2. This paper summarizes previous formulations of masked diffusion models and establishes the connections between them.\n\n1. In line 90. The handling of $p(x _0|x _{t(1)})$ could be enhanced. Assuming $p(x _0|x _{t(1)}) \\propto q(x _{t(1)} | x _0)$ is equivalent to assuming that $q(x _0)$ is uniformly distributed. In reality, it should be treated the same as other $p(x _s|x _t)$.\n2. In line 114. When discussing multidimensional data, it is not straightforward to assume that the backward process factorizes across tokens. This is because the distribution $p(x _0)$ does not factorize across tokens. Achieving factorization necessitates a small time step dt, which may not be easily observable. Additionally, in the previous single-token scenario, dt dose not need to be small, indicating that one step is sufficient to model the distribution $p(x _0 | x _1)$. This aspect is crucial for multidimensional data and should be emphasized in a fundamental paper like this.\n3. In append F. The presence of a non-zero $\\alpha _1$ may result in the \"medium brightness problem\" [1]. However, there is no singularity when $\\alpha _1$ is zero if log-SNR is not introduced, and the time interval can be extended to [0, 1].\n4. In append G2. When applied to masked diffusion, $R_{kj}$ is zero when $ j \\ne k$ and $j \\ne m$. Given that $R_{kk} + R_{km} = 0$, $\\tilde{q}$ can only take on one value (m), resulting in no additional variance.\n5. In image experiments, MD4 employs masked noise, while $\\tau$LDR uses Gaussian noise. We recommend conducting experiments with the same noise scheduler to demonstrate conclusively that MD4 is superior. If the goal of this paper is solely to establish that masked noise outperforms Gaussian noise, we recommend explicitly stating this claim. Additionally, we advise detailing the sampling method, as variations in methodology can influence the quality of generated samples.\n\n[1] Common Diffusion Noise Schedules and Sample Steps are Flawed, Lin et al., 2024\n\n1. GenMD4 has not been tested on image datasets. Could you please share the results of GenMD4 when applied to image datasets?\n2. Since introducing GenMD4 results in additional variance, what if all tokens share the same w (referred to as \"simplified-GenMD4\")? This would result in less variance. Given that GenMD4's performance is close to MD4, can simplified-GenMD4 achieve the same BPC?"
}
] | |
xcF2VbyZts | SocialGPT: Prompting LLMs for Social Relation Reasoning via Greedy Segment Optimization | Social relation reasoning aims to identify relation categories such as friends, spouses, and colleagues from images. While current methods adopt the paradigm of training a dedicated network end-to-end using labeled image data, they are limited in terms of generalizability and interpretability. To address these issues, we first present a simple yet well-crafted framework named SocialGPT, which combines the perception capability of Vision Foundation Models (VFMs) and the reasoning capability of Large Language Models (LLMs) within a modular framework, providing a strong baseline for social relation recognition. Specifically, we instruct VFMs to translate image content into a textual social story, and then utilize LLMs for text-based reasoning. SocialGPT introduces systematic design principles to adapt VFMs and LLMs separately and bridge their gaps. Without additional model training, it achieves competitive zero-shot results on two databases while offering interpretable answers, as LLMs can generate language-based explanations for the decisions. The manual prompt design process for LLMs at the reasoning phase is tedious and an automated prompt optimization method is desired. As we essentially convert a visual classification task into a generative task of LLMs, automatic prompt optimization encounters a unique long prompt optimization issue. To address this issue, we further propose the Greedy Segment Prompt Optimization (GSPO), which performs a greedy search by utilizing gradient information at the segment level. Experimental results show that GSPO significantly improves performance, and our method also generalizes to different image styles. The code is available at https://github.com/Mengzibin/SocialGPT. | https://openreview.net/pdf/70105568d7ff06cd079c119532147dd737450a1c.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "49Gle3BO6p",
"review_text": "Paper proposes a pipeline method of orchestration pre-trained foundation models to solve the social relationship classification problem. It uses vision models to extract information in text about the scene in the form of caption. Relevant information, i.e. age, gender, general description, of individual persons and objects are also extracted in text form via instance segmentation + masking + captioning. The generated text are then further converted to Social Story with a LLM. With the novel prompt engineering method, GSPO, another LLM will then generate the social relationship from the Social Story.\n\nExperimental results on the challenging benchmarks, PIPA and PISC, indicates its strong performance with zero-shot setup. Extensive ablation studies were also done to evaluate the contributions of the various components. In particular, it clearly showed the merits of the \"Social Story\" design.\n\nPaper proposed a novel method to solve the challenging social relationship classification problem. The proposed method cleverly combine several state-of-the-art foundation models in a logical, intuitive, and yet non-obvious design to achieve state-of-the-arts experimental results.\n\n1. Besides the clever design of the pipeline, the direct technical contributions is slightly on the weaker side as there is no obvious technical breakthrough. The proposed GSPO appears to be the main new technique introduced. However, I am not an expert in this area and will defer to other reviewers on its technical novelty and merits.\n\n2. (minor) The use of the generic semantic segmentation model (SAM) may not be the optimal choice. There are much stronger Human Instance Segmentation methods which can replace the paper's custom SAM method. Such methods are specifically trained on person dataset to handle various challenging scenarios unique to human segmentation, e.g. heavy occulsion, human-like objects (e.g. maniquinn).\n\nLing, E., Huang, D., & Hur, M. (2022). Humans need not label more humans: Occlusion copy & paste for occluded human instance segmentation. BMVC.\n\n1. Will/has the authors consider using pairwise attributes, besides the individual person attributes. E.g. relative age between pairs (older/younger), same/different clothings for the model? \n\n2. Why are only 2 attributes (age/gender) used for the person instance? In prior works, other attributes such as wearing uniform are important attributes for certain type of social relationship, e.g. team members?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "pEZ8r6aBRK",
"review_text": "This paper introduces SocialGPT, a modular framework for social relation reasoning that integrates the perception capabilities of Vision Foundation Models (VFMs) with the reasoning capabilities of Large Language Models (LLMs). To optimize prompts, the authors propose GSPO, a segment-based optimization algorithm for automated prompt tuning. Extensive empirical results validate the effectiveness of SocialGPT both quantitatively and qualitatively. GSPO consistently enhances SocialGPT's performance across various LLMs, and case studies demonstrate the framework's generalizability and interpretability.\n\n- The paper is well-organized, with a logical flow and clear explanations of each step.\n- The proposed SocialGPT framework innovatively combines perception from VFMs with reasoning from LLMs, achieving competitive zero-shot performance and offering potential explanations for its reasoning process.\n- Extensive experiments, ablation studies, and case studies comprehensively evaluate the framework's effectiveness.\n\nSection 3.2 mentions that using precise coordinates can pose challenges for LLM numerical reasoning. However, it appears in Figure 3 that the objects' positional relations in the social story are inferred from numeric coordinates provided in the dense captions with symbols. Does this coordinate-based inference lead to similar numerical reasoning challenges? Additionally, how are relative positional relations conveyed here using referral symbols?\n\nPlease see the weaknesses section above."
},
{
"confidence": 4,
"rating": 4,
"review_id": "C3PEiWKQXi",
"review_text": "This paper proposes a framework called SocialGPT for social relation reasoning, which combines vision foundation models and large language models. A greeedy segment prompt optimization methods is also proposed to prompt LLM. Experimental results show the effectiveness of the proposed method.\n\n---The paper is well organized and written. \n\n---The idea of combining VFMs and LLMs is reasonable.\n\n--- The paradigm of using VLMs for perceiving and LLMs for reasoning is currently a common solution for multimodal tasks. The main difference of this paper seems to be the use of a generated social story as the representation of visual content. As stated by the authors, LLMs perform best when working with human-readable natural language and often struggle with arithmetic reasoning tasks, which is why they design an additional process to generate social stories. However, the generation of social stories is also done by LLMs, which also suffer from the above difficulties. \n\n--- The authors propose a candidate set consisting of alternative prompts for each segment and select the best-performing prompt from their combination. The final prompt is obtained by selection rather than generation, which limits the upper bound of the performance on the manually collected candidate set. \n\n--- The function of SAM is to distinguish individuals in the image and obtain their coordinates. However, in the social story generation phase, the LLM (Large Language Model) discards the coordinates, retaining only the semantic information and losing the positional information. Conducting social relationship reasoning purely based on semantics may be insufficient. For example, in Figure 2, the social relationship is identified as a sibling relationship (brother and sister), but there are two boys in the image, both fitting the given description of \"stands out in his vibrant red and green striped pajamas,\" making it unclear which individual P1 refers to.\n\n--- Is the design of using LLMs for social story generation optimal, and why? Also, have the authors tried other approaches to generate social stories from dense captions instead of using LLMs?\n\n--- In the part of reasoning with large language models, the social relation reasoning prompt is artificially divided into four partitions: System, Expectation, Context, and Guidance, but the motivation and reasonableness of such a design is not elaborated in the paper."
},
{
"confidence": 3,
"rating": 4,
"review_id": "AZl67WqVdY",
"review_text": "This manuscript introduces SocialGPT, a modular framework designed to enhance social relation reasoning by combining Vision Foundation Models (VFMs) and Large Language Models (LLMs). SocialGPT utilizes VFMs to convert image content into a textual social story, followed by LLMs performing text-based reasoning. The paper further introduces the Greedy Segment Prompt Optimization (GSPO) algorithm to optimize prompts for LLMs, addressing the challenges of long prompt optimization. The proposed method achieves competitive zero-shot results on social relation recognition tasks and offers interpretable answers.\n\n- The GSPO algorithm provides an efficient method for optimizing long prompts, significantly improving the performance of LLMs in social relation reasoning tasks.\n- SocialGPT achieves competitive zero-shot results on PIPA and PISC datasets, demonstrating the effectiveness of the proposed approach without additional model training.\n- By leveraging LLMs for reasoning, SocialGPT can generate language-based explanations for its decisions, enhancing the interpretability of the results.\n\n- The approach involves substantial computational resources for both the perception and reasoning phases, potentially limiting accessibility and scalability for some users.\n- The experiments, while promising, are primarily conducted on two datasets. Further testing on a broader range of datasets and tasks would strengthen the generalizability of the findings.\n- The method assumes that the visual context provided by VFMs is sufficiently detailed and accurate, which might not always hold true in diverse real-world scenarios.\n- The compatibility of the proposed method seems to be limited; Table 5 implies that LLaMA2-based SocialGPT performs very poorly compared to Vicuna. The proposed framework may work only for specific types of models.\n\n- How can we evaluate generated social stories? It would be great if the authors could show how GSPO improves the quality of generated social stories.\n- How GSPO can be performed without the ground-truth answer? The current formulation in section 4 seems to require the ground-truth to define the loss objective.\n- What are the differences between the social story of SocialGPT and social relationships used in baselines? I feel that Image-based text explanation is not new."
}
] | |
xbuaSTqAEz | Customized Multiple Clustering via Multi-Modal Subspace Proxy Learning | Multiple clustering aims to discover various latent structures of data from different aspects. Deep multiple clustering methods have achieved remarkable performance by exploiting complex patterns and relationships in data. However, existing works struggle to flexibly adapt to diverse user-specific needs in data grouping, which may require manual understanding of each clustering. To address these limitations, we introduce Multi-Sub, a novel end-to-end multiple clustering approach that incorporates a multi-modal subspace proxy learning framework in this work. Utilizing the synergistic capabilities of CLIP and GPT-4, Multi-Sub aligns textual prompts expressing user preferences with their corresponding visual representations. This is achieved by automatically generating proxy words from large language models that act as subspace bases, thus allowing for the customized representation of data in terms specific to the user’s interests. Our method consistently outperforms existing baselines across a broad set of datasets in visual multiple clustering tasks. Our code is available at https://github.com/Alexander-Yao/Multi-Sub. | https://openreview.net/pdf/e80cabcafca9dea1dfdf22930f209be7f322a75a.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "oVi8tszObZ",
"review_text": "This paper incorporates a multi-model subspace proxy learning (Multi-Sub) to design a novel end-to-end multiple clustering approach and utilizes the synergistic capabilities of CLIP and GPT-4 to align textual prompts expressing user preferences with corresponding visual representations. The main contributions of Multi-Sub can be summarized as follows:\n1. Capturing user’s clustering interest: Existing works struggle to adapt to diverse user-specific needs in data grouping. To overcome this limitation, Multi-Sub explicitly captures a user’s clustering interest by learning the desired clustering proxy under a user’s interest and aligning textual interest with corresponding visual features.\n2. Simultaneous optimized framework: The previous methods separated the representation learning and clustering stages. Different from them, Multi-Sub obtains both the desired representations and clustering simultaneously, which significantly improve the clustering performance and efficiency.\n3. Extensive experimental validation: Extensive experiments on all public multiple clustering tasks demonstrate that Multi-Sub outperform other methods. Moreover, a series of ablation studies further verify the effectiveness of Multi-Sub.\n\n1. In real world, data may have multiple aspects that they can be grouped into different clusters. However, existing methods solely consider a single partition. So, it is meaningful to propose an effective algorithm to overcome this problem.\n2. The authors leveraged large language models (LLMs), including GPT-4 and CLIP, to align image and textual representations in the same subspace. Then, multi-modal subspace proxy learning is introduced to allow for the customized representation of data in terms specific to the user’s interests.\n3. Experimental results on public datasets show that the Multi-Sub method has a significant improvement, indicating the effectiveness of the propose method.\n\n1. To change the two-stage learning approach of previous works, Multi-Sub aims to learn representation and clustering simultaneously. However, Multi-Sub employs a two-phase iterative approach to align and cluster images in training process, including (1) Phase I: Learning and Alignment; (2) Phase II: Clustering and Optimization. I wonder if this is another form of two-stage task.\n2. The description of Clustering Loss is not very clear in Section 3.4, how to determine that samples belong to the same class? By pseudo-label? Where did the pseudo-label come from?\n3. In this paper, the authors introduced large language models (LLMs) to learn representations and bridge the gap of textual and image features. But does the direct use of a pre-trained large language model introduce a priori information about the category, which can lead to unsupervised scenarios being corrupted?\n\nPlease refer to the Weaknesses."
},
{
"confidence": 5,
"rating": 7,
"review_id": "oeQFctyi24",
"review_text": "This paper presents an innovative approach for addressing the limitations of existing multiple clustering methods. By leveraging the synergistic capabilities of CLIP and GPT-4, Multi-Sub aligns textual prompts with visual representations to cater to diverse user-specific clustering needs. This method introduces a novel multi-modal subspace proxy learning framework, which automatically generates proxy words from large language models to represent data in terms specific to user interests. The experimental results demonstrate that Multi-Sub consistently outperforms existing baselines across various datasets. Overall, I believe this paper makes a substantial contribution to the field of deep clustering and holds significant practical application value.\n\nThe paper offers several notable strengths that contribute to its overall impact and significance in the field of multiple clustering: \n1.\tThe integration of CLIP and GPT-4 for multi-modal subspace proxy learning is novel and effectively addresses the limitations of traditional multiple clustering methods.\n2.\tMulti-Sub excels in capturing and responding to diverse user interests, providing tailored clustering results without requiring extensive manual interpretation. Moreover, the performance gains come at a low cost and seem relatively easy to achieve. \n3.\tThe writing is clear and easy to follow. The figures are well-drawn, allowing for a quick understanding of the research motivation and methodological design.\n4.\tExtensive experiments on a wide range of publicly available datasets demonstrate the robustness and generalizability of the proposed method.\n\nDespite its strengths, there are some areas where the paper could be improved to enhance its clarity and applicability:\n1. Although the paper mentions the hyperparameters used, a more detailed analysis and discussion on the sensitivity of the method to these parameters would be beneficial.\n2. Given the method's iterative nature and the use of large models, there is a risk of overfitting, especially on smaller datasets. I am curious whether regularization techniques were used to address this issue?\n3. Table 3 compares the impact of different text encoders on performance. Clearly, there are significant performance differences when using different encoders, and the authors have indeed analyzed this issue. However, I believe the reasons behind this phenomenon could be explored in depth. Intuitively, given that the input text is quite simple, the overall performance should not be particularly sensitive to the choice of text encoder.\n\n1. Although the paper mentions the hyperparameters used, a more detailed analysis and discussion on the sensitivity of the method to these parameters would be beneficial.\n2. Given the method's iterative nature and the use of large models, there is a risk of overfitting, especially on smaller datasets. I am curious whether regularization techniques were used to address this issue?\n3. Table 3 compares the impact of different text encoders on performance. Clearly, there are significant performance differences when using different encoders, and the authors have indeed analyzed this issue. However, I believe the reasons behind this phenomenon could be explored in depth. Intuitively, given that the input text is quite simple, the overall performance should not be particularly sensitive to the choice of text encoder."
},
{
"confidence": 4,
"rating": 5,
"review_id": "dH43xtx4CS",
"review_text": "The paper is about Multiple Clustering, which is an interesting topic. The authors propose a novel end-to-end multiple clustering approach that incorporates a multi-modal subspace proxy learning framework. The paper is well written and well organized. However, there are several concerns in the current version of the paper that addressing them will increase the quality of this paper.\n\n1 The authors' idea of using large models to aid clustering is novel.\n2 The paper is clearly structured and easy to understand.\n3 The paper has sufficient experiments to support its point of view.\n\n1 The authors point out that different clustering results can be given for different customization needs of users. Then it will bring several associations (not necessarily accurate): a. What should be done if the user's demand is exactly opposite to the potential clustering distribution? b. The experiments do give different clustering results for different demand types, if the user proposes a new type of demand, can the model also adaptively adjust?\n2 Figure 2 is well drawn but could be further improved, some icons and fonts need to be adjusted.\n3 The authors point out that their model is capable of outputting clustering results directly, and then there should be a corresponding formula to represent this. In addition, it is hoped that the authors will discuss further why, if it is not a difficult task to output clustering results directly, few previous methods have done so.\n4 Authors should add details about the dataset, such as data size, feature types, etc.\n\nConsidering that the authors did not add an appendix, are there any other discussions or experiments?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "4M5Chj2Hft",
"review_text": "This paper introduces an end-to-end multi-clustering method that integrates a multimodal subspace proxy learning framework. It combines text prompts expressing user preferences with corresponding visual representations to achieve clustering based on user interests.\n\n1.The clustering task, driven by user interests, aligns better with user preferences and is more applicable to real-world scenarios.\n2.The experimental results are promising, and the methodology is clear and logical.\n\n1.The contributions of the paper are not very clear. At first glance, it appears to merely combine CLIP and GPT, lacking innovative architecture.\n2.The baseline methods chosen for comparison are neither cited nor introduced.\n3.Section 5 discusses only limitations, lacking a discussion on broader impacts.\n\nThe evaluation metrics mentioned in the paper require comparing results with ground truth values. How were the multiple clustering ground truth values in the dataset obtained? How is the accuracy of these ground truth values ensured?"
}
] | |
xavWvnJTST | Feedback control guides credit assignment in recurrent neural networks | How do brain circuits learn to generate behaviour? While significant strides have been made in understanding learning in artificial neural networks, applying this knowledge to biological networks remains challenging. For instance, while backpropagation is known to perform accurate credit assignment of error in artificial neural networks, how a similarly powerful process can be realized within the constraints of biological circuits remains largely unclear. One of the major challenges is that the brain's extensive recurrent connectivity requires the propagation of error through both space and time, a problem that is notoriously difficult to solve in vanilla recurrent neural networks. Moreover, the extensive feedback connections in the brain are known to influence forward network activity, but the interaction between feedback-driven activity changes and local, synaptic plasticity-based learning is not fully understood. Building on our previous work modelling motor learning, this work investigates the mechanistic properties of pre-trained networks with feedback control on a standard motor task. We show that feedback control of the ongoing recurrent network dynamics approximates the optimal first-order gradient with respect to the network activities, allowing for rapid, ongoing movement correction. Moreover, we show that trial-by-trial adaptation to a persistent perturbation using a local, biologically plausible learning rule that integrates recent activity and error feedback is both more accurate and more efficient with feedback control during learning, due to the decoupling of the recurrent network dynamics and the injection of an adaptive, second-order gradient into the network dynamics. Thus, our results suggest that feedback control may guide credit assignment in biological recurrent neural networks, enabling both rapid and efficient learning in the brain. | https://openreview.net/pdf/9dab0a630262d4f5546036f7479bf26afc15556b.pdf | [
{
"confidence": 3,
"rating": 7,
"review_id": "ZTTpQzwRlO",
"review_text": "The authors explore the relationship between feedback control and learning with recurrent neural networks (RNN). Specifically, they enforce a control signal onto a RNN that is used to generate a trajectory for a outreaching task, and then propose to use local learning rules on the neurons in the RNN. They show that with feedback control the network can adapt faster to perturbations, of the task and show that the local (in time) gradients are better aligned with the global ones.\n\nThe claims are all very reasonable and well illustrated. I this is the first time such feedback-based learning used in proper control settings, which is surprising given that it is based on control theory.\n\nMain problem:\nMy main concern is that I the task chosen consists on bringing a system to a desired static target, so it is possible that there is no \"out of equilibrium dynamics\", rather the learning simply consists on bringing the \"arm\" to the required target and it just so happens that the shortest trajectory aligns with the velocity profile. While it could be that the trajectory is indeed learned (and with some implicit or explicit regularization it should be the case), the current task is not conclusive. If the point is to really learn a trajectory, the authors should have picked a task where the trajectory is a bit more complex than going to equilibrium. Maybe a limit cycle? Otherwise the work might be a minor modification of Meulemans et al.\nAlso, I fail to see the \"biological circuits\". If we are talking about recurrent neural networks, this is fine, but usually when we talk about circuits in biology we would refer to cell types (and this has a lot of constraints). In fact the authors themselves state that they are agnostic to the biological implementation, which is hardly in compatible with the title. I would replace it by recurrent neural networks.\n\nOther issues:\n- The key findings are not clear in the introduction. The term \"inference learning\" is only used there and in one of the figure, but it is not clearly defined. If the authors mean that feedback control can train an RNN then this has already been shown. For the second finding, \"increased accuracy of approximate local learning rules\" it would be better stated as increased accuracy WITH local learning rules (or something similar). For the third, the second order gradient is not really injected (this would suggest that the gradient is imposed on purpose); rather, the feedback control is implicitly related to second order optimization methods.\n- Line 142: it seems natural that if the network is perturbed from its trajectory the feedback would be stronger to compensate for the perturbation. I don't see why this is \"suggested\". Also, the sentence is badly written \"suggest that the during task... activity is increasingly by feedback\").\n- LInes 164 and 165. The authors say that \" using a local learning rule without feedback control show an increasing performance gap compared to those trained with feedback control and BPTT\". The sentence could be interpreted as if the network is trained with feedback control AND BPTT (combined). A better wording would replace AND by OR \n- In 3.4 it is a bit hard to follow. It seems as if the authors are using an eligibility trace to train the RNN through BPTT. But this intermediate step might not be real BPTT as it is commonly used. \n\n\nLiterature issues:\n- The work of Meulemans et al 2022b is credited with alleviating the temporal storage required by BPTT. 
While they did do that (and it is a good paper), I think that they based the memory load decrease on previous work (Bai et al., Deep Equilibrium models 2019), which if memory serves does use BP. The logic of my comment is that by training the equlibrium point of the network one can avoid the memory load, regardless of the training method.\n- The connection between feedback-based learning and second order optimization has been is very closely related to Meulemans, et al. \"A theoretical framework for target propagation\" 2020. That paper mentions target propagation, but it is very similar to feedback based learning (as the authors probably can infer).\n- This is a personal opinion, the authors do not need to take it into consideration: The biological plausibility claims seem to rely on the locality of the learning rules. While it's a requirement that learning rules should not break the laws of physics (or in this case basic biological knowledge), learning rules should at least have some basis on biology, which I am missing here. A brief mention of why would one think that the learning rules are close to biological ones would be welcome. My guess for this feedback-based work would be something with a temporal component such as temporal hebbian rules (ex: Aceituno et al. \"Learning cortical hierarchies with temporal Hebbian updates.\" 2020)\n\nI am not 100%, but I think that the work of Gilra and Gerstner had a very similar architecture. Could you mention what are the main differences?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "xJ8VELqo6j",
"review_text": "Feedback controllers are ubiquitous in neuroscience but their functions are not fully understood. This paper studies how feedback control interplays with biologically plausible online learning on a standard motor control task. The authors show that:\n- feedback control enables to adapt to task variations without any plasticity, by approximating the gradient of the loss with respect to the hidden states.\n- it makes tractable approximations to RTRL more reliable by shrinking the recurrent Jacobian eigenvalues.\n- it incorporates some second-order information in the weight updates, leading to faster learning.\n\nThe paper studies an important understudied question, that is the interplay between feedback and learning.\nThe paper is overall well-written and is easy to read. The message is clearly delivered.\nThe experiments are carefully designed and well executed, and support the claims of the paper.\n\nOverall, the paper will be an insightful read to the community.\n\nWhile the experiments are overall well executed, there are a few points that should be improved to make the paper's claims more robust:\n- In the appendix, it is written that the learning rate is taken to be constant. To make claims about e.g. learning speed, the optimizer, in particular its learning rate, has to be tuned.\n- Figure 5b: it is not clear from this experiment that RFLO-c contains some second-order information. The alignment with the 2nd-order gradient result is not convincing as the estimated gradient is more aligned to the first-order gradient than to the second-order one. This experiment needs to be improved for it to support its claim. The BPTT-c baseline that I mention below may be a good starting point for further analysis as the gradient of a \"controlled loss\" (which is not the case for RFLO).\n- A BPTT-c/RTRL-c baseline would be an interesting add to disambiguate between the role of feedback control and approximate gradient estimation through RFLO. This baseline would include feedback control in the recurrent neural network dynamics and optimize for the MSE loss at the output. This would be useful in e.g. Fig3b and Fig5b.\n\n- l98-99: can the author clarify the link between the use of a local learning rule and the rapid adaptations shown in neuroscience studies?\n- Fig1: a, b, c legends are missing in the figure.\n- l140-141: \"that\" missing after \"feedback control,\"? typo \"outout\".\n- Fig2: \"approximate inference learning\": what do the authors mean by inference learning? I could not find any definition.\n- l167-168: \"does\" missing + typo for \"adaptatation\".\n- Appendix A.3: the authors mention that they use Adam with weight decay. The standard practice is to use AdamW instead (c.f. the AdamW paper for more detail). Can the author confirm that they are using AdamW?"
},
{
"confidence": 2,
"rating": 5,
"review_id": "THjocgjjx9",
"review_text": "Recent work has shown that feedback signals can be critical to rapid adaptation in control tasks, and may explain how biological intelligence can make rapid adjustments when solving such tasks. This paper studies how feedback control achieves this. To do so, the authors train an RNN enhanced with feedback control on a common control task, and study how the feedback signal lead the network to achieve more rapid adjustments when perturbations are introduced. The 3 main findings are that the feedback signals align well with the optimal global gradient of the error, that they help the network better weigh current information (vs. less relevant past information) during perturbations, and that they indirectly inject second-order information to the RNN.\n\n- This work focuses on improving the theoretical understanding of an important method. Given that our understanding of many deep learning methods are woefully inadequate, such work is critically important for the field's development.\n- The method and the results are clearly presented, the figures are excellent, and the writing is easy to follow.\n\nI am not familiar with feedback control and motor tasks; hence, I ask the AC to please take this into consideration to appropriately weigh my review. My remarks on the methods could be wrong or trivial. That said, I'll do my best to provide feedback.\n\n- Several sections of the paper seem to just present results from previous work, including section 3.1 and the entirety of the methods section. This makes the contributions of this paper seem rather thin.\n\n- I may be missing something, but some of the results seem minimally surprising. For example, in section 3.2, the authors state \"...the feedback contribution to the overall network output increases during perturbation.\" But how could it not increase during perturbation? Isn't the network explicitly trained to use the feedback information to make corrections during perturbation? The same goes for the alignment between the feedback signal and the optimal global gradient, and the indirect introduction of second-order information-- is it not by design that the network use feedback to make corrections, and thus the larger the correction needed (i.e. the larger the optimal gradient) the larger the feedback signal? And is it not by design that second-order information gets introduced via the recurrent connections that enables the network to \"save\" information from previous timesteps in the hidden state?\n\n- The authors claim that feedback control guides credit assignment in biological circuits, but uses BPTT during the pretraining phase of the RNN, which they acknowledge is not biologically plausible. \nIt seems to me that backprop is still doing much of the heavy lifting in terms of solving credit assignment, thus I'm not sure this claim is sufficiently justifiable. A more defensible claim given the current results may be that feedback control may guide motor adaptation in biological circuits.\nSimilarly, some parts of the intro and abstract strongly suggest that the presented method would perform credit assignment without suffering from the biological implausibilities of backpropagation (e.g. 
the abstract sets up the problem as \"backpropagation is known to perform accurate credit assignment of error, how a similarly powerful process can be realized within the constraints of biological circuits remains largely unclear\"), yet the actual method relies heavily on backpropagation.\n\n- The experiments are performed on a single task, using a small single layer RNN with 400 hidden units, and therefore it's unclear whether the findings would scale to other tasks and larger architectures. Given that the primary goal of this paper is to improve understanding of an existing learning algorithm, and most of the analysis are performed via empirical testing, I believe it's important for the authors to demonstrate that their conclusions are robust over a wider range of tasks and hyperparameters/architectures.\n\n- How does this work relate to hierarchical predictive coding, and to the feedback connections introduced by Hinton in [1] (and further explored by [2])?\n- The learning setting presented in this work seem very similar to the setting of reinforcement learning, which also deals with control tasks and shifting distributions. Do you foresee these same results (i.e. feedback control improves performance) to carry over to some RL tasks? If not, what are the differences that limit these results from applying there?\n\n[1] Hinton, G. (2022). The forward-forward algorithm: Some preliminary investigations. arXiv preprint arXiv:2212.13345.\n\n[2] Ororbia, A., & Mali, A. (2023). The predictive forward-forward algorithm. arXiv preprint arXiv:2301.01452."
},
{
"confidence": 4,
"rating": 7,
"review_id": "mwxnT896na",
"review_text": "The paper studies the effect of feedback control on motor learning in recurrent neural networks, finding that feedback control improves learning performance and better aligns with the true gradient w.r.t. the task.\n\n- Alignment with the true gradient is an interesting result and helps explain why feedback works\n- The authors study alignment from different perspectives (e.g. step-wise/full gradients, Newton method)\n- The task the authors consider is widely used in monkey experiments, therefore it should be possible to adapt the conclusions to real data or use them to guide new experiments\n\n- The training setup is rather limited; it would be interesting to see training done for other tasks and architectures (or RNN sizes).\n- The paper might benefit from some theoretical analysis of why the feedback signal alings with the true gradient, although it’s not clear if that can be easily done.\n\nWhat is the difference between RFLO and RFLO+c? Does the first lack the feedback term in Eq. 1? This should be clearly stated within Sections 2.2-2.4.\n\nLine 141: “outout”\nLine 141: “ is increasingly by”"
}
] | |
xaqPAkJnAS | Beyond Redundancy: Information-aware Unsupervised Multiplex Graph Structure Learning | Unsupervised Multiplex Graph Learning (UMGL) aims to learn node representations on various edge types without manual labeling. However, existing research overlooks a key factor: the reliability of the graph structure. Real-world data often exhibit a complex nature and contain abundant task-irrelevant noise, severely compromising UMGL's performance. Moreover, existing methods primarily rely on contrastive learning to maximize mutual information across different graphs, limiting them to multiplex graph redundant scenarios and failing to capture view-unique task-relevant information. In this paper, we focus on a more realistic and challenging task: to unsupervisedly learn a fused graph from multiple graphs that preserve sufficient task-relevant information while removing task-irrelevant noise. Specifically, our proposed Information-aware Unsupervised Multiplex Graph Fusion framework (InfoMGF) uses graph structure refinement to eliminate irrelevant noise and simultaneously maximizes view-shared and view-unique task-relevant information, thereby tackling the frontier of non-redundant multiplex graph. Theoretical analyses further guarantee the effectiveness of InfoMGF. Comprehensive experiments against various baselines on different downstream tasks demonstrate its superior performance and robustness. Surprisingly, our unsupervised method even beats the sophisticated supervised approaches. The source code and datasets are available at https://github.com/zxlearningdeep/InfoMGF. | https://openreview.net/pdf/e5b9f6af4bcc1edd63aea9284ca2c3aba26fc5b0.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "h6c9IXNyB5",
"review_text": "- This paper presents an information theory approach to obtain a single graph fused from a multiplex graph, which preserves \n - sufficient task-relevant information \n - while removing task-irrelevant noise. \n- A learnable graph augmentation strategy is also developed. \n - The learned graph and representation can be applied to different types of tasks. \n- The effectiveness is supported by extensive experimental results.\n\n- This paper is well-motivated. \n - The authors find that each graph contains much unique task-relevant information, which is ignored by mainstream contrastive learning-based methods.\n- This paper develops multiple graphs non-redundancy principle, which lays the foundation for multiplex graph data process. \n - Two random and generative graph augmentation strategies are accordingly built to capture view-unique task information.\n- The experimental results are promising. \n - The framework demonstrates a clear advantage over existing methods, including advanced supervised approaches, highlighting its potential for broad application.\n- This paper provides the code and all experimental settings for reproducing the results.\n\n- The difference between the existing non-redundancy principle and multiplex graph non-redundancy is unclear. Please clarify it.\n- The proposed InfoMGF-LA runs out-of-memory on MAG data. The reason should be given.\n- It is possible that the proposed method cannot handle real-world large-scale graph. It should be addressed in the future and discussed in the conclusion part.\n- The difference between the proposed method and DGM is unclear.\n\nI list them in **Weaknesses**."
},
{
"confidence": 5,
"rating": 7,
"review_id": "hTcRDf1kRZ",
"review_text": "The paper introduces InfoMGF (Information-aware Unsupervised Multiplex Graph Fusion), a novel framework aimed at addressing the issue of graph structure reliability in Multiplex Graphs. The primary goal is to refine graph structures to eliminate noise and maximize task-relevant information. Theoretical analysis and comprehensive experimental results validate its effectiveness.\n\n1.\tOriginality: The paper addresses a critical gap in Unsupervised Multiplex Graph Learning (UMGL) by focusing on the reliability of graph structures, which is often overlooked in existing research.\n2.\tQuality: The proposed InfoMGF framework effectively refines graph structures to eliminate noise and maximizes both view-shared and view-unique task-relevant information. Theoretical analyses provided in the paper validate the effectiveness of InfoMGF in capturing task-relevant information and improving graph fusion. Extensive experiments demonstrate that InfoMGF outperforms various baselines and even sophisticated supervised approaches in different downstream tasks.\n3.\tClarity: The paper is generally clearly written and well organized.\n\n1.\tScalability: The framework involves several steps. Though the paper provides the complexity analysis in Appendix for each step, it is still unclear what is the overall complexity.\n2.\tReproducibility: The authors share the code for reproducibility. However, I didn’t see the datasets.\n3.\tAccuracy: The authors should check for the few grammatical and spelling errors that occur in the text.\n\nAs above."
},
{
"confidence": 4,
"rating": 7,
"review_id": "5J0sqL1ri5",
"review_text": "The paper introduces InfoMGF, an innovative framework for Unsupervised Multiplex Graph Learning (UMGL) that addresses the often-overlooked issue of graph structure reliability. InfoMGF refines graph structures by removing task-irrelevant noise and maximizing task-relevant information through mutual information maximization. Extensive experiments demonstrate its superior performance over various baselines and even some supervised methods, validating its effectiveness in enhancing node representation learning.\n\n- New Problem Formulation: The paper pioneers the investigation of graph structure reliability in multiplex graphs, which is a significant advancement in the field. Multiplex graphs enrich the representation of real-world systems and its analysis is very difficult inherently.\n- Theoretical Analysis: The several theorems are quite interesting and provide a solid foundation for the proposed method. In particular, Theorem 3 proves the necessity of fusing multiplex graphs.\n- Extensive Evaluation: The framework is thoroughly tested against various state-of-the-art methods on both node clustering and classification tasks, showcasing its robustness and effectiveness across different tasks. The comparison methods are representative and new.\n\n- Robustness: Fig.4 shows that the proposed method is very robust to structure noise. However, more analysis is needed. Both InfoMGF and SUBLIME are structure learning methods. Compared to InfoMGF,Why does the performance of SUBLIME degrade rapidly in the case of edge deletions?\n- Clarity: The paper develops two algorithms in this paper: InfoMGF-RA and InfoMGF-LA. However, it is a little confusion that what is the difference in their objective functions.\n\n1.\tThere are some small errors in Algorithm 1. In particular, the title of it is InfoMGF-LA, however, line 11 also includes the operation for InfoMGF-RA.\n2.\tThe proposed method depends on the assumption of optimal augmentation. How to guarantee that the used feature and structure augmentations are optimal? It is still unclear to me.\n3.\tThe authors discuss the robustness against structure noise. How about feature noise? Could you share your intuition on this matter?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "iXj0tm8q3t",
"review_text": "The authors develop a novel approach to improve Unsupervised Multiplex Graph Learning by refining graph structures to eliminate noise and maximize relevant information. The method utilizes mutual information maximization to integrate multiple graph views effectively. Theoretical validation and comprehensive experiments show that the proposed method outperforms existing methods.\n\n1.\tMultiplex graph provides an efficient representation of complex systems. This paper focuses on non-redundancy issue, which is a new perspective and opens up a new avenue for future research.\n2.\tThe proposed method adopts an unsupervised and generalized approach. Its performance surpasses several supervised approaches, underscoring its potential for practical applications. \n3.\tThe framework’s performance is validated through comprehensive experiments and compared with more than 20 methods.\n4.\tVisualization is also a strong point of this paper. The figures of node correlation, heatmaps of the subgraph, and unique relevant edge ratio are very illustrative.\n\n1.\tAccording to Table 1 and 2, it seems that the proposed method improves more on clustering than classification.\n2.\tOverall, this paper is well-organized. However, the writing could be improved in terms of tone and words.\n3.\tThere are too many notations, which are confusing.\n\n1.\tIs there any explanation about why the method performs better on clustering than classification?\n2.\tHow to solve the above issue?\n3.\tThe font of k in the caption of fig.5a is not correct. \n4.\tIn the Appendix, the authors proof Proposition 1, however, there is no corresponding one in the main paper."
}
] | |
xabStWAUtr | Co-occurrence is not Factual Association in Language Models | Pretrained language models can encode a large amount of knowledge and utilize it for various reasoning tasks, yet they can still struggle to learn novel factual knowledge effectively from finetuning on limited textual demonstrations. In this work, we show that the reason for this deficiency is that language models are biased to learn word co-occurrence statistics instead of true factual associations. We identify the differences between two forms of knowledge representation in language models: knowledge in the form of co-occurrence statistics is encoded in the middle layers of the transformer model and does not generalize well to reasoning scenarios beyond simple question answering, while true factual associations are encoded in the lower layers and can be freely utilized in various reasoning tasks. Based on these observations, we propose two strategies to improve the learning of factual associations in language models. We show that training on text with implicit rather than explicit factual associations can force the model to learn factual associations instead of co-occurrence statistics, significantly improving the generalization of newly learned knowledge. We also propose a simple training method to actively forget the learned co-occurrence statistics, which unblocks and enhances the learning of factual associations when training on plain narrative text. On both synthetic and real-world corpora, the two proposed strategies improve the generalization of the knowledge learned during finetuning to reasoning scenarios such as indirect and multi-hop question answering. | https://openreview.net/pdf/a05ca368d45ceff595e9950cf21de3cd1baf43fe.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "xqHME5vef5",
"review_text": "This paper distinguishes two forms of knowledge learning in the model: \n1. co-occurrence statistics: from modeling the co-occurrence of entities in the text.\n2. factual associations: from modeling entity relations established through implicit associations.\n\nThey synthesize two datasets where knowledge is represented in the above two ways. They show that models that learn factual associations can generalize better than models that learn co-occurrence statistics. They also show that models that learn from factual associations can utilize the knowledge better for reasoning.\n\nThey further study where the knowledge of these two different representations is stored in the model. They show that co-occurrence statistics are stored in the middle layer, while factual associations are stored in the lower layers. Accordingly, they propose to reset the middle layer while training the model. They show that this approach makes models generalize and do reasoning better.\n\n1. The identification of the two forms of knowledge learning shed valuable insight on how models generalize from learned from the training data.\n2. They create a dataset and an associated experiment, which can be used for further studies in the same direction.\n3. They study where the knowledge is stored in the model. According to the findings, they propose a simple but effective approach to improve the models’ generalization ability and utilization of the knowledge for reasoning.\n4. Their experiment is comprehensive. They utilize a benchmark dataset MQuAKE-T and they include fine-tuning only the lower layers as a baseline.\n\nBecause studying how language models acquire knowledge from training data is crucial for developing a better training paradigm and I found this paper solid and well-presented, I highly recommend this paper.\n\n1. They only experiment with MQuAKE-T where each sentence encodes a piece of knowledge (subject-relation-object). The authors could experiment with some more realistic settings where a single sentence contains more than one piece of knowledge.\n2. It would be interesting to see how model scaling affects the behavior. The authors could experiment with models of different sizes in the same model family.\n\n1. Do you think models learn this two forms of knowledge differently when they are trained from scratch?\n2. The paper \"On Retrieval Augmentation and the Limitations of Language Model Training\" could be related to your work.\n3. Also, the title could be more informative."
},
{
"confidence": 3,
"rating": 7,
"review_id": "jrDCvKGdM5",
"review_text": "This paper studies how language models acquires factual knowledge during finetuning. It shows that narrative input tends to teach a model co-occurrence between entities, while referencing input teaches more about factual association. Models that learn factual association generalizes better to various question answering tasks than models that learn co-occurrence, especially for multi-hop reasoning tasks. By resetting different layers to the pretrained weights in models, the authors show that co-occurrence is mostly learned by the middle layers, while factual association is mostly learned by the lower layers. Based on this observation, the authors propose to reset the upper 2/3 layers to learn factual association when finetuning models on narrative input.\n\n- This paper studies how factual knowledge is learned by language models training on pure textual data, which is novel to my knowledge. The authors delivered clear lessons based on synthetic data and per-layer parameter ablation, and provided two solid solutions for real-world reasoning datasets. These lessons are important to the community of language models and reasoning.\n- The paper is well structured and very easy to read. There are no typos and grammar errors.\n\n- The analysis of this paper is limited to triplets, which do not represent all kinds of knowledge in reasoning tasks. Can you extend the conclusions to more general knowledge forms?\n- The authors do not provide enough insights why narrative input tends to teach co-occurrence statistics. The only insight I can find in the paper is that co-occurrence statistics can be learned faster (Line 245-247). I would suggest the authors discussing this more in Section 3.\n\n- The title does not clearly reflect the core contribution of this paper. May consider “How do language models learn factual association in finetuning?” Same for Section 3 header.\n- Is it possible that language models learn factual association better from reference input is because reference input provides the same context for synonyms? I hypothesize that understanding “is” is identical would be harder than learning synonyms under the same context.\n- Line 142-143: Are non-trivial positive comparison ratio and negation ratio sufficient to verify factual association? I feel they are only necessary but not sufficient.\n- Figure 2: What is log likelihood ratio here? It is hard to get an intuition of what different scales mean here."
},
{
"confidence": 4,
"rating": 5,
"review_id": "ynhz8gQzZT",
"review_text": "The work investigates the deficiencies of pretrained language models in learning factual knowledge, highlighting that these models tend to learn word co-occurrence statistics rather than true factual associations. The authors find that language models, when dealing with explicit relationships, are prone to merely memorize word co-occurrences and perform poorly on tasks that require reasoning.\n\n* This work shows that language models tend to learn word co-occurrence statistics instead of true factual associations. This finding is important for improving the knowledge learning of language models.\n* The authors propose two methods to improve the learning of factual associations. First, by using text with implicit rather than explicit factual associations, they force the model to learn these associations. Second, by actively forgetting the learned co-occurrence statistics, they allow the model to better learn and retain factual associations.\n* The proposed strategies significantly improve the model's performance in multi-hop reasoning tasks on both synthetic and real-world datasets, proving their effectiveness.\n\n* The generalization across different domains. This work synthesizes Country-City-Animal data, which is somewhat limited.\n* Reasoning or memory? The purpose of implicit training is to force the model to understand factual associations through indirect connections, thereby enhancing its reasoning abilities. This approach will help the model perform better on complex, multi-step reasoning questions rather than simple memory tasks because of their training pattern. While, it can’t directly prove that referencing method can bring better memory than Co-occurrence. Moreover, for simple QA tasks, the Referencing method performs worse than the Narrative method. Different test tasks should be designed to verify knowledge retention. For instance, adding more noise and interference during simple QA tests to evaluate the robustness of memory. Design memory retrieval tasks that do not require complex reasoning to ensure that the tests only assess the model's ability to recall facts.\n* Although it mentions that co-occurrence statistics and factual associations are parameterized in different layers of the Transformer model, it lacks a deep explanation of the specific mechanisms and reasons behind these phenomena.\n\nSee weaknesses"
},
{
"confidence": 4,
"rating": 7,
"review_id": "fngboEGbfo",
"review_text": "This paper investigates the learning of factual knowledge in pretrained language models, distinguishing between knowledge represented as word co-occurrence statistics and true factual associations. The authors find that language models tend to learn co-occurrence statistics, which do not generalize well to reasoning tasks, while factual associations, which generalize better, can be harder to learn. They propose two strategies to improve the learning of factual associations: training on text with implicit associations and using a method called active forgetting to discard learned co-occurrence statistics. Their experiments on synthetic and real-world datasets demonstrate that these strategies significantly enhance the models' ability to generalize factual knowledge in various reasoning scenarios. The paper includes a thorough layer-wise analysis of knowledge parameterization in transformer models finding different localization for co-occurence statistics vs factual knowledge in model weights.\n\nI think the strengths of this paper are in the following contribtions \n\n- Identification of Knowledge Representations: The paper clearly distinguishes between two forms of knowledge representation in language models: co-occurrence statistics and true factual associations. This distinction is crucial for understanding the limitations of current models. Additionally, the detailed analysis of how co-occurrence statistics and factual associations are parameterized across different layers of transformer models provides valuable insights into the internal workings of pretrained models.\n\n- Empirical Validation: The authors conduct comprehensive experiments using synthetic and real-world datasets to validate their claims. They show that models trained on implicit associations generalize better to reasoning tasks than those trained on explicit co-occurrence.\n\n- Novel Training Strategies: They propose a training strategies to improve factual learning are innovative. Training on text with implicit associations and a method of actively forgetting learned co-occurrence statistics to unblock factual learning.\n\n- Public Release of Resources: Finally, the release of the synthetic corpus and code to reproduce their reulsts can facilitate further research and experimentation in this domain.\n\nI did not find any major weaknesses in this paper.\n\nThe main ones, which are mentioned by the authors when addressing current limitations of their work are the following:\n\n- Synthetic data split: how are you splitting your synthetic data? Are you evaluating on an unseen subset for both synthetic as well as natural dataset? I understood you are testing on unseen data for natural dataset and I am unsure if that's also the case for the synthetic dataset. Please clarify. This is the reason why I am, at the moment, giving a score of 6 for what would otherwise be a clear 7.\n\n- Overhead in Data Preparation: Converting general text to forms with implicit associations for real-word data may require significant effort and sophisticated rewriting techniques, potentially limiting practical applicability.\n\n- Limited Scope of Text Variations: The paper only considers two forms of text (narrative and implicit association). 
There is a need to explore more diverse textual representations to validate the findings comprehensively.\n\n- Focus on a single type of reasoning: While the claims that learning implicit knowledge improve performance on complex reasoning tasks, the paper focuses on a specific type of reasoning. Other type of reasoning like logical or mathematical should be validated. Additionally, it is unclear whether the proposed finetuning method and data harm existing model performance on standard LLM benchmark. It would a nice addition to show whether the method in the paper do not conflict with existing model knowledge in other domains.\n \n- Evaluation information: Taken from the appendix \"For evaluation on question answering tasks, we report 5-shot exact match accuracy unless otherwise specified.\" Please add this in the main body of the paper and mention why you use this metric instead of others like F1 for QA tasks. Is it because all your tasks require a single word as gold label? Is this true also for the real-world dataset in table 3 (MQuAKE-T and 2WikiMultiHopQA)? Please add this info together with your generation parameters used at inference time (number of generated tokens/sampling parameters etc.)\n\n- \n---\n\nMinor\n\n- Missing reference: De Cao et al. Editing Factual Knowledge in Language Models, EMNLP 2021. This is an important reference when discussing model editing since it was among the first contribution in this area.\n\n- line 200 the reference to Appendix 3.3 is wrong\n\n----\n\n### Final Recommendation\n\nOverall, I think the claims are backed by well-presented empirical evidence and I vote for the inclusion of this paper to NeurIPS.\n\n### Update post rebuttal\n\nI increase my score from 6 to 7\n\n- Have you tried evaluating the model in a 0-shot fashion? Given the model has been finetuned on that data it can be helpful to add 0-shot performance \n\n- How do you compute shaded areas in figure 3? For instance, it seems that MC accuracy of Llama 3 70B Narrative trained does not show decrease performance on the lowest layer for first-to-last ablation while it does for last-to-first ablation, yet you shaded that area for both ablation. It can be informative to add additional info on the criteria you used to shade those areas\n\n- To compute the comparison ratio, the score depends on the choice of the entity in the denominator. Given the small size of your synthetic data, unless you are already doing so,, can you marginalize across all other entities? Please clarify how you compute the comparison ration"
}
] | |
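The reviews above repeatedly ask how the paper's comparison ratio (a log-likelihood contrast between the correct object and alternative entities) is computed and what the log likelihood ratio in Figure 2 means. Below is a minimal sketch of one plausible way to compute such a ratio with Hugging Face `transformers`; the model name, prompt, entity lists, and the averaging over alternatives are illustrative assumptions, not the paper's actual code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative stand-ins; the paper's actual model and entities may differ.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def completion_logprob(prompt: str, completion: str) -> float:
    """Sum of log P(completion tokens | prompt and preceding completion tokens)."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    logprobs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    # The token at position i is predicted from the logits at position i - 1.
    for i in range(prompt_ids.shape[1], full_ids.shape[1]):
        total += logprobs[0, i - 1, full_ids[0, i]].item()
    return total

prompt = "The national animal of Atlantis is"
correct, alternatives = " the dolphin", [" the eagle", " the lion"]

lp_correct = completion_logprob(prompt, correct)
# Comparison ratio: log-likelihood margin of the correct object over each
# alternative entity, averaged (one way to marginalize, as a reviewer suggests).
margins = [lp_correct - completion_logprob(prompt, alt) for alt in alternatives]
print(sum(margins) / len(margins))
```

A positive average margin indicates the model assigns higher likelihood to the correct object than to the distractor entities, which is the kind of signal these reviews ask the authors to define precisely.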
xZxXNhndXU | Dynamic 3D Gaussian Fields for Urban Areas | We present an efficient neural 3D scene representation for novel-view synthesis (NVS) in large-scale, dynamic urban areas. Existing works are not well suited for applications like mixed-reality or closed-loop simulation due to their limited visual quality and non-interactive rendering speeds. Recently, rasterization-based approaches have achieved high-quality NVS at impressive speeds. However, these methods are limited to small-scale, homogeneous data, i.e. they cannot handle severe appearance and geometry variations due to weather, season, and lighting and do not scale to larger, dynamic areas with thousands of images. We propose 4DGF, a neural scene representation that scales to large-scale dynamic urban areas, handles heterogeneous input data, and substantially improves rendering speeds. We use 3D Gaussians as an efficient geometry scaffold while relying on neural fields as a compact and flexible appearance model. We integrate scene dynamics via a scene graph at global scale while modeling articulated motions on a local level via deformations. This decomposed approach enables flexible scene composition suitable for real-world applications. In experiments, we surpass the state-of-the-art by over 3 dB in PSNR and more than 200x in rendering speed. | https://openreview.net/pdf/65abb3d26796730cf34d63a16451810604040c39.pdf | [
{
"confidence": 5,
"rating": 5,
"review_id": "xL9aqsl9tV",
"review_text": "This paper aims to perform view synthesis for dynamic urban scenes. This paper adopts 3DGS as scene geometry and uses neural fields to model the dynamic appearance of urban scenes. The neural scene graph is introduced to handle the movement of dynamic objects, and a deformation field is used to handle local articulated motions. Experiments show that the proposed approach outperforms baseline methods.\n\n1. The presented pipeline well handles the dynamic appearance of urban scenes.\n2. The experiments are sufficient and validate the effectiveness of the proposed approach.\n3. The idea of combining neural fields with 3DGS is sound and effective.\n\n1. The method presented in the paper takes 0.17 seconds to render an image at a resolution of 1550x2048, which is significantly slower than conventional 3DGS. Is the trade-off of such a significant sacrifice in rendering speed for quality improvement justified? Does the author have any solutions to address this issue?\n2. The paper needs to evaluate the extent to which neural fields impact the rendering speed of 3DGS.\n3. The pipeline figure of the paper should be clearer. The connections between the various modules are not easily discernible from the figure and its caption. For instance, it is not clearly depicted how the latent codes obtained from the scene configuration are inputted into the neural fields. Then, how are neural fields combined with 3DGS to represent static scenes and dynamic objects? The figure only shows simple association arrows. However, these modules are not merely input-output relationships. There are some combination operations between them.\n4. The paper uses neural fields to represent appearance, which reduces the memory footprint but may also significantly impact rendering speed. Has the paper considered how to address this issue?\n5. In Figure 2 of the paper, regarding the neural fields section, the symbols for static opacity correction and dynamic deformation are inconsistent with the descriptions in Section 3.2 of the paper. This is quite confusing.\n6. I am curious whether the combination of neural fields with 3DGS could make the optimization of 3DGS unstable?\n7. The non-rigid objects mentioned in the paper refer to cars, right? Or other objects? I did not see how the paper describes the modeling of cars. Although the paper mentions the use of scene graphs for modeling, I did not see how dynamic cars are represented using scene graphs. Does the paper treat dynamic cars as non-rigid objects directly? In this case, how can the large range of movement of dynamic cars be handled?\n\nThe presentation of this paper should be improved. Some important technical details are missing. The limitations from the introduction of neural fields should be discussed."
},
{
"confidence": 5,
"rating": 6,
"review_id": "AVm4uQmqVd",
"review_text": "This paper proposes a hybrid neural scene representation for dynamic urban driving scene modelling. The method utilizes 3D Gaussians as an efficient geometric scaffold and neural fields to represent appearance, thereby reducing memory. To account for transient scene geometry variations caused by weather, seasons, and other factors, the authors introduce an opacity attenuation field that modulates the scene geometry. For modeling dynamic actors in the scene, an object-centric representation is used, with a non-rigid deformation in the canonical space to animate objects such as pedestrians. Experiments demonstrate that the proposed method achieves state-of-the-art performance while rendering faster than previous methods.\n\n* The paper is well-written and easy to follow.\n* The decomposed representation of appearance significantly reduces memory usage.\n* It models transient scene appearance and geometry, as well as non-rigid objects like pedestrians.\n* The evaluation and ablation study are comprehensive.\n* The paper demonstrates visually superior results compared to baselines such as SUDS and ML-NSG.\n\n* The rendering of the proposed scene representation requires query appearance from the neural fields, it is unclear whether this will impact rendering speed compared to spherical harmonics representation.\n* This paper lacks a comparison with recent neural field baselines such as UniSim and NeuRAD for urban driving scenes. Additionally, there is no comparison of the speed to 3D Gaussian baselines.\n* How to control the non-rigid objects in the scene? e.g., animating the pedestrians given a sequence of human poses.\n* Is it feasible to render other sensor modalities in autonomous driving, such as LiDAR?\n\nThis paper addresses a practical and important problem in autonomous driving. The writing is clear, and the results are promising. I look forward to the authors' response to the concerns I raised above."
},
{
"confidence": 5,
"rating": 7,
"review_id": "a1HKXkD7lO",
"review_text": "The paper presents a novel 3D scene representation for novel view synthesis (nvs) in dynamic urban environments where, in particular, under heterogeneous imaging environments. The proposed representation relies on existing ingredients: 3D Gaussian Splatting, learned static/dynamic object instances, and a global scene graph.\n\nThe resulting system yields very strong results on a series of public autonomous driving benchmarks.\n\n### + Readability.\nOverall, in its current state, the paper's readability is relatively good. The main ideas, concepts, are mostly well discussed, conveyed, and articulated, throughout the paper.\n\n### + Practical usefullness of the considered problem.\n\n### + Structure, and Organization of the Contents.\nThe presentation is mostly on point and each dedicated section of the paper is properly balanced. The use of text real-estate is fair.\n\n### + Relative simplicity of the conceptual contribution.\n\n### + The amount of implementation details is very good.\n\n### + The reported performance.\n\n### + Implementation details for reproducibility: excellent.\n\n### - (1) Positioning of the conceptual contribution vs. the competitive landscape.\n\nIn particular, the proposed method looks very much like a revisit of Panoptic Radiance Fields [49] by replacing the NeRF component byt 3D Gaussian splats. \n\nWhile this is perfectly fine, this merits a targeted, transparent discussion in the main paper to help the reader understand the whereabouts of how the proposed contribution relates (or not) with such pieces of litterature.\n\n### - (2) How much does it cost?\n\nMissing piece of information regarding the resource usage, memory footprint, typical timings etc to better understand the downsides of using the provided method.\n\n### - (3) (To a lesser extent) Certain contents in the paper are unclear.\n\nFigure 4: what is happening? Adding color annotations or boxes would definitely help.\n\nI do not have more questions or suggestions than the ones underlying the aforementioned weaknesses."
},
{
"confidence": 4,
"rating": 8,
"review_id": "qdczzHYDLg",
"review_text": "This paper works on novel view synthesis (NVS) for large-scale, dynamic urban scenes. This paper proposes a neural scene representation called 4DGF, which uses 3D Gaussians as an efficient geometry scaffold while relying on neural fields as a compact and flexible appearance model. The proposed method integrates scene dynamics via a scene graph at global scale while modeling articulated motions on a local level via deformations. The method significantly outperforms baselines in terms of speed and rendering quality on three benchmarks.\n\n1. The idea of combining Gaussian Splatting and neural fields to model geometry and appearance, respectively, is very interesting. This makes a lot of sense considering the efficiency and the advantages of each of the two representation. This is definitely a more scalable approach to large-scale scenes compared to prior work.\n\n2. Extensive experiments have been conducted to validate the proposed method, this includes comparing with recent baselines on three benchmarks and the ablation studies that carefully examine each component. Moreover, the rendering quality improvement and the speedup is very significant on all three datasets.\n\n3. The paper is very well-written and easy to follow. Implementation details are sufficiently discussed for reproducibility.\n\n1. I appreciate the authors' including a video in the submission. I found sometimes there's a large foggy region near the camera (e.g., the regions on the right during the 5-6th second), do the authors have any explanations on that? Is it caused by any limitations discussed in Sec. 5?\n\n2. I understand that this paper mainly focuses on large dynamic scenes. I am curious how this hybrid representation performs on 3D statics scenes (e.g., the benchmarks that the original 3DGS have been tested on). This seems to be a more straightforward way to see the effect of using neural fields instead to model appearance.\n\nMinor questions/suggestions:\n\nWhat does GPU memory in Tab. 4 (a) mean? Is it peak memory?\n\nIn Fig. 1 inputs, the blue/orange colors for the image boundaries are also used for \"geometry\" and \"appearance\" respectively. I assume there's not such a correspondence between the input and geometry/appearance. So maybe you could change the input boundaries to different colors.\n\nFIg. 2: extra space in \"Scene configuration\"\n\nTab. 4 appears before Tab. 3, maybe you could switch the order."
}
] | |
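Several of the reviews above ask how the latent codes and neural appearance fields connect to the 3D Gaussian scaffold and the scene graph. A toy PyTorch sketch of the general pattern (a rigid scene-graph pose applied to Gaussian means, then a small appearance MLP queried per Gaussian with a latent code) is shown below; the network sizes, latent dimension, and pose values are invented for illustration and do not reproduce the 4DGF implementation.

```python
import torch
import torch.nn as nn

N, LATENT = 1000, 16  # toy sizes, not the paper's values
means = torch.randn(N, 3)           # 3D Gaussian centers in object space

# Scene-graph node: rigid pose (rotation + translation) of a dynamic object.
R = torch.eye(3)
t = torch.tensor([2.0, 0.0, 0.0])
world_means = means @ R.T + t       # object frame -> world frame

# Appearance field: maps (position, per-sequence latent) to RGB, instead of
# storing spherical-harmonic coefficients per Gaussian; this is what keeps
# the memory footprint small at the cost of extra queries at render time.
appearance_mlp = nn.Sequential(
    nn.Linear(3 + LATENT, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3), nn.Sigmoid(),
)
latent = torch.randn(LATENT)        # e.g., conditions on weather or lighting
colors = appearance_mlp(torch.cat([world_means,
                                   latent.expand(N, LATENT)], dim=-1))
print(colors.shape)  # (N, 3): one RGB color per Gaussian, fed to rasterization
```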
xZKXGvLB0c | Causal vs. Anticausal merging of predictors | We study the differences arising from merging predictors in the causal and anticausal directions using the same data. In particular, we study the asymmetries that arise in a simple model where we merge the predictors using one binary variable as target and two continuous variables as predictors. We use Causal Maximum Entropy (CMAXENT) as the inductive bias to merge the predictors; however, we expect similar differences to hold also when we use other merging methods that take into account asymmetries between cause and effect. We show that if we observe all bivariate distributions, the CMAXENT solution reduces to a logistic regression in the causal direction and Linear Discriminant Analysis (LDA) in the anticausal direction. Furthermore, we study how the decision boundaries of these two solutions differ whenever we observe only some of the bivariate distributions, with implications for Out-Of-Variable (OOV) generalisation. | https://openreview.net/pdf/329aa661f69e5ddf452da79c89b300d87c6805fa.pdf | [
{
"confidence": 4,
"rating": 4,
"review_id": "mAAZaZycXC",
"review_text": "The paper explores the potential differences in predictor merging when approached from causal versus anti-causal directions. The results from MAXENT and CMAXENT indicate that in the causal direction, the solution converges to logistic regression, whereas in the anti-causal direction, it converges to Linear Discriminant Analysis (LDA). The study also examines how the decision boundaries of these two solutions vary when only partial bivariate distributions are observed, highlighting implications for Semi-Supervised Learning (SSL) and Out-Of-Variable (OOV) generalization.\n\nThe paper investigates the differences that arise in predictor merging from causal and anti-causal perspectives. It demonstrates through MAXENT and CMAXENT that the causal direction results in logistic regression, while the anti-causal direction leads to Linear Discriminant Analysis (LDA). Additionally, the paper analyzes how the decision boundaries of these two methods change when only some bivariate distributions are observed, discussing the implications for Semi-Supervised Learning (SSL) and Out-Of-Variable (OOV) generalization.\n\n1. **Small scale dataset**: The main weakness is the small scale of the data and models studied in the paper. I believe the challenge of reducing computational cost with mixture-of-expert models is more relevant to larger models. The authors however only presented results on small bivariate distributions. Experiment results with larger models are appreciated. If experiments with larger models are not feasible, I hope authors can discuss potential limitations of the study under those larger-scale/multivariate scenarios. Do you expect the findings in Eqn 1&2 change in larger-scale/multivariate setups?\n\n2. **Lacks comparison**: This paper lacks sufficient comparisons with other papers. Can you explain what are the differences and advantages of the proposed method, compared to the pi-Tuning method proposed in [1] (Section 3.2 and Section 3.3)?\n\n3. **Contributions are obvious from the observations given in MAXENT and CMAXENT**: The overall paper seems like a consolidation of few previous papers. \n\n4. **Non-causal**: The paper has studied the causal and anti-causal setups but from the signal processing point of view pleas study the non-causal settings. \n\n5. **Results under the availability of Noise and biases**: The paper has shown result in a toy example which is good for overall pipeline understanding, but not sufficient to understand it from ML perspective. For example: what if the random variable leverages some amount of noise and have biases that imposes skewness in the distribution? \n\n\nReferences:\n[1] Wu, Chengyue, et al. \"pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation.\" International Conference on Machine Learning. 2023.\n\n1. **Noise**: What if we have observation noise (which is very common in MRI dataset)? Will the results still hold?\n2. **Co-variance shift**: Not sure whether I am missing anything, but, what if there is a co-variance shift?\n3. A curiosity is whether all the theoretical results hold if the distributions are not Gaussian?\n4. From a signal processing point of view can we get results for non-causal setups?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "ETdOSCzLFg",
"review_text": "The authors give a treatment of the mixture of experts problems using the idea of maxent; they use this as a tool to discuss how to merge causal and anti-causal inferences on the same data, in part as a way to assess the quality of the data being analyzed.\n\nThe discussion of the differences and merging of causal and anti-causal analyses was strong and appreciated.\n\nThe framing of the paper, I felt, missed a lot of literature and possible approaches to the issue being addressed. That is, the paper is framed as a discussion of merging of expert models (which can be an important problem), and maxent is proposed as a method for doing this. But the merging of experts problem is itself framed as a problem of inferring causal graphs where each expert has access to only part of the data. The problem of overlapping dataset has been extensively studied, but no reference is made to that literature, or to any ideas that literature has proposed which may compete with the maxent proposal here. More recently (actually not that recently) the discussion has taken a turn into discussions of privacy, for contexts, e.g., where different hospitals have access to their own dataset but may want collaborate on building causal model without risking making available by inference the identities of their respective patients--i.e., the so-called \"federated learning\" problem. This has been extensively studied also in the literature to date with many proposals given for how to address it. I think this paper could benefit from a literature review of this sort to place the proposed ideas in context, with comparisons made to alternative methods, or at least reasons not to compare to particular methods that make sense.\n\nAlso, the paper is mainly theoretical but could have benefitted from discussion of an empirical or simulation example.\n\nWould the authors be willing to expand their literature review to encompass more of the literature relevant to what is being called the \"mixed of experts\" issue?\n\nWould the authors be willing to include an empirical example, if not in the main test, then in the Supplement, with a pointer from the main text? It would be very helpful to know whether these ideas about causal/anti-causal merging are helpful empirically or in simulation.\n\nWould the authors be willing to provide software to allow Neurips readers to easily evaluate the ideas in this paper?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "SRvrfSxrGa",
"review_text": "This paper studies the problem of learning a mixture of experts (predictors) where individual predictors have been learned with different causal constraints. It studies different asymmetries that arise when we merge different predictors using the Causal Maximum Entropy (CMAXENT) objective. It goes on to show that different data-generating processes lead CMAXNET to reduce to different objectives under some restrictive setting. Next, they show how the learnt predictors will have different decision boundaries under different data moment restrictions.\n\n1. The paper is well-written and easy to follow. \n2. The contribution of this paper, however, restricted to a simple setup, is novel. The author shows that under different assumptions on the data-generating process, the CMAXENT objective will yield different predictors and establish necessary and sufficient conditions under which the predictors are different.\n\n1. The connection with the OOV generalization literature is not discussed properly. In particular, it would be interesting to see how this paper's observation relates to the paper \"On causal and anti-causative learning\" (ICML 2012) and the guarantees they have for generalization to the distribution shift.\n\n2. In the introduction and abstract, the authors mention that they study the problem of merging the two predictors, i.e., one predictor trained to assume an anti-causal data generating process (DGP) and another assuming causal. Next, in Sections 3 and 4, the authors show the closed form of each predictor separately under different DGPs. However, little is said about the final \"combined\" predictor and its generalization properties. See Question 1 for more.\n\n**Questions**:\n1. By merging the predictor, I understand learning a combined model using both the causal and anti-causal predictor. Is my notion of \"merging\" predictors correct, or am I missing something?\n\n\n**Typos**:\n1. Line 235: I think you mean X2 will be irrelevant in the estimation of the target predictor."
},
{
"confidence": 4,
"rating": 6,
"review_id": "ZrLsb4P4DM",
"review_text": "This paper studies the differences and properties that emerge when one uses causal, anticausal features for prediction.\n\n**S1.** This work makes several interesting observations of causal and anticausal predictors under their parametric assumptions.\n\n**S2.** This work suggests some potential considerations for practitioners dealing with feature sets that contain both types of information.\n\n**W1.** The primary weakness of this work is that the connections are underexplored empirically and in more complicated settings, e.g., higher dimensions and discrete data.\n\n**W2.** While I do not have an issue with the simplifications you have made to make the connections clear, the lack of more general results combined with a lack of real-world datasets that exhibit properties resembling the observations from your analysis limit the impact of this work is insufficient for the venue.\n\n**W3.** Some of the observations merely confirm properties already known, e.g., the asymmetries on causal and anticausal directions [1-2].\n\n[1] Schölkopf, Bernhard, et al. \"On causal and anticausal learning.\" arXiv preprint arXiv:1206.6471 (2012).\n\n[2] Janzing, D., and B. Schölkopf. \"Causal inference using the algorithmic Markov condition. to appear in IEEE Transactions on Information Theory.\" See also http://arxiv. org/abs/0804.3678 (2008).\n\n**Q1.** Do your results hold with incomplete causal sets?\n\n**Q2.** Are there any connections between your observations and robustness to distribution shifts?"
}
] | |
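The abstract and reviews above center on CMAXENT reducing to logistic regression in the causal direction and LDA in the anticausal direction. The scikit-learn sketch below contrasts the two fitted linear decision boundaries on synthetic class-conditional Gaussian data; the data-generating parameters are arbitrary illustrative choices, and the snippet is a demonstration of the two estimators, not a reimplementation of CMAXENT.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 2000
# Anticausal-style generation: the binary Y causes two continuous predictors.
y = rng.integers(0, 2, size=n)
class_means = np.array([[0.0, 0.0], [2.0, 1.0]])
X = class_means[y] + rng.normal(size=(n, 2))

logreg = LogisticRegression().fit(X, y)   # causal-direction solution
lda = LinearDiscriminantAnalysis().fit(X, y)  # anticausal-direction solution

# Both induce linear boundaries w.x + b = 0; compare the fitted directions.
for name, m in [("logistic", logreg), ("LDA", lda)]:
    w, b = m.coef_[0], m.intercept_[0]
    print(f"{name}: w = {w / np.linalg.norm(w)}, b = {b:.3f}")
```

With shared-covariance Gaussian classes, the two boundaries nearly coincide; the interesting divergences the paper studies appear once only some bivariate moments are observed.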
xXRnUU7xTL | SelfCodeAlign: Self-Alignment for Code Generation | Instruction tuning is a supervised fine-tuning approach that significantly improves the ability of large language models (LLMs) to follow human instructions. For programming tasks, most models are finetuned with costly human-annotated instruction-response pairs or those generated by large, proprietary LLMs, which may not be permitted. We propose SelfCodeAlign, the first fully transparent and permissive pipeline for self-aligning code LLMs without extensive human annotations or distillation. SelfCodeAlign employs the same base model for inference throughout the data generation process. It first extracts diverse coding concepts from high-quality seed snippets to generate new tasks. It then samples multiple responses per task, pairs each with test cases, and validates them in a sandbox environment. Finally, passing examples are selected for instruction tuning. In our primary experiments, we use SelfCodeAlign with CodeQwen1.5-7B to generate a dataset of 74k instruction-response pairs. Finetuning on this dataset leads to a model that achieves a 67.1 pass@1 on HumanEval+, surpassing CodeLlama-70B-Instruct despite being ten times smaller. Across all benchmarks, this finetuned model consistently outperforms the original version trained with OctoPack, the previous state-of-the-art method for instruction tuning without human annotations or distillation. Additionally, we show that SelfCodeAlign is effective across LLMs of various sizes, from 3B to 33B, and that the base models can benefit more from alignment with their own data distribution. We further validate each component’s effectiveness in our pipeline, showing that SelfCodeAlign outperforms both direct distillation from GPT-4o and leading GPT-3.5-based distillation methods, such as OSS-Instruct and Evol-Instruct. SelfCodeAlign has also led to the creation of StarCoder2-Instruct, the first fully transparent, permissively licensed, and self-aligned code LLM that achieves state-of-the-art coding performance. Overall, SelfCodeAlign shows for the first time that a strong instruction-tuned code LLM can result from self-alignment rather than distillation. | https://openreview.net/pdf/f4cd100d3f9f85fe8c929ea517dc4cbd24143e72.pdf | [
{
"confidence": 3,
"rating": 7,
"review_id": "v9GTLuSZZm",
"review_text": "- The paper introduces SelfCodeAlign, a fully transparent and permissive self-alignment pipeline for code generation in LLMs without relying on extensive human annotations or distillation from larger models. SelfCodeAlign generates instruction-response pairs from seed snippets, evaluates responses with test cases, and fine-tunes models based on successful executions. The approach shows superior performance over state-of-the-art methods, including GPT-3.5-Turbo-based distillation, particularly in HumanEval+ benchmark. The pipeline demonstrates effectiveness across various model sizes, emphasizing the benefits of self-generated data over teacher models with smaller performance gaps. \n- Overall, I feel that SelfCodeAlign is a very easy workflow to follow and I see much potential for such pipelines that do not depend on distillation or human annotations. I recommend an accept.\n\n## Originality\n- The paper adequately cites related work, clearly identifying gaps such as the lack of transparency in existing methods, which is a key motivation for their work. \n\n## Quality\n- The submission is technically sound with both quantitative and qualitative analysis.\n- The authors provide detailed experimental results, demonstrating significant performance improvements over baselines.\n- The inclusion of both large-scale and small-scale model evaluations further strengthens the quality of the research.\n- In terms of ethical considerations, they have considered all terms of use as well as the data in code snippets.\n\n## Clarity\n- Well organized paper, except for appendix.\n\n## Significance\n- The results are highly significant, as SelfCodeAlign achieves performance improvements, notably surpassing models that are an order of magnitude larger in size. This work addresses the challenge of instruction tuning without human annotations or distillation, offering a scalable and transparent solution that advances the state of the art in code generation.\n\n## Originality\n- Perhaps similar to this paper [Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models](https://arxiv.org/pdf/2312.06585) ? Even if it is different, I think that this should also be part of your baseline comparison as well.\n\n## Quality\n- The qualitative examples provided in the appendix are excessively long, which may overwhelm the reader and obscure the main differences and contributions of the methodology. It would be beneficial to reduce the number of examples or to shorten them, focusing on highlighting the key differences and improvements over baseline methods. Additionally, the examples are presented in black and white with no descriptions or annotations, making it difficult to discern their significance. Providing clearer, annotated examples with concise explanations would enhance the readability and impact of this section.\n- I do not see any weaknesses discussed in this work, for example, in what scenario do you think does this methodology not work? Why is the score still not perfect? (or for eg, below 80% accuracy)\n\n- What about experiments/benchmarking on models that uses the GPT4 family as part of distillation? \n- Why did you only limit it to 33B, what about 70b?\n- Line 121: What is the difficulty for? It is subjective, so how does the difficulty aid the model/aid in finetuning of models?\n- Line 228-233, Table 7: Any reason why the increasing trend is not consistent, for eg for StarCoder2-15B, the score decreased when tuned with DeepSeek-Coder-33B data? 
\n- Line 232, Table 7: Why did you not fill up the blank cells just like the last row? This would have ensured that your statement is true for all models, because you are just basing off CodeQwen1.5-7B model only."
},
{
"confidence": 2,
"rating": 6,
"review_id": "RHsipqEMyr",
"review_text": "The authors proposed SelfCodeAlign that finetunes the model based on the filtered data generated by the same model itself. The authors conduct experiments to show that SelfCodeAlign outperforms most open-sourced models that were finetuned on public code dataset.\n\nThe code generation problem is important and the results (compared to models trained on public dataset) are promising.\n\nCompare to models that are distilled/trained on non-disclosed data, the performance of SelfCodeAlign is not as competitive. The presentation can be improved, see **questions**.\n\n1. Can you properly highlight the row in table 1? \n2. I would suggest to give a brief summarization of the component analysis after line 92.\n3. It seems crucial to me to understand why SelfCodeAlign outperforms other dataset (for example, GPT-generated dataset), is there an analysis on how these datasets differ? Also, you mentioned distribution shift across different models in 4.1, is there a qualitative/quantitative comparison between the code generated by different models?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "jB0IIZAC8g",
"review_text": "This paper introduces SelfCodeAlign, an entirely transparent and permissive pipeline designed for self-aligning code large language models without the need for human annotations or distillation. By applying SelfCodeAlign to CodeQwen1.5-7B, the authors generated a dataset containing 74k instruction-response pairs. They then fine-tuned CodeQwen1.5-7B using this dataset, resulting in SelfCodeAlign-CQ-7B, which demonstrates robust performance on the HumanEval+ benchmark.\n\n1. The performance is satisfactory: SelfCodeAlign-CQ-7B achieves a pass@1 score of 67.1 on HumanEval+, outperforming larger models like CodeLlama-70B-Instruct (65.2), which is a significant achievement.\n2. The process is auto-mated: This paper introduces a novel self-alignment pipeline including concept extraction from seed code, task generation, multiple response generation, and execution validation. This approach is independent of human annotations or large model distillation, making it easy to be applied.\n3. Scalability: Experiments demonstrate the method's applicability to models ranging from 3B to 33B parameters, showing good scalability across different model sizes.\n\n1. Lack of Diversity in Generated Tasks: While the method aims to produce a variety of coding tasks, it is unclear how this diversity is achieved or measured. There is a risk that the generated tasks may be biased towards certain types of coding problems, which could limit the model's ability to generalize effectively.\n2. Overreliance on Self-Generated Tests: The method relies heavily on tests generated by the model itself to validate responses. This self-validation approach could result in a feedback loop where the model learns to create tests that are easy to pass, rather than generating truly challenging or comprehensive tests. The paper does not address how this potential issue is mitigated.\n\nRefer to the weakness."
},
{
"confidence": 3,
"rating": 7,
"review_id": "j9oS0cI09q",
"review_text": "This paper proposes a pipeline for generating synthetic instruction tuning data. The method consists of the following steps: 1. data filtering is applied to seed coding data to select high quality examples; 2. base LLM is used to generate a set of coding concept and category based on the seed data; 3. base LLM is used to generate coding instruction, response and test; 4. generated examples are selected based on the code execution result.\n\n1. the paper focuses on using base model to generate synthetic data to self-improve, which is an interesting and useful angle for synthetic data generation\n2. the method is evaluated on several different coding LLM benchmarks which shows the effectiveness of the method\n3. there are also ablation experiments verifying the contribution of specific design choices in the framework.\n\nWhile using base model to self-improve is an interesting and useful direction, synthetic data generation could be improved by using a stronger LLM than the base model. It is not clear from the paper whether the proposed framework would be effective compared to previous methods if we use a stronger LLM to synthesize the data. The synthetic data generation could also be potentially improved by having multiple rounds of data generation process.\n\n1. Have you tried this framework using stronger LLM to generate synthetic data?\n2. Can you get even better performance by running several rounds of data generation with improved base model?"
}
] | |
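The reviews above summarize SelfCodeAlign's execution-validation step: sampled responses are paired with generated tests, run in a sandbox, and only passing pairs are kept for instruction tuning. A minimal sketch of that filtering loop is given below, using a plain subprocess with a timeout as a stand-in for the paper's sandbox environment (a real deployment would need proper isolation); the example solution and tests are invented.

```python
import subprocess
import sys

def passes_tests(solution: str, tests: str, timeout_s: float = 5.0) -> bool:
    """Run solution + tests in a fresh interpreter; pass iff exit code is 0."""
    program = solution + "\n" + tests
    try:
        result = subprocess.run(
            [sys.executable, "-c", program],
            capture_output=True, timeout=timeout_s,
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        # Infinite loops or slow solutions count as failures.
        return False

# Invented example of one (response, tests) candidate pair.
solution = "def add(a, b):\n    return a + b"
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"

candidates = [(solution, tests)]
kept = [(s, t) for s, t in candidates if passes_tests(s, t)]
print(len(kept), "candidate(s) kept for instruction tuning")
```

Note that this filter only checks self-consistency between a response and its own generated tests, which is exactly the feedback-loop risk one review raises: a model could learn to emit easy-to-pass tests rather than rigorous ones.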
xW6ga9i4eA | pFedClub: Controllable Heterogeneous Model Aggregation for Personalized Federated Learning | Federated learning, a pioneering paradigm, enables collaborative model training without exposing users’ data to central servers. Most existing federated learning systems necessitate uniform model structures across all clients, restricting their practicality. Several methods have emerged to aggregate diverse client models; however, they either lack the ability of personalization, raise privacy and security concerns, need prior knowledge, or ignore the capability and functionality of personalized models. In this paper, we present an innovative approach, named pFedClub, which addresses these challenges. pFedClub introduces personalized federated learning through the substitution of controllable neural network blocks/layers. Initially, pFedClub dissects heterogeneous client models into blocks and organizes them into functional groups on the server. Utilizing the designed CMSR (Controllable Model Searching and Reproduction) algorithm, pFedClub generates a range of personalized candidate models for each client. A model matching technique is then applied to select the optimal personalized model, serving as a teacher model to guide each client’s training process. We conducted extensive experiments across three datasets, examining both IID and non-IID settings. The results demonstrate that pFedClub outperforms baseline approaches, achieving state-of-the-art performance. Moreover, our model insight analysis reveals that pFedClub generates personalized models of reasonable size in a controllable manner, significantly reducing computational costs. | https://openreview.net/pdf/e4ee792dd28b3bc552b8f290198f312b9e344159.pdf | [
{
"confidence": 5,
"rating": 7,
"review_id": "0413gIGNFk",
"review_text": "The authors address a key issue in personalized federated learning, which enables clients with heterogeneous model structures to participate in federated learning with consideration of effectiveness and efficiency. This method is based on model assembly and reassembly, in which the blocks and layers can be treated as modules. After that, the server selects the personalized models and assigns them to the clients. The received models will be used as the teacher to guide the local update. The authors run extensive experiments to demonstrate the effectiveness of their algorithm.\n\n1. This paper is well-organized and clearly motivated. Its logical structure and presentation aid comprehension, while the clear and accessible framework and figures enhance readability. Experiments, discussions, or analyses robustly support each claim.\n\n2. The focus on controllability renders the algorithm more applicable in real-world scenarios, allowing for greater human involvement in the model generation process. The authors effectively demonstrate the utility of their design through experimental results.\n\n3. The authors have performed extensive experiments, including principal studies on image datasets, ablation studies, hyperparameter evaluations, and thorough discussions. These efforts confirm the validity of the techniques and provide deep insights into the paper's contributions.\n\n1. Based on the algorithm itself, it includes the reassembly, assembly, matching, and other operations. The reviewer may be concerned about the computational burden compared with the one without any controllability. \n\n2. How to select the anchor block and why needs to be stated clearly.\n\n3. According to the experiment results, the reviewer is wondering about how this approach can be used with the public data with/without labels and the possible reason why it is robust to the public data with or without labels.\n\nPlease see the above weaknesses."
},
{
"confidence": 5,
"rating": 7,
"review_id": "XNHBaG3pEv",
"review_text": "This paper presents a controllable model reassembly approach to enable heterogeneous model cooperation in federated learning. The designed CMSR algorithm provides the control of the space to save the computational cost. Furthermore, the approach also achieves model personalization for each local client. They test the proposed approach on benchmark datasets and compare with other baselines under different settings.\n\n1, This paper targets one of the challenges in federated learning, which is the model heterogeneity. To the best knowledge, most existing related works are based on knowledge distillation. This work presents a controllable approach to conduct block assembly and reassembly from local models to achieve heterogeneous model cooperation and model personalization. The idea itself is interesting and practical.\n2, They take efficiency, generalization, and personalization into consideration. They provide comprehensive analysis and provide detailed discussion under various settings, which support their statement soundly. \n3, Their presentation and logic are both easy to follow and understand. The framework, experiment results, and discussion are clearly presented.\n\nWeakness\n1, In their approach, the authors employ K-means clustering. The reviewer is curious about how the value of K is selected and how this selection influences the results.\n\n2, One of the main contributions compared to pFedHR is the enhanced controllability. I am interested in understanding the nature of this controllability, specifically the extent to which the generated models can be controlled.\n\n3, The paper focuses solely on image classification. Adhering to the review guidelines, the reviewer is not requesting additional experiments, but the reviewer is interested in exploring whether the existing methodology could be applicable to other tasks.\n\n1, Please address the concerns in the weakness part.\n2, After the blocks are stitched, the parameters of the blocks and/or the stitching layer would be trained? If trained, how are they trained? If not, how do you deal with the parameters of the stitching layers?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "cekm2YmEwd",
"review_text": "The paper proposes a `pFedClub` method for personalized federated learning that enables controllable heterogeneous model aggregation, addressing limitations of existing approaches such as lack of personalization, privacy concerns, and uncontrolled model size growth.\n\nExtensive experiments conducted on three benchmark datasets using various CNN-based model structures validate the effectiveness of the proposed method under both IID and non-IID settings.\n\n- They conduct extensive experiment including the discussion about the hyparameter $K$ to validate the controllability of the proposed method and computational efficiency on the server.\n\n1. The writing and structure of the paper need improvement, particularly in the \"Order-constrained Block Search\" paragraph. The concept of order is unclear, especially the meaning of $q < u$ in line 177. It's not evident whether this refers to a similarity score or another metric. The author should provide a clearer explanation of this constraint.\n2. In equation (1) on line 141, the meaning of 'CKA' is not defined. The authors should explain what CKA stands for and how it's calculated. Additionally, it's unclear whether this computation occurs on the server. If clients must transmit input $x_{m,i}^t$ and output to the server, this raises privacy concerns that should be addressed.\n3. The paper doesn't specify whether the features $x_{m,i}^t$ and $x_{n,j}^t$ in equation (1) have the same dimensions. This should be clarified to ensure a proper understanding of the similarity calculation.\n4. The sampling process for the Anchor Block selection is ambiguous. The probability distribution over all models for this selection is not clearly defined.\n\nOverall, the authors should formulate the proposed method more rigorously, using well-defined notations and providing clear explanations for each step of the algorithm. This would significantly improve the paper's readability and reproducibility.\n\n1. Table 5 indicates that the model achieves the best performance in both IID and non-IID settings when K equals the number of activated clients. However, this raises a question about the necessity of the K-means method in this scenario. When K equals the number of activated clients, the input data naturally satisfies the minimization objective of the K-means algorithm, rendering the clustering step redundant. It would be valuable to explore how K affects the results when the number of activated clients increases, for example, to 10 or 20. This analysis would provide deeper insights into the scalability and robustness of the proposed method.\n\n2. While the current experiments focus on CNN-based structures, it would be beneficial to validate the proposed method on other neural network architectures. Specifically, evaluating pFedClub on Transformer-based structures would demonstrate its versatility and applicability across different model types. This expansion of the experimental scope would strengthen the paper's contributions and broaden its potential impact in the field of federated learning.\n\n3. The supplementary material should include a comprehensive set of implementation details for all methods used in the comparisons. This should encompass not only the proposed pFedClub method but also such as the hyparameter for baseline approaches."
},
{
"confidence": 4,
"rating": 8,
"review_id": "XUeu9qlNKJ",
"review_text": "This paper addresses heterogeneous model aggregation in federated learning. To this end, the authors introduce pFedClub, which aims to generate personalized models for federated clients while ensuring that the models remain within size constraints. Specifically, pFedClub consists of three main steps: first, it decomposes models into multiple blocks and clusters them using the K-means algorithm; second, it replaces original blocks with others from the same clusters to create a set of candidate models; third, it selects the optimal personalized model for each client using a public dataset and an initial model transferred to the server. Extensive experiments illustrate its significant improvement over existing methods in this field.\n\n1. The work is well motivated and explores an interesting problem in federated learning. \n2. The presentation of this paper is clear, and the authors comprehensively and intuitively describe the proposed pFedClub. \n3. The paper conducts sufficient experiments and compares the proposed method with previous works. The numerical results demonstrate the superiority of pFedClub.\n\n1. The proposed work requires a public dataset, which is unsuitable in federated learning due to the privacy concerns. Is this work applicable to a public dataset different from the training data distribution? For example, the clients collaboratively train a model for CIFAR-10, while the server holds a public dataset from ImageNet. \n2. Although the proposed work achieves remarkable under convolutional neural networks, it is unclear how pFedClub performs under transformers. Is the proposed work suitable for a setting where clients hold three different sizes of LLM, i.e., LLaMA-7B, LLaMA-13B, and LLaMA-70B?\n\nSee **Weaknesses**"
}
] | |
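Two reviews above ask what the CKA similarity in pFedClub's equation (1) computes and how K-means groups the dissected blocks into functional clusters. Below is a sketch of linear CKA between block output activations, followed by K-means over the resulting similarity profiles; the activation shapes, number of blocks, and value of K are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between activation matrices of shape (samples, features)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)

rng = np.random.default_rng(0)
n_blocks, n_samples = 8, 256
# Stand-in for each block's output on shared inputs (flattened features);
# note the CKA formula tolerates blocks with differing feature widths as
# long as the number of samples matches.
acts = [rng.normal(size=(n_samples, 32)) for _ in range(n_blocks)]

# Pairwise CKA similarity matrix between blocks.
sim = np.array([[linear_cka(a, b) for b in acts] for a in acts])

# Group blocks into K functional clusters using their similarity profiles.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(sim)
print(labels)  # cluster id per block; same-cluster blocks are candidates
               # for substitution during model reassembly
```

This also makes the reviewers' privacy question concrete: computing CKA requires activations on shared inputs, so whatever inputs produce `acts` must be visible to the server.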
xUoNgR1Byy | Interpreting Learned Feedback Patterns in Large Language Models | Reinforcement learning from human feedback (RLHF) is widely used to train large language models (LLMs). However, it is unclear whether LLMs accurately learn the underlying preferences in human feedback data. We coin the term **Learned Feedback Pattern** (LFP) for patterns in an LLM's activations learned during RLHF that improve its performance on the fine-tuning task. We hypothesize that LLMs with LFPs accurately aligned to the fine-tuning feedback exhibit consistent activation patterns for outputs that would have received similar feedback during RLHF. To test this, we train probes to estimate the feedback signal implicit in the activations of a fine-tuned LLM. We then compare these estimates to the true feedback, measuring how accurate the LFPs are to the fine-tuning feedback. Our probes are trained on a condensed, sparse and interpretable representation of LLM activations, making it easier to correlate features of the input with our probe's predictions. We validate our probes by comparing the neural features they correlate with positive feedback inputs against the features GPT-4 describes and classifies as related to LFPs. Understanding LFPs can help minimize discrepancies between LLM behavior and training objectives, which is essential for the **safety** and **alignment** of LLMs. | https://openreview.net/pdf/df55089e1689943ee66585be002d20df9b191eae.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "yNr1JZHHQn",
"review_text": "This submission tries to tackle one big question in the field of interpreting the data-driven preference learnt by RLHF in human language. The technical path this submission took is to train probe on SAE features to distinguish between good and bad RLHF features.\n\n+ The attempt to interpret what happens during RLHF training is a good direction to pursue.\n+ Releasing the SAE direction and training code could be an excellent news to the community.\n\n+ Unclear why have to probe on top of SAE feature. SAE greatly increase the dimensions of the features, leading to overfitting---you can find a separating plane for whatever classification task in this high-D space. Lacking comparison to normal probing. \n+ Considering the problem from a dynamical perspective can be fruitful. Noted that the authors did ablate the features and observe a performance drop on preference dataset. But it's also interesting to see the progress of RLHF training, how it warps the features spaces, even the SAE features' relative importances.\n\n+ I wonder if it could be interesting to conduct the same analysis on the reward model, if the reward model is another language model that is open-weight and trained. Can we compare the two representation space, one trained for discrimination and the other trained for generation?"
},
{
"confidence": 1,
"rating": 5,
"review_id": "lJhkBFBGmy",
"review_text": "The goal of this paper is to predict where patterns in LLM activations learned from RLHF diverge from the human preferences used for the RLHF training. \nGiven a base model and an RLHF tuned version of it, the method involves first identifying the 5 layers with highest parameter difference according to an L2 norm. Then two auto-encoders are trained over the activations from these layers. The encoder and decoder weights of the autoencoder are tied, and the output from these is preferred for studying the activations as they are expected to be more sparse, condensed and interpretable than the raw activations.\nAt inference time, for each input, the activations from the high divergence layers are computed, passed through the autoencoder and then aggregated. Given a pair of contrasting inputs, a linear probe is trained to predict activation deltas using the above aggregated autoencoder output as input. The output of the probe is meant to be a predicted feedback signal that can be compared to the ground truth fine tuning feedback. For sentiment analysis, a strong correlation is observed with the Pythia-160m model but this is weaker for Pythia-70m and GPT-Neo-125m. \n\nFor another validation the probes, GPT-4 is used to generate explanations of the features in the decoder weights of the autoencoders that get activated when the predicted feedback is positive. GPT4 is then prompted to predict whether or not these are relevant to the fine tuning task, based on a language description of the task. It is found that a feature identified by GPT-4 as relevant to the fine-tuning task is between twice and thrice as likely to be correlated with predicted positive feedback.\n\nThe paper is quite accessible for a reader whose area of focus is not interpretability.\n\nAs a reviewer not particularly experienced with work on interpretability, the takeaways of this paper are somewhat unclear. For example, if we finetuned a new model on one of the datasets used in this paper and trained probes in a similar way from its activations, what would that tell us about the the difference between the base and RLHF versions of that model? Alternately, is the goal to discover information about a model where the base and RLHF-tuned versions are available but the data is not, and hence we do not know what factors might have influenced the preference annotations that guided the annotation. \n\nI did not fully understand how the activation deltas are calculated. While most of the paper is fairly readable to a reviewer with a different area of focus, this aspect could be improved.\n\n1. I don't feel like I understood the concept of the activation delta. The paper states \"We compute the activation deltas for a given contrastive triple as the difference between the positive 212 and neutral element and negative and neutral element under the ℓ2 norm\". For any input x, there is a set of values $\\hat{a}$. For calculating an L2 norm these still need to somehow be aggregated. Since the probe input $A_{concat}(x)$ is already a concatenation of these values, I assume it is not simply an L2 norm of the two $A_{concat}$ vectors, as then it is unclear what the probe would learn. \n\n2. Assuming that the probes trained in this paper do obtain information about the preferences underlying human preference data, how do we make use of that information?"
},
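To make the pipeline described in the reviews above concrete, here is a minimal sketch of a tied-weight sparse autoencoder plus a linear probe on its codes. The ReLU encoder, the L1 sparsity penalty, and all dimensions (including the dictionary size of 4096) are standard SAE choices assumed for illustration; they are not details confirmed by the paper under review.

```python
import torch
import torch.nn as nn

class TiedSparseAutoencoder(nn.Module):
    """Autoencoder with tied encoder/decoder weights producing sparse codes."""
    def __init__(self, d_act: int, d_dict: int):
        super().__init__()
        self.W = nn.Parameter(torch.randn(d_dict, d_act) * 0.01)  # shared weights
        self.b_enc = nn.Parameter(torch.zeros(d_dict))
        self.b_dec = nn.Parameter(torch.zeros(d_act))

    def encode(self, a: torch.Tensor) -> torch.Tensor:
        # a: (batch, d_act) LLM activations -> (batch, d_dict) sparse codes
        return torch.relu(a @ self.W.T + self.b_enc)

    def forward(self, a: torch.Tensor):
        z = self.encode(a)
        return z @ self.W + self.b_dec, z  # reconstruction, codes

def sae_loss(a, recon, z, l1_coef: float = 1e-3):
    # Reconstruction fidelity plus an L1 penalty that encourages sparse codes.
    return ((recon - a) ** 2).mean() + l1_coef * z.abs().mean()

# The probe is then just logistic regression on the concatenated codes from
# the selected layers; its output plays the role of the predicted feedback.
probe = nn.Linear(2 * 4096, 1)  # 2 layers x hypothetical dictionary size 4096
```

Under this reading, the probe's input is the autoencoder output, not the raw activations, which is one of the two ambiguous descriptions the reviews below try to disentangle.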
{
"confidence": 3,
"rating": 4,
"review_id": "NDXoPOIkWc",
"review_text": "The authors propose an approach for measuring and interpreting the divergence between learned feedback patterns (LFPs, or simply the model's activation patterns) and the feedback reward distribution of the preference training dataset. To do so, they identify layers whose activations have moved the most during RLHF training and input these layers' activations into a sparse auto-encoder (SAE) that is trained to provide sparse representations of the LLM's activations. Then, they train probes to predict the feedback signal (e.g. reward, sentiment label) from the SAE's outputs. They use these probes both to measure the divergence of the LFPs from the actual feedback signals and to interpret which features are most important for the LFPs.\n\n- The authors ask an interesting question of whether we can measure and interpret the difference between a trained model's activation patterns and the preference distribution it has been trained on. The interpretability aspect of this question is interesting, since it can help us better understand what exactly the model has learned (or not learned) from its training dataset.\n- The authors provide a good explanation of why sparse auto-encoders are being used for this task (rather than interpreting the raw model activations), as well as the limitations thereof.\n\n- The effectiveness of this probing method seems to rely on many key assumptions being true, such as (i) sparse autoencoder outputs being more interpretable than the original model's outputs, (ii) sparse autoencoder output representations being faithful to the original model's representations, (iii) the probes being accurate, and (iv) GPT-4 being accurate/faithful when giving descriptions of each feature. There is very little experimental evidence provided for confirming that any of these assumptions are true, and these claims are difficult to test in the first place.\n - In fact, the authors mention that a likely reason for the low correlation between the probe's predictions and the VADER lexicon (for some models) is \"the complexity of the probe's task...a linear regression model is unlikely to recover such granular rewards accurately from just the activations\" (L265-266). Although they do find a high correlation for one model, the insufficiency of this probe implies that it is not effective for accurately measuring the divergence between the model's activation patterns and the feedback label distribution. If the correlation is low, we cannot tell whether that is the probe's failure, or if the model has not acquired strong LFPs, or some combination of the two. Since this probing technique is a central contribution of the paper, I would expect stronger probes and more rigorous evaluation of the effectiveness of the probes.\n - How can one ensure that GPT-4's interpretations of the features are accurate or faithful?\n- Table 5 purports to check whether the predicted LFP-related features were actually important and useful to the LLM, but the numbers before and after ablation are often very close together (or identical, in the case of GPT-Neo-125m). It would be helpful to report confidence intervals or standard errors to check whether these differences are significant. But as it currently stands, this table's results does not seem to strongly support the claim that the predicted LFP-related features are indeed relevant to/critical for LFPs.\n\n- Lack of clarity in explaining methods:\n - Much of the writing about the methods is unclear, contradictory, or omits many details. 
For instance, the explanation of the logistic regression probe in L233-234 says \"we label the concatenated activations as positive or negative based on the averaged activation deltas for each token over the entire input sequence, and train a logistic regression model to classify the activations,\" which would suggest that this probe's inputs are the activations. But L493 (in the appendix) says \"...we give a positive or negative label to concatenated autoencoder outputs based on the sign of their activation delta. We then train a logistic regression model to predict the labels from the concatenated autoencoder outputs,\" which suggests that the inputs are actually the autoencoder outputs, not the original model's activations. Which is it?\n - In Section 3.4, how is GPT-4 prompted to provide the explanations?\n - Given how confusing and verbose the methodology is, I would encourage the authors to write out some of the procedures in equation form, rather than long paragraphs of text.\n\nQuestions are above."
},
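Since the reviews above both flag the activation-delta definition as ambiguous, one plausible formalization, offered purely as a guess for discussion and not as the paper's stated definition, is the following: for a contrastive triple $(x^{+}, x^{0}, x^{-})$ with aggregated autoencoder outputs $A(\cdot)$,

$$\Delta^{+} = \big\|A(x^{+}) - A(x^{0})\big\|_2, \qquad \Delta^{-} = \big\|A(x^{-}) - A(x^{0})\big\|_2,$$

with the probe's binary label given by the sign of $\Delta^{+} - \Delta^{-}$, averaged over tokens. Whether this matches the authors' actual computation is exactly what the reviews ask them to clarify.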
{
"confidence": 4,
"rating": 6,
"review_id": "TWkTilF83J",
"review_text": "The paper investigates how large language models (LLMs) learn preferences from human feedback during fine-tuning using reinforcement learning (RLHF). The authors introduce the concept of Learned Feedback Patterns (LFPs) to describe activation patterns in LLMs that align with human feedback. They aim to measure the accuracy of these patterns in capturing human preferences by training probes on condensed representations of LLM activations. The probes predict the implicit feedback signal in these activations and compare it to true feedback.\n\n- The introduction of LFPs provides a new perspective on understanding how LLMs learn from human feedback. This concept helps in quantifying and interpreting the alignment between LLM activations and human preferences.\n\n- The authors validate their probes by comparing neural features correlated with positive feedback against GPT-4’s descriptions of relevant features. This cross-validation strengthens the reliability of their findings.\n\n- The use of synthetic datasets to elicit specific activation patterns in LLMs adds to the reproducibility and robustness of the study. These datasets are also made publicly available for further research.\n\n- The study primarily focuses on a few specific models (e.g., Pythia-70m, GPT-Neo-125m) and tasks (sentiment generation, toxicity), which might limit the generalizability of the findings across different LLMs and applications. More recently released models are of more value for studying RLHF patterns and verify that the method can be generalized. The patterns are easy to extract because that the used data are quite obvious to encode and decode. \n\n- While the probes show significant accuracy for certain tasks, the paper notes weaker correlations for more granular reward predictions, suggesting that the approach might struggle with highly detailed feedback signals. The issue of feature superposition in dense, high-dimensional activation spaces poses a challenge to fully interpreting the learned features. Although sparse autoencoders mitigate this to some extent, the problem remains a significant obstacle.\n\n- The validation process relies on GPT-4’s ability to describe neural features, which introduces a dependency on another model’s interpretability. This could introduce biases or inaccuracies if GPT-4’s descriptions are not perfectly reliable.\n\n- The paper acknowledges that while their method identifies features involved in feedback signals, it does not provide a mechanistic explanation of how these features interact or influence the expected feedback signal. This limits the depth of interpretability.\n\n- Can you elaborate on how your findings can be practically applied to mitigate risks associated with LLM deployment, such as manipulation of user preferences or harmful behaviors? What strategies would you recommend for developers to monitor and adjust LFPs in deployed models?\n\n- Your validation relies on GPT-4’s feature descriptions. Have you explored other methods or models for validating the identified features? How do you ensure the robustness of these validations?\n\n- Have you tested your method on other LLM architectures or tasks beyond sentiment analysis and toxicity detection? If so, what were the results? \tHow do you anticipate the effectiveness of your method would vary with different model sizes and types?"
}
] | |
xUjBZR6b1T | ReVideo: Remake a Video with Motion and Content Control | Despite significant advancements in video generation and editing using diffusion models, achieving accurate and localized video editing remains a substantial challenge. Additionally, most existing video editing methods primarily focus on altering visual content, with limited research dedicated to motion editing. In this paper, we present a novel attempt to Remake a Video (ReVideo) which stands out from existing methods by allowing precise video editing in specific areas through the specification of both content and motion. Content editing is facilitated by modifying the first frame, while the trajectory-based motion control offers an intuitive user interaction experience. ReVideo addresses a new task involving the coupling and training imbalance between content and motion control. To tackle this, we develop a three-stage training strategy that progressively decouples these two aspects from coarse to fine. Furthermore, we propose a spatiotemporal adaptive fusion module to integrate content and motion control across various sampling steps and spatial locations. Extensive experiments demonstrate that our ReVideo has promising performance on several accurate video editing applications, i.e., (1) locally changing video content while keeping the motion constant, (2) keeping content unchanged and customizing new motion trajectories, (3) modifying both content and motion trajectories. Our method can also seamlessly extend these applications to multi-area editing without specific training, demonstrating its flexibility and robustness. | https://openreview.net/pdf/bb0cf0788a982c6b491da99b791d82fa60d2e219.pdf | [
{
"confidence": 5,
"rating": 7,
"review_id": "J76UKADBko",
"review_text": "ReVideo presents a novel view of video editing by modifying content with input trajectory to create new content. It designs a three-stage strategy to wrestle out the problem of ignoring motion control when direct training. The main contribution of this work relies on the new task of editing motion via user-specified trajectory while keeping the original video movement. The editing results are superior and photorealistic.\n\n1. The first video editing work on creating new motion and content.\n2. Good writing; the paper is easy to follow, and the motivation and three-stage training strategy on decoupling content and motion control is reasonable. The proposed SAFM learned a dynamic fusion weight at different timesteps.\n3. The editing results are photorealistic and adhere to the original motion or follow user-specified trajectory with no artifacts.\n\n1. The author did not provide the method or explanation of how ReVideo edits the first frame, making the total editing pipeline not end-to-end for users.\n2. Part of the original video motion, like mouth movement in the Zuckerberg->robot (head6) and tail movement in dog->lion, is not kept in the edited video.\n3. I would like to know how the drag-based editing method handles non-rigid motion, such as the rotation of car tires from a side view. In examples like sea2 and sea2_2, where a shark and a dinosaur are added, the limbs of the animal seem unable to move, making the video look unrealistic. However, in soccer and some human-centric examples, the legs of dogs and people can move normally. Therefore, I would like the authors to add an example of a vehicle moving on the road from a side view, including the movement of the wheels, to address my concerns. This may be a limitation of the drag-based method.\n3. There is no quantitative comparison of the ablation study; I understand that the image results in Fig 7 are clear, but only one video qualitative ablation is not reasonable. \n4. There are no qualitative video comparisons with other methods in the supp or project page, but only Fig 6, and the automatic metrics are worse than pika even though I understand the clip scores are not accurate, which can not reflect temporal consistency accurately. I suggest the author supply the comparison video between Revideo and other methods in the rebuttal phase.\n5. The training cost of three stages: even though Revideo makes great progress in creating new motion, training cost like GPU costs, time costs, memory costs and so on, is still a problem since users prefer to edit a video in a zero-shot manner when using a pretrained video generation model and the compared methods like AnyV2V is training-free.\n\n1. The method to edit the first frame needs to be declared.\n2. Non-rigid motion-like side view wheels movement of cars.\n3. Qualitative video comparisons with other methods. \n4. The inference time/training cost comparison with other similar methods.\n5. What about ReVideo performing in editing multiple objects simultaneously in the same video?\n6. Can ReVideo work on text-to-video generation models?"
},
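For readers trying to picture the SAFM component discussed in these reviews, here is a hypothetical sketch of a spatiotemporal adaptive fusion module: a gating map conditioned on the diffusion timestep and spatial location that blends content-branch and motion-branch features. The module name, the sigmoid gate, and all shapes are illustrative assumptions; the paper's actual design may well differ.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Blend content-branch and motion-branch features with a weight map
    conditioned on the diffusion timestep and spatial location."""
    def __init__(self, channels: int, t_dim: int = 128):
        super().__init__()
        self.t_proj = nn.Sequential(nn.Linear(t_dim, channels), nn.SiLU())
        self.gate = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, f_content, f_motion, t_emb):
        # f_content, f_motion: (B, C, H, W); t_emb: (B, t_dim)
        bias = self.t_proj(t_emb)[:, :, None, None]                  # (B, C, 1, 1)
        w = torch.sigmoid(self.gate(torch.cat([f_content, f_motion], 1)) + bias)
        return w * f_content + (1.0 - w) * f_motion                  # per-pixel blend
```

The point of a timestep-conditioned gate is that early, noisy sampling steps can lean on one condition (e.g. motion) while later steps lean on the other, which is one way to read the reviewers' description of fusion weights that vary across sampling steps.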
{
"confidence": 3,
"rating": 5,
"review_id": "tKgfI7TS8x",
"review_text": "The paper presents a video editing method that enables precise localized adjustments to content and motion within specific areas of a video. It introduces a three-stage training strategy and a spatiotemporal adaptive fusion module to integrate edits across frames and locations effectively. This method allows for complex editing tasks such as changing content while maintaining motion, adding new motion to static content, and simultaneously modifying both elements.\n\n- The paper introduces a novel challenge of editing both content and motion in specific video areas and combines techniques from diffusion models and video editing to achieve nuanced control.\n- The three-stage training strategy enhances the robustness and effectiveness of the edits, supported by experimental validation that demonstrates superior performance compared to existing methods.\n- The paper is well-organized and clearly explains complex concepts, including the innovative spatiotemporal adaptive fusion module and detailed training strategy.\n\n- The decoupling training could cause some artifacts. Although the paper demonstrates these artifacts could mostly be alleviated by deblocking training. I can still see some blocky/unnatural results in the result videos.\n- The training is quite complicated and separated into three stages. I feel the training strategy could 'overfit' this particular video dataset.\n- This method is more like a direct combination of video diffusion and ControlNet.\n- More detailed implementation specifics, particularly regarding parameter settings and the architecture of the spatiotemporal adaptive fusion module, are needed.\n- The method's computational demands and potential scalability issues are not adequately addressed. For example, what kind of GPU does one need to perform training and testing? \n- The paper focuses heavily on technical aspects with less consideration of user interaction.\n\n- What kind of GPU does one need to perform training and testing? \n- Will the authors release the training and testing code along with pre-trained models upon acceptance?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "h3FIUCzk0A",
"review_text": "This paper presents ReVideo, a new approach for precise local video editing of both content and motion. It introduces a coarse-to-fine training strategy to progressively decouple content and motion control, and a spatiotemporal adaptive fusion module to integrate them effectively. Experiments show ReVideo can modify local video content, customize motion trajectories, or change both simultaneously, and extend to multi-region editing.\n\n- This appears to be the first attempt at exploring local editing of both content and motion in videos using diffusion models. Being able to modify content and motion trajectories in specific regions is a novel capability compared to prior work.\n- The proposed three-stage coarse-to-fine training strategy to progressively decouple content and motion control is an interesting technical approach to deal with the core challenge.\n- The spatiotemporal adaptive fusion module is another novel component to integrate the content and motion conditions across sampling - steps and spatial locations.\n- Extending the approach to allow multi-area editing without requiring specific training demonstrates flexibility.\n- Most of the visual and quantitative results show improvements over prior methods \n\nOverall, this paper addresses a timely and important topic with significant potential benefits for the community. Despite some weaknesses, the reviewer recommends acceptance, considering this is a relatively new area and the paper presents promising results. The score may be adjusted based on the quality of the rebuttal.\n\n## Practicality of the Editing Workflow\n\nThe current editing interface requires users to specify both a target content image and a set of motion trajectories. While this allows for fine-grained control, it may not be the most intuitive or efficient workflow for common editing tasks. Consider the scenario of object removal - the user would need to carefully craft a content image with the object removed and ensure that the remaining motion trajectories are consistent. An alternative approach could be to directly specify the regions to remove and have the model infer the appropriate content and motion changes automatically. The paper would benefit from a more detailed discussion of the practical trade-offs and usability considerations of the proposed editing framework.\n\n## Limited Motion Control\nWhile the method allows for editing the motion of individual objects, it assumes that the overall scene motion (camera movement, background motion) remains fixed. This limits the applicability of the approach in scenarios where the goal is to modify the global motion patterns (e.g. stabilizing shaky footage, changing the camera viewpoint).\n\n## Precise Placement and Key Contributions of this Paper\n\nWhile the individual technical components (e.g. coarse-to-fine training, adaptive fusion) are well-motivated, it's worth considering whether similar strategies have been explored in related domains. For instance, progressive training to handle multi-factor variation has been used in GANs, and spatially-adaptive normalization is common in style transfer. Drawing more connections to such related work would clarify the novelty of the specific adaptations made here.\n\n## Content-Motion Entanglement\n\n- The key technical contribution of the paper is the decoupling of content and motion information through a coarse-to-fine training strategy. 
However, it's not clear if this decoupling is complete or if there are still some residual entanglements between the two factors. For instance, the edited content may still contain some motion information that could interfere with the specified motion trajectories, leading to artifacts or inconsistencies. A more thorough analysis of the content-motion separation and its impact on the editing quality would be informative.\n\n- Is decoupling content and motion the only way to address the issue - could a joint representation learning approach work instead? Acknowledging alternate strategies would help justify the chosen approach.\n\n- **Figure 4 is not very intuitive. It would benefit from additional justification, theoretical analysis, and insights into why such a simple composition from two videos is effective.** This is a key concern.\n\n## Multi-area Editing \n- The extension to multi-area editing is a nice addition, but the paper could go further in characterizing the challenges involved. Are there issues with preserving global coherence across multiple edited regions? How does the method scale with the number of regions? Providing such details would give a more complete picture of the capability.\n\n## Clarity and Reproducibility\n- Implementation details: There are some missing specifics that could hamper reproducibility. For instance:\n\n - How exactly are the editing regions defined during training - what is the procedure for randomly sampling them?\n - What metrics are used for the \"threshold filtering\" of motion trajectories and how were the thresholds chosen?\n - Are there any data augmentation, regularization or optimization tricks used during training?\n\n## Evaluation Metrics\n\nThe quantitative evaluation relies primarily on low-level metrics like PSNR and LPIPS, which may not fully capture the perceptual quality and coherence of the edited videos. Additional metrics could provide a more comprehensive assessment:\n\n- Metrics that specifically measure the consistency of the edited regions with the target content and motion (e.g. using an object detector or tracker).\n- Metrics that evaluate the temporal stability and smoothness of the edited videos (e.g. some metrics that are used in video inpainting tasks, Please refer to [this repo](https://github.com/MichiganCOG/video-inpainting-evaluation) for details).\n- Human evaluations of the overall realism, coherence, and faithfulness to the editing inputs (e.g. through user studies).\n\n\n\n## Robustness Evaluation and Ablation Studies\n\nWhile the paper does include ablations for a few key components (e.g. SAFM, training stages), there are other design choices that are not fully explored. For instance:\n\n - How important is the choice of motion representation (trajectory vs. alternatives)? Testing with different motion inputs would reveal the sensitivity to this factor.\n - What is the impact of the trajectory sampling strategy and hyperparameters? Varying the number and selection of trajectories could provide insight into the robustness.\n - How does the performance vary with the size and shape of the editing regions? A systematic evaluation across different region properties would be informative.\n - Only the end-to-end video editing pipelines are compared, but not the individual technical components. For instance, how does SAFM compare to simpler fusion schemes used in prior work?\n - Input noise and perturbations (e.g. 
in the content image or motion trajectories)\n\n## Dataset Complexity \n\n- While the approach achieves good results on the chosen datasets, it's unclear how well it would generalize to more complex video content (e.g. with dynamic backgrounds, scene changes, occlusions etc.). Discussing the potential failure modes and current limitations would help scope the contribution appropriately.\n\n- The examples shown in the paper are largely limited to simple object-level edits in relatively constrained scenarios (e.g. clean backgrounds, single objects). It's unclear how well the method would perform on more challenging videos with complex scenes, multiple objects, occlusions, camera motion, etc. Testing on a wider range of video complexity would help establish the generality of the approach.\n\n## Editing Scenarios\nThe paper demonstrates a few key editing applications (e.g. object addition/removal, motion editing), but there are other important scenarios that are not explored, such as: performing semantic-level edits (e.g. changing the action or interaction between objects).\nShowcasing the method's performance across a fuller range of editing tasks would demonstrate its versatility.\n\n## Open Source\nWill the code for training and inference be released?\n\nPlease refer to the weakness section."
}
] | |
xSziO6gQgG | Implicit Optimization Bias of Next-token Prediction in Linear Models | We initiate an investigation into the optimization properties of next-token prediction (NTP), the dominant training paradigm for modern language models. Specifically, we study the structural properties of the solutions selected by gradient-based optimizers among the many possible minimizers of the NTP objective. By framing NTP as cross-entropy minimization across \emph{distinct} contexts, each tied with a \emph{sparse} conditional probability distribution across a finite vocabulary of tokens, we introduce ``NTP-separability conditions'' that enable reaching the data-entropy lower bound. With this setup, and focusing on linear models with fixed context embeddings, we characterize the optimization bias of gradient descent (GD): Within the data subspace defined by the sparsity patterns of distinct contexts, GD selects parameters that equate the logits' differences of in-support tokens to their log-odds. In the orthogonal subspace, the GD parameters diverge in norm and select the direction that maximizes a margin specific to NTP. These findings extend previous research on implicit bias in one-hot classification to the NTP setting, highlighting key differences and prompting further research into the optimization and generalization properties of NTP, irrespective of the specific architecture used to generate the context embeddings. | https://openreview.net/pdf/174e95ac9c3ffa8d220ddbe8561c2d8a3a48c25e.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "4g0jhptnRd",
"review_text": "This paper studies the implicit bias of the gradient descent on the Next-Token Prediction (LTP) problem in linear models. They first formulate this NTP problem as minimizing the cross-entropy (CE) loss over distinct contexts, each tied with a sparse conditional probability over the token space. They then provide the necessary conditions for the CE loss to reach the entropy lower bound, i.e., the NTP-compatible condition and the NTP-separable condition. Then, they prove one sufficient condition for those two conditions is oevrparameterization, i.e., the dimension of the embedding space d is larger than the number of distinct contexts in the dataset. Assuming both compatible and separable conditions, they then prove the directional convergence of the minimizer of the CE loss within a certain range and the directional convergence of the GD iterate towards the direction of the solution of an NTP-SVM.\n\nIn general, I think this paper delves into a good and important problem: the optimization path and implicit bias of NTP mechanism. The authors provided a good formulation, and the proof is solid.\n\n1. They investigate an interesting and important problem: the optimization path and the implicit bias of NTP.\n\n2. Their formulation of NTP into the CE minimization over distinct contexts is novel.\n\n3. They provide rigorous theoretical results and the proofs are solid, to my knowledge.\n\n1. The main issue of this paper is that, for the NTP-compatible and separable conditions to hold, one needs d > m. Does this overparametrization condition usually hold in practice or not? To my knowledge, in practice, the embedding dimension d is much smaller than the number of training data. Since m is not the number of training data and can be much smaller than that, it is not clear to me whether this assumption is possible in practice.\n\n2. There are some paragraphs that are not very clearly written. For example, in lines 154-157, why does equation 4 constrain W^p w.r.t. this subspace? Why is the solution W* unique, assuming equation 4 has a solution? I think those can be expressed as lemmas to make them clearer. In line 148, the authors claim that (3a) holds if and only if the data satisfies the NTP-compatible condition. The 'if' direction is trivial, but the other direction needs a more rigorous proof.\n\nSee above."
},
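For context on the entropy lower bound these reviews refer to, the relevant cross-entropy decomposition is a textbook identity (stated here for convenience, not taken from the paper): writing $p(\cdot\mid x)$ for the empirical next-token distribution of a distinct context $x$ and $q_W(\cdot\mid x)$ for the model's softmax output,

$$\mathrm{CE}(W) = \mathbb{E}_{x}\big[H\big(p(\cdot\mid x)\big)\big] + \mathbb{E}_{x}\big[\mathrm{KL}\big(p(\cdot\mid x)\,\|\,q_W(\cdot\mid x)\big)\big] \ \ge\ \mathbb{E}_{x}\big[H\big(p(\cdot\mid x)\big)\big],$$

with equality exactly when the model matches every sparse conditional; the NTP-compatibility and NTP-separability conditions discussed in these reviews are what make this lower bound attainable.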
{
"confidence": 3,
"rating": 7,
"review_id": "ciVXvf4wb0",
"review_text": "This work studies the implicit bias of optimization in next token prediction tasks by analyzing the structure of the decoding matrix at infinite time. The paper introduces two novel conditions under which the loss reaches its minimum theoretical value and demonstrates that if these conditions hold (which can be, for example, the case when the model is overparameterized), then after GD training, the decoding matrix will converge (in direction) to a matrix reminiscent of the maximum-margin matrix in \"standard\" classification.\n\nThis work studies a timely topic (next token prediction) and approaches it from a learning theoretic perspective (implicit bias of optimization), which has proven to be very fruitful in \"standard\" classification. The assumption of sparse contexts is clever and should be of wider applicability. The results are novel and analogous to similar results that were proven for \"standard\" classification. Furthermore, the presentation is comprehensive, with many pointers to related work, which help contextualize this paper's contributions.\n\nA weakness, which the authors do acknowledge in their work, that prevented me from giving a higher score is that there is no clear connection between the structure of the weights and generalization, as there exists in \"standard\"/one-hot classification. As a result, it is unclear how much insight can be derived from the current result. I would appreciate the authors' thoughts on this.\n\nMinor: The text is too dense in places, with the authors trying to include more details than what the space permits. I would suggest moving some of the discussion in Sections 6 and 7 to the Appendix to facilitate a smoother flow.\n\nA minor suggestion: lines 32-34 appear to require rephrasing."
},
{
"confidence": 2,
"rating": 7,
"review_id": "Ldp7YiBmwf",
"review_text": "This paper studies the structural properties of the solutions selected by gradient-based optimizers among the many possible minimizers of the NTP objective, the central challenge being to discern the \"implicit bias\" of the optimizer towards particular solutions.\n\n- The paper is generally well written, and the notation is very clear.\n\n- The paper provides a a very interesting starting point for studying the solutions found by gradient descent in NTP settings\n\nWhile the paper provides a a very interesting starting point for studying the solutions found by gradient descent in NTP settings, it's not very clear whether margin maximization practically corresponds to any meaningful takeaway in language modeling.\n\nJust the remark in the weaknesses."
},
{
"confidence": 4,
"rating": 5,
"review_id": "4q4R2OWFrz",
"review_text": "This study investigates the structural properties of solutions chosen by gradient-based optimizers for next-token prediction (NTP), framing NTP as cross-entropy minimization across various contexts with sparse conditional probability distributions over a finite vocabulary. It focuses on the optimization bias of gradient descent (GD), characterizing how GD selects parameters that equate the logits’ differences of supported tokens to their log-odds.\n\nThis study enables deriving the data-entropy lower bound in NTP for understanding the optimization and generalization properties of NTP models.\n\nThe study's focus on linear models analyzing CE loss for NTP may limit its novelty and applicability, making its contributions to the field appear unclear compared to existing research.\n\nQ.1 Please clarify the differences and advantages of your study compared to the following existing research. What new insights does this study provide, and why are they important? Specifically, while these existing studies highlight the critical role of attention in NTP, your study omits this aspect. Could you explain why it is still valid to disregard attention in your analysis?\n\nMechanics of Next Token Prediction with Self-Attention\nYingcong Li, Yixiao Huang, M. Emrullah Ildiz, Ankit Singh Rawat, Samet Oymak\n\nMax-Margin Token Selection in Attention Mechanism\nDavoud Ataee Tarzanagh, Yingcong Li, Xuechen Zhang, Samet Oymak\n\nTransformers as Support Vector Machines\nDavoud Ataee Tarzanagh, Yingcong Li, Christos Thrampoulidis, Samet Oymak\n\nQ.2 When considering next-token prediction (NTP) using sequence data, distinct contexts might differ by only a single character and are expected to be interrelated. Does the assumption of independence and identically distributed (i.i.d) data in Eq.(2) not pose a problem in this scenario?"
}
] | |
xSU27DgWEr | On $f$-Divergence Principled Domain Adaptation: An Improved Framework | Unsupervised domain adaptation (UDA) plays a crucial role in addressing distribution shifts in machine learning. In this work, we improve the theoretical foundations of UDA proposed in Acuna et al. (2021) by refining their $f$-divergence-based discrepancy and additionally introducing a new measure, $f$-domain discrepancy ($f$-DD). By removing the absolute value function and incorporating a scaling parameter, $f$-DD obtains novel target error and sample complexity bounds, allowing us to recover previous KL-based results and bridging the gap between algorithms and theory presented in Acuna et al. (2021). Using a localization technique, we also develop a fast-rate generalization bound. Empirical results demonstrate the superior performance of $f$-DD-based learning algorithms over previous works in popular UDA benchmarks. | https://openreview.net/pdf/e6f6280a04e2892629381753602bc9e403e994ea.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "wtx0Bpmx8l",
"review_text": "This study addresses the gap in the theory and algorithms of unsupervised domain adaptation based on f-divergence proposed by Acuna et al. 2021. Specifically, while the theory uses absolute values, the algorithms do not, and this issue is resolved by introducing a single scaling factor. The newly proposed f-DD generalization bound is derived based on Rademacher complexity, and tighter bounds are obtained using the localization technique.\n\nAs a specific domain adaptation algorithm, an adversarial type algorithm is proposed, yielding favorable results in benchmarks.\n\nThis study bridges the gap between theory and practice in existing UDA methods based on f-divergence, advancing the foundational research in domain adaptation. Furthermore, the derivation of sharper bounds using the recently introduced localization technique for DA is highly commendable as a contribution to the theoretical framework of DA.\nThe authors validate their theoretical contributions with empirical results, showing superior performance on popular benchmarks.\n\nWhile the empirical validation is strong, it is limited to specific benchmarks. Broader validation across diverse datasets and tasks would strengthen the findings. It is nice to present some insight into what kind of dataset the proposed f-DD works well (and why), and also into what kind of dataset it does not work well (and why).\n\nPlease provide the source of the technique that resolves the absolute value with a scaling parameter.\nAdditionally, clarify whether the claim in this paper—that there is no need to adjust the scaling parameter—holds generally."
},
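As background for the scaling-parameter question raised above, the generic variational (Fenchel-dual) lower bound for $f$-divergences is standard, though whether it matches the paper's exact construction is a question for the authors. For any measurable $g$,

$$D_f(\mu\,\|\,\nu) \ \ge\ \mathbb{E}_{\mu}[g(X)] - \mathbb{E}_{\nu}\big[f^{*}(g(X))\big],$$

where $f^{*}$ is the convex conjugate of $f$. For KL, where $f(u) = u\log u$ and $f^{*}(v) = e^{\,v-1}$, optimizing a scalar shift $c$ in $g + c$ gives $\sup_{c}\{\mathbb{E}_{\mu}[g] + c - e^{\,c-1}\mathbb{E}_{\nu}[e^{g}]\} = \mathbb{E}_{\mu}[g] - \log\mathbb{E}_{\nu}[e^{g}]$, i.e., the Donsker-Varadhan representation. A tunable scaling or shift parameter of this kind is presumably what lets the improved framework recover the KL-based results.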
{
"confidence": 2,
"rating": 7,
"review_id": "eruwfhwwEu",
"review_text": "This paper studies the learning theory aspect of the domain adaptation problem, where the key is to bound the estimation errors between expectations over shifting distributions. Specifically, this work improves the recently developed $f$-divergence-based generalization analysis, where the main results ensure a tighter generalization upper bound and the consistency between theory and method. For finite sample setting, a sharp bound is provided to accelerate the asymptotic rate. Numerical simulation is conducted to demonstrate the superiority of the theory-guided method over the existing discrepancy-based framework.\n\n+ The motivation is clear, i.e., improving the $f$-divergence-based bound and bridging the gap between method and theory, and the presentation is easy to follow.\n+ The technical part is generally sound and the justifications are sufficient.\n+ The experiment results are superior compared with recently developed generalization bounds.\n\n+ Some notations are inconsistent in theoretical analysis.\n+ The proposed algorithm needs further justifications.\n+ The experiment comparison could be improved.\n\nThere seem no major faults in this submission, and I only have the following minor concerns. \n\nQ1. Theory and methodology. The major result for the target error bound is provided in Eq. (4) in Thm. 4.1 and the specific bound w.r.t. KL-divergence is presented in line 162, where the induced learning objective consists of source risk and the square root of cross-domain KL-divergence. However, it seems that the optimization objective Eq. (5) considers the divergence without the square root directly. I understand the optimal solutions are the same for these two objectives (if the optimal solutions ensure 0 cross-domain discrepancies). But considering Eq. (4) is closely related to the major merit of this work, i.e., the tight bound, the consistency between Eq. (4) and Eq. (5) seems to be important. Some justifications are highly expected. \n\nQ2. Method application. As far as I understand this work, the derived $f$-DD measure can be applied to existing works whose primary goal is discrepancy minimization. Thus, it could serve as a plug-and-play module for existing SOTA DA methods. Thus, some detailed discussions on the capability of $f$-DD w.r.t. existing methods are highly expected.\n\nQ3. Following Q2, apart from the experiments in the current version, some comparisons between SOTA DA methods and their combination with $f$-DD objective are highly expected.\n\nQ4. The clarity w.r.t. definitions could be improved, e.g., $K_{h',\\mu}(t)$ depends on the hypothesis $h$ while the justification (i.e., line 178) is provided after the definition (i.e., line 176). A thorough check for these issues could improve the readability.\n\nQ5. Some notations seem to be inconsistent. For example, the notation $I_{\\nu}^{\\phi}(h,h')$ in line 132 is inconsistent with $I$ in line 129; the notations $\\mathbb{E}_{\\nu}$ in line 132 seems to be incorrect (probably should be expectation over $\\mu$?)."
},
{
"confidence": 3,
"rating": 6,
"review_id": "tYVWGCB8VB",
"review_text": "This paper aims to develop an improved version of f-divergence-based unsupervised domain adaptation (UDA) learning theory. In particular, the authors introduce a novel f-divergence-based domain discrepancy measure (f-DD) by combining the two existing concepts, which are f-divergence and domain discrepancy. Based on that f-DD measure, the paper next provides a generalization bound on the target domain, which is shown to be sharper than the existing related bound. The experimental results consistently demonstrate that f-DD outperforms the original f-DAL in three popular UDA benchmarks, with the best performance achieved by Jeffereys-DD.\n\nThe paper is well-written and easy to follow. The idea of introducing f-divergence-based UDA, targeting a better risk-bound on the target domain is novel and interesting. All the main statements of the paper are theoretically supported, though I did not have enough time to verify all of those propositions/theorems carefully. \n\nThe experimental results consistently demonstrate that f-DD outperforms the original f-DAL in three popular UDA benchmarks, with the best performance achieved by Jeffereys-DD.\n\nThe novelty of the paper is quite limited since the f-divergence-based domain discrepancy measure (f-DD) is proposed by combining the two existing concepts, which are f-divergence and domain discrepancy. \n\nIn Theorem 5.2, the authors claim that the application of the localization technique gives a fast-rate generalization, they do not provide a concrete evidence. Could the author give some explanations/clarifications for that. \n\nMoreover, the experimental part of the paper seems not be very convincing since it only provides experiments with quite small datasets (Office31, Office-Home, MNIST & USPS) and simple model (e.g., Lenet). It raises the concern about capability of f-DD in more complicated settings with large datasets and backbone network.\n\nPlease refer to my comments about the weaknesses of the paper."
},
{
"confidence": 3,
"rating": 4,
"review_id": "KlP0tWCkto",
"review_text": "This paper improves the theoretical foundations of UDA proposed by previous work, named f-DD. By removing the absolute value function and incorporating a scaling parameter, f-DD yields novel target error and sample complexity bounds, allowing us to recover previous KL-based results and bridging the gap between algorithms and theory presented in Acuna et al. Leveraging a localization technique, this paper also develops a fast-rate generalization bound. Empirical results demonstrate the superior performance of f-DD-based domain learning algorithms over previous works in popular UDA benchmarks.\n\n1) This paper holds significant theoretical significance in the field of UDA (Unsupervised Domain Adaptation);\n2) The proof of the theorem is very solid; \n3) The experiments are also sufficient.\n\n1) The readability of the paper is poor. It is almost entirely composed of definitions, remarks, lemmas and theorems, lacking a figure to introduce the motivation of this paper and explain why the improved framework is effective. 2) It is difficult to reproduce the results, as the training objective (5) is very abstract and unclear how to implement it experimentally. 3) This paper requires a substantial foundation of reading other papers in order to be understood.\n\nHow to implement the training objective (5)?"
},
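Since the review above asks how training objective (5) could be implemented, here is a generic sketch of the usual adversarial recipe for a "source risk + variational divergence estimate" objective, offered as a discussion aid only: every name, the network decomposition, and the Donsker-Varadhan surrogate are assumptions for illustration, not the paper's actual code.

```python
import math
import torch
import torch.nn.functional as F

def da_step(encoder, classifier, critic, x_src, y_src, x_tgt, lam=1.0):
    """One forward pass of a generic 'source risk + divergence surrogate' loss."""
    z_src, z_tgt = encoder(x_src), encoder(x_tgt)

    # Supervised risk on the labelled source domain.
    src_risk = F.cross_entropy(classifier(z_src), y_src)

    # Donsker-Varadhan-style variational estimate of a KL-type divergence:
    #   E_src[g] - log E_tgt[exp(g)], with g given by the critic network.
    g_src, g_tgt = critic(z_src).squeeze(-1), critic(z_tgt).squeeze(-1)
    div_est = g_src.mean() - (torch.logsumexp(g_tgt, dim=0) - math.log(g_tgt.numel()))

    # Minimized over encoder/classifier and maximized over the critic,
    # e.g. via a gradient reversal layer or alternating updates.
    return src_risk + lam * div_est
```

The min-max structure is the standard way such objectives are optimized in practice; whether the paper's objective (5) reduces to exactly this form is precisely what the review asks the authors to spell out.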
{
"confidence": 3,
"rating": 7,
"review_id": "mIxOtDRrHS",
"review_text": "In this paper, new expected risk analysis based on f-divergence is provided for the unsupervised domain adaptation problem. Although there are prior researches on expected risk analysis based on f-divergence, several issues have been pointed out, such as the fact that the variational representation of f-divergence used in these studies does not recover the Donsker-Varadhan representation of KL-divergence, and the use of the absolute value of the variational representation as a measure of domain discrepancy. \nIn this paper, to address these issues, the authors adopt an alternative variational representation of f-divergence and, based on this, provide an upper bound evaluation of the expected risk in the target domain, namely ``target risk $\\le$ source risk + marginal domain discrepancy + joint error''. Additionally, a sample approximation version of the derived upper bound is also provided, allowing it to be estimated from the data (excluding the joint error part, as in conventional bounds).\n\n- The paper clearly discusses what are difficulties with the conventional DA theory using f-divergence and explains how it is solved by the proposed approach. Especially, this paper provides a solid theoretical foundation, with detailed assumptions and rigorous proofs that are well-documented in the appendix. \n\n- Previous expected risk bounds in UDA have often been given by relatively simple inequality evaluations, following the formulation given by Ben-David et al. In contrast, a similar upper bound evaluation using the f-DD proposed in this paper requires an inequality evaluation for ``change of measure\" (as given in Lemma 4.1), and it can be seen that this is not an incremental extension of the conventional DA theory.\n\n- I don't think there is enough information needed when trying to calculate the derived upper boundary from the sample. For example, $t_0$ in Lemma 4.2 and the construction of the Rashomon set in Sec 5 should be discussed in more detail.\n\n- Are no assumptions specifically placed on the hypothetical set $\\mathcal{H}$?\n\n- In Lemma 4.2, is there a way to estimate the value of t_0 when t_0 cannot be written in closed form (e.g. for KL or Jeffereys employed in the Experiments?)? \n\n- The Rashomon set used in Section 5 appears to be in fact the set to be estimated (as the true expected risk of the source domain is unknown). How exactly is the Rashomon set constructed in this paper?\n\n- In the experiments, three types of discrepancies are evaluated for the proposed method, namely KL-DD, $\\chi^2$-DD and Jeffereys-DD. Then, do you have any insights on which discrepancy measure should be used for which type of problem? I think the question of which measure to use to evaluate domain discrepancy is critical not only for theorists but also for practitioners.\n\n- Is the f-DD proposed in Definition 4.1 always 'better' than the existing f-divergence-based discrepancy (Definition 3.1)? I am wondering whether there are cases where using absolute values to define the discrepancy is an advantage."
}
] | |
xRdpCOdghl | Enhancing Semi-Supervised Learning via Representative and Diverse Sample Selection | Semi-Supervised Learning (SSL) has become a preferred paradigm in many deep learning tasks, which reduces the need for human labor. Previous studies primarily focus on effectively utilising the labelled and unlabeled data to improve performance. However, we observe that how to select samples for labelling also significantly impacts performance, particularly under extremely low-budget settings. The sample selection task in SSL has been under-explored for a long time. To fill in this gap, we propose a Representative and Diverse Sample Selection approach (RDSS). By adopting a modified Frank-Wolfe algorithm to minimise a novel criterion $\alpha$-Maximum Mean Discrepancy ($\alpha$-MMD), RDSS samples a representative and diverse subset for annotation from the unlabeled data. We demonstrate that minimizing $\alpha$-MMD enhances the generalization ability of low-budget learning. Experimental results show that RDSS consistently improves the performance of several popular SSL frameworks and outperforms the state-of-the-art sample selection approaches used in Active Learning (AL) and Semi-Supervised Active Learning (SSAL), even with constrained annotation budgets. Our code is available at [RDSS](https://github.com/YanhuiAILab/RDSS). | https://openreview.net/pdf/9fd99d5fa35620629a56be581ef28d009815b175.pdf | [
{
"confidence": 5,
"rating": 4,
"review_id": "kVd9TqqRxl",
"review_text": "The paper suggests a new sampling method for the labeled set of semi-supervised learning. This sampling method, termed RDSS, selects a set of examples that is both representative of the data, and diverse. The paper shows that using such a sampling function improves both freematch and flexmatch, and compares it against other sampling methods, and methods from AL and SSAL.\n\nThe idea of the paper is good, and is well supported by theory. The experimental setup does convince me that the suggested method is better than random sampling when picking the labeled set of SSL. However, a better comparison to previous works is required, see the Weaknesses section.\n\nClarity: The paper is clearly written, the idea is well presented and intuitive, and the paper is easy to read and follow.\n\nSome of the claims made by paper already appeared in previous art. Specifically, [1] showed that \"traditional\" AL methods do not pick bad labeled sets for SSL when compared to random sampling. [2] showed that when the labeled set is particularly small, instead of traditional AL techniques, one should focus on labeling examples that are more typical and diverse, showing that such a method can drastically improve both AL and sampling techniques for SSL. [3] presented sampling strategy, showing that picking examples that are representative and diverse examples for the labeled set of SSL improves it by a big margin in low-budget scenarios.\n\nThe proposed manuscript does not reference or compare to any of these works. This affects both the novelty, significance and quality of the proposed method: the novelty is somewhat more limited, as many of the ideas overlap with existing works. The significance of this work is impacted, as while the problem at hand is important, it is unclear if the presented ideas pose significant advancement over the existing methods, and the quality is diminished, as a lot of comparisons are missing in the experimental setup.\n\nSpecifically, any low-budget strategy could be potentially applied to SSL as well, so those methods should be compared against as well. See for example [4], [5].\n\nAdditionally, the vice-versa argument should also hold -- if AL methods can be applied in this case, this method can be used as a method for picking labeled examples for active learning purposes and should be tested as such, as the literature in AL is much broader than the literature of picking the labeled set in SSL, which can provide a much wider context for the given work.\n\nIn addition, the framing of the paper is a bit unclear to me. I think the paper could benefit from explaining use cases in which one has the option to pick in advance the labeled set for SSL, which is not already covered by AL use cases.\n\n-------\n\n[1] Mittal, Sudhanshu, et al. Parting with illusions about deep active learning. (2019).\n\n[2] Hacohen, Guy et al. Active learning on a budget: Opposite strategies suit high and low budgets. (2022).\n\n[3] Yehuda, Ofer, et al. \"Active learning through a covering lens.\" (2022).\n\n[4] Mahmood, Rafid, et al. \"Low budget active learning via wasserstein distance: An integer programming approach.\" (2021).\n\n[5] Wen, Ziting, et al. \"NTKCPL: Active Learning on Top of Self-Supervised Model by Estimating True Coverage.\" (2023).\n\nCan you please elaborate on how the proposed method differs from the idea suggested in [2]?\n\nHow is the problem of selecting the labeled set of SSL different from the problem setting of active learning?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "w14JM6MlJs",
"review_text": "This paper proposes a Representative and Diverse Sample Selection approach (RDSS) that utilizes a modified Frank-Wolfe algorithm to minimize a novel α-Maximum Mean Discrepancy (α-MMD) criterion, aiming to select a representative and diverse subset from unlabeled data for annotation. Experimental results demonstrate that RDSS consistently improves the performance of several popular semi-supervised learning frameworks and outperforms state-of-the-art sample selection methods used in Active Learning (AL) and Semi-Supervised Active Learning (SSAL), even under constrained annotation budgets.\n\n1.This paper is in Well-written, logically organized, and smoothly expressed.\n2. The presented results demonstrate the effectiveness of the proposed approach.\n\n1. The author conducted tests on two baseline methods(FlexMatch [58] and Freematch [50]), but neither of them represents the current state-of-the-art.\n2. Some details of the experiments are unclear, such as in Table 3.\n\n1. The definition and usage of variable X in the article are inconsistent. \n2. Is Y in the section starting from line 141 representing the point as X? If so, I suggest using notations like Xi, Xj for clarity.\n3. In Chapter 6, the determination of the kernel and parameters seems arbitrary. Could you provide some proofs, theories, or experiments to justify them?\n4. The SOTA models selected by the author in the experiments are somewhat outdated. It is recommended to include some more updated methods.\n5. In the experiment section, the author mentions the limitations of stratified sampling. What do these limitations refer to? Why was this method excluded? From the results, it seems that stratified sampling outperforms the proposed method in several settings.\n6. In Table 2, what sampling method did the other comparison methods adopt?\n7. What dataset does Table 3 represent? What is the data distribution?\n8. The author should objectively evaluate their method, including its limitations."
},
{
"confidence": 2,
"rating": 6,
"review_id": "bhUQhi7xof",
"review_text": "This paper proposes a new sample selection method, RDSS, for the SSL task. RDSS considers both the representativeness and diversity of the selected sample and achieves state-of-the-art performance. This is achieved by the proposed α-MMD criterion and an efficient optimization algorithm GKHR.\n\n1. RDSS considers both representativeness and diversity of samples, which is a convincing strategy, and the experimental results also demonstrate the effectiveness of this motivation.\n2. Sufficient theoretical analysis and experimental comparisons are conducted to demonstrate the effectiveness of the proposed method.\n\nI would like to see images of the actual selected samples and visualizations of the feature distribution to demonstrate that RDSS indeed balances the representativeness and diversity.\n\nPlease refer to the weakness."
},
{
"confidence": 3,
"rating": 6,
"review_id": "yWTB6PepJV",
"review_text": "Choice of the labeled set in the semi supervised learning is critical for the final performance of the model. This problem can also be looked as AL with SSL, or single shot AL with SSL (in other words similar to experimental design). This works provides a way to select the seed set which is representative, as well as diverse. The problem is reduced to minimizing MMD and similarity score of the selected examples. The paper finally proposes a greedy algorithm, and compare the proposed method against various subset selection baselines, and AL.\n\nI like the motivation of the problem and a neat theoretical derivation of the objective, and the provided theoretical analysis. Paper was also easy to follow and experiments are compelling.\n\n- From a purely combinatorial point of view, I think that the final objective is supermodular in nature. Given the vast literature on submodular/supermodular functions, is it not possible to get an algorithm purely from that standpoint? If so, how different would it be from the proposed one? \n\n- Can one derive things such as leverage scores to detect the outlier-ness of a given point (or any other score)? If so, then couldn't one use something such as diversity - outlier score (or add a score that models likelihood) , with diversity such as Facility location function, and optimize the final objective using greedy? \n\n- In experiments I believe one of the strong baselines such as facility location function is missing. Facility Location has a rich history and have been used in several instances in Active Learning ([1, 2, 3, 4]). I believe authors can add a small discussion on FL and add that baseline. Furthermore, other diversity based approaches have also been considered in the past [5]\n\n- Now a days a lot of focus is also for doing finetuning of existing CLIP models [3]. I'd appreciate one experiment on fine-tuning the CLIP models using the proposed method. \n\n\nReferences\n- [1] Submodularity in machine learning and artificial intelligence\n- [2] An Experimental Design Framework for Label-Efficient Supervised Finetuning of Large Language Models\n- [3] LabelBench: A Comprehensive Framework for Benchmarking Adaptive Label-Efficient Learning\n- [4] Deep Submodular Peripteral networks\n- [5] GLISTER: Generalization based Data Subset Selection for Efficient and Robust Learning\n\nRefer to the weaknesses."
}
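To ground the representativeness-plus-diversity discussion in the reviews above, here is a minimal greedy baseline in the spirit of MMD minimization (classic kernel herding). It is a generic sketch for comparison purposes, not the paper's modified Frank-Wolfe/GKHR algorithm; the RBF kernel and bandwidth are illustrative assumptions.

```python
import numpy as np

def greedy_mmd_select(X: np.ndarray, budget: int, gamma: float = 1.0) -> list:
    """Greedily pick `budget` points whose kernel mean embedding matches the
    full pool (kernel herding); the second term doubles as a diversity penalty."""
    sq = (X ** 2).sum(axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF kernel
    mean_sim = K.mean(axis=1)        # similarity to the pool's mean embedding
    selected, sel_sim = [], np.zeros(len(X))
    for k in range(budget):
        scores = mean_sim - sel_sim / (k + 1)   # classic herding rule
        scores[selected] = -np.inf              # no repeats
        j = int(np.argmax(scores))
        selected.append(j)
        sel_sim += K[:, j]                      # similarity to chosen points
    return selected

# Example: pick 10 representative, mutually dissimilar points from 500.
subset = greedy_mmd_select(np.random.default_rng(0).normal(size=(500, 16)), 10)
```

The first term rewards points that look like the pool (representativeness), while the subtracted term penalizes similarity to points already chosen (diversity), which is the same trade-off the reviewers describe; a facility location objective, as one review suggests, would be a natural submodular alternative for the diversity part.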
] | |
xRQxan3WkM | The Implicit Bias of Adam on Separable Data | Adam has become one of the most favored optimizers in deep learning problems. Despite its success in practice, numerous mysteries persist regarding its theoretical understanding. In this paper, we study the implicit bias of Adam in linear logistic regression. Specifically, we show that when the training data are linearly separable, the iterates of Adam converge towards a linear classifier that achieves the maximum $\ell_\infty$-margin in direction. Notably, for a general class of diminishing learning rates, this convergence occurs within polynomial time. Our result shed light on the difference between Adam and (stochastic) gradient descent from a theoretical perspective. | https://openreview.net/pdf/d4a15e237e7fcad39521c47bdcdaeb9981dca258.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "aHfkRN83Kq",
"review_text": "The main focus of this paper is on the implicit bias of Adam for a single layer linear model which performs binary classification on separable data. In particular, assuming a zero stability constant $\\epsilon$, this paper reveals that Adam finds the solution that achieves maximum-$\\ell_\\infty$-margin and characterizes the convergence rate for different classes of learning rate. This implicit bias is different from the $\\ell_2$-norm minimization solution obtained by previous work which does not assume $\\epsilon = 0$.\n\n- This paper is clearly written and well-organized. It is easy and clear to follow the argument and motivation of this paper, e.g., the proof sketch makes it easy to follow the way how the theoretical conclusion is developed. In addition, to me, the introduction of the related works are comprehensive and clear. It also clearly summarizes the difference between this paper and related works.\n- The settings and results of this paper are new compared to previous works, i.e., previous works showed an $\\ell_2$-norm solution implicit bias of Adam on separable data while this paper reveals an $\\ell_{\\infty}$-norm implicit bias when the stability constant $\\epsilon$ is zero.\n\nDespite the novelty of the theoretical claims, I still have several concerns, which I will discuss in the following.\n\n1. Removing the stability constant $\\epsilon$ makes the approach of this paper fails to characterize the influence of it, which, though being small, still has non-negligible effect, e.g., [1] observed that Adam with an $\\epsilon$ that is too small does not even converge in certain circumstances. Treating $\\epsilon$ as 0 seems a bit rough to me. \n\n In addition, [2] showed that Adam minimizes the interpolation norm of gradients that depends on magnitudes of various hyper parameters including the stability constant $\\epsilon$ (although [2] did not specify the types of loss functions and model architectures). [1] claimed that Adam with nonzero $\\epsilon$ converges to $\\ell_2$-norm solution, which is also verified by extensive experiments. As a comparison, this paper showed that both Adam with $\\epsilon=0$ and with a non-negligible $\\epsilon$ do not converge to the aforementioned solutions (line 210). In this sense, it seems that the conclusion reached by this paper contradicts with those derived by [1, 2]. Therefore, in my view, it would be better to start with a non-zero $\\epsilon$ and let the case with $\\epsilon=0$ be a special case to better capture the effect of the $\\epsilon$ on the implicit bias.\n\n2. This paper only considers a simple setting: the model is only a one-layer linear model and there is no stochastic sampling noise which is typically necessary in practice. As a comparison, authors of [1] have already studied Adam on separable data for homogeneous models, which can cover the single layer model of the current work as a special case. Thus excluding the stochastic sampling noise in the current work is kind of unsatisfying to me since the model is already a simple one. In addition, I think that the authors of the current work should at least repeat the experiments conducted in [1] (such as those for homogeneous neural networks) to further support their theoretical claims, especially considering that the authors claimed in line 210 that their results are more accurate than those of [1].\n\n**Reference**\n\n[1] Wang et al. The implicit bias for adaptive optimization algorithms on homogeneous neural networks.\n\n[2] Cattaneo et al. 
On the Implicit Bias of Adam.\n\n1. Could the authors explain the contradiction and connection with previous works? Is it possible to start with a non-zero $\\epsilon$ and let $\\epsilon=0$ be a special case? \n\n2. How will adding stochastic sampling noise affect the implicit bias?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "mtXPBn5MYb",
"review_text": "This paper examines the implicit bias of the Adam optimizer in the context of linear logistic regression, demonstrating that it converges to the maximum $\\ell_\\infty$-margin solution under certain mild conditions. The authors note that omitting the stability constant in Adam updates results in a different implicit bias than gradient descent, with or without momentum, which converges to the maximum $\\ell_2$-margin solution. They also explore various decreasing learning rates, showing that Adam's margin converges at a polynomial rate, which is faster than that of gradient descent. Additionally, they provide numerical experiments that support their findings.\n\n- Understanding why Adam performs better than GD in several settings is an important problem and this work takes an important step towards this by showing that Adam has a different implicit bias than GD in the linear logistic regression setting.\n\n- Overall, the paper is well-written and easy to follow. The proof sketch in Section 6 is explained well.\n\n- The paper does not present results for a fixed learning rate and only considers a set of decreasing learning rates.\n\n - The discussion in lines 50-52 and after Corollary 4.7, comparing the rates of Adam and GD, should also comment on the convergence rates for GD with adaptive learning rates (e.g., normalized GD) which have been shown to converge faster (see [1] and related work) than GD.\n\n - (Minor) In Assumption 4.3, ‘non-increasing’ should be ‘decreasing’ or ‘diminishing’.\n\n- The results in prior work on implicit bias of GD are global (hold for any initialization), whereas the results in this paper require an assumption on the initialization (Ass. 4.2). Based on the discussion following this assumption, it might be better to state an assumption on the data and then show that the condition on the initialization holds as a Lemma.\n\n- The paper does not comment on how optimal the obtained rates in Corollary 4.7 are.\n\n**References:**\n\n[1] Wang et al., Achieving Margin Maximization Exponentially Fast via Progressive Norm Rescaling, 2023.\n\nCan the authors comment more on why considering the stability constant $\\epsilon=0$ makes the setting more challenging? I understand the motivation in lines 105-107, but it is unclear what the challenge is since the accumulated second-order moments would be non-zero."
},
{
"confidence": 3,
"rating": 7,
"review_id": "QMFLkYjMUQ",
"review_text": "In this work, the author studies the implicit bias of Adam optimizer for a single layer neural network on separable data. The author's work suggests that, compared to the implicit bias of gradient descent which is the max $ \\ell_2 $ margin solution, Adam solution converges to the maximum $ \\ell_\\infty $ margin solution. For this work, authors take both exponential and logistic loss and find that the convergence speed is on a polynomial order. \n\nIn order to confirm the results, the authors perform experiments on synthetic datasets for binary classification tasks and confirm Adam’s convergence to the $ \\ell_\\infty $ margin comparatively.\n\nThe work is novel (to the best of my knowledge) and interesting as the study of implicit bias of Adam could have further implications in characterizing the difference in optimization behavior of Adam vs SGD in practical scenarios. The assumptions of the work have been clearly presented and seem reasonable. With regard to the $ \\epsilon $, while theoretical results are not provided, the authors include convincing experimental illustrations to convince me of the assumption. I also appreciate the well written proof sketch which helps convey the ideas\n\nAt the moment, I have some concerns with the paper which are more fit to be discussed as questions.\n\n1) Can the authors expand on how they arrive at the right side of inequality after line 292 using 6.1 ? Perhaps take me through the inequality step by step ? \n2) Can the author provide some comments regarding the independence of convergence in the case of $ a = \\frac{2}{3} $ from $ \\rho $ ? Is there some intuition with regards to the boundaries and case on $ a $ ?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "GvQmgodRaD",
"review_text": "This paper studies the implicit bias of the Adam optimizer for logistic regression on linearly separable data. The authors prove that Adam converges to the linear classifier with the maximum $\\ell_\\infty$-margin. This result contrasts with the classical results on (stochastic) gradient descent (with or without momentum), which converge to the maximum $\\ell_2$-margin solution.\n\n- The authors theoretically study a popular yet not well-understood optimization method, Adam, in the context of a well-studied classical problem: logistic regression on linearly separable data. This offers a solid and insightful contribution to understanding Adam. In particular, distinguishing Adam from (S)GD with/without momentum on this classical problem is a very interesting result.\n- The technical contributions are also of independent interest, as they prove the results for Adam without relying on the stability constant (which is closer to practice) and use mild assumptions.\n- The paper is well-written and easy to follow. The proof sketch provides a clear and comprehensive overview of the proof of the main theorem.\n\nThere are no major concerns about this paper. Below are minor comments and some areas for improvement:\n- The paper does not provide an intuition behind why Adam achieves the maximum $\\ell_\\infty$-margin solution, in contrast to GD which achieves the maximum $\\ell_2$-margin solution. It would be great if the authors could offer insights on how the $\\ell_\\infty$-margin arises instead of the $\\ell_2$-margin, for example, through a warm-up analysis with SignGD ($\\beta_1=\\beta_2=0$) or RMSProp ($\\beta_1=0$). One way to provide an intuition is as follows: Gunasekar et al. (2018) proved that steepest descent converges to the max-margin solution, implying that SignGD (steepest descent w.r.t. $\\ell_\\infty$-norm) converges to the maximum $\\ell_\\infty$-margin solution. Since SignGD is known to be a good proxy for Adam, this may offer an insight into why Adam converges to the maximum $\\ell_\\infty$-margin solution.\n- The authors claim that the bounds in Corollary 4.7 are derived under worst-case scenarios and argue that this is why, in practice, we often observe margins converging faster than the bounds in the corollary. However, this statement lacks supporting evidence. The paper should prove that the rate of convergence is tight. Otherwise, the observed faster convergence of margins in experiments might simply indicate that the bound is not tight enough.\n- Some sentences, including those in the abstract, use the term \"convergence\" unclearly. For example, in the abstract, \"this convergence occurs within polynomial time\" does not indicate the objective (the normalized $\\ell_\\infty$-margin in this case) of convergence. This could be confused with other notions of convergence, such as convergence in direction (i.e., $\\frac{w_t}{\\lVert w_t \\rVert} \\to \\frac{w^*}{\\lVert w^* \\rVert}$).\n- (page 6, line 183) According to the paper, the normalized $\\ell_2$-margin converges at a speed of $O(\\log \\log t / \\log t)$ when using GD. However, this should be corrected to $O(1 / \\log t)$. According to Soudry et al. (2018), the normalized weight vector converges to the maximum $\\ell_2$-margin vector \"in direction\" with a convergence rate of $O(\\log \\log t / \\log t)$, i.e., $\\lVert \\frac{w_t}{\\lVert w_t \\rVert} - \\frac{w^*}{\\lVert w^* \\rVert}\\rVert = O(\\log \\log t / \\log t)$. 
However, the normalized $\\ell_2$-margin converges at the speed of $O(1/\\log t)$, i.e., $|\\min \\frac{\\langle w_t, y_t \\cdot x_t \\rangle}{\\lVert w_t \\rVert} - \\frac{\\langle w^*, y_t \\cdot x_t \\rangle}{\\lVert w^* \\rVert} | = O(1/\\log t)$.\n- (page 1, line 25) Typo: reply on -> rely on\n\n---\n[Gunasekar et al. 2018] Characterizing Implicit Bias in Terms of Optimization Geometry, ICML 2018.\n\n[Soudry et al. 2018] The Implicit Bias of Gradient Descent on Separable Data, JMLR 2018.\n\n- Does Theorem 4.5 imply that Adam (with a learning rate $\\eta_t = (t+2)^{-a}$, $a<1$) reduces loss faster than GD (Adam: $O(e^{-\\gamma t^{1-a} / 4(1-a)})$ vs. GD: $O(1/t)$)? It would be great if the authors could provide a detailed comparison of the convergence rates of loss between Adam and (S)GD with/without momentum.\n- Is $\\beta_1 \\le \\beta_2$ a necessary condition? What happens if we use Adam with $\\beta_1 > \\beta_2$?\n- Assumption 4.4 seems to be a non-standard assumption. Is this assumption a necessary condition? Can you explain why such an assumption is needed?"
}
] | |
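The contrast at the heart of these reviews, written out explicitly. This is an illustrative formulation using the standard margin definitions for the separable linear setting the reviews describe; the notation is ours, not the paper's.

```latex
% GD on logistic loss converges in direction to the maximum l2-margin solution:
w_{\mathrm{GD}} \;\propto\; \arg\max_{\|w\|_2 \le 1} \; \min_{i} \, y_i \langle w, x_i \rangle
% Adam with stability constant epsilon = 0 (the paper's setting, per the reviews)
% converges to the maximum l_infty-margin solution instead:
w_{\mathrm{Adam}} \;\propto\; \arg\max_{\|w\|_\infty \le 1} \; \min_{i} \, y_i \langle w, x_i \rangle
```

This also matches the SignGD intuition raised in review GvQmgodRaD: sign descent is steepest descent with respect to the $\ell_\infty$-norm, so its margin bias is naturally measured in that norm.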
xQWJBeK5rh | Structural Inference of Dynamical Systems with Conjoined State Space Models | This paper introduces SICSM, a novel structural inference framework that integrates Selective State Space Models (selective SSMs) with Generative Flow Networks (GFNs) to handle the challenges posed by dynamical systems with irregularly sampled trajectories and partial observations. By utilizing the robust temporal modeling capabilities of selective SSMs, our approach learns input-dependent transition functions that adapt to non-uniform time intervals, thereby enhancing the accuracy of structural inference. By aggregating dynamics across diverse temporal dependencies and channeling them into the GFN, SICSM adeptly approximates the posterior distribution of the system's structure. This process not only enables precise inference of complex interactions within partially observed systems but also ensures the seamless integration of prior knowledge, enhancing the model's accuracy and robustness. Extensive evaluations on sixteen diverse datasets demonstrate that SICSM outperforms existing methods, particularly in scenarios characterized by irregular sampling and incomplete observations, highlighting its potential as a reliable tool for scientific discovery and system diagnostics in disciplines that demand precise modeling of complex interactions. | https://openreview.net/pdf/1b2f2766bee3909361bdfd7042fefaf2beb4f061.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "ukKArtvxJe",
"review_text": "The paper introduces the SICSM framework, integrating Selective State Space Models (SSMs) with Generative Flow Networks (GFNs) to tackle challenges in dynamical systems characterized by irregularly sampled trajectories and partial observations. SICSM leverages the adaptive temporal modeling capabilities of SSMs to learn input-dependent transition functions, enhancing structural inference accuracy. It aggregates diverse temporal dependencies and channels them into a GFN to approximate the posterior distribution of the system’s structure. Extensive evaluations across multiple datasets demonstrate SICSM's good performance in accurately inferring complex interactions in partially observed systems.\n\n- The integration of Selective SSMs with GFNs is a novel approach that addresses significant challenges in structural inference for dynamical systems. The adaptive mechanisms for handling irregular sampling and partial observations are particularly innovative.\n- The research is thorough and well-documented, with extensive evaluations across a variety of datasets. The methodological rigor and comprehensive experimental validation enhance the reliability of the findings.\n- The paper is well-organized and clearly written, with detailed explanations of the methodologies and experimental setups. Figures and diagrams effectively illustrate the concepts and results.\n- The proposed SICSM framework has broad applicability in scientific discovery and system diagnostics across multiple disciplines. Its ability to handle real-world complexities such as irregular sampling and partial observations makes it a valuable tool for researchers.\n\n- The implementation of SICSM is computationally intensive, requiring significant resources and expertise. This complexity may limit its accessibility and widespread adoption.\n\n1. How does SICSM handle situations where the interaction structures of the dynamical systems change over time? Are there plans to extend the framework to support dynamic graphs?\n\n2. Can the authors provide more details on the computational resources required for training? Are there any strategies to optimize resource usage?\n\n3. What specific real-world applications do the authors envision for SICSM? Are there particular domains where it has shown exceptional promise?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "20GlwIgOYM",
"review_text": "This paper proposes to combine State Space Models and Generative Flow Networks to perform structural inference in an irregular time series context. The proposed method is evaluated on a series of different tasks where it performs well, and compared to a number of baselines. The method's robustness to short time series and missing observations is evaluated.\n\nThe paper proposes an interesting architecture and solves problems that have the potential to be very relevant in real world contexts, such as biological time series. The empirical evaluation is fairly thorough about testing on many different tasks.\n\nMy main concerns for the paper are its low novelty and its low number of ablations, which make it hard to understand how specific pieces contribute to the performance of the method.\n\nGenerally I'm uncomfortable with the way many things are presented in the paper, it's not always clear what's a novel contribution and what's not. I encourage the authors to be clear and exercise an abundance of caution. \n\n/!\\\\ In my humble opinion this paper uncomfortably downplays its similarity to DAG-GFN [14] and JSP-GFN [15] in several places, and I'm not even an author of these papers. This is especially concerning considering that in many instances JSP-GFN is the closest performing baseline to the proposed method.\n\nI'm a bit put off by the framing of the method. The SSSM is the parameterization, the GFN is the optimization method, the structural inference is the task. The ingredients aren't individually novel (e.g. Mamba, ContiFormer), and some of those combos have been tried before (I'm thinking in paricular here of DAG-GFN/JSP-GFN). I don't really see how \"SICSM [..] redefines approaches to structural inference in complex systems\". Maybe what bothers me is that this kind of language obscures the actual contributions of the paper. Many design choices are close to ones taken in [14-15]. I'd encourage the authors to be more careful here. I understand this may come from the authors' lack of familiarity with English, but it creates an unfortunate ambiguity in deciphering what's a contribution and what is just using prior work.\n\nI'm not sure what an $\\alpha$-distance is, is it meant to be a placeholder for any norm?\n\nSection 3.3 introduces the flow-matching condition, but more modern conditions exist, and this work in particular seems to be using DB (**and not SubTB!** as suggested by the appendix text). Why is it only introduced in the appendix if it is the chosen objective? Why not just directly present the DB condition used? This is an example of the similarity to DAG-GFN being somewhat downplayed; I encourage the authors to exercise caution.\n\n\"To enhance the architectural sophistication of our model, we arrange L Residual Blocks in a sequential configuration, with the output of each block feeding directly into the next.\" This is a good example of an off putting phrasing. This describes a standard residual model, but the phrasing in this paragraph (and others in this section) suggests this is somehow a new way to do things. For example, unless I'm missing something, what the authors as \"intricate multi-hop relationships\" is simply a natural and normal consequence of depth in _deep_ neural networks. Either that or the text is not appropriately explaining the uniqueness of the method, which might be even more concerning.\n\nThe trick presented in (9) is neat, but it does imply spending $B$ times more compute. Are baselines also allowed to use this trick? 
If not the comparisons may be unfair.\n\nIn section 4.3, the objective is taken from [14-15]. Please use proper attribution.\n\nSection 5.4 poses an interesting hypothesis, but it's unfortunate that it is only qualitatively evaluated. Why not run proper experiments and measure the effect of residual depth? \n\nAnother issue more generally is that the design choice of using a residual SSSM model doesn't seem compared to alternatives. What about a deep transformer with exactly the same tricks? What choices matter? It's nice that the effect of the method is analyzed wrt to for example missing observations and compared to baselines, but what about the method with itself, i.e. ablations?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "QPBG5OKxD2",
"review_text": "The authors consider the problem of structure learning of dynamical systems from irregularly sampled trajectories and partially observed systems. They propose Structural Inference with Conjoined State Space Models (SICSM), a method based on selective state space models (SSMs) and generative flow network (GFNs). The central idea of this work is to use a SSM for modelling the behaviour of dynamical systems while using a GFN to learn the interacting graph structure between the variables of the system. The authors evaluate their proposed approach on a comprehensive set of datasets for various tasks and compare against a numerous baselines.\n\nThe authors present a method that addresses a challenging problem in the domain structure learning of dynamical systems -- i.e. learning system structure from irregularly sampled trajectories and partially observed systems. The use of SSMs to approximate system dynamics while using GFNs to learn the graph structure of the system is unique and novel approach to this problem. The authors provide a comprehensive evaluation of their method over variety of systems for irregularly sampled trajectories and partially observed systems, demonstrating SICSM consistently outperforms counterpart approaches.\n\n- The method has 3 key components: state space model, embedding residual blocks, and a GFN to approximate the graph structure of the system. It is not entirely clear how these individual components interact and the explicit need for the GFN (see questions below). \n- The authors consider a comprehensive set of datasets and baselines, but only one evaluation metrics (AUROC). For example, some other metrics to consider for this task are: structural hamming distance (SHD), F1-score, area under the precision-recall curve (AUPRC). Only considering one evaluation metrics makes it difficult to assess the robustness of the approach.\n- Another method that seems relevant to this work which address an similar problems is CUTS (Cheng et al. 2023). It appears that majority of the baselines considered in this work are or not necessarily methods explicitly tailored to handle irregular time-series. Including a method like CUTS in this evaluation may be important to create a fairer comparison of SICSM. \n\nReferences:\nCheng, Yuxiao, et al. \"Cuts: Neural causal discovery from irregular time-series data.\" International Conference on Learning Representations (2023).\n\n- For the reward defined in Equation 8, what is the explicit form of $R(<G, \\lambda>)$? The authors state that $P(U_{all} | \\lambda, \\mathbf{Adj})$ represents the likelihood model implemented via a neural network. Is this model trained beforehand? Or is the reward being simultaneously learned throughout training with the GFN?\n- A central advantage to using a GFN (or specifically JSP-GFN) to model structure is the ability so approximate the distribution/uncertainty over this structure (and in this case also over the parameters) -- i.e. approximating $P(\\mathbf{Adj}, \\lambda | U_{all})$ instead of just $\\mathbf{Adj}$. In the results, only one deterministic metrics is considered (AUROC). Why not consider a distributional metric to evaluate how well $P(\\mathbf{Adj}, \\lambda | U_{all})$ is approximated, especially given you are comparing to JSP-GFN?\n- What is the motivation of also learning the parameters $\\lambda$ if the primary objective is to learn $\\mathbf{Adj}$? Moreover, there is no evaluation of $P(\\lambda | G)$. 
If this is an important aspect of the approach, why not include a distributional metrics (as stated in my previous comment), or possibly including evaluation of the negative log-likelihood?\n- What is not entirely clear to me is the use of the state space model (SSM) architecture -- specifically, is $\\mathbf{Adj}$ embedded in the SSM of each residual block? Is the approximated graph structure being used by the SSM or is this an independent output?"
},
{
"confidence": 2,
"rating": 5,
"review_id": "jvXKp4xWKK",
"review_text": "Processes of scientific interest which are representable as graphs, in biology, chemistry, material sciences, mechanics, are an important application for machine learning. Nodes often represent physical objects, some of which influence each other. Nodes exhibit a set of features which can be observed over time. Prior knowledge about the process stems from a mechanistic understanding and can often be represented as the presence or absence of edges between nodes. Node feature observations may be irregularly spaced through time; not all nodes may be observed with every observation. \nThis paper develops a statistical model for this application with support for irregularly sampled and partial observations of node features, as well as prior knowledge incorporation. Prior knowledge is restricted to the indication of presence, but not absence, of edges. Partially observable nodes are assumed to be from a static node set throughout all observations (i.e., nodes are either always observable or always unobservable). Observations are not assume to contain a timestamp indication (as in mobile phone accelerometer readings, which may be irregularly sampled but whose timestamp is read at input).\nThe model's architecture is relatively sophisticated and is based on generative flow networks to represent and learn the structural aspects of the graph, and state space models to represent the evolution of node features over time. \nThe paper presents experiments on 16 datasets stemming from 4 physical models, and compares to 7 other models, showing superiority in scenarios where observations are irregularly spaced or nodes partially observable.\n\nThe paper takes an established problem class (graph systems) with its known challenges (irregular sampling, partial observations), which is not original. However it goes to great lengths to make use of two strong methods, GFN and SSM, with a resulting combination that seems reasonable, strong and of useful application.\nThe paper is generally clear, notations are coherent and legible, several diagrams support the explanation. To improve the writing, a running example might help bridge the abstractions (node, edge, state...) to physical reality, illuminating and motivating the implementation. The same goes to comment on the connection between the model and the applied datasets (some of this is covered in Annex C, with the exception of C.5 which leaves the physical counterparts of modelled data undescribed).\n\nExperimental validation is moderately convincing. Baseline implementations seem strong, with care taken to recover implementations of competing methods, as documented in Annex D. However, all datasets are synthetic. The only real dataset, PEMS, presented in Annex C.5, with results in Annex E.2. In addition, experimental validation seems unconcerned with performance outside the specific cases of partial observations or irregular sampling -- reducing the paper's claim to \"this model is better for these two scenarios only\".\nThere seem to be a duplication in the presentation of datasets (both in sec5.1, between the paragraphs starting l.279 and l.290, and again between Annex C.1 and C.2 vs C.4) -- this is confusing. Also, sec3.3 seems to be internally redundant with duplicated points (e.g. l.151 vs eq3, and l.148 vs l.157, which again is confusing. 
Numerous sentences have incorrect English syntax which obscures their meaning\n\n* Eq.2: since s' is terminal, isn't any $s'' = s_f$ ?\n* Eq.5: in your contemplated application scenario, the interval between sampling times, or equivalently the timesamp of each sample, allowing to calculate $\\Delta$, is not given with the samples, correct? I've worked on mobile accelerometer/GPS data where sampling is irregular but the timestamps are given, which is why I'd like to make sure. Can you clarify whether a posterior over $\\Delta$ can be pratically recovered?\n* It might be useful to show a concrete, simple example of training data to clarify the scenario described in abstract terms sec3.1. \n* Annex 1 fig5: shouldn't all variables be indexed with $i$, the node id? I'm asking because $A, h^t$ aren't. But if so, how is the interdependence between nodes modelled? \n* fig6: do I have it right that GFlowNet only adds edges, but doesn't remove any, moving from start to end? Does that have as a consequence that any prior knowledge can only formulated as known-to-be-present edges, but not known-to-be-absent edges (impacts Annex F l.978)?\n* Annex C.5: what is the physical model? What is a node, an edge of the model?\n* Annex l.872: how is the % of prior knowledge defined?\n* Annex l.863: link is referred to but missing"
}
] | |
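Several of the reviews above probe how SICSM's state space model consumes irregular time gaps $\Delta$ (e.g., the questions on Eq. 5 and on timestamps). As a reference point only, here is a minimal sketch of the generic mechanism such models build on: zero-order-hold discretization of a diagonal continuous-time linear SSM with a per-sample time gap. The diagonal parameterization, shapes, and function names are illustrative assumptions, not SICSM's actual architecture.

```python
import numpy as np

def ssm_step(h, u, dt, A_diag, B):
    # One zero-order-hold step of a diagonal continuous-time linear SSM,
    # h'(s) = A h(s) + B u, integrated exactly over an irregular gap dt.
    # h: (d,) state; u: (m,) input; A_diag: (d,) negative reals; B: (d, m).
    Abar = np.exp(A_diag * dt)                       # state transition over dt
    Bbar = ((Abar - 1.0) / A_diag)[:, None] * B      # ZOH input map
    return Abar * h + Bbar @ u

def run_irregular(us, ts, A_diag, B):
    # us: (T, m) observations at irregular timestamps ts: (T,).
    h = np.zeros_like(A_diag)
    hs = []
    for t in range(len(us)):
        dt = ts[t] - ts[t - 1] if t > 0 else 1.0     # per-sample time gap
        h = ssm_step(h, us[t], dt, A_diag, B)
        hs.append(h)
    return np.stack(hs)
```

A selective SSM in the Mamba sense would additionally make `dt` and `B` functions of the input; this sketch keeps them fixed to isolate the irregular-sampling mechanism the reviewers ask about.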
xOCAURlVM9 | Assembly Fuzzy Representation on Hypergraph for Open-Set 3D Object Retrieval | The lack of object-level labels presents a significant challenge for 3D object retrieval in the open-set environment. However, part-level shapes of objects often share commonalities across categories but remain underexploited in existing retrieval methods. In this paper, we introduce the Hypergraph-Based Assembly Fuzzy Representation (HARF) framework, which navigates the intricacies of open-set 3D object retrieval through a bottom-up lens of Part Assembly. To tackle the challenge of assembly isomorphism and unification, we propose the Hypergraph Isomorphism Convolution (HIConv) for smoothing and adopt the Isomorphic Assembly Embedding (IAE) module to generate assembly embeddings with geometric-semantic consistency. To address the challenge of open-set category generalization, our method employs high-order correlations and fuzzy representation to mitigate distribution skew through the Structure Fuzzy Reconstruction (SFR) module, by constructing a leveraged hypergraph based on local certainty and global uncertainty correlations. We construct three open-set retrieval datasets for 3D objects with part-level annotations: OP-SHNP, OP-INTRA, and OP-COSEG. Extensive experiments and ablation studies on these three benchmarks show our method outperforms current state-of-the-art methods. | https://openreview.net/pdf/8985b94f6c2cb16afe1fb713d3e65acc97d532b2.pdf | [
{
"confidence": 3,
"rating": 3,
"review_id": "N3qEmmW82p",
"review_text": "This paper presents a novel 3D object retrieval method. First, to facilitate this task, the authors build 3 datasets for training and evaluation, which may significantly benefit the community. Then the paper propose the Isomorphic Assembly Embedding (IAE) and the Structured Fuzzy Reconstruction (SFR) modules, which are designed to generate assembly embeddings with geometric-semantic consistency and overcome the distribution skew of unseen categories. Besides, HIConv is proposed to capture high-order correlations within and among objects. Extensive experiments show that the method achieves sota performance.\n\n1. This paper builds 3 datasets for the task, which may facilitate future research. \n2. The paper proposes several novel modules to capture the part-level and inter-object features for object retrieval.\n3. The task itself is important in shape understanding.\n\n1. No visualization results.\n2. The presentation is hard to understand. There are quite some complex equations, like Eq 2 and Eq 4. Please briefly explain what they mean and how they work.\n3. In Fig. 1, it shows that intra-object features are extracted before inter-category features. But in Fig. 2, I only see Inter-object features? It's hard for me to match them up.\n4. I still don't understand the input. So you need dense point cloud with ground truth 3D part segmentation as input, right? If the segmenation is not perfect, will the method collapse? if the point cloud undergoes SE(3)-transformation, will the method collapse? Can this method handle partial point cloud input, like the point cloud back-projected from depth map?\n\nsee weakness"
},
{
"confidence": 4,
"rating": 6,
"review_id": "5aRJhX39O0",
"review_text": "The manuscript introduces a framework (HAFR) for addressing the challenge of open-set 3D object retrieval. The authors propose a bottom-up approach focusing on part assembly, leveraging both geometric and semantic information of object parts to enhance retrieval performance across categories, including those unseen during training.\n\nThe HAFR framework consists of two main modules: Isomorphic Assembly Embedding (IAE) and Structured Fuzzy Reconstruction (SFR). The IAE module utilizes Hypergraph Isomorphism Convolution (HIConv) and assembly auto-encoders to generate embeddings with geometric-semantic consistency. The SFR module tackles distribution skew in open-set retrieval by constructing a leveraged hypergraph based on local and global correlations and employs a memory bank for fuzzy-aware reconstruction.\n\nThe authors have created three datasets, OP-SHNP, OP-INTRA, and OP-COSEG, to benchmark their approach. Extensive experiments demonstrate the superiority of HAFR over current state-of-the-art methods in open-set 3D object retrieval tasks.\n\n- The paper presents a method for open-set 3D object retrieval that cleverly integrates part-level information using hypergraphs, which is a unique and promising direction in the field. The HAFR framework is well-thought-out, with clearly defined modules (IAE and SFR) that address different aspects of the retrieval task, from assembly isomorphism to distribution skew mitigation.\n- The construction of three new datasets with part-level annotations provides a valuable resource for the research community and supports the validation of the proposed method.\n- The methodology is clearly described, and the algorithms are well-structured, making it relatively easy for readers to follow the technical contributions.\n- The paper is well-written and easy to follow.\n\n- The paper does not address scenarios with varying numbers of parts per object. Expanding the framework to handle flexibility in the number of parts could improve its applicability.\n- The manuscript could benefit from a discussion on the computational complexity and efficiency of the proposed methods, especially when scaling to larger datasets or higher-dimensional part features.\n- Why not evaluate on the PartNet(https://partnet.cs.stanford.edu/)?\n- Although the paper claims state-of-the-art performance, they do not achieve the best (SDML is the best on OP-COSEG for NDCG metric), what is the reason?\n- Some implementation details, such as network architecture specifics and hyperparameter settings, could be better elaborated to ensure reproducibility.\n- The paper mentions that data and code will be made available upon acceptance, which is good practice. - However, providing this information upfront or during the review process could enhance transparency and reproducibility. For the three datasets, the detailed construction is missing and encourages the authors to publicize the data, facilitating the community.\n- The limitations and failure cases should be discussed comprehensively.\n\nThe manuscript presents a contribution to the field of 3D object retrieval, particularly in the open-set scenario. The proposed HAFR framework is innovative and has been demonstrated to be effective through rigorous experimentation. However, there are areas where the manuscript could be improved, particularly in terms of computational efficiency, limitations on various parts, and other minor issues. 
Addressing these points would likely enhance the manuscript's impact and applicability in the field."
},
{
"confidence": 4,
"rating": 7,
"review_id": "iGxb5XsCcR",
"review_text": "This paper proposes to utilize the part-assembly representation method to mitigate the distribution skew of unseen categories, enhancing the generalization performance for open-set 3D object retrieval. Compared to previous methods, this paper benefits from part-level representation learning rather than object-level representation, obtaining in a good generalization on unseen categories. To utilize the part-level representation, this paper introduces Isomorphic Assembly Embedding (IAE) and the Structured Fuzzy Reconstruction (SFR) modules. The former can generate the assembly embedding isomorphically for each object, and the latter is used for generating the fuzzy representation thus overcoming the distribution skew of unseen categories.\n\nThe problem is well-motivated and the solution seems working well. The results are good. The paper also contributes three 3D point cloud datasets with multiple part annotations for benchmarking. Extensive experiments on the three benchmarks demonstrate the superiority of the proposed method over current state-of-the-art 3D object retrieval methods.\n\n1. The datasets OP-INTRA and OP-COSEG mentioned in the paper may have limitations in category diversity, number of parts, and dataset size, which may affect the generalization ability of the model.\n2. The framework comprises many sub-architectures, such as the HIConv layer, multiple auto-encoders, fuzzy embeddings, and memory bank, it seems to be relatively complex. However, this paper does not explicitly discuss the computational efficiency of the model, including training and inference time, and computational cost.\n3. Though the paper proposes a solution to the open set problem, the datasets are all virtual. Its generalization ability to unseen categories in real-world applications still needs further verification.\n4. The ablation studies show the effect of the HIConv layer. However, only comparisons with MLP and GIN are performed, but no comparisons with other neural layers such as KAN, nor is the number of HIConv layers ablated.\n5. The experiments are only conducted on the proposed datasets. The generalization ability of the model on a wider data distribution requires more verification. It would be better to add some experiments on previous public datasets or datasets without open-set settings to demonstrate generalization capabilities.\n\nThe quantitative performance comparisons in Table 2 show the superiority of the proposed method. However, this paper only surpassed the second place by a little bit in some metrics, and there is no sufficient statistical information to prove the significance of the results, such as p-values."
},
{
"confidence": 5,
"rating": 5,
"review_id": "PnhZyQSp30",
"review_text": "This paper presents a method for finding similar samples from a set of 3D objects given query objects in an open setting, where objects can belong to both already seen and new categories. This method is based on considering 3D objects as hypergraphs consisting of individual geometric and semantic parts of objects. The hypergraph is used to form Isomorphic Assembly Embedding. The second part of the proposed HAFR framework is the Structured Fuzzy Representation module that constructs a hypergraph based on local certainty and global uncertainty correlation to enable transfer from seen to unseen categories. The authors propose a new layer, HIConv, which improves the quality of the generated representation. The authors demonstrate the effectiveness of their approach on three datasets that they constructed for this task.\n\n- The idea that one can understand the whole object shape from its parts sounds interesting and reasonable.\n- The description of Isomorphic Assembly Embedding and Structured Fuzzy Reconstruction is formal and rather clear.\n- The authors conduct extensive ablation studies of their method.\n\n- Based on the provided experiments, it is unclear if HAFR can generalize well to an unseen domain. Are the results in Table 2 provided for the same suite of model weights?\n- The literature review does not include existing methods for open-set 3d object retrieval and recent methods for closed-set 3d object retrieval.\n- When comparing with other methods, the authors use their own modification of existing multimodal methods. A comparison with modern methods for open-set 3d object retrieval, such as [1], is necessary to demonstrate the effectiveness of this particular method of object representation.\n- The method's description lacks an explanation of how the resulting fuzzy embeddings are used to find similar objects. Additionally, the description contains undefined concepts like isomorphism loss and integration function. If these concepts are not introduced by the authors, please include references to articles where they are defined.\n\n[1] Zhou, J., Wang, J., Ma, B., Liu, Y. S., Huang, T., & Wang, X. (2023). Uni3d: Exploring unified 3d representation at scale. arXiv preprint arXiv:2310.06773.\n\n1. How are fuzzy embeddings used to find similar objects in the target set?\n\n2. What is the size of the memory anchors bank? Will the method remain effective if the dataset contains more than 16 categories?\n\n3. The method is described as open-set, but it requires GT segmentation of the object into parts. How do you see its applicability in real-world scenarios where GT segmentation might not be available for any object? How much would the quality metrics decrease if we used a neural network model for part segmentation?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "1JFubbntBz",
"review_text": "This paper proposes a framework for open-set 3D object retrieval, called the Hypergraph-Based Assembly Fuzzy Representation (HAFR) framework. This model leverages an Isomorphic Assembly Embedding (IAE) to integrate geometric and semantic consistency. Furthermore, a Structured Fuzzy Reconstruction (SFR) is used to overcome the distribution skew of unseen categories. On three point cloud datasets constructed by the authors, this model outperforms the state-of-the-art.\n\n- The motivation for this work is well-established.\n- The idea of using hypergraph structures to achieve high-order correlations both within and between objects is novel.\n- Sufficient quantitative and qualitative comparisons verify the effectiveness of the proposed model.\n\n- In structured fuzzy reconstruction, the value of k in the k-nearest neighbors seems to determine the global uncertainty hyperedge. However, the paper lacks explanation or experiments to clarify the selection of k value.\n\n- While HGM2R [1] employs a multimodal approach, the IAE component appears to be similar to the Multi-Modal 3D Object Embedding in HGM2R. What are the differences and unique contributions of IAE compared to the embedding technique used in HGM2R?\n\n-In Table 2, although HGM2R also utilizes hypergraphs, it shows only slight improvements over previous methods in most metrics. For example, the mAP scores on three datasets are only about 0.1 higher. However, the method proposed in this paper demonstrates a significant improvement over HGM2R on the OP-COSEG dataset, with an increase of nearly 0.6. How can this result be explained?\n[1] Hypergraph-Based Multi-Modal Representation for Open-Set 3D Object Retrieval. TPAMI 2023.\n\nPlease refer to paper weaknesses."
}
] | |
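Reviewers of this paper repeatedly ask about the k-NN-based hyperedge construction in SFR and about the HIConv layer. As a reference point only, here is a minimal sketch of the generic pipeline those questions concern: building a k-NN hypergraph over embeddings and applying one standard spectral hypergraph convolution (HGNN-style, with unit hyperedge weights). HIConv itself is not public here, so this should not be read as the paper's actual layer.

```python
import numpy as np

def knn_hypergraph(X, k):
    # X: (n, d) part/object embeddings. One hyperedge per vertex, containing
    # the vertex and its k nearest neighbours, so k directly sets the
    # hyperedge size (the sensitivity the first reviewer asks about).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, : k + 1]        # self + k neighbours
    H = np.zeros((len(X), len(X)))                   # incidence: vertex x edge
    for e, idx in enumerate(nbrs):
        H[idx, e] = 1.0
    return H

def hypergraph_conv(X, H, W):
    # One spectral hypergraph convolution with unit edge weights:
    # X' = Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} X W
    Dv = H.sum(axis=1)                               # vertex degrees (>= 1)
    De = H.sum(axis=0)                               # hyperedge degrees (k + 1)
    Y = X / np.sqrt(Dv)[:, None]
    Y = (H / De) @ (H.T @ Y)
    return (Y / np.sqrt(Dv)[:, None]) @ W
```

In this construction, each column of `H` encodes one hyperedge, and the convolution mixes features along shared hyperedges, which is the mechanism behind the "high-order correlations" the reviews credit to the method.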
xO9GHdmK76 | Infinite-Dimensional Feature Interaction | Past neural network design has largely focused on the feature \textit{representation space} dimension and its capacity scaling (e.g., width, depth), but overlooked the feature \textit{interaction space} scaling. Recent advancements have shifted focus towards element-wise multiplication to facilitate a higher-dimensional feature interaction space for better information transformation. Despite this progress, multiplications predominantly capture low-order interactions, thus remaining confined to a finite-dimensional interaction space. To transcend this limitation, classic kernel methods emerge as a promising solution to engage features in an infinite-dimensional space. We introduce InfiNet, a model architecture that enables feature interaction within an infinite-dimensional space created by the RBF kernel. Our experiments reveal that InfiNet achieves a new state of the art, owing to its capability to leverage infinite-dimensional interactions, significantly enhancing model performance. | https://openreview.net/pdf/b14995433876b9e28417e0ab94774923baeecd15.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "wNS1YkuPLA",
"review_text": "This work proposes a novel approach for enhancing neural network performance by scaling feature interaction spaces to infinite dimensions using kernel methods. Recent advancements have introduced feature interaction spaces, but these are often limited to finite dimensions, primarily through element-wise multiplications. To overcome these limitations, the authors propose InfiNet, a model architecture leveraging the Radial Basis Function (RBF) kernel to enable infinite-dimensional feature interactions. Finally, the authors provide several empirical results on standard vision tasks.\n\nThis work provides an interesting generalization of feature-feature interactions via kernels. For the best of my knowledge, this is a novel idea that appears to perform well in practice. However, I am not overly familiar with the current state of the field of deep learning for computer vision. It further provides several larger-scale experiments and interesting ablations.\n\n* there is no theoretical justification that increasing the dimension of the feature-feature interaction space will lead to better generalization. The paper does a good job analysing this question with ablations. However, this remains an open theoretical question.\n\n* I understand that the motivation for this work comes from applications in computer vision. However, since a major focus in this paper is on comparing the proposed approach to self attention, it would be interesting to not only test this method on images, but also on language. \n\n* the method is reported to have lower FLOPs on average than competing methods. Why is that? Is that a major drawback of this method?\n\n* performance improvement on ImageNet is only marginally. In many cases the proposed method even performs worse than competing methods.\n\n* paragraph starting in line 148: this is on over-claim and has to be removed or rigorously proved. It is not clear how a higher order of $k$ implies better generalization or training. Unless shown in this paper or referenced from another paper, this has to be removed.\n\nMinor: \n\n* line 28: more context for formulating self attention that way has to be provided. It is explained in more detail only at the end of section 3. \n\n* caption of figure 2: there is '?'. Moreover, a description of the presented images should be included. What is shown in Figure 2 on the right hand side? This is only explained in the main text,not the caption. This needs to be changed.\n\n* figure 2, first image on the left: hard to read -- text overlaps with drawing.\n\n* what is meant in line 47 + 48? the current formulation is very cryptic. What exactly is linear in $k$?\n\n\n* figure 2: why does the addition and multiplication interactions reach the same accuracy on cifar10? Isn't that basically MLP vs self-attention? I would presume self attention to perform better."
},
{
"confidence": 5,
"rating": 7,
"review_id": "0ZVWY9Akpu",
"review_text": "This paper studies placing a kernel function inside of a neural network architecture to facilitate interaction of features/dimensional expansion. They consider deep convolutional networks with parallel pathway features $x$ and $x'$ and a kernel function computed with both pathways' features as inputs $k(x,x')$. Standard kernel mathematics is used to explain feature expansion. The main novel results are empirical performance of these \"InfiNet\" architectures, which are shown to perform well in a number of computer vision tests.\n\nThe idea of unifying different orders of interaction embodied in various neural network architectures, including Transformers is appealing and probably important. The accuracy of the InfiNet experiments is impressive, with a moderate reduction in FLOPs. The paper is easy to read and well-organized, although suggestions are given for how it could be improved.\n\nMy main concerns with the paper are a lack of context for the approach as well as missing important explanations. I also think a good amount of the math that's included could be considered \"filler\" material that could go into the appendix, since it doesn't represent new results. (I am referring to sections 4.1 and 4.2, most of which can be found in most textbooks which cover kernel methods.)\n\n* Notation which is commonly used in the paper $\\oplus$, $\\otimes$, * is not explained. You should _explicitly_ define it somewhere, at least in the appendix (and refer people there). In particular, people may be confused by * for elementwise/Hadamard multiplication, since in convnet literature this is often the convolution operator. You call this the \"Star Operation\" in line 124, but I think it is just elementwise multiplication.\n* The authors seem to have missed the vast literature on the connections between random features, neural networks at init, and kernel methods. (CKNs are mentioned but without any discussion of the topics I mention here.) In particular, one way that you could approximate the InfiNet architecture would be to take the two feature streams and pass them each into the same wide, random network/layer and compute the dot product of features at the next level. That would only approximate the kernel function in the InfiNet architecture, and is likely less efficient, but it provides a way to perform dimensionality expansion with a more traditional layer. The authors should discuss these connections.\n* Different order of interactions have been studied in random feature and kernel settings already. In random features, interaction order is connected to the sparsity of weights, see e.g. https://arxiv.org/abs/2103.03191 and https://arxiv.org/abs/1909.02603. In kernels, this were referred to as additive kernels https://arxiv.org/abs/1602.00287, also studied in multiple kernel learning https://arxiv.org/abs/0809.1493 (these are just some examples among a larger literature).\n* The authors do not seem to want to release their code. They have said \"Yes\" on Question 5, stating that the code and data are open, but there is no link or indication in the text that the code is available or will be when the paper is published. That seems deceptive.\n\n* There is a tension between dimensional expansion, which leads to expressivity in networks, and generalization, which is typically better in low-dimensional settings. 
Can you discuss this?\n* When queries and keys in a transformer are computed using a multilayer network with nonlinearities (rather than a single linear layer, as you've considered), aren't the effective order of interactions higher?\n* You claim that the kernel map applied to inputs with $C$ channels takes constant $O(1)$ time (section 4 intro). Wouldn't evaluating the kernel still take $O(C)$ i.e. linear time?\n* Can you please include the matrix/tensor shapes and layer sizes explicitly in section 5.1? They could be put into the appendix. It is unclear how many kernel evaluations are performed and on what shape input.\n\nMinor points:\n* Sentence lines 45-47 is confusing and should be reworded. Also, the combinatorial expression with the limit is unexplained, not obvious, and doesn't seem to contribute anything here. I suggest removing it.\n* Line 61, the expression for span of a certain space is unclear. The main point seems to be that this is an infinite-dimensional function space. Does using this math really add anything?\n* Line 61: \"as low-overhead as exponential operations\" is unclear. Do you mean \"evaluating an exponential function\"?\n* Line 91: \"Kernel Method\" -> \"Kernel Methods\" typo\n* Line 106: \"isotropic\" here is unclear to me, suggest removing\n* Line 110: \"medium\" for the intermediate layer connotes different size, suggest changing to \"intermediate\" or \"middle\"\n* Line 130: Without saying it, are you assuming that the image inputs span the pixel space vector?\n* Line 149: \"two element-wise multiplication\" typo -> \"multiplications\"\n* Notation $W_a \\mathbf{x}$ is confusing: In equation (1) this seems to output a scalar. Is that the same in Eqn (6)? What are the shapes of the W matrices?\n* What is a \"feature branch\"? Unclear throughout.\n* You say the input is passed through \"STEM\" and refer people to the ConvNeXT paper https://arxiv.org/pdf/2201.03545. There is more than one \"stem\" in that paper. Can you be explicit about what you did?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "jeOnTzlXsd",
"review_text": "The authors present a new architecture for computer vision applications that models high-order interactions between features. The architecture is similar to an attention block, but introduces an RBF Kernel layer that captures interactions of order higher than two. The resulting method has strong empirical performance across image classification tasks.\n\n- The idea of the paper is very interesitng and novel.\n- The empirical results show promising performance across involve multiple tasks against sophisticated methods\n\n- The presentation of the method seems overly complex in some places. For example, providing a clearer explanation of each new layer (perhaps in pseudocode) would help. While the Infiniblock definition is clear, the reader needs to go back to the previous section to understand the input/output shaped of the RBF layer, which takes work, and can be made simpler. Making clearer the intuition behind high-order interactions would be helpful as well. Showing examples of what the model learns would be helpful to make things concrete.\n- The empirical performance is reasonably similar to those of previous methods, hence the empirical improvement is not that large.\n\n- What are some examples of features and interactions that help learning and that the new model can learn?\n- Is it possible to analyze or visualize what interactions the model learned?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "WXzNtT793m",
"review_text": "The paper shifts the focus from traditional neural network design, which emphasizes feature representation space scaling, to feature interaction space scaling. It introduces a new model architecture, InfiNet, that enables feature interaction within an infinite-dimensional space using the RBF kernel, leading to state-of-the-art results. The paper also discusses the limitations of current models in capturing low-order interactions and proposes the use of classic kernel methods to engage features in an infinite-dimensional space.\n\n- The idea of the paper is simple, novel and well exposed. \n\n- The paper introduces InfiNet, a model architecture that leverages infinite-dimensional feature interactions using RBF kernels, which enhances model performance of traditional models.\n\n- InfiNet achieves new state-of-the-art performance in various tasks, demonstrating the effectiveness of infinite-dimensional interactions.\n\n- The paper includes extensive experiments on datasets like ImageNet and MS COCO, showing the scalability and efficiency of InfiNet.\n\n- the paper builds on the simple use of kernel methods. The novelty of the methods is minimal, in the end it is an RBF kernel.\n\n- the performance improvement of Infinet over other models is mostly marginal and no errors have been displayed.\n\n- the paper doesn't really have theoretical novelty\n\n- How does InfiNet compare to other models in terms of training time and resource consumption?\n\n- Can the kernel methods used in InfiNet be applied to other types of neural network architectures beyond those discussed?\n\n- Can the authors quantify the increased dimensionality of the kernel methods over simpler operations (sum, product). If the authors take the simplest architecture for imagenet and look at the representations generated by means of using different kernels, can they quantify what is the actual increase in the intrinsic dimensionality of the representation upon training? It is not fully clear to me that the increase in performance is due to an increase in dimensionality.\n\n- the author mention the possibility of exploiting a learnable kernel in place of RBF. Could the author explain and discuss the ratio behind using RBF in place of others? Is it solely driven by the computational complexity. Would the results be different with a different kernel?"
}
] | |
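The reviews above center on a single operation: a kernel evaluated between two parallel feature branches. A minimal sketch of a channel-wise RBF interaction of that kind follows; the tensor layout, the fixed gamma, and the surrounding block structure are our assumptions, not InfiNet's published implementation.

```python
import torch

def rbf_interaction(x1, x2, gamma=1.0):
    # x1, x2: (B, C, H, W) parallel feature branches. Each output element is
    # exp(-gamma * (x1 - x2)^2): a 1-D Gaussian (RBF) kernel evaluation per
    # channel and spatial position. Its series expansion contains all powers
    # of the cross-term x1 * x2, unlike addition (order-1 interaction) or
    # element-wise multiplication (order-2 interaction).
    return torch.exp(-gamma * (x1 - x2) ** 2)
```

This is the sense in which the interaction space is "infinite-dimensional": the Gaussian kernel corresponds to an inner product in an infinite-dimensional feature space, while its evaluation costs only an element-wise subtraction, square, and exponential.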
xNncVKbwwS | Universal Online Convex Optimization with $1$ Projection per Round | To address the uncertainty in function types, recent progress in online convex optimization (OCO) has spurred the development of universal algorithms that simultaneously attain minimax rates for multiple types of convex functions. However, for a $T$-round online problem, state-of-the-art methods typically conduct $O(\log T)$ projections onto the domain in each round, a process potentially time-consuming with complicated feasible sets. In this paper, inspired by the black-box reduction of Cutkosky and Orabona [2018], we employ a surrogate loss defined over simpler domains to develop universal OCO algorithms that only require $1$ projection. Embracing the framework of prediction with expert advice, we maintain a set of experts for each type of functions and aggregate their predictions via a meta-algorithm. The crux of our approach lies in a uniquely designed expert-loss for strongly convex functions, stemming from an innovative decomposition of the regret into the meta-regret and the expert-regret. Our analysis sheds new light on the surrogate loss, facilitating a rigorous examination of the discrepancy between the regret of the original loss and that of the surrogate loss, and carefully controlling meta-regret under the strong convexity condition. With only $1$ projection per round, we establish optimal regret bounds for general convex, exponentially concave, and strongly convex functions simultaneously. Furthermore, we enhance the expert-loss to exploit the smoothness property, and demonstrate that our algorithm can attain small-loss regret for multiple types of convex and smooth functions. | https://openreview.net/pdf/7dc76a57e2e7e68167e764bbd0b24f559f8773a9.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "9irEgEan0n",
"review_text": "This paper introduces methods for constrained OCO\nwhich automatically achieve the optimal rate without knowing\nin advance whether the losses are convex, strongly convex,\nexp-concave, or smooth, while using only 1 projection per round.\nThis is notable because the standard approach proceeds by combining\nseveral expert algorithms with a meta algorithm; in constrained settings\nthese expert algorithms require implementing a potentially expensive\nprojection. This work avoids projecting each of the expert algorithm\niterates leveraging the constrained-to-unconstrained reduction of Cutkosky 2020.\n\nThe paper addresses a clear and real problem that has been left unaddressed\nby the majority of literature on this topic. The approach is a pretty straight-forward\nmodification of existing reductions, but uses them in a new and unexpected way.\n\nThe main weakness is that the paper feels poorly factored. There is\na very large number of back references to previous equations, and the paper would be\nvery hard to read in print. To actually follow the math, it's almost necessary to\nread the paper with a pdf viewer which can display pop-up previews when hovering over links.\nI think this would be remedied by better factoring the results into lemmas and propositions.\n\nAs noted, the approach is a fairly straight-forward modification of the results from\nCutkosky \\& Orabona (2018) and Cutkosky (2020), and essentially boils down to\nnot dropping negative terms in the analysis, and then exposing matching terms\nin the regret decomposition. I think this is fine overall; these considerations\nare missing from the literature, and this is a fitting\nplace for them to enter the literature.\n\nDo you think there could possibly be a more abstract way to formalize these universal algorithms? The strange thing about these universal algorithms is that a whole new algorithm seemingly needs to be devised every time one wants to incorporate a new kind of loss (e.g. meta-grad -> mahler to handle strongly convex losses). The ideal result would more generally be a reduction which just passes the losses to each of the experts, maybe with some additional side-information, and lets them construct whatever surrogate loss they want with it. In this way there might just be one \"final\" paper on universal guarantees."
},
{
"confidence": 5,
"rating": 6,
"review_id": "Gcp4drRCNA",
"review_text": "This paper addresses the challenge of online convex optimization with unknown smoothness properties of the loss functions, which can be convex, strongly convex, or exp-concave. The authors propose an algorithm that achieves regret bounds of order $\\sqrt{T}$, $\\log T$, and $d \\log T$ respectively, while requiring only a single projection step per round on the original domain $\\mathcal{X}$. Such projections can indeed be computationally expensive. Additionally, the authors present regret bounds with improvment for small losses.\n\nMost algorithms that achieve similar adaptive regret upper bounds rely on meta-algorithms that combine experts (running ONS or OGD with surrogate losses), inspired by the MetaGrad algorithm. Typically, these algorithms necessitate $\\log(T)$ projection steps per round (one per expert), which can be computationally burdensome. Mhammedi et al. (2019) reduced this projection cost to $O(1)$ but at the expense of a $d \\log T$ regret for strongly convex losses. To overcome this, the authors introduce new surrogate losses based on a black-box reduction technique by Cutkosky et al. (2018), which simplifies the constrained optimization problem on $\\mathcal{X}$ to another domain, such as the Euclidean ball, where projections are easier.\n\n- The paper is well-written and offers valuable insights into the use of surrogate losses to adapt to strong convexity or exp-concavity. It may serve as a comprehensive entry point into the extensive literature on universal OCO algorithms.\n- Despite combining various existing techniques, the results are non-trivial and required solving technical challenges, especially for the strongly convex case. The authors introduce novel negative terms in the analysis to achieve their results.\n- Experiments included in the appendix demonstrate that the computational improvements can be significant.\n\n- The theoretical improvements may appear incremental, appealing to a niche audience interested. The improvement being only in the specific case of strongly convex $\\log T$ regret with $O(1)$ projection steps. The primary high-level ideas in the algorithm and analysis are based on prior work.\n- The paper still relies on a meta-aggregation procedure, which, although theoretically effective, is not particularly elegant and maintains a per-round complexity of order $O(\\log T)$. Achieving $O(1)$ complexity per round seems however highly challenging.\n- The convex rate is actually $O(\\sqrt{T \\log\\log T})$, not $O(\\sqrt{T})$ as stated in the results.\n\n- The algorithm requires prior knowledge of the parameters $G$ and $T$, would simple doubling trick allow to tune these?\n- Your algorithm still requires O(1) projection steps on $\\mathcal{X}$ and O(log T) projection steps on $\\mathcal{Y}$. Do you think that projection free algorithms such as variants of Online Frank Wolfe could be used instead of OGD and ONS (up to deteriorating slightly the rate) to remove all projections (or at least the O(log T) on $\\mathcal{Y}$) while still being adaptive to the smoothness?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "RFtqS7BKDS",
"review_text": "This paper studies universal OCO algorithms with fewer projections. Previous work either use $O(\\log T)$ projections per round, or have a sub-optimal dependence on $d$ for strongly-convex loss. This work designs a new surrogate loss to achieve tight regret for Lipschitz convex/exp-concave/strongly-convex losses simultaneously, with only 1 projection per round.\n\nThe technical contributions are solid: this paper makes a strict improvement over previous results.\n\nThe paper is very well-written, clearly introducing the challenges and the main ideas. Details of the analysis and algorithm are nicely explained.\n\nThe contribution seems somewhat incremental to me. The only improvement is a $d$ factor for strongly-convex loss. Such result is nice to know but I'm not sure how significant such it is. In addition, the technical novelty isn't significant either.\n\nSee weaknesses."
}
] | |
xNlQjS0dtO | Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates | Public LLMs such as the Llama 2-Chat underwent alignment training and were considered safe. Recently Qi et al. (2024) reported that even benign fine-tuning on seemingly safe datasets can give rise to unsafe behaviors in the models. The current paper is about methods and best practices to mitigate such loss of alignment. We focus on the setting where a public model is fine-tuned before serving users for specific usage, where the model should improve on the downstream task while maintaining alignment. Through extensive experiments on several chat models (Meta's Llama 2-Chat, Mistral AI's Mistral 7B Instruct v0.2, and OpenAI's GPT-3.5 Turbo), this paper uncovers that the prompt templates used during fine-tuning and inference play a crucial role in preserving safety alignment, and proposes the “Pure Tuning, Safe Testing” (PTST) strategy --- fine-tune models without a safety prompt, but include it at test time. This seemingly counterintuitive strategy incorporates an intended distribution shift to encourage alignment preservation. Fine-tuning experiments on GSM8K, ChatDoctor, and OpenOrca show that PTST significantly reduces the rise of unsafe behaviors. | https://openreview.net/pdf/beb490ad102910ffee423d69bbff98080fde654b.pdf | [
{
"confidence": 5,
"rating": 7,
"review_id": "x4RVjx68YY",
"review_text": "This paper proposes a mitigation strategy called \"pure tuning, safe testing\" to mitigate harmful finetuning issues for LLMs. The strategy is very simple, basically to use a safety system prompt for inference and do finetuning without such a prompt. The core philosophy is that harmful knowledge in the finetuning stage is learned without a safety prompt, but in inference time the the added safety prompt is used and therefore harmful knowledge will not be activated.\n\n1. The studied problem -- harmful finetuning for LLMs by itself is important and has raised widespread public interest among the community, and this paper is one of the early batches of papers to propose a timely analysis and mitigation strategy for the problem.\n\n2. Comprehensive evaluation is conducted to show the effectiveness of the method. \n\n3. The paper is well-written, and the proposed strategy is simple enough to understand, which I think may raise the common interest among the community.\n\n1. The core issue of PTST is that: given that the system prompt has changed between finetuning/testing, it does not make sense to me that why the helpfulness is not degraded while the harmfulness is lowered. Both benign helpful knowledge/harmful knowledge are learned with the finetuning system prompt, changing the template in the inference time will simultaneously lower helpfulness/harmfulness in my view. However, this is not the case in Table 2 (a) and Table 3(a), which indicates that changing the template will not always lower helpfulness (sometimes even increase helpfulness, e.g., CA->CL). I conjecture the reason is that the length of CL prompt is longer, which elicits better helpfulness performance. An explanation for this phenomenon will be appreciated. \n\n\n2. The observation in Section 4 that mixing safety data can reduce ASR is available in Vlguard Zong et al. [2024]. I understand that this is a concurrent finding, but it would be nice if the authors could mention and discuss this in Section 4. \n\n3. The experimental results are not intuitive enough. Particularly, I think it is not ideal to use so many prompts (e.g., TV, TA,CV,CA,CL) for comparison. When I am reading Table 2, I am confused about which one is a safety prompt and which one is not a safety prompt, and therefore, I cannot immediately get the intuition shown by the results. \n\n4. The literature review seems to be comprehensive, but there are a few related works missing. Since (Qi et al, 2024), there are a few mitigation solutions proposed to address the same challenges. 
I would appreciate it if the authors could appropriately cite and discuss these literature:\n\n------------------Before NeurIPS review cycle------------------\n\n[1] Fine-tuning can cripple your foundation model; preserving features may be the solution https://openreview.net/forum?id=VQ7Q6qdp0P (ICLR2024 template) \n\n[2] Immunization against harmful fine-tuning attacks https://arxiv.org/pdf/2402.16382 (ICLR2024 workshop template)\n\n------------------concurrent------------------\n\n[3] Representation noising effectively prevents harmful fine-tuning on LLMs https://arxiv.org/pdf/2405.14577 (NeurIPS2024 template)\n\n[4] Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning httpsImmunization against harmful fine-tuning attacks https://arxiv.org/abs/2405.18641 (NeurIPS2024 template)\n\n[5] No Two Devils Alike: Unveiling Distinct Mechanisms of Fine-tuning Attacks https://arxiv.org/pdf/2405.16229 (NeurIPS2024 template)\n\n[6] Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models https://arxiv.org/pdf/2405.16833v1 (NeurIPS2024 template)\n\n[7] A safety realignment framework via subspace-oriented model fusion for large language models https://arxiv.org/pdf/2405.09055 (Elsivier Journal template, first available May, 2024)\n\n[8] Navigating the Safety Landscape: Measuring Risks in Finetuning Large Language Models https://arxiv.org/abs/2405.17374 (NeurIPS2024 template)\n\nI am aware that some of the listed work is concurrent work (e.g., con-current submissions to NeurIPS 2024). However, it is encouraged to also cite and discuss them, because that will be beneficial for the development of the research field (but the authors should at least cite those existing works that appeared before the NeurIPS2024 review cycle).\n\n5. Baselines for comparison are lacking. As there are already a few mitigation strategies for harmful finetuning issues, I suggest the authors add one or two baselines, e.g., Vaccine[Huang et al., 2024] for comparison.\n\nSee the weakness part. I still have questions regarding the results:\n\nIn table 2 (b) (c), why changing CL->CV will increase the harmfulness while CL->CA will decrease it?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "v3MXKtihCJ",
"review_text": "This paper shows that the prompt templates used during fine-tuning and inference play a crucial role in safety alignment. Then, the authors propose to fine-tune models without a safety prompt, but include it at test time (user inference), which is counter to intuition. The authors demonstrate their method in the following experiments: when using the same prompts during training on GSM8K and testing, attack success rate increases for a Llama-2-Chat model (the authors considered 5 different prompts). The authors also show the same trend across models GPT-3.5 Turbo, Mistral-7B-Instruct-v0.2, and Llama-2-7b-chat, and across datasets ChatDoctor and OpenOrca.\n\nThis is a paper that points out a new direction in safety alignment for fine-tuning language models. The paper is written very clearly, with a novel method and supporting experimental results.\n\nImprovements to this paper can be made from the following aspects: (1) there still seems to be noise in the experiment results, although that does not take away the novelty in proposing the PTST approach, (2) there should be more discussion about implications of PTST\n\nI have two specific questions and one general question:\n\nQ1. This is a question regarding the experiment results, which I find convincing but nevertheless flawed. In Table 1, (b) shows the trend that this paper is arguing for, specifically that training and testing on the same prompt template makes attacking easier. However, I question whether that is generally the case? The trainTA-testCV entry suffers the same ASR as trainCV-testCV in (b), and the trainCV-testTV entry suffers even higher ASR than trainTV-testTV in (d). I don't think these outliers invalidate the general trend of PTST results, but I still question how universally applicable PTST will be, in the sense that it is unclear whether there is a \"PTST prompt\" that will perform well under all scenarios? Perhaps there is a hidden confounder at play here?\n\nQ2. Is there a particular reason that TV and TA is not included in the further experiments like GPT-3.5 (judging from Table 1, there are certain cases when TV and TA perform the best)?\n\nQ3. This is a high-level question about the message that this paper is sending. Throughout experiments in this paper, it seems like there is always a tradeoff between helpfulness and ASR. Philosophically, is that really the case? I personally like the paper and believe that PTST is an interesting new direction of research, but I wonder whether a sufficiently intelligence machine still needs to give up either helpfulness or safety? I think the paper is lacking in discussion about the implications of PTST, and how the PTST method might inspire future papers to explore the direction of safety aligned fine tuning."
},
{
"confidence": 4,
"rating": 6,
"review_id": "W4RcZqH6iW",
"review_text": "This paper addresses a critical issue, i,e., LLMs' loss of safety after being fine-tuned. The authors pay their attention to the prompt templates used during fine-tuning and testing, which leads to the main observation that fine-tuning with the valina template and testing with the safe template yields the best robustness.\n\n(1) Understanding the effect of fine-tuning on the LLM safety through prompt templates is novel.\n\n(2) The PTST strategy shows promising performance gains when compared with the common strategy where a template is consistently used.\n\n(3) The authors conducted experiments on several templates, models, and datasets.\n\n(1) The authors leave the understanding of the PTST strategy to future work and very limited discussion on the underlying mechanism of PTST can be found. Although it might be hard to develop a rigorous theory explaining the strategy, I still feel it necessary for the authors to at least propose some hypotheses and try to verify them with concrete experiments. \n\n(2) There are cases when the helpfulness of models is notably decreased if we adopt the PTST rule, such as (TV, CL) and (TA, CL) for Llama-7B. \n\n(3) Some lightweight defenses such as Self-Reminder [1] and ICD [2] can be incorporated into the (CL, CL) training scheme, which will serve as good baselines for PTST. Comparison with safeguarding algorithms can help readers better understand the significance of PTST.\n\n[1] Defending ChatGPT against jailbreak attack via self-reminders; Xie et.al; Nature\n\n[2] https://arxiv.org/abs/2310.06387\n\n(1) PTST seems to be a general principle to follow when fine-tuning aligned LLMs and the templates considered are restricted to several existing ones. I am wondering whether this principle can help us design better prompt templates for fine-tuning."
},
{
"confidence": 3,
"rating": 6,
"review_id": "5gyrIk1BeE",
"review_text": "This paper discusses the issue of maintaining model consistency after fine-tuning large language models (LLMs). The research team, through extensive experiments, found that the prompt templates used during fine-tuning and inference play a crucial role in maintaining model safety. The paper proposes the \"Pure Tuning, Safe Testing\" (PTST) principle, which involves not using safety prompts during fine-tuning but incorporating them during testing to significantly reduce the occurrence of unsafe behaviors.\n\n1. Through extensive experiments, it is demonstrated that prompt templates are crucial for maintaining safety during both training and testing.\n2. The PTST approach is proposed, which improves safety performance.\n\n1.Why fine-tune on math datasets (gsm8k, Orca-Math) to verify the model's safety? How does the performance compare when fine-tuned on safety-specific datasets, such as Anthropic/hh-rlhf?\\\n2.The experiments on PTST are insufficient, as they do not adequately compare the effectiveness of the approach with current alignment algorithms such as PPO, DPO, KTO, among others.\\\n3.This paper proposes the PTST algorithm, but it is a training technique and lacks a certain level of innovation.\n\n1.It would be interesting to see whether the approach also scales to more datasets, such as hh-rlhf, or a combination of GSM8k and hh-rlhf for mixed training.\\\n2.Could you explain what the core contributions of the PTST algorithm are? How does it differ from algorithms like DPO?\\\n3.How does the performance of PTST compare to aligner[1] on larger-scale datasets?\\\n[1]Ji J, Chen B, Lou H, et al. Aligner: Achieving efficient alignment through weak-to-strong correction[J]. arXiv preprint arXiv:2402.02416, 2024."
}
] | |
xNZEjFe0mh | Communication-Efficient Federated Group Distributionally Robust Optimization | Federated learning faces challenges due to the heterogeneity in data volumes and distributions at different clients, which can compromise model generalization ability to various distributions. Existing approaches to address this issue based on group distributionally robust optimization (GDRO) often lead to high communication and sample complexity. To this end, this work introduces algorithms tailored for communication-efficient Federated Group Distributionally Robust Optimization (FGDRO). Our contributions are threefold: Firstly, we introduce the FGDRO-CVaR algorithm, which optimizes the average top-K losses while reducing communication complexity to $O(1/\epsilon^4)$, where $\epsilon$ denotes the desired precision level. Secondly, our FGDRO-KL algorithm is crafted to optimize KL regularized FGDRO, cutting communication complexity to $O(1/\epsilon^3)$. Lastly, we propose FGDRO-KL-Adam to utilize Adam-type local updates in FGDRO-KL, which not only maintains a communication cost of $O(1/\epsilon^3)$ but also shows potential to surpass SGD-type local steps in practical applications. The effectiveness of our algorithms has been demonstrated on a variety of real-world tasks, including natural language processing and computer vision. | https://openreview.net/pdf/a65e23614800d52b04091555fc2509133c2dc354.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "c03X5TCVcJ",
"review_text": "This work introduces three algorithms for communication-efficient Federated Group Distributionally Robust Optimization. The effectiveness of the proposed algorithms are verified through both theoretical and experimental results.\n\n1) This work studies an important problem of federated group distributionally robust optimization.\n2) The theoretical results show the advantages of the proposed algorithms.\n\n1) This work proposes three algorithms, including FGDRO-CVaR, FGDRO-KL, and FGDRO-KL-Adam. There lacks a comparison between these algorithms. For example, what are the connections and differences between these algorithms?\n2) The analysis for FGDRO-CVaR assumes the loss function to be rho-weakly convex, which is missing from the main context.\n\n1) Missing reference: How about the comparison with this work [1]?\n\n[1] Communication-Efficient Distributionally Robust Decentralized Learning https://arxiv.org/pdf/2205.15614\n\n2) What are the experimental setups for the number of clients and non-IID?\n\n3) In experimental results (Tables 2 and 3), FGDRO-CVaR seems to have no advantages in both task; why do we need this algorithm? Besides, it is better to highlight the best-performance results in Tables 2 and 3.\n\n4) Intuitively, using Adam optimizer can bring training speedup and is supposed to outperform other algorithms. But why do the results show that sometimes FGDRO-KL is better than FGDRO-KL-Adam?\n\n5) In proof, is an assumption of bounded gradient needed? If I don't misunderstand, Line 550 indicates such an assumption.\n\n6) If the loss function assumes to be convex, analyzing the optimal distance between the loss value the minimum loss should be better in Theorem 6.2."
},
{
"confidence": 3,
"rating": 6,
"review_id": "68maMn64i8",
"review_text": "This paper addresses the challenge of reducing communication costs and sample complexity in Federated Group Distributionally Robust Optimization (FGDRO). The authors present the FGDRO-CVaR algorithm and the FGDRO-KL algorithm to address different constraints. Subsequently, they conduct extensive experiments across various real-world tasks, including NLP and CV tasks. The corresponding empirical results confirm the effectiveness of their proposed methods.\n\n1. The exploration of reducing communication costs for federated group DRO is a rarely-studied topic within the FL community.\n\n2. The theoretical convergence analysis for the proposed algorithms is somewhat solid.\n\n3. The authors conduct comprehensive experiments to validate the effectiveness of the devised algorithms.\n\n1. The contributions and novelties of this paper are unclear. It appears that the authors have directly combined existing federated adaptive algorithms with pre-existing federated group DRO methods in this paper.\n\n1. The introduction's treatment of the concept of generalization appears incomplete. It is evident that there are two levels of generalization in Federated Learning, as delineated in [1] and [2].\n\n2. As highlighted in the aforementioned weaknesses, the authors should provide additional clarification regarding the contributions and novelty of this paper. Overall, it appears that the proposed method is primarily a direct combination of existing methods.\n\n3. Similarly, the authors should delineate the challenges and innovations intrinsic to their theoretical analysis. Specifically, they should underscore the complexities involved in analyzing federated adaptive algorithms when applied in federated group DRO.\n\n4. On line 154 of Page 4, what is the relationship between the \"accurate estimate\" and the \"moving average\" in the subsequent sentence?\n\n5. Some minor points to address. The authors might consider offering more empirical results on convergence analysis in the experimental section. Additionally, they should further consider the statistical significance of these convergence analyses.\n\n[1] Hu X, Li S, Liu Y. Generalization bounds for federated learning: Fast rates, unparticipating clients and unbounded losses[C]//The Eleventh International Conference on Learning Representations. 2023.\n\n[2] Yuan H, Morningstar W, Ning L, et al. What do we mean by generalization in federated learning?[J]. arxiv preprint arxiv:2110.14216, 2021."
},
{
"confidence": 5,
"rating": 4,
"review_id": "yfyKmhjgGU",
"review_text": "The paper presents three methods for Federated Learning Group Distributionally Robust Optimization: (i) one tailored to reduce the CVaR which optimizes the top K-losses, (ii) another one tailored to tackle the KL divergence, and finally (iii) one that uses Adam locally. The paper is well written and the ideas are presented. To the best of my knowledge, the proofs are correct. My main concerns are regarding the relevance and importance of the subject, the lack of experiments, and the lack of empirical studies on communication efficiency.\n\n[S1] The paper is well-written, and the ideas are presented. \n\n[S2] The theoretical results are correct, to the best of my knowledge.\n\n[W1] The relevance of the subject is not entirely addressed. See [Q1]\n\n[W2] The experiment section is limited. In particular, the paper does not present any intuition on the problems they are solving. They do not consider the number of samples per server for example. I believe the authors should include a class imbalance problem [AN AGNOSTIC APPROACH TO FEDERATED LEARNING WITH CLASS IMBALANCE - Shen et al, ICLR 22].\n\n[W3] Communication efficiency is not properly addressed by the authors. The authors show the number of communication rounds required, but they do not take into account how much is communicated. The authors claim that this method is more efficient in terms of communication, and they show it theoretically, but in the experiment section, there is no evidence of communication efficiency. I suggest the authors reveal the communication cost associated with each method, measured in the amount of data shared between servers. \n\n[W4] Privacy is an important subject of Federated Learning, but in this paper, there is no analysis of the privacy aspect. Can the authors elaborate on the privacy aspect of this work? \n\n[W5] Federated learning is a technique used to train on a set of machines. The idea is that the number of machines that participate is large. It appears to me that the largest number of servers is 17. This seems to me insufficient for a distributed learning problem.\n\n[Q1] Why should be designed solutions that are distributional robust? And, at what cost? If we compare a method that simply maximizes/minimizes the FL problem, what is the overall loss? I believe the overall loss should be smaller, given that being distributionally robust is a particular case, and therefore, the unconstrained problem achieves a smaller minimum."
},
{
"confidence": 2,
"rating": 5,
"review_id": "Ugjo5WqF4y",
"review_text": "This paper aims to improve the efficiency of existing federated group distributionally robust optimization (FGDRO) when considering two specific types of regularization, condition value at risk and KL divergence. To address the first type of problem, the authors propose FGDRO-CVaR that reduces the sample complexity and communication costs simultaneously. For KL conditions, the proposed FGDRO-KL reduces the sample complexity while retaining the same communication costs. Moreover, the authors integrate the notion of Adam into FGDRO-KL, yielding FGDRO-KL-Adam and achieving better convergence speed.\n\n1. The paper is well-written, though some background information is missing.\n2. The problem is well-motivated. The sample and communication efficiency is a pivotal problem in federated learning, though the benefits are not fully analyzed in the experiments.\n3. The proposed method is grounded and improves over prior baselines.\n\n1. The background can be more thoroughly explained. The authors are encouraged to provide additional context to address the following questions, which will greatly enhance the paper's completeness. Why is federated group distributionally robust optimization (FGDRO) an important problem or technique? What are the sources of the additional communication costs? Why is it necessary to consider two different types of regularization? Are these types of regularization relevant to different applications?\n\n2. My major concerns lie in the experiments and their settings. \n\n - **Data Splits.** While FGDRO's main advantage appears to be its ability to address non-IID optimization, the experimental setup concerning data splits lacks clarity. An analysis of the non-IID levels, such as those derived from different Dirichlet-distributed data splits with varying $\\lambda$ values, is missing. Including more representative baselines, such as SCAFFOLD and FedProx, which are also designed for non-IID optimization, could further enhance the analyses.\n\n - **Performance.** The proposed method performs similarly to the baselines in most experiments. For example, in Tables 2 and 3, apart from the Adam variant, the proposed method is comparable to the baselines. This would be acceptable if the proposed method demonstrated improved efficiency; however, relevant analyses on this aspect are absent from the experiments.\n\n - **Communication or Sample Complexity Analysis.** An empirical analysis comparing complexity versus utility would be beneficial and highlight the advantages of the proposed method. For instance, the experiment in Figure 1 can be extended to a comparison among different baselines.\n\n1. The datasets considered in this paper do not seem common in the existing literature. Could the authors report numbers on datasets like CIFAR-10/100 or EMNIST? Why did the authors choose the datasets in the paper?"
}
] | |
xM5m7J6Lbl | Can an AI Agent Safely Run a Government? Existence of Probably Approximately Aligned Policies | While autonomous agents often surpass humans in their ability to handle vast and complex data, their potential misalignment (i.e., lack of transparency regarding their true objective) has thus far hindered their use in critical applications such as social decision processes. More importantly, existing alignment methods provide no formal guarantees on the safety of such models. Drawing from utility and social choice theory, we provide a novel quantitative definition of alignment in the context of social decision-making. Building on this definition, we introduce probably approximately aligned (i.e., near-optimal) policies, and we derive a sufficient condition for their existence. Lastly, recognizing the practical difficulty of satisfying this condition, we introduce the relaxed concept of safe (i.e., nondestructive) policies, and we propose a simple yet robust method to safeguard the black-box policy of any autonomous agent, ensuring all its actions are verifiably safe for the society. | https://openreview.net/pdf/bf68ea6396e873fe823317adb57a897d04b26805.pdf | [
{
"confidence": 2,
"rating": 7,
"review_id": "qkU9KJfMQU",
"review_text": "This paper defines social markov decision processes (SMDPs) as an MDP generalization incorporating a population of individuals with distinct utility profiles aggregated by a social welfare function. It provides a novel quantitative definition of alignment in this context, then leverages this definition to characterize probably approximately aligned policies and safe policies, prove the conditions under which they exist, and relate them to the accuracy of the reward model.\n\n1. This paper is well written, and the background is particularly clear.\n2. The definitions and theoretical results are thorough and rigorous. This paper precisely relates the probability of aligned behavior to the world model accuracy, which I believe is valuable.\n3. This paper acknowledges that realistic inaccuracy in the world model could cause intolerable uncertainty in the PAA policy, and shows a more practical approach (safeguarding a black-box policy).\n\nEven the more practical approach of safeguarding a black-box policy may have severe limitations. I believe the paper would be strengthened by a discussion of the feasibility of this -- in particular, what is computational complexity of computing $\\mathcal{A}_{safe}$ for a SMDP?\n\nTypo: On line 277, I believe \"expansive\" should be \"expensive\".\n\nHow does the SMDP formalism handle individuals that give assessments on different scales? What assumption(s) does it rely on regarding interpersonal comparisons of utility?"
},
{
"confidence": 2,
"rating": 6,
"review_id": "mUZGgSPdW4",
"review_text": "This paper applies ideas from the Probably Approximately Correct framework to agent alignment. The paper defines a new idea of a policy which is Probably Approximately Aligned and explores the existence of such policies under certain assumptions of social welfare and models of the world. The authors show that probably approximately aligned (and approximately aligned) policies exist when there is a sufficiently accurate world model. However, to compute this policy is quite expensive. Thus, the authors also develop the idea of a safe policy which can be derived using a PAA policy and seems to be a policy that will probably not result in a catastrophically bad state.\n\nOverall the paper appears to be a very reasonable application of a well established form of analysis into a novel domain.\n\nThe main idea of providing bounds for the quality of an agent's policy is very important and will likely be the focus of much work in the near future. This is quite useful work and appears to me as the potential basis for work that can eventually have significant beneficial impact on the world.\n\nThe paper is generally well written and the motivation is clear. In places the math is a little dense but it seems to be as approachable as it can be for this sort of analysis. I do certainly appreciate that you've put a moderate amount of the work into the actual paper rather than stuffing all the important stuff into the appendix.\n\nNot a weakness, but my disclaimer: I was not able to thoroughly review every detail of the math due to time constraints so my understanding of the paper is limited.\n\nThe primary (and minor) issue I see with the paper is that it is quite abstract and doesn't give a clear idea of how close this is to being useful. While obviously difficult to fit into a conference paper, an experimental section may give some intuition for details such as how accurate a world model really needs to be, how beneficial PAA/safe policies are, etc.\n\nIt seems that Sec 3.2 is constructive in a sense and provides a PAA policy. Some further commentary on the practicality of this policy (is it entirely impractical to use it for synthetic experiments, or simply impractical in any useful setting/world model?) would help to contextualize the paper.\n\nYou do a good job of stating weaknesses but it seems that the first weakness listed may be quite significant. Is this work essentially just pushing the real difficulty of aligned policies into the task of building a statistically sound world model?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "r7NuVYd0Xp",
"review_text": "The paper aims to define alignment quantitatively and ensure AI agents' actions are predictable and safe. The paper start by outlines the basics of utility and social choice theory, focusing on quantifying social satisfaction and the conditions under which it is measurable. Next, the paper defines probably approximately aligned (PAA) and approximately aligned (AA) policies and provides a modified sparse sampling algorithm to achieve these policies under certain conditions. The paper also presents the idea of \"safe policies\" and a method to ensure AI actions are verifiably safe for society.\n\n- Originality: This paper introduces a novel, quantitative definition of alignment in social decision-making contexts, drawing from utility and social choice theory.\n\n- Quality: The paper primarily focuses on theoretical contributions rather than empirical experiments. It is well-structured.\n\n- Clarity: The paper provides detailed mathematical derivations and proofs to support the existence of PAA and safe policies. It includes extensive references and context, including foundational works in utility theory, social choice, AI safety, and reinforcement learning, emphasizing the interdisciplinary nature of aligning AI with human values.\n\n- Significance: This work has a significant impact. While primarily theoretical, the work aims to provide a foundation for developing AI systems that could be safely used in critical applications like social governance, policy-making, or resource allocation.\n\nThe safeguarding method is described in a general context, with limited discussion of its applicability to specific real-world problems. Consider adding examples of real-world applications where the safeguarding method could be particularly beneficial. For instance, discuss its application in autonomous vehicle systems, healthcare decision-making, or financial trading algorithms.\n\nCould you provide more detailed steps on how the safeguarding method can be practically implemented in real-world systems? Consider add roadmap with examples on how to adapt a black-box policy into a safe policy."
},
{
"confidence": 2,
"rating": 5,
"review_id": "FlnhSsYALS",
"review_text": "The paper investigates the potential for AI agents to safely make critical decisions, such as those in a government setting, by examining the concept of alignment. It introduces Probably Approximately Aligned (PAA) policies, which are policies that are nearly optimal in aligning with social welfare objectives. The authors draw from utility and social choice theories to provide a quantitative definition of alignment and propose methods to ensure AI actions are verifiably safe for society. They also discuss the practical challenges in implementing such policies and suggest future directions for research in this area. The focus is on developing a theoretical framework that could eventually be applied to AI governance and decision-making processes.\n\nThe authors draw from utility and social choice theories to provide a quantitative definition of alignment and propose methods to ensure AI actions are verifiably safe for society.\n\nI think the problem is not well presented.\n\nI think the problem is not well presented. \n\nE.g. \n\nSection 3.2 - Algorithm for Computing the Policy:\nHow can you the result of Equation (7) are not clearly explained. \n\nEstimation of Reward (Equation 3):\nEquation (3) still appears to be a posterior approach."
}
] | |
xL7Ve14AHA | Regularized Adaptive Momentum Dual Averaging with an Efficient Inexact Subproblem Solver for Training Structured Neural Network | We propose a Regularized Adaptive Momentum Dual Averaging (RAMDA) algorithm for training structured neural networks. Similar to existing regularized adaptive methods, the subproblem for computing the update direction of RAMDA involves a nonsmooth regularizer and a diagonal preconditioner, and therefore does not possess a closed-form solution in general. We thus also carefully devise an implementable inexactness condition that retains convergence guarantees similar to the exact versions, and propose a companion efficient solver for the subproblems of both RAMDA and existing methods to make them practically feasible. We leverage the theory of manifold identification in variational analysis to show that, even in the presence of such inexactness, the iterates of RAMDA attain the ideal structure induced by the regularizer at the stationary point of asymptotic convergence. This structure is locally optimal near the point of convergence, so RAMDA is guaranteed to obtain the best structure possible among all methods converging to the same point, making it the first regularized adaptive method outputting models that possess outstanding predictive performance while being (locally) optimally structured. Extensive numerical experiments in large-scale modern computer vision, language modeling, and speech tasks show that the proposed RAMDA is efficient and consistently outperforms state of the art for training structured neural network. Implementation of our algorithm is available at https://www.github.com/ismoptgroup/RAMDA. | https://openreview.net/pdf/f308a40b9bbe024a4cee23bc7fdaab4fdbd357c3.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "WjNxHPLIBa",
"review_text": "This article proposes an optimization algorithm RAMDA for training structured neural networks, which combines a number of optimization techniques including dual averaging, momentum, and coordinate-wise preconditioners. Similar to the existing RMDA algorithm, RAMDA also has the capacity to identify the local manifold structure of the solution. The author(s) provide theoretical analyses to justify the convergence property of RAMDA, and develop an inexact subproblem solver as required by RAMDA.\n\nThe proposed RAMDA algorithm extends the existing RMDA algorithm by adding a coordinate-wise preconditioner, and its theoretical analysis seems to be novel.\n\nI think one major weakness of the current manuscript is the **correctness** of some theoretical results presented in the article.\n\n1. Theorem 1 suggests that the regularizer function $\\psi$ can be nonconvex. However, as the RAMDA algorithm heavily relies on the proximal operator of $\\psi$, how do you define the proximal operator when $\\psi$ is nonconvex? For example, equation (6) is used to define the new iterate $W^t$, but when $\\psi$ is nonconvex, it is likely that the \"argmin\" is a set and is not uniquely defined.\n\n2. Taking a closer look at the proof of Theorem 1, I feel that the author(s) may have a misunderstanding of an existing theorem. In Appendix B, equation (11) is obtained by citing Theorem 10.15 of [1]. However, Theorem 10.15 of [1] applies to functions of the form $F(x)=f(x)+\\psi(x)$, where $f$ is smooth and nonconvex, but $\\psi$ is convex. In other words, the non-convexity only applies to the smooth part, not the regularizer.\n\n3. If the findings above are valid, then the author(s) may need a thorough examination of the technical proofs to see if there is any error.\n\n4. If we assume $\\psi$ is convex, then there should be an Nesterov-accelerated version of Algorithm 2 that converges in $O(\\varepsilon_t^{-1/2})$ iterations, which is faster than the rate given in Theorem 1.\n\n[1] Beck, A. (2017). First-order methods in optimization. Society for Industrial and Applied Mathematics.\n\n\n===============================================================\n\nEdit: during the rebuttal the author(s) seem to have addressed the concerns above.\n\nSee the \"Weaknesses\" section."
},
{
"confidence": 3,
"rating": 7,
"review_id": "XnawlCh1v9",
"review_text": "This paper develops regularized adaptive momentum dual averaging (RAMDA) for structured neural networks. The method uses the preconditioning matrix to accelerate the convergence of a regularized momentum dual averaging (RMDA) method at the price of requiring the local solver (e.g. standard proximal gradient methods) to solve the subproblem. By the preconditioning matrix inspired by the AdaGrad stepsizes in Eq. (2), RAMDA outperforms RMDA and other existing gradient-based methods for solving structured neural networks in various learning applications.\n\n1. Theoretical results suggest the convergence towards the solution to the subproblem when the proximal gradient methods are used as the local solvers, and almost surely convergence of RAMDA that derives from the manifold theory under the standard $L$-smoothness assumption on the objective functions $f$. \n\n2. Empirical results illustrate the superior performance of RAMDA over RMDA and other existing gradient-based methods for various neural network tasks. Clear criteria, e.g. for solving the subproblems, are clearly stated in the numerical experiments.\n\n1. I think there is an error in Eq. (3) where it should be the square root $\\sqrt{\\cdot}$ in the diagonal operator for $P^t$. This is because $P^t$ uses $U^t$ that is computed from the element-wise multiplicative product of the gradient $G^t$. Is $P^t$ inspired by the AdaGrad stepsizes? If so, then adding the justifications on using $P^t$ in RAMDA is worthwhile to better distinct RAMDA from RMDA. \n2. In the experiments, can you comment on the impact of the different $\\epsilon_t$ on the training performance of RAMDA? Because I believe that using $\\epsilon_t$ a bit higher than $10^{-8}$ set in your experiments RAMDA might achieve far lower training time than other methods while keeping still comparable perplexity to solve Transformer-XL with WikiText-103 in Table 4, or Tacotron2 with LJSpeech in Table 5.\n\nI listed questions as part of weaknesses."
},
{
"confidence": 3,
"rating": 7,
"review_id": "n2qbwOWnaT",
"review_text": "#### Summary\nThe paper introduces the Regularized Adaptive Momentum Dual Averaging (RAMDA) algorithm for training structured neural networks. RAMDA addresses the challenge of solving the subproblem involved in the regularized adaptive methods, which typically lacks a closed-form solution. The paper presents an inexactness condition that retains convergence guarantees and proposes an efficient subproblem solver. The algorithm leverages manifold identification theory to ensure that the iterates of RAMDA attain the ideal structure induced by the regularizer at convergence. Extensive experiments demonstrate the effectiveness of RAMDA in various tasks, including computer vision, language modeling, and speech synthesis.\n\n#### Strengths\n1. **Novel Algorithm**: RAMDA combines adaptive momentum dual averaging with efficient inexact subproblem solving, providing a practical and theoretically sound method for training structured neural networks.\n2. **Theoretical Guarantees**: The paper provides strong theoretical support, including convergence guarantees and structure identification, ensuring the algorithm's robustness.\n3. **Practical Efficiency**: The proposed inexact subproblem solver is efficient, making RAMDA feasible for large-scale applications.\n4. **Empirical Validation**: Extensive experiments across multiple domains demonstrate the superior performance of RAMDA in terms of both prediction accuracy and structured sparsity.\n\n#### Weaknesses\n1. **Computational Complexity**: The computational complexity of the proposed subproblem solver, especially for high-dimensional data, needs more detailed discussion.\n2. **Generality**: While the paper focuses on specific types of structured neural networks, extending the methodology to other models and regularizers would enhance its generality.\n3. **Comparative Analysis**: More detailed comparisons with other state-of-the-art methods, beyond the provided benchmarks, would strengthen the empirical validation.\n4. **Implementation Details**: Practical guidelines for implementing RAMDA, including parameter tuning and handling different data distributions, are somewhat lacking.\n\n#### Questions\n1. **Computational Complexity**:\n - Could you provide more details on the computational complexity of the proposed subproblem solver? How does it scale with increasing data size and model complexity?\n\n2. **Generality**:\n - The paper focuses on structured neural networks with specific regularizers. Are there any challenges in extending RAMDA to other types of models or regularizers, such as those used in different machine learning tasks?\n\n3. **Comparison with Existing Methods**:\n - How does RAMDA compare empirically with other state-of-the-art methods for training structured neural networks? Are there specific scenarios where RAMDA significantly outperforms these methods?\n\n4. **Implementation Guidelines**:\n - Can you offer practical guidelines for implementing RAMDA in real-world scenarios? Specifically, how should practitioners tune the parameters, such as the learning rate and the inexactness threshold?\n\n5. **Assumptions and Limitations**:\n - The paper discusses some assumptions and limitations. Could you elaborate on the key assumptions that are critical for the theoretical results, and how robust the method is to violations of these assumptions?"
}
] | |
xImeJtdUiw | Multi-modal Transfer Learning between Biological Foundation Models | Biological sequences encode fundamental instructions for the building blocks of life, in the form of DNA, RNA, and proteins. Modeling these sequences is key to understand disease mechanisms and is an active research area in computational biology. Recently, Large Language Models have shown great promise in solving certain biological tasks but current approaches are limited to a single sequence modality (DNA, RNA, or protein). Key problems in genomics intrinsically involve multiple modalities, but it remains unclear how to adapt general-purpose sequence models to those cases. In this work we propose a multi-modal model that connects DNA, RNA, and proteins by leveraging information from different pre-trained modality-specific encoders. We demonstrate its capabilities by applying it to the largely unsolved problem of predicting how multiple \rna transcript isoforms originate from the same gene (i.e. same DNA sequence) and map to different transcription expression levels across various human tissues. We show that our model, dubbed IsoFormer, is able to accurately predict differential transcript expression, outperforming existing methods and leveraging the use of multiple modalities. Our framework also achieves efficient transfer knowledge from the encoders pre-training as well as in between modalities. We open-source our model, paving the way for new multi-modal gene expression approaches. | https://openreview.net/pdf/ef1635b8ab00b68d5872359001ea59e93a3a9846.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "PVGiTOdUKD",
"review_text": "The paper introduces a novel multi-modal model, IsoFormer, designed to integrate DNA, RNA, and protein sequences for predicting RNA transcript isoform expression across different tissues. It utilizes pre-trained modality-specific encoders to generate embeddings that are then combined using a sophisticated aggregation method. The model demonstrates significant improvements in prediction accuracy compared to single-modality approaches.\n\nContriburion:\n\n1. Developed the first general-purpose multi-modal model integrating DNA, RNA, and protein sequences.\n\n2. Demonstrated successful application of transfer learning from modality-specific encoders.\n\n3. Provided a new robust framework for advancing the prediction of RNA transcript isoform expression.\n\n1. Innovative Integration of Modalities: The paper presents the first attempt to integrate three biological sequence modalities (DNA, RNA, and proteins) in a unified model, providing a comprehensive approach reflective of natural biological processes.\n\n2. Effective Transfer Learning: IsoFormer effectively leverages pre-trained encoders to enhance its predictive power, benefiting from both intra-modal and inter-modal transfer learning.\n\n3. Robust Evaluation: Experiments demonstrate the model's capability, outperforming existing methods in predicting transcript isoform expression, which is a challenging task due to its multi-modal nature.\n\n1. Complexity and Computation: The model's complexity and the computational demands might limit its accessibility and use, particularly in environments with restricted resources.\n\n2. More Comprehensive Evaluation for PLM's representation learning capability would make this paper better.\n\n(1)\tFor Tab 5., wonder what’s the performance for “DNA and RNA encoder not pre-trained”\n\n(2)\tCould authors provide evaluation results for DNA, RNA and protein encoder separately on their own popular benchmarking tasks? Ideally, these models should improve on those downstream tasks as well. Would give 7’ or even 8’ if authors conduct these experiments. It could be a great and exciting work in this AI4Bio field."
},
{
"confidence": 4,
"rating": 5,
"review_id": "AdoAS6mcOS",
"review_text": "The paper introduces a new framework for the multi-modality pretrain model according to the Central dogma of biology. The method encode DNA, protein and RNA at the same time. The proposed method can transfer knowledge from the encoders pretraining and modalities.\n\nThe paper is well-organized and easy to follow\nThe authors have proved that the multi-modality of single cell data can help model predictions.\n\nLack of experiments. 1. More ablation studies should be conducted about removing different modalities of the model in Table 2 ( e.g. we observe only RNA can achieve a high performance, what about protein+RNA? ).\n\n 2. More dataset details should be included. The split of training/validation/test sets is not clear. If the authors do the experiments on the same dataset, they should split the dataset according to the tissues to validate the transferability of the proposed method.\n\nThe authors should include more motivation about the proposed method. For example, why can the changed DNA influence the RNA seq? Why not directly predict RNA expression from RNA seq?"
},
{
"confidence": 5,
"rating": 6,
"review_id": "90BY5oyMeL",
"review_text": "The paper models isoform relative abundance across tissues with a multimodal approach based on 3 pretrained encoders for DNA, RNA, and AA sequences. DNA encoder uses a sequence centered on the gene’s TSS, RNA encoder uses the known isoform sequence from RNAseq and the protein encoder uses corresponding AA sequence. They perform multiple ablations on the utility of having all 3 separate encoders and, given separate encoders, how to aggregate them into a single isoform specific embedding/prediction, and look at attention layers of RNA module to find biologically meaningful regions of attention.\n\nIsoform level analysis using 3 separate pretrained encoders for DNA, RNA, and AA sequences is a good strategy. The authors provide useful ablations on the utility of the multi modal approach and on modern strategies for combining those into a single embedding. Looking for biolgoically meaningful interpretations of attention layers is useful.\n\nI don’t think the authors can claim this is the first attempt to combine DNA, RNA, and AA modalities with techniques from NLP. See the recent Evo work here https://www.biorxiv.org/content/10.1101/2024.02.27.582234v2 . While they evaluate their performance against Enformer, that’s a large part of their own model. So the evaluations have an intramural feel to them. It’d be interesting to see how their strategy compares to other multi modal models such as Evo, and more RNA centric work like Borzoi, which looks at a more fine grained look of variant effects on the DNA to RNA relationship. Looking at average isoform abundance across individuals is all well and good, but GTEx also has individual genomes, and genomic variation across individuals will also of course affect splicing patterns and which isoforms come from what individuals.\n\nSome comments and questions:\n\nCentering on TSS will only capture regulatory elements within the chosen sequence length. There could be distal or trans CREs outside\n\nGTEx database has less than 1000 donors, confirming the 5000 individuals claim?\n\nLooking at equation 5 in 4.4, how does f_psi depend on the tissue T? Is it a separate head per tissue, or are you predicting the vector of isoform abundance across tissues with one pass? And is f_theta and f_phi the same f but different weights as f_psi? Where does the summation over i take place?\n\n5.1 does ablations with one DNA encoder, then 5.2 shows superior performance with Enformer as the DNA encoder. So the ablations in 5.1 may not be accurate with respect to this new encoder. It also begs the question of how would the Enformer do as the RNA encoder as well.\n\nHow are RNA and protein sequence lengths handled when they’re longer than the model input sequence size?"
}
] | |
xDrKZOZEOc | Fast T2T: Optimization Consistency Speeds Up Diffusion-Based Training-to-Testing Solving for Combinatorial Optimization | Diffusion models have recently advanced Combinatorial Optimization (CO) as a powerful backbone for neural solvers. However, their iterative sampling process requiring denoising across multiple noise levels incurs substantial overhead. We propose to learn direct mappings from different noise levels to the optimal solution for a given instance, facilitating high-quality generation with minimal shots. This is achieved through an optimization consistency training protocol, which, for a given instance, minimizes the difference among samples originating from varying generative trajectories and time steps relative to the optimal solution. The proposed model enables fast single-step solution generation while retaining the option of multi-step sampling to trade for sampling quality, which offers a more effective and efficient alternative backbone for neural solvers. In addition, within the training-to-testing (T2T) framework, to bridge the gap between training on historical instances and solving new instances, we introduce a novel consistency-based gradient search scheme during the test stage, enabling more effective exploration of the solution space learned during training. It is achieved by updating the latent solution probabilities under objective gradient guidance during the alternation of noise injection and denoising steps. We refer to this model as Fast T2T. Extensive experiments on two popular tasks, the Traveling Salesman Problem (TSP) and Maximal Independent Set (MIS), demonstrate the superiority of Fast T2T regarding both solution quality and efficiency, even outperforming LKH given limited time budgets. Notably, Fast T2T with merely one-step generation and one-step gradient search can mostly outperform the SOTA diffusion-based counterparts that require hundreds of steps, while achieving tens of times speedup. | https://openreview.net/pdf/fe5e7fe1fabed428781b4a7575b830ae83a8c609.pdf | [
{
"confidence": 2,
"rating": 6,
"review_id": "AoMPXAl8H1",
"review_text": "The paper introduces Optimization Consistency Models (OptCM) as a novel method for solving combinatorial optimization (CO) problems efficiently. Traditional diffusion models, although powerful, are computationally intensive due to their iterative denoising processes. OptCM overcomes this limitation by learning direct mappings from noise levels to optimal solutions, enabling rapid single-step solution generation. The contributions of this paper are three-fold. First, OptCM reduces the computational overhead significantly by enabling fast, single-step solution generation while maintaining high solution quality. Second, This protocol ensures that samples from different generative trajectories converge consistently to the optimal solution. Thrid, Introduced at the test stage, this method enhances solution exploration and quality during inference.\n\nOptCM significantly reduces the computational overhead by enabling fast, single-step solution generation, compared to the multiple steps required by traditional diffusion models. This efficiency allows for rapid inference, making it practical for real-time and large-scale applications.\n\nDespite the reduced computational steps, OptCM maintains high solution quality, often outperforming state-of-the-art methods that require more extensive processing. The optimization consistency training ensures that the generated solutions are close to the optimal solution.\n\nThe optimization consistency training protocol is a novel approach that minimizes the differences among samples from varying generative trajectories, ensuring robust and consistent solution generation. This method enhances the model's ability to generalize across different problem instances.\n\nThe introduction of a consistency-based gradient search during the test stage allows for further exploration and refinement of the solution space, improving the final solution quality. This approach bridges the gap between training and inference, making the model more adaptable to new instances.\n\nOverall, the paper's strengths lie in its innovative approach to reducing computational complexity while maintaining high solution quality, its robust and versatile model design, and its impressive performance on benchmark tasks. However, there are some weaknesses in this paper. Some weaknesses listed below might be found to be too general and don't require the authors to address them now.\n\nThe advanced techniques used in OptCM, such as optimization consistency training and gradient-based search, may be challenging to implement and require a deep understanding of the underlying principles.\n\nThe model's performance is closely tied to the quality and diversity of the problem instances used during training. If the training set does not adequately represent the test instances, the model's effectiveness might be reduced.\n\nWhile the paper demonstrates the superiority of OptCM over state-of-the-art neural solvers, it could provide a more detailed comparison with traditional, non-neural methods in terms of both performance and computational efficiency.\n\nHow does the computational complexity of OptCM compare with traditional diffusion models and other state-of-the-art neural solvers? What specific optimizations were implemented to achieve the reported speedup in solution generation?\n\nHow does OptCM compare with classical optimization algorithms like simulated annealing or genetic algorithms in terms of both performance and computational resources? 
\n\nCan OptCM be effectively applied to real-world problems with noisy, incomplete, or dynamically changing data?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "ux8LKdi2B5",
"review_text": "This paper advances CO DM-based neural solvers under the setting where labeled training graphs are available by considering Consistency Models and gradient search (which was adopted from T2T).\n\n1- The paper is in general well-written and technically sound.\n\n2- The use of CMs to accelerate the sampling procedure.\n\nSee Questions.\n\n1- Generalization is a major problem for supervised neural solvers. The need to train a different model for each graph distribution is a bottleneck for these solvers. For example, in the MIS problem, how does a CM trained on ER700 with p=0.15 generalize when faced with an ER700 test instance with p=0.2? Which p would require different training given that n=700? Furthermore, does the proposed approach need to train a different model for ER2000 with p=0.15? These need to be explicitly explained/investigated as these could be considered as a limitation of the proposed method. The MIS problem differs based on different densities and degree distributions, not only the graph size. SATLIB, GNMs, SBMs can be considered. \n\n2- How does the size of the training dataset impact the outcomes? \n\n3- How does the proposed method handle real-world graphs (such as the SNAP graphs in https://snap.stanford.edu/data/)? What would be the training dataset? In most cases, these graphs do not follow a certain degree distribution or density. This is a major limitation in this method and all other learning-based methods. This point needs to be clearly discussed in the paper. As an alternative, the authors should clearly state that the proposed method only operates when training data points (with true optimal solutions) are available. \n\n4- Scalability results are missing for dense and sparse graphs if the models do generalize to higher n and same density. \n\n5- The run-time comparison with heuristics and ILP solvers does not include training times. The authors should either include them, or explicitly/clearly mention that. \n\n6- The training dataset when using KAMIS to label graphs is similar to training a regression model with inaccurate true values as labels. The reason is KAMIS does not guarantee an optimal solution. For MIS, there are other techniques to generating graphs with guaranteed true MISs (maximum, not maximal) such as the following: Create a graph G with n nodes, make k of them completely connected, and randomly add edges to/from the remaining vertices with degree <= k. Then, the complement graph G' should have at least one independent set of size k and no independent set of greater size. \n\n7- Missing iSCO [1] and the differentiable solver in [2] for comparison. \n\n8- The ER results of DIFUSCO are not the same as were reported in the original paper? I know that DIFUSCO is a diffusion-based method where the sampling procedure starts with a random noise vector drawn from the standard Gaussian. However, a discussion is needed of why lower results are reported here. \n\n9- The novelty is not that significant from the T2T method other than the use of CMs and considering an encoding of size N X 2 instead of N X 1. Using CMs does improve the run-time, and slightly improve the solutions sizes. \n\n10- Generally, the proposed approach and other supervised methods such as DIFUSCO and T2T depend on additional post-processing procedures. This dependence needs to be further explained and investigated. The details for the post processing procedures (such as MCTS and Greedy Decoding) are needed, even if they were also adopted in DIMES and DIFUSCO. 
For example, if none of these procedures were used, how does this impact the outcomes? \n\n[References]\n\n[1] Revisiting sampling for combinatorial optimization. ICML, 2023.\n\n[2] A differentiable approach to the maximum independent set problem using dataless neural networks. Nueral Networks, 2022."
},
{
"confidence": 4,
"rating": 5,
"review_id": "b00iapDwKL",
"review_text": "This paper presents Optimization Consistency Models (OptCM) for solving combinatorial optimization (CO) problems efficiently. By leveraging the consistency model, OptCM maps varying noise levels to optimal solutions in a single step, significantly reducing computational overhead. This approach is validated through extensive experiments on the Traveling Salesman Problem (TSP) and Maximal Independent Set (MIS), demonstrating significant superior efficiency compared with state-of-the-art diffusion-based models.\n\n1. This paper introduces a consistency model to improve the efficiency of diffusion-based combinatorial optimization solvers.\n2. Extensive experiments on TSP and MIS show that OptCM can outperform existing methods.\n\n1. This work is mainly incremental, based on previous works DIFUSCO [1] and T2T [2].\n2. Larger-size TSPs, such as those with 100000 nodes, should be tested against state-of-the-art learning-based methods [3].\n3. Despite significantly improving solving efficiency, the proposed method is limited in addressing constrained COPs (e.g., CVRP and more complex COPs) and requires optimal solutions as labels.\n\n[1] DIFUSCO: Graph-based Diffusion Solvers for Combinatorial Optimization, NeurIPS, 2023.\n\n[2] T2T: From Distribution Learning in Training to Gradient Search in Testing for Combinatorial Optimization, NeurIPS, 2023.\n\n[3] GLOP: Learning Global Partition and Local Construction for Solving Large-scale Routing Problems in Real-time, AAAI, 2024.\n\nSee weaknesses."
},
{
"confidence": 4,
"rating": 6,
"review_id": "7mgUuDDrWq",
"review_text": "This paper introduced a new algorithm for solving some classic combinatorial optimization problems. The method falls into the category of learn-based generative solvers. More specifically, it is a direct extension of the DIFUSCO [1] and T2t [2] solver, which are diffusion-based generative solvers. The improvement is mostly done through improving on the sampling step of the two aforementioned works with consistency models (CM) [3], a recent notable regime that enables drastic reduction of the number of function evaluations (NFE, or sampling steps) of vanilla diffusion models. The novelty lies in extending CM into discrete regime and combining it with consistency-based gradient search, which is necessary for combinatorial optimization problem. Empirical evaluations show the effectiveness of this new solver, where it achieves competitive objective value in a much shorter time, compared to various baselines. \n\n\n[1] Z. Sun and Y. Yang, “DIFUSCO: Graph-based diffusion solvers for combinatorial optimization, in Thirty-seventh Conference on Neural Information Processing Systems, 2023. \n\n[2] Y. Li, J. Guo, R. Wang, and J. Yan, “T2t: From distribution learning in training to gradient search in testing for combinatorial optimization,” in Advances in Neural Information Processing Systems, 2023.\n\n[3] Y. Song, P. Dhariwal, M. Chen, and I. Sutskever, “Consistency models,” arXiv preprint arXiv:2303.01469, 2023\n\n1. Overall I think the method is novel. The extension of consistency training framework into diffusion-based CO solvers is not trivial, and this work is trying to solve a well-motivated problem. The empirical evaluations are quite convincing compared to diffusion-based CO solvers (of course, one expects so as consistency models inference time is much faster than diffusion generative models). \n\n2. The paper is overall well-written, and the authors seem to have done quite thorough literature reviews to gather sufficient baselines to compare to the performance of their work to.\n\n1. It is necessary to elaborate on why the authors wrote \"... note F1 is exactly the (implicit) the objective of the diffusion and consistency models...\" at line 259. In other words we need to see why (should be a lemma with proof) we have the quantity $F_1$ in equation (6) is the equivalence of the loss in eq (4).\n\n2. It is unclear how the overall training paradigm takes place in practice, including the gradient search part. The authors should include an algorithm box on the training of their OptCM framework. Algorithm 1 on Multistep Consistency Sampling is almost the same as in the original Consistency Models paper, so I suggest the authors move them to the Appendix. If I'm not mistaken, the training of consistency models is quite tricky, for example, it is crucial to design a suitable training time discretization schedule for CM to work well. Could the authors elaborate on this for their problem? \n\n I'm willing to re-evaluate my score if the authors can answer these two points.\n\nSee weakness."
}
] | |
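Aside: point 6 of review ux8LKdi2B5 above describes a concrete recipe for generating MIS instances with a known optimum. A minimal sketch of that recipe follows; the function name, the `tries` budget, and the edge-sampling loop are illustrative choices of ours, not code from the paper or the review.

```python
import itertools
import random

def planted_mis_instance(n, k, tries=2000, seed=0):
    """Reviewer's recipe: plant a k-clique in G; cap other degrees at k.

    The complement G' then contains an independent set of size k (the
    planted clique). The degree cap on the remaining vertices is intended
    to prevent a larger clique in G, i.e., a larger independent set in G';
    as written this is very likely but not formally guaranteed, so a
    max-clique check on G can be added for small n.
    """
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u, v in itertools.combinations(range(k), 2):  # fully connect 0..k-1
        adj[u].add(v)
        adj[v].add(u)
    for _ in range(tries):  # random edges touching the remaining vertices
        u = rng.randrange(k, n)
        v = rng.randrange(n)
        if u == v or v in adj[u]:
            continue
        if len(adj[u]) >= k or (v >= k and len(adj[v]) >= k):
            continue  # enforce degree <= k on non-clique endpoints
        adj[u].add(v)
        adj[v].add(u)
    # Complement graph G': an edge wherever G has none.
    comp = {v: set(range(n)) - adj[v] - {v} for v in range(n)}
    return comp, set(range(k))  # (G', a size-k independent set of G')

g_prime, independent_set = planted_mis_instance(n=50, k=10)
```

Such instances would let the training labels be exact optima rather than KAMIS outputs, which is the point the reviewer raises.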
xCUXJqQySD | Med-Real2Sim: Non-Invasive Medical Digital Twins using Physics-Informed Self-Supervised Learning | A digital twin is a virtual replica of a real-world physical phenomena that uses mathematical modeling to characterize and simulate its defining features. By constructing digital twins for disease processes, we can perform in-silico simulations that mimic patients' health conditions and counterfactual outcomes under hypothetical interventions in a virtual setting. This eliminates the need for invasive procedures or uncertain treatment decisions. In this paper, we propose a method to identify digital twin model parameters using only noninvasive patient health data. We approach the digital twin modeling as a composite inverse problem, and observe that its structure resembles pretraining and finetuning in self-supervised learning (SSL). Leveraging this, we introduce a physics-informed SSL algorithm that initially pretrains a neural network on the pretext task of learning a differentiable simulator of a physiological process. Subsequently, the model is trained to reconstruct physiological measurements from noninvasive modalities while being constrained by the physical equations learned in pretraining. We apply our method to identify digital twins of cardiac hemodynamics using noninvasive echocardiogram videos, and demonstrate its utility in unsupervised disease detection and in-silico clinical trials. | https://openreview.net/pdf/b6609bf4e5f0ce3248a6b5f958f0f2f6b75a2c04.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "Nl8lyDx1ON",
"review_text": "This paper proposes a novel method for creating patient-specific digital twins using non-invasive patient health data. The authors introduce a physics-informed self-supervised learning (SSL) algorithm that pretrains a neural network on learning a differentiable simulator of the cardiac process. Then, another model is trained to reconstruct physiological measurements from non-invasive data while being constrained by physical equations learned during pretraining. The method is applied to identify digital twins of cardiac hemodynamics using echocardiogram videos, showing good results in unsupervised disease detection and in-silico clinical trials.\n\n* The method uses non-invasive data, avoiding time-consuming and complicated patient interventions.\n* The model accuracy is enhanced by including a physics-based model during training.\n* The authors demonstrate the method's utility in modeling complex physiological processes like cardiac pressure-volume loops with open-sourced datasets.\n\n* Simplifications/assumptions in the Windkessel and LVAD models might not fully capture the complexity of the heart dynamics.\n* The results might be sensitive to low quality non-intrusive data, which can affect the global accuracy of the method.\n* It is not clear the demographic diversity of the echocardiography dataset. Thus, the model might not generalize well across all the segments of the population.\n\n* Lines 251-255, Table 3: Which criteria was used to select the learnable and the fixed parameters of the model? Are these selected by using some kind of sensitivity analysis based on the state-space matrices in Eqs. 21 and 24?\n* Line 708: Appendix C1 already addresses the ill-possedness of the inverse problem. That means that the trained model could potentially assign the same digital twin to two different patients due to the similarity of their echocardiograms. Have the authors found any difficulty in this regard? \n* Have the authors considered using an easier and more accessible nonintrusive techniques such as electrocardiograms? Would the use of several modalities of non-intrusive data for the same patient increase the performance of the model?\n\nMinor comments:\n* Line 227: \"tune-able\" might refer to \"tunable\".\n* Line 252: Incorrect reference, Table A.2. might refer to Table 3.\n* Line 814: Incorrect reference, Figure D.2. might refer to Figure 7.\n* Figure 7: The figure model seems to be incomplete/offset.\n* Equation 35: A parenthesis is missing after $R_{NO}$.\n\nFinal comment: The paper is well structured, the results are promising and the methodology has moderate impact on the field. However, it's still unclear if this model could handle the variability and complexity of human physiology by only non-invasive measurements. Based on the comments above, I reccomend a weak accept."
},
{
"confidence": 4,
"rating": 6,
"review_id": "vJ72tbmO3L",
"review_text": "This paper introduces a novel methodology for identifying patient-specific digital twins using noninvasive medical imaging, particularly focusing on cardiac hemodynamics. By leveraging a physics-informed self-supervised learning approach, the research addresses the challenge of modeling digital twins without invasive data collection. The process involves pretraining a neural network to simulate physiological processes and then fine-tuning it to reconstruct physiological measurements from noninvasive modalities, constrained by the learned physical equations. This framework allows for the simulation of patient-specific physiological parameters and the potential for conducting in-silico clinical trials and unsupervised disease detection.\n\n1. The paper introduces a cutting-edge method combining physics-informed neural networks with self-supervised learning to tackle the inverse problem of estimating physiological parameters from noninvasive imaging data.\n\n2. By utilizing noninvasive data, the proposed method significantly reduces the need for invasive procedures, enhancing patient safety and comfort, and potentially broadening the applicability of digital twin technology in routine clinical practice.\n\n3. The methodology's ability to simulate detailed physiological conditions and interventions opens up vast possibilities for its application in personalized medicine, including unsupervised disease detection and in-silico clinical trials, which can significantly accelerate the development of therapeutic strategies.\n\n1. Insufficient performance comparison. The paper only compares PINN and Neural ODE methods. There are many variations of PINNs methods which outperform the original one. The paper should choose more solid baselines.\n\n2. No ablation study. The paper proposed a two-stage training strategy, but it didn't show why it is necessary.\n\n1. How do the results of the model change if the estimated parameters are dynamic rather than constant? Considering the medical context where a patient's condition is continually changing, assuming constant parameters seems unrealistic. How might this affect the reliability and accuracy of the model used in such scenarios?\n\n2. Can you explain why blood flow is often modeled using an electrical circuit analogy, and why it adheres to Kirchhoff's laws of voltage and current?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "oS7KvX7uOO",
"review_text": "I have read this manuscript during ICML review. It looks the same so I copied my previous review.\n\nThe authors presented a method to infer the physical parameters θ of physiological process (heart pumping blood) from noninvasive observation y (the echo image). The mapping from y to θ cannot be directly learned due to the lack of paired data. Instead, they find that an intermediate variable x_bar (the EF) can be annotated by experts (x_bar=g(y)) which can also be calculated based on θ (x_bar=m(M(θ))). Thus the observable pair (y, x_bar) provides the supervision for learning θ = F_inv(y), through the relation x_bar=m(M(F_inv(y))) where m is rule-based and M is a solution to ODE. They first train a surrogate network to approximate M(θ) through synthetic simulation, followed by learning the parameter of F_inv on observations {(y, x_bar)}.\n\n1. The paper is clearly written and easy to read with clear symbols.\n2. The idea of introducing a physical model helps better characterise the physiological system and provide a learning method.\n3. The surrogate model overrides the cost of solving ODE and make it differentiable during learning.\n4. The inference of physical parameters helps to perform virtual experiments.\n\n1.The authors compared their methods in predicting EF from echocardiogram with supervised 3DCNN but did not outperform them in MAE. It should be noted that the EF is calculated on ED and ES segmentation and supervised segmentation network is expected to perform even better.\n2.Lack of validation. The authors performed validation by comparing EF derived from the physical parameters to the observed EF. But this does not guarantee the correctness of their physical parameters. In fact, arbitrary intermediate variables can be defined and could also lead to the comparable prediction of EF.\n3.Training the surrogate model of x=M(θ) need to generated synthetic samples while this is domain-dependant. Whether the learned mapping fits extreme or shifted situations is unknown, lacking of uncertainty analsysis.\n\n1.Could the authors provide more solid validation of their inferred physical parameters?\n2.Could the authors provide the uncertainty/robustness/generalisation ability of the surrogate model?"
},
{
"confidence": 3,
"rating": 3,
"review_id": "Q9Yi7i7AP5",
"review_text": "The paper proposes a method to identify parameters for digital twin models of patients using non-invasive health data, eliminating the need for invasive procedures. This method focuses on scenarios like cardiac hemodynamics, where traditionally invasive measurements (e.g., through catheterization) can be predicted using non-invasive data (e.g., echocardiograms). The novelty of the method is to solve the associated inverse problem specifically for that patient, so that personalized predictions can be performed.\n\nThe proposed method uses a two-step SSL approach that structurally resembles pretraining and finetuning in SSL. First, a neural network is pretrained on synthetic data to learn the forward dynamics of a physics-based model. Then the pretrained model is then used to train another network on actual non-invasive patient data to predict physical parameters.\nApplication to Cardiac Hemodynamics:\n\nThe paper illustrates how to apply the above method for cardiac hemodynamics using echocardiography. This allows the prediction of patient-specific pressure-volume (PV) loops using non-invasive echocardiogram videos.\n\nThe paper is overall well-written and it identifies a real problem with potential high impact: the design of personalised medical twins to avoid invasive procedures\n\nThe authors focus only on a very specific medical use-case, they don't try to generalize their method to more cases. In the introduction the authors illustrate this as a generic approach, therefore I was surprised to not find an attempt to support their vision with more application examples. Without the demonstration that this method can have a broader use I don't think this paper is suitable for presentation at this conference. Also, the authors don't run convincing ablations to demonstrate that their approach is sounding.\n\nHave you experimented with this method beyond Cardiovascular Hemodynamics?\nYou implement a 3D-CNN - how did you arrive at such architecture? have you run ablations?"
}
] | |
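For context on the Windkessel models that several reviews above refer to: the classical two-element Windkessel treats the arterial tree as a resistor-capacitor pair governed by $C\,dP/dt + P/R = Q(t)$, which is also why blood flow obeys Kirchhoff-style circuit laws in this analogy. The sketch below is a generic forward-Euler simulator of this textbook ODE, standing in for the differentiable simulator M(θ) discussed in the reviews; the paper's actual model (a richer Windkessel variant plus LVAD dynamics) differs, and the function name, units, and step size here are our own illustrative choices.

```python
import numpy as np

def windkessel2(q, r, c, p0=80.0, dt=1e-3):
    """Two-element Windkessel: C * dP/dt + P / R = Q(t).

    q: inflow samples Q(t); r: peripheral resistance; c: arterial
    compliance; p0: initial pressure. A toy stand-in for M(theta).
    """
    p = np.empty(len(q))
    p[0] = p0
    for t in range(1, len(q)):
        dp = (q[t - 1] - p[t - 1] / r) / c  # dP/dt from the ODE
        p[t] = p[t - 1] + dt * dp
    return p

# Example: pressure response to a crude half-sine systolic inflow.
t = np.arange(0.0, 1.0, 1e-3)
q = np.maximum(0.0, 300.0 * np.sin(2 * np.pi * t))
pressure = windkessel2(q, r=1.0, c=1.5)
```

A surrogate network trained on (θ, simulated pressure) pairs from such a simulator is what makes the inverse map differentiable end to end, which is the two-stage structure the reviews discuss.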
xCIbVuXwPM | Trading off Consistency and Dimensionality of Convex Surrogates for Multiclass Classification | In multiclass classification over $n$ outcomes, we typically optimize some surrogate loss $L: \mathbb{R}^d \times\mathcal{Y} \to \mathbb{R}$ assigning real-valued error to predictions in $\mathbb{R}^d$. In this paradigm, outcomes must be embedded into the reals with dimension $d \approx n$ in order to design a consistent surrogate loss. Consistent losses are well-motivated theoretically, yet for large $n$, such as in information retrieval and structured prediction tasks, their optimization may be computationally infeasible. In practice, outcomes are typically embedded into some $\mathbb{R}^d$ for $d \ll n$, with little known about their suitability for multiclass classification. We investigate two approaches for trading off consistency and dimensionality in multiclass classification while using a convex surrogate loss. We first formalize partial consistency when the optimized surrogate has dimension $d \ll n$. We then check if partial consistency holds under a given embedding and low-noise assumption, providing insight into when to use a particular embedding into $\mathbb{R}^d$. Finally, we present a new method to construct (fully) consistent losses with $d \ll n$ out of multiple problem instances. Our practical approach leverages parallelism to sidestep lower bounds on $d$. | https://openreview.net/pdf/8719768b890f591c4ee35ef267ec626035024441.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "Ckrrx0HGho",
"review_text": "This paper studies surrogate loss design and the trade-off between surrogate consistency and loss dimension. The contributions are three-fold: (1) the characterization of the hallucination region, where the decoded prediction from the surrogate loss minimizer gives a class with no target probability mass, indicating a completely \"irrational\" prediction; (2) the construction of the (\"weakly but reasonably\" inconsistent) calibrated surrogate and link under the low-noise setup; (3) the decomposition of property eliciation into multiple elicitation problems with low dimensions.\n\n+ **Very well-motivated problem**: Consistent/calibrated surrogate losses, the main topic of this paper, are sometimes too restrictive because the (in)consistency analysis hinges on \"far-fetched\" distributions---as argued by the authors, and as can be seen in some counterexamples to show the inconsistency of the well-known Crammer-Singer hinge loss. If we can remove such \"pathological\" situations from the entire distribution space, we would have a nicer characterization of loss functions. This is the central motivation of this paper.\n+ **Interesting instances to show calibrated surrogates under the low-noise condition**: This paper mainly investigates two situations to convince the benefits of the relaxed notion of calibration: the unit cube and permutahedron. Both make sense with reasonably practical scenarios and provide the first attempt to show the benefits of incorporating the knowledge of the noise level in loss function design.\n\n+ **The result with the unit cube may need $d \\\\geq n$**: At first sight, the statement of Corollary 7 (the calibrated link design for the unit cube) does not have any restriction on $\\\\alpha$ (say, any dependency on the embedding $d$), unlike Corollary 8 (the calibrated link design for the permutahedron). When I look at its proof, I suspect we need the condition $d \\\\geq n$ for this because the proof leverages that \"$P\\_\\\\alpha^y$ is a strict subset of the orthant that contains $v\\_y$\" at l.500. Otherwise, it is strange because we can choose an arbitrarily small embedding dimension $d$.\n+ **The trade-off could be seen only for the unit cube case**: The main contribution of this paper is to showcase the trade-off between the consistency and the embedding dimension, as suggested by the title. However, we may not see a clear trade-off in the unit-cube case. This is also related to the above point: Once we have $d \\\\geq n$, there may not be a clear trade-off for $\\\\alpha$ and $d$. Given this, we are not very sure how universally we can observe the trade-off across different elicitation problems.\n\nNevertheless, I don't think these points are significant enough to undermine the contributions of this paper. I expect the authors to address them, which leads to the presentation of the contributions in a more fair/precise manner.\n\nHere are the major questions.\n+ In the proof sketch of Theorem 3, it is not very straightforward to see $\\\\mathrm{conv}(\\\\mathrm{vert}(P\\_{-y})) \\\\cap \\\\psi\\_y^\\\\varphi \\\\ne \\\\varnothing$. This is important to ensure that $\\\\mathcal{H} \\\\ne \\\\varnothing$ (before invoking Helly's theorem). Even if this is merely a sketch, it is better to discuss it in my opinion.\n+ Figure 1 (especially the left one) is challenging to understand. Although I can guess that each vertex of the rectangle corresponds to $v\\_y$, I cannot see what each area \"ad\", \"cd\", etc. mean exactly. 
Either the figure or caption should be refined. Figure 1R (and its caption) is slightly more digestible.\n+ In Theorem 5, the constructed link contains the \"point-set\" distance $\\\\|u - P\\_\\\\alpha^y\\\\|\\_2$. What does it mean exactly? It appears several times afterward (for example, in Theorem 6). I guess we can recover the statements without significant modifications if we modify the link definition slightly.\n\n----\nSome minor points:\n\n- In the last paragraph of the introduction, it is a bit too sudden to talk about the mode for unfamiliar readers. You may say a few words about the relationship between the mode and multiclass classification.\n- In Definition 1, the very general report space is introduced, which I'm not sure is really necessary or not. Subsequently, the discussion is only based on the case $\\\\mathcal{R} = \\\\mathbb{R}^d$.\n - By the way, the domain of a surrogate loss $L$ should be $\\\\mathcal{R} \\\\times \\\\mathcal{Y}$ in Definition 1, not $\\\\mathcal{Y} \\\\times \\\\mathcal{Y}$.\n- Throughout the paper, \"mode\", \"prop\", and \"vert\" look better with \\\\mathrm, not with \\\\text.\n- In Construction 1, $v\\_y$ is not formally introduced (while I could catch its meaning). In my opinion, this should be formally introduced because it appears repeatedly in the paper.\n- At l.171, \"To show that ...\" does not form a grammatically complete sentence.\n- At l.174, what does it mean by \"the vertex figure\"?\n- At l.194--196, the sentence is not easy to understand mainly because it contains two \"if.\" Could you rephrase it without multiple \"if\"?\n- At l.201, the level set $\\\\psi\\_y$ for the general link is not defined as far as I see, even though the level set for $\\\\psi^\\\\varphi$ is defined in Definition 7.\n- At l.201, $R\\_\\\\mathcal{Y}$ probably matches $R$. Is it true? I'm asking out of curiosity.\n- At l.209, what does it mean by \"a robust sense\"?\n- At l.213, the last \"via $(L,\\\\psi)$ with respect to $\\\\ell_{0-1}$\" is relevant to the definition of the strict calibration region $R\\_\\\\mathcal{Y}$. However, if you append \"via [...]\" in this way directly after $R\\_\\\\mathcal{Y}$, it is a bit hard to understand at first sight.\n- At l.229, $\\\\delta\\_y$ is not defined (though I catch the meaning).\n- At l.233, the notation $\\\\psi\\_\\\\alpha^\\\\varphi$ should be changed if possible because this notation with the subscript/superscript is confusing with the level set notation $\\\\psi\\_y^\\\\varphi$.\n- At l.250, $=$ (in the beginning) should be $\\\\subseteq$ in my understanding.\n- At l.251 and l.267, the link notation $\\\\psi\\_\\\\alpha^P$ should be $\\\\psi\\_\\\\alpha^\\\\varphi$. In regard to this, the notation of $\\\\psi\\_\\\\alpha^{P^\\\\square}$ and $\\\\psi\\_\\\\alpha^{P^w}$ in Section 4.2 are better to be consistent with $\\\\psi\\_\\\\alpha^\\\\varphi$ if possible---the superscript stands for the embedding function, not the embedded space.\n- At l.269, \"$P\\_\\\\alpha^y$ is [...] pairwise disjoint\" should be complemented with \"in $y$\" to be clearer.\n- At l.274, the strict properness is not introduced. Actually, I think the usual calibration suffices in this context.\n- In the appendix, can you make the theorem numbers consistent with those used in the main part? You can do this by using the restatable package."
},
{
"confidence": 2,
"rating": 6,
"review_id": "2TGwDzu3yT",
"review_text": "This paper proposes a method called polytope embedding, which embeds multiclass predictions onto real numbers. The paper studies the properties of this embedding, like hallucination and calibration. Further, with low-noise assumptions, the authors showed more calibration results for their embedding in some cases like embedding into the unit cube.\n\nThe topic of this paper on how to design a consistent surrogate loss seems interesting. The results proved in the paper also relate to topics that people care about like hallucination and calibration.\n\nEven though each result seems interesting, the structure of this paper is not very clear to me. I am open to different arguments but I think it is hard for readers to understand what the authors are trying to show with this paper.\n\nI am wondering what exactly the authors' definition of partial consistency is. How exactly are the methods trading off consistency and dimensionality? If the authors mean consistency by hallucination and calibration, which theorem shows how dimension affects these properties."
},
{
"confidence": 4,
"rating": 7,
"review_id": "y2wlD9XJMS",
"review_text": "The paper examines the trade-off between consistency and dimensionality in multi-class classification. It has been known that the lower bound on the dimension for consistent surrogate losses under any distribution is $n - 1$, wheren $n$ is the dimension of the input space. The authors propose the notion of partial consistency, which permits the establishment of surrogate losses at much lower dimensions than the input dimension. This method allows for the consistency of lower-dimensional surrogate losses under a low-noise condition and the construction of dimensionally reduced consistent surrogate losses across multiple problem instances.\n\n1. The paper is the first study to explore the control of the trade-off between consistency and dimension in multi-class classification.\n\n2. The paper is well-written and clear, even though it is theoretically dense.\n\n1. The paper demonstrates the existence of distributions under which consistency holds for low-dimensional surrogates but does not offer much guidance on how to identify such distributions for a given surrogate loss.\n\n2. The paper restricts its study and analysis to asymptotic guarantees.\n\n1. Theorem 5 establishes the existence of an $\\alpha \\in [0, 0.5)$ across distributions where consistency is maintained. Is it possible to determine or estimate the value of $\\alpha$ for a specific surrogate loss to guarantee consistency?\n\n2. The max hinge loss, formulated by Crammer and Singer (2001), is inconsistent when $\\alpha = 0$ and consistent when $\\alpha = 0.5$. Do the findings of this paper potentially include such a result as a particular example?\n\n3. The paper examines the asymptotic guarantees of Bayes consistency. Could some of these results be extended to incorporate non-asymptotic consistency guarantees, such as excess error bounds or H-consistency bounds?\n\n4. The low-noise condition mentioned in Line 228 can be considered as the multi-class version of Massart's noise condition. Is it possible to extend the analysis to encompass the broader Tsybakov noise condition?\n\n5. Could the authors provide further elaboration on how to choose an embedding in practice, based on the theoretical results?\n\n6. The results in the paper apply to multi-class classification. Is there any possibility of generalizing them to other scenarios, such as ranking?\n\n7. The use of $\\mathcal{H}$ to denote the hallucination region in line 164 could lead to confusion, as $\\mathcal{H}$ is commonly associated with the hypothesis set. Is there an alternative notation that could be used to avoid this ambiguity?\n\n8. The notion of partial consistency is mentioned in both the abstract and introduction, yet it is not formally defined in the main text. It seems that partial consistency suggests that surrogate reports may not correspond to a single distribution. A formal definition would likely enhance reader comprehension.\n\n9. The approach of using multiple problem instances and aggregating their outputs is intriguing. However, might this method be vulnerable to data imbalances across different problem instances? I am eager to hear the authors' opinions on this."
},
{
"confidence": 4,
"rating": 7,
"review_id": "Wz4FkCteG8",
"review_text": "In this paper, the problem of constructing consistent multiclass surrogate losses for the 0-1 loss while reducing the dimension of the scoring function is studied. The concept of partial consistency, which can be dated back to the study of multiclass SVM, is used as a crucial part of this work. It is first revealed that any losses with scoring function’s dimension less than #class number-1 will lead to severe misclassification. Then low noise assumption is used to enable the trade off between (partial) consistency and the scoring function’s dimension number. An attempt of recovering full consistency with several scoring functions of lower dimensions is also made.\n\n1. A clear trade off between the strictness of low-noise assumption and scoring function’s dimension number is quantified.\n\n2. The analysis of hallucinations is also enlightening, which provides an interesting insight of why some well-suited models (with some losses) can make wrong predictions even with no real-world evidence.\n\n3. The clear presentation and layout of the results make them easy for readers to understand.\n\nWhile the dimension of predictor is reduced and thus the computational cost of training can be promisingly reduced, the computational cost of the inference stage may increase compared with that of the traditional #class number-dimensional scoring functions. Is there any remedy for this problem?\n\nPlease see the weaknesses."
}
] | |
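To make the unit-cube discussion in the reviews above concrete: the standard low-dimensional construction embeds $n$ outcomes into vertices of $\{-1,1\}^d$ with $d \approx \log_2 n$ and decodes a surrogate report by its orthant. The sketch below illustrates that embedding with a plain sign-based link; it is a toy under our own assumptions, not the paper's construction (in particular, the paper's $\alpha$-dependent calibrated link differs from the naive sign rule shown here).

```python
import numpy as np

def cube_vertices(n):
    """Embed n <= 2**d outcomes as vertices v_y of the cube {-1, 1}**d."""
    d = max(1, int(np.ceil(np.log2(n))))
    return np.array([[1 if (y >> j) & 1 else -1 for j in range(d)]
                     for y in range(n)])

def sign_link(u, verts):
    """Map a report u in R^d to the outcome in (or nearest to) u's orthant."""
    s = np.sign(u)
    s[s == 0] = 1  # break ties on coordinate boundaries
    return int(np.argmax(verts @ s))  # vertex most aligned with sign(u)

verts = cube_vertices(8)  # 8 classes embedded in R^3 instead of R^7
print(sign_link(np.array([0.2, -1.3, 0.5]), verts))  # -> 5
```

The dimension saving (here $d=3$ for $n=8$) is exactly what rules out full consistency over all distributions, hence the low-noise relaxation the reviews debate.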
x9eFgahVBI | From Unstructured Data to In-Context Learning: Exploring What Tasks Can Be Learned and When | Large language models (LLMs) like transformers demonstrate impressive in-context learning (ICL) capabilities, allowing them to make predictions for new tasks based on prompt exemplars without parameter updates. While existing ICL theories often assume structured training data resembling ICL tasks (e.g., x-y pairs for linear regression), LLMs are typically trained unsupervised on unstructured text, such as web content, which lacks clear parallels to tasks like word analogy. To address this gap, we examine what enables ICL in models trained on unstructured data, focusing on critical sequence model requirements and training data structure. We find that many ICL capabilities can emerge simply from co-occurrence of semantically related word pairs in unstructured data; word analogy completion, for example, can provably arise purely through co-occurrence modeling, using classical language models like continuous bag of words (CBOW), without needing positional information or attention mechanisms. However, positional information becomes crucial for logic reasoning tasks requiring generalization to unseen tokens. Finally, we identify two cases where ICL fails: one in logic reasoning tasks that require generalizing to new, unseen patterns, and another in analogy completion where relevant word pairs appear only in fixed training positions. These findings suggest that LLMs' ICL abilities depend heavily on the structural elements within their training data. | https://openreview.net/pdf/fa1542b6c9fd42a5d63296cab95e6690e9d75499.pdf | [
{
"confidence": 2,
"rating": 6,
"review_id": "PiFfBlbDIw",
"review_text": "The paper basically presents three theoretical analyses related to ICL. The section 2 shows that we can use CBOW to do the (country)-(capital) kind of ICL. The section 3 shows that positional embeddings, multiple layers in autoregressive LM, and blocked noise structures are important for ICL. The section 4 shows that ICL could fail when there are systematic and consistent mismatches between the training sequence and testing sequence.\n\nI think this paper is easy to follow and most explanations are clear (One minor suggestion: it would be more clear to also illustrate the correct answer of each prompt and provide some brief explanations such as the prompt in section 3 tries to repeat the first letter of a word). I choose fair in the presentation rating because I feel that the paper oversells its contributions in the title and abstract.\n\nAll the claims are supported by both strong theoretical conclusions and empirical simulations. The theoretical contributions are novel to me but I am not a theoretical researcher. Since the situations/preconditions of the claims are extremely simple, I think its significancy is not high for practitioners, but the contributions might be significant for theoretical researchers and might inspire the follow-up work.\n\nI think the main weakness of this paper is the mismatch between scope it seems to cover and its actual scope. The title and abstract suggests that this paper tries to study why the ICL works well given the unstructured training data in practice, but what the paper actually did is thoroughly studying 3 toy situations. \n\nI understand that we often have to simplify the situations in order to get strong theoretical conclusions. I also think that, at least to me, it is difficult to derive those theoretical results in such simplified situations and all the toy situations are relevant to the ICL. Nevertheless, I think these situations are not very representative to most of the practical ICL settings. After all, most ICL is beyond just relying the co-occurrence statistics of the sentences like CBOW, finding the first letter of the word, and repeating some words in the context. \n\nI understand that nowadays, one paper often needs to oversell its scope in the title and abstract to get attentions. Hence, although I suggest that the authors can revise the main storyline to reduce the overselling, I am fine with the current title and abstract if this paper is accepted at the end. I am also not a researcher who studies theory, so I do not know how significant or novel these theoretical results are. Therefore, I would like to let other researchers who have better theoretical background to rate this paper.\n\nI have no specific question to the authors, so I think the rebuttal won't change my opinion. I won't lower my score if the authors choose to skip the rebuttal to my comments."
},
{
"confidence": 4,
"rating": 5,
"review_id": "HMWXtwxmMi",
"review_text": "This paper studies the emergence of in-context learning (ICL) in both CBOW (Mikolov et al,. 2013) and Transformer models. The focus is on simple synthetic settings that can be studied both theoretically and through synthetic experiments with small models. The paper identifies co-occurence as a key ingredient for ICL to emerge in CBOW. Then the paper considers how positional information is critical for a Transformer (or any) model to identify certain order-dependent patterns. Finally, the paper presents two sythetic scenarios involving repetition in which ICL fails with a simple model.\n\n- The paper begins by identifying synthetic scenarios in which co-occurrence within a sentence is sufficient for a continous bag-of-worlds (CBOW) model to be able to perform ICL. The paper proves two theorems identifying when co-occurrence statistics are sufficient to ensure that CBOW could perform ICL. \n- The paper then proves that positional information is required to perform certain sythetic tasks related to the ordering of tokens.\n- Finally, the paper identifies two sythetic settings in which one might expect ICL to work.\n- Sythetic experiments support each of the above claims.\n\n- The paper states that \"ICL is achievable by only modeling co-occurrence information using [CBOW]\". However, this seems to miss the generality with which the term ICL is used. That is, ICL is commonly used for generation tasks such as full-sentence machine translation (not just the simple token-level translation examples in this paper). So to say that \"ICL is achievable\" seems like a misuse of the terminology. Without a more careful definition of ICL, this statement is invalid.\n- After showing that Llama 2 is unable to use ICL to translate the word English words \"soon\" and \"main\" to Indonesian, the paper claims that \"these models should be equally likely to produce the correct answer for any given [word], irrespective of its relevance to the in-context examples. However, our experiment demonstrates that this is not the case\". This is a huge leap for a poorly designed experiment. Llama 2 was trained on 98.08% English data. The amount of Indonesian language data may have been miniscule. As such, co-occurence may offer an explanation for the result, but adjacency might be equally informative. To speak of co-occurrence without any discussion of adjacency seems a bit odd here. This same issue appears later in the paper's claim \"This suggests ICL may arise from co-occurrence information\", whereas a claim that it is informed by co-occurrence might be more apt.\n- It is not clear to this reader why one would expect the setting in Section 4.1 to succeed via ICL in the first place. For example, we also wouldn't expect these settings to suceeed if they were presented to a supervised learner either because of the mismatch between the training examples and the prompt example.\n- The paper relegates the entire 2.5 page related work section to the appendix. It would be better to include more in the main paper; at present only lines 25-32 in the Intro address prior work making it difficult to position this paper early on.\n\n- In line 258, the paper claims that \"each token in V should be present as the first token in both the training and test sets.\" But shouldn't we be interested in whether this is really required in the largest of LLMs? Is there any way to connect this result back to larger models?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "TfFuZbwyy8",
"review_text": "The paper studies the emergence of ICL using a synthetic setting. Particularly, it focuses on the importance of concurrence statistics to ICL, and shows that under some simplified conditions, a CBOW-styled model is proven to complete the correct completion for an ICL example. The paper additionally proves the importance of position encodings in the studied setting, showing that when the ICL task is inherently task dependent, position encodings is necessary for good performance.\n\nThe paper studies an important problem. The approach---reconstructing LM behavior in much \"shallower\" models---is intriguing and can be applied to additional problems concerning LMs. The technical claims are well presented and the paper is overall very readable.\n\nThe main weakness is that the paper studies a very synthetic setting. I understand some simplification are needed for the derivation of theoretical results, and this is OK. But, for example, it would be interesting to try deriving results on cases where the input consists of valid grammatical sentences, rather than a concatenation of tuples. If that is not possible, the paper should clearly state the disparity between \"real\" ICL setting and this setting. While LMs can be presented with tuples in inference time, they are usually not trained on such tuples, but rather on free form language.\n\nThe experimental results use cross entropy loss rather than the squared loss used in the theory. Why?"
},
{
"confidence": 2,
"rating": 6,
"review_id": "knEYn209q2",
"review_text": "The paper investigates the emergence of ICL from training on unstructured data. It explores two types of ICL tasks: the first involves input-output pairings that frequently co-occur within sentences, and the second comprises recognizable patterns that do not commonly co-occur. The authors demonstrate that the first task can be addressed by modeling co-occurrence information, and highlight the importance of positional information and blocked noise structures through the second task. Additionally, the paper discusses scenarios where ICL fails. Both theoretical and experimental evidence are provided in the paper.\n\n- It enhances understanding of how the structure of training data influences the emergence of ICL capabilities\n- the paper provides a mix of theoretical proofs and empirical validations to support its claims\n\n- There is a lack of experiment details in the paper, such as the number of training sentences used, the frequency of each input-output pair's repetitions within training sentences, and the methodology for generating training and evaluation data.\n- The scope of the experiments is limited, using small datasets and simplistic model architectures. Moreover, there is an absence of real-world data.\n- There is uncertainty about whether the findings would scale well to complex real-world data, larger models and higher embedding dimensions.\n\nCan you add examples of training sentences and prompts for experiments?"
}
] | |
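A toy version of the co-occurrence claim debated in the reviews above: if semantically related pairs co-occur within training sentences, even a raw count-based scorer completes analogy-style prompts. Everything below (vocabulary, sentences, scoring rule) is invented for illustration; the paper's actual setting trains a CBOW model rather than tabulating counts.

```python
from collections import Counter

# Hypothetical training sentences: concatenated (x, y) pairs, so related
# words co-occur within a sentence (mimicking the paper's synthetic setup).
sentences = [["france", "paris", "germany", "berlin"],
             ["japan", "tokyo", "france", "paris"],
             ["germany", "berlin", "japan", "tokyo"]]

cooc = Counter()
for s in sentences:
    for i, a in enumerate(s):
        for b in s[i + 1:]:
            cooc[(a, b)] += 1
            cooc[(b, a)] += 1

def complete(prompt):
    """Score candidate completions purely by co-occurrence with the query."""
    query = prompt[-1]
    vocab = {w for s in sentences for w in s}
    return max(vocab - set(prompt), key=lambda w: cooc[(query, w)])

print(complete(["france", "paris", "japan"]))  # -> 'tokyo'
```

This mirrors rather than proves the paper's point: co-occurrence alone handles the analogy task, while order-sensitive tasks like the first-letter task in Section 3 cannot be solved this way without positional information.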
x7usmidzxj | On Convergence of Adam for Stochastic Optimization under Relaxed Assumptions | In this paper, we study Adam in non-convex smooth scenarios with potential unbounded gradients and affine variance noise. We consider a general noise model which governs affine variance noise, bounded noise, and sub-Gaussian noise. We show that Adam with a specific hyper-parameter setup can find a stationary point with a $\mathcal{O}(\text{poly}(\log T)/\sqrt{T})$ rate in high probability under this general noise model where $T$ denotes total number iterations, matching the lower rate of stochastic first-order algorithms up to logarithm factors. We also provide a probabilistic convergence result for Adam under a generalized smooth condition which allows unbounded smoothness parameters and has been illustrated empirically to capture the smooth property of many practical objective functions more accurately. | https://openreview.net/pdf/9eb7ec4036b150c6641f5c68c16748db9695dbd6.pdf | [
{
"confidence": 5,
"rating": 7,
"review_id": "HfzLk8kYm1",
"review_text": "This paper provides two probabilistic convergence rates for Adam with generalized affine variance noise under smoothness and generalized smooth condition, respectively, which achieves comparable results to many prior results.\n\nPlease see the above Summary.\n\n1. I suggest that authors should provide detailed formulas for some notations including $\\textbf{g}_t, g(\\textbf{x}), \\nabla f(\\textbf{x})$. Is $\\nabla f(\\textbf{x})$ the gradient with the form of expectation.\n\n2. In line 118, the reference [10] is cited repeatedly.\n\n3. Section 5 is used to discuss the most related works and make comparisons with the main results in this paper. However, authors only discuss the most related works without any comparison with their main results. \n\n4. As mentioned in 1., if $\\nabla f(\\textbf{x})$ is the gradient with the form of expectation, the two results (Theorems 3.1 and 4.1) in this paper are not fully high probability since $\\frac{1}{T} \\sum_{t=1}^T ||\\nabla f(x_t)||^2$ is equivalent to $\\frac{1}{T} \\sum_{t=1}^T ||E_{z_i}[\\nabla f(x_t,z_i)]||^2$ ($z_i$ is the training data defined by me) which is smaller than $\\frac{1}{T} \\sum_{t=1}^T E_{z_i}[||\\nabla f(x_t,z_i)||^2]$. In other words, the results with the form $\\frac{1}{T} \\sum_{t=1}^T E_{z_i}[||\\nabla f(x_t,z_i)||^2]$ can directly derive the corresponding results with the form $\\frac{1}{T} \\sum_{t=1}^T ||E_{z_i}[\\nabla f(x_t,z_i)]||^2$, to say nothing of the one with additional high probability. Therefore, from my perspective, high probability is not an advantage for this paper, but rather weakens $\\frac{1}{T} \\sum_{t=1}^T ||E_{z_i}[\\nabla f(x_t,z_i)]||^2$.\n\n1. What is the symbol $\\xi$ in Equation (1)? It should be explained in the main text.\n\n2. What are the meanings of “right parameter” mentioned in line 4 (Abstract) and “problem parameter” in line 19?\n\n3. Although, in line 139, authors denote “it’s easy to verify that (8) is strictly weaker than L-smoothness” and provide a concrete example $f(x)=x^4$, can detailed proof be given to verify this argument?\n\n4. In Table 1, authors make comparisons with some prior work. However, the forms of some results (such as [33, 40, 49]) are not in the same form as the average form $\\frac{1}{T} \\sum_{t=1}^T ||\\nabla f(x_t)||^2$ in this paper. Therefore, are these comparisons appropriate?"
},
{
"confidence": 5,
"rating": 6,
"review_id": "Y1aZkavbI5",
"review_text": "In this paper, the authors analyze the convergence of Adam under milder noise conditions (affline variance) and milder smoothness conditions (both $L$-smoothness and $(L_0,L_q)$-smoothness) and propose a $O(\\text{polylog}(T)/\\sqrt T)$ convergence rate.\n\nThis paper analyses the convergence of Adam under milder smoothness conditions compared to the previous work. The result is relatively solid and convincing. The writing structure is also relatively clear.\n\n1. As the author claimed in their paper, they did not provide numerical experiments in this paper. While this paper is a theoretical paper focusing on the convergence analysis of Adam, some simple numerical experiments aligning with the results will make it more convincing.\n2. This paper exhibits a slight lack of novelty. Since after checking out the proof details, I found that the crucial techniques were almost proposed by the previous related works. However, this weakness is trivial, especially for a theoretical paper, and as I claimed in the Strength part, the result of this paper is solid.\n\n1. I suggest the authors could recall the readers of the definitions in the proof part, since numerous variables are introduced for proof, like $\\mathcal{G}_t$, $M$, $\\hat M$, $a_t$, $b_t$ and so on. It's a little inconvenient to check the definition in the previous pages each time.\n2. Since coordinate-wise calculations are commonly used in the proof, I suggest the authors could also consider demonstrating their results based on the $L_\\infty$ smoothness condition, as discussed in [1]. Also, I wonder about the difference of $(L_0,L_q)$-smoothness and local smoothness. \n3. For lemma B.2, I happened also to use this result before and I suggest the author cite [2], as I found this result in lemma A.2 of [2].\n4. How do the authors use the Cauchy-Schwarz inequality in the third inequality of line 740? Is this simply derived from $ab \\leq 1/4a^2 + b^2$ ? (Here $a, b$ are both scalars).\n5. In formula (58), line 557, where the last $\\sqrt{\\log}$ term comes from?\n6. What's the meaning of formulas (59) and (60) since $G^2 \\sim O(\\text{polylog}T)$ has been claimed in formula (7)\n\n[1] Balles, Lukas, Fabian Pedregosa, and Nicolas Le Roux. \"The geometry of sign gradient descent.\" arXiv preprint arXiv:2002.08056 (2020).\n\n[2] Zou, Difan, et al. \"Understanding the Generalization of Adam in Learning Neural Networks with Proper Regularization.\" The Eleventh International Conference on Learning Representations."
},
{
"confidence": 3,
"rating": 6,
"review_id": "MmXOEDAXVs",
"review_text": "This paper studies the high-probability convergence of Adam in the non-convex setting under relaxed assumptions. The authors consider a general noise condition that governs affine, sub-Gaussian, and bounded noise conditions. They also consider a generalized smoothness condition motivated by language model experiments. Under these assumptions, they obtain a convergence rate of $\\text{poly}(\\log T/\\delta)/T$, where $T$ is the number of iterations and $\\delta$ is the confidence level.\n\n1. Their result look novel and significant. They have shown the high-probability convergence of Adam under relaxed conditions than all previous papers.\n2. The proofs look correct. \n3. The paper is well-written and results are clearly presented.\n\n1. One major concern is, by choosing $\\beta_2=1-1/T$, does the author essentially reduce Adam to SGD with momentum, as this makes $v_t$ almost a constant? Btw, I think for [18] and [23] in Table 1, $\\beta_1$ should be $1-1/\\sqrt{T}$. Please also check other rows more carefully.\n\nI will increase the score if this concern is addressed.\n\n1. The term “affine variance noise\" is confusing to me. I think it should only refer to Equation (2), not Equation (3), as \"variance\" is defined as an expectation. If I understand correctly, the term in Line 3 refers to (3), which means the condition (A3) is actually stronger than (2), right?\n2. Why in Table 1 you did not include [19] which you discussed in Section 5.1?\n3. The rate in Theorem 4.1 is dimension dependent, whereas the rate in Theorem 3.1 is dimension free. Do you think it is something fundamental in the relaxed smoothness condition?"
}
] | |
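The inequality invoked in point 4 of review HfzLk8kYm1 above (and referenced there as displayed below) is Jensen's inequality applied to the stochastic gradient; with $z$ denoting a training sample, a notation the reviewer introduces:

```latex
\left\| \nabla f(\mathbf{x}_t) \right\|^2
  = \left\| \mathbb{E}_{z}\big[ \nabla f(\mathbf{x}_t, z) \big] \right\|^2
  \le \mathbb{E}_{z}\Big[ \left\| \nabla f(\mathbf{x}_t, z) \right\|^2 \Big].
```

Averaging over $t = 1, \dots, T$ preserves the inequality, so a bound on $\frac{1}{T}\sum_t \mathbb{E}_z[\|\nabla f(\mathbf{x}_t, z)\|^2]$ immediately yields the same bound on $\frac{1}{T}\sum_t \|\nabla f(\mathbf{x}_t)\|^2$, which is the sense in which the reviewer calls the latter the weaker quantity.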
x7pjdDod6Z | MeshFormer : High-Quality Mesh Generation with 3D-Guided Reconstruction Model | Open-world 3D reconstruction models have recently garnered significant attention. However, without sufficient 3D inductive bias, existing methods typically entail expensive training costs and struggle to extract high-quality 3D meshes. In this work, we introduce MeshFormer, a sparse-view reconstruction model that explicitly leverages 3D native structure, input guidance, and training supervision. Specifically, instead of using a triplane representation, we store features in 3D sparse voxels and combine transformers with 3D convolutions to leverage an explicit 3D structure and projective bias. In addition to sparse-view RGB input, we require the network to take input and generate corresponding normal maps. The input normal maps can be predicted by 2D diffusion models, significantly aiding in the guidance and refinement of the geometry's learning. Moreover, by combining Signed Distance Function (SDF) supervision with surface rendering, we directly learn to generate high-quality meshes without the need for complex multi-stage training processes. By incorporating these explicit 3D biases, MeshFormer can be trained efficiently and deliver high-quality textured meshes with fine-grained geometric details. It can also be integrated with 2D diffusion models to enable fast single-image-to-3D and text-to-3D tasks. **Videos are available at https://meshformer3d.github.io/** | https://openreview.net/pdf/0137993914b1c34b105ba8ce5545d99389e3b12a.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "9U13nRPH0n",
"review_text": "This paper introduces MeshFormer, a sparse-view reconstruction model designed to generate high-quality 3D textured meshes from sparse RGB images and their corresponding normal maps. By leveraging voxel representation, 3D inductive biases, SDF loss, and normal information, the model shows comparable inference performance to concurrent methods, while the entire training process can be completed using only 8 GPUs within a week (concurrent methods typically require around 100 GPUs). Experimental results demonstrate the effectiveness of the design.\n\n1. The authors provided a detailed explanation of the motivations behind the model designs (including the introduction of voxel representation, the introduction of 3D full (or sparse) convolution, and so on) and demonstrated the reasonableness of these choices.\n\n2. Compared to baseline methods, this model is simpler to train and demonstrates better qualitative and quantitative results. \n\n3. The ablation study demonstrates the effectiveness of normal input, SDF supervision, geometry enhancement, and other methods proposed in the paper.\n\n1. Although the authors provide detailed textual descriptions in the method section, it would be better if more mathematical symbols and equations were used, which could explain the entire pipeline more clearly and unambiguously.\n\n2. For reproducibility, the authors should provide more implementation details, including a more detailed model architecture, the values of hyperparameters (e.g., \\lambda in the loss function), and other relevant information.\n\n3. The authors don’t report the comparison of inference time and memory usage between the proposed model and the baseline models.\n\n1. Can the normal maps of the mesh be completely consistent with the normal maps predicted by the model after the post-processing algorithm?"
},
{
"confidence": 5,
"rating": 8,
"review_id": "Tt47afhkrP",
"review_text": "The paper proposes a high-quality feed-forward 3D object reconstruction method from sparse view RGB images. It uses an explicit voxel structure for better geometric inductive bias, auxiliary inputs such as 2D diffusion generated normal images and SDF representation for better geometric details, and an end-to-end trainable pipeline that eliminates the need for multi-stage refinement. The method gives high quality reconstruction results, especially in terms of fine-grained and smooth geometry.\n\n1. Although the network architecture and 3D representations are more complicated than previous methods, they are end-to-end differentiable and alleviate the training burden of multi-stage refinement.\n2. The idea of using 2D diffusion generated normal images as input to the reconstruction pipeline is interesting and insightful.\n3. It is more computationally efficient to train (Line 73).\n4. The qualitative results are impressive, especially the mesh normals.\n\n1. In original LRM the only supervision signal needed is RGB images. The proposed method, however, needs access to the full 3D shape for supervising the occupancy. It is fine for hand-made 3D assets but might poses some difficulty when trying to scale to real datasets.\n\n1. Table 3 row (a) shows the impact of normal input. When you remove the normal input, do you also remove the normal output and the normal loss? I ask this because in section 3.3 you say learning from RGB to geometric details directly can be difficult, so it makes more sense to just remove the normal input but preserve normal supervision to compare."
},
{
"confidence": 5,
"rating": 7,
"review_id": "U9Seowt9eX",
"review_text": "In this work, the authors propose a sparse view reconstruction model that utilizes a set of images (with camera poses) and corresponding normal maps to produce a reconstructed textured mesh. The primary contribution lies in adopting voxel-based 3D representation and employing a network architecture that integrates both 3D convolution and attention layers. Moreover, direct geometry supervision (SDF loss) is applied during the training process, alongside rendering-based losses. Experimental results demonstrate that the generated 3D shapes achieve state-of-the-art performance when compared to existing works on the single-view to 3D task.\n\nHowever, as highlighted in the weakness section, there are potential misclaims regarding the technical contributions. It is highly recommended to revise the manuscript to cite and discuss these related works. Despite this, I am currently inclined towards accepting the paper and would be happy to champion it if the aforementioned issues are addressed in the final version.\n\n- The writing is clear and easy to follow.\n- The combination of SDF loss and rendering losses appears novel for training a feed-forward based network. Additionally, the ablation study in Table 3(b) clearly indicates that SDF supervision is crucial for achieving good geometry, as evidenced by the significant CD difference between (b) and (g).\n- Although [33] has explored using normal maps for the reconstruction task, it seems new to employ normal maps as inputs and supervision for a feed-forward reconstruction network.\n- Experimental results demonstrate state-of-the-art performance over existing baselines, as shown in Table 1 and Figure 3. Furthermore, it is illustrated that existing methods cannot achieve similar performance given the same computational resources (Table 2).\n- The ablation study confirms that various components are essential for the final performance, including considering normal input and SDF supervision.\n\nPossibly Misclaimed Technical Novelties:\n\nHowever, the current manuscript may contain several misclaims regarding its technical novelties.\n\nOne claimed novelty is the adoption of a 3D voxel representation. However, the use of 3D voxel-like volumes in reconstruction is not a new idea and has been well-explored in various works, including:\n\nA. Generalized Deep 3D Shape Prior via Part-Discretized Diffusion Process, CVPR 2023\n\nB. SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation, CVPR 2023\n\nC. Locally Attentional SDF Diffusion for Controllable 3D Shape Generation, SIGGRAPH 2023\n\nD. One-2-3-45++: Fast Single Image to 3D Objects with Consistent Multi-View Generation and 3D Diffusion, CVPR 2024\n\nE. Make-A-Shape: a Ten-Million-scale 3D Shape Model, ICML 2024\n\nAdditionally, the use of convolution + transformer layers to process grid input seems to be standard procedure in 2D generation tasks, as seen in:\n\nDiffusion Models Beat GANs on Image Synthesis, NeurIPS 2021\n\nSimilar architectures have also been widely adopted in some of the aforementioned 3D reconstruction works, such as [A, C, D, E].\n\nRegarding image conditioning, the cross-attention with image patch features is also well-explored in various works mentioned above, such as [C, D, E].\n\nSome suggestions: \n- Considering the above existing and concurrent works (Weakness Section), it is difficult to be convinced that some of the proposed modules are novel. 
It is highly recommended to cite and discuss the differences with these prior works and adjust the claims accordingly.\n- Although it is acknowledged in the limitation section that the reconstruction performance will be affected by the errors of 2D models, it is recommended to include this as one of ablation case in Table 3 to better visualize this limitation.\n- Furthermore, as no real-world images have been tested within the proposed framework, it is advisable to avoid from using the term \"open-world\" (L384) to describe the current framework in order to prevent overclaims."
},
{
"confidence": 5,
"rating": 7,
"review_id": "dp7NSayvyW",
"review_text": "This paper proposes an improved framework for feed-forward reconstruction models. The authors advocate a number of improvements over the initial design of Large Reconstruction Model, including model architecture and training schemes. Experiments show that the method reconstructs better geometry and texture on Google Scanned Objects and OmniObject3D datasets.\n\n- The paper is focused on ablating different components for feed-forward sparse-view reconstruction, and in-depth analyses are provided for each design choice. Although there are no complicated new method proposed, such analysis bring value for understanding how and why each component works.\n- The proposed method is evaluated on (preprocessed) real-world multi-view datasets, showing improvements over baselines on all metrics. Extensive ablative analyses are also provided to better understand the behaviors of the proposed method.\n\n- Since this is more of an analysis paper, it would be good if the authors could also document the other components that were tried/ablated but did not see significant differences.\n- Since training resources was discussed and compared, it would be nice if there could be an analysis on the mesh generation/reconstruction quality over training time.\n\nPlease see the questions in the weakness section."
},
{
"confidence": 5,
"rating": 7,
"review_id": "W4hA3ADSDk",
"review_text": "In this work, the authors propose MeshFormer, a sparse-view reconstruction model that explicitly leverages 3D native structure, input guidance, and training supervision. They leverage 3D sparse voxels as their representation and combine transformers with 3D (sparse) convolutions to inject 3D prior. Additionally, they propose to take the corresponding normal maps together with sparse-view RGBs as input and also generate them as output, which could be used for geometry enhancement. Extensive experiments show that MeshFormer can be trained efficiently and outperforms state-of-the-art methods in terms of generating high-quality textured meshes.\n\n- MeshFormer is able to generate high-quality textured meshes with fine-grained geometric details.\n\n- The authors find that using normal images together with RGB images greatly helps in predicting geometric details. Additionally, the model outputs a normal map, which can be used for geometry enhancement.\n\n- The proposed method explicitly leverages 3D native structure, input guidance, and training supervision, resulting in faster convergence speed and better geometric details.\n\n- Pixel-based 2D methods (e.g., LGM) can preserve thin details, while 3D-based methods often smooth these details. How do you justify that? For example, in Figure 3 Column 4, the loose thread of the toy is captured by LGM, while MeshFormer ignores it.\n\n- The proposed name \"VoxelFormer\" seems improper to me. It seems more like a 3D UNet with a deep bottleneck composed of multiple transformer layers.\n\n- The projection-aware cross-attention layer projects 3D voxels onto the m views to interpolate m RGB and normal features. However, in the object case, one 3D voxel usually only corresponds to one view (due to occlusion). This cross-attention is projection-aware but not truly 3D-aware. Have you tried some occlusion-aware attention in your sparse model? Since you already have the coarse structure of the object, it could be used to filter out unneeded features.\n\n- According to Table 3 (d), you mention \"we replace the cross-attention with simple average pooling and observe a significant performance drop.\" Could you also try max-pooling? Additionally, do you concatenate the 3D feature voxel at every level of the network, as done in One-2-3-45++?\n\n- Do you use a shared backbone (trainable DINOv2) for both RGB and normal images? Do you use Plücker embedding here? \n\n- Could you provide a more detailed description for the Sparse VoxelFormer architecture? For example, how many sparse convolution layers are used in each resolution?\n\n- Instead of joint training, have you tried splitting the dense model and sparse model for two-stage training?\n\n- The output voxel resolution is $256^3$, while the SDF supervision is $512^3$. I notice that there is an interpolation step in Figure 2. It would be better to add a short text description for this.\n\n- Do you use GT multi-view normals for the teaser? If you use the GT normal images, please include that in the caption.\n\n- I suggest discussing XCube [a] in your literature review. XCube also utilizes sparse voxels as their 3D representation and leverages 3D sparse UNet with transformer layers. Additionally, they generate 3D shapes in a coarse-to-fine manner and use tiny MLPs to predict various attributes, such as normals, semantics, and SDF.\n\n[a] XCube: Large-Scale 3D Generative Modeling using Sparse Voxel Hierarchies. CVPR 2024."
}
] |
x7AD0343Jz | Limits of Transformer Language Models on Learning to Compose Algorithms | We analyze the capabilities of Transformer language models in learning compositional discrete tasks. To this end, we evaluate training LLaMA models and prompting GPT-4 and Gemini on four tasks that demand learning a composition of several discrete sub-tasks. In particular, we measure how well these models can reuse primitives observable in the sub-tasks to learn the composition task. Our results indicate that compositional learning in state-of-the-art Transformer language models is highly sample inefficient: LLaMA requires more data samples than relearning all sub-tasks from scratch to learn the compositional task; in-context prompting with few samples is unreliable and fails at executing the sub-tasks or correcting the errors in multi-round code generation. Further, by leveraging complexity theory, we support these findings with a theoretical analysis focused on the sample inefficiency of gradient descent in memorizing feedforward models. We open source our code at https://github.com/IBM/limitations-lm-algorithmic-compositional-learning. | https://openreview.net/pdf/b5c608c4483d2ed10fc624c08dd00561cddf4f4c.pdf | [
{
"confidence": 3,
"rating": 4,
"review_id": "0txzQgon0j",
"review_text": "This paper studies whether transformers can efficiently learn compositional discrete tasks. In particular, the paper introduces two new tasks: pointer execution neighbor and pointer execution reverse multicount as well as using multiplication and highest subsequence sum from prior work. First, small models are trained from scratch, showing substantially slower learning on the composition than on the subtasks. Next, API models are prompted to solve the same tasks and perform somewhat poorly. Some theory is also provided showing how models that memorize can struggle to learn compositions efficiently.\n\n1. The paper proposes an interesting question as to whether we can determine whether a language model has some higher level concept of task composition that allows it to learn compositions of previously learned tasks efficiently. \n\n2. The paper includes a nice theoretical result via a complexity theory reduction that shows how composition is hard if we assume stylized models that memorize the training data.\n\n1. H1 as written cannot be disproven empirically since the \"constant\" could just be larger than those tested. It seems in the experiments \"constant\" means 100. If that is what is meant, then just say so in the hypothesis. \n\n2. It is not clear if the notion of \"sub-task\" is somehow atomic and unique. This makes hypothesis H2 and H3 somewhat ill-defined too. It is possible that there are different sub-tasks (and perhaps many more of them) that better track how the model actually learns. Just because we can posit one way to compositionally solve the task does not mean that the model will learn that way (or even that it can necessarily represent that composition). \n\n3. It is not clear why the new tasks are necessary or what specifically they add over prior, simpler tasks. There needs to be more justification of the somewhat complicated tasks to explain why they are necessary. Generally the presentation of these tasks and the results was unclear and could use improvement to make it more visually clear how matches and transitions are meant to happen in the tasks and more precisely what all the baselines are doing in the experiments. \n\n4. It is not clear why one would expect an untrained small (150m) language model to somehow be able to compose subtasks without being trained to perform composition. As such, the results that the composition does not just arise and indeed takes longer to learn is not surprising. \n\n5. I am somewhat worried that the way the strings representing the nodes are designed is interacting badly with the tokenizers in the API experiments. These are clearly \"out of distribution\" types of words and they may be tokenized in ways that make it very difficult to solve the task. Did you do any analysis of how these strings get tokenized? The tokenizers are publicly available. Also, it is difficult to fit this section into the story of the paper since there is no comparison to the learning of the subtasks.\n\nSee weaknesses."
},
{
"confidence": 2,
"rating": 6,
"review_id": "9DIRWfvxxd",
"review_text": "This paper focuses on analyzing the transformer language models' learning and transferability on compositional discrete tasks. Specifically, it has four hypothesis, and the author studies for a variety of language models, whether does these hypothesis hold. \nH1. An LLM can learn to perform a compositional task with constant number of datapoints. \nH2. An LLM can learn to perform a compositional task, given as many sample as the most difficult sub-task required.\nH3. An LLM can learn to perform a compositional task, given the data samples of relearning all sub-tasks for learning the composition.\nH4. An LLM can learn to perform a compositional task, given more data samples in H3.\nThe authors introduces a new benchmark for creating systematic sub-tasks and testing compositionally.\nWith LLaMA model, H4 holds; with both GPT-4 and Gemini, using H1 (prompting) fails to perform the tasks, or multi-round code generation with COT technique.\n\nOriginality: 3.5 / 5\n\nThis paper examines how the number of datapoint samples affects the learning of compositional tasks in existing transformer-based large language models (LLMs). The authors created a new, challenging compositional dataset based on computation graphs and demonstrated that learning compositional tasks with LLMs is highly data-inefficient. While the BIG-bench paper indicates the insufficiency of reasoning and compositional abilities in LLMs, this paper innovatively provides a concrete, quantitative, and extensive study on the extent of this insufficiency.\n\nQuality: 3.5/5\n\nThe empirical study is pretty extensive with both LLaMA, GPT-4 and Gemini. There are multiple prompting techniques adopted with GPT-4 and Gemini, all of them fails to generate a reliable result. There are also very interesting theotrical proofs in the appendix to bolster the authors' claims. \n\nClarity: 3/5\n\nFigure 1 is hard to understand just by staring at the graph. For each task, it only provides one example which is non-trivial at all. One can hardly figure out its ground truth program for each example, and whether in a task of PE, is the underlying program the same across all the datapoints. I believe a descriptive caption by the side of each task is necessary. For example, PE refers to a program that takes a sequence of words and returns a list of words all colored green, where the first output word matches the first input word, and any subsequent output word starts with the last two characters of the previous word. However, the figures and tables in the experimental section are pretty clear and helpful to understand. \n\nSignificance: 2.5/5\n\nUnderstanding the problem of the data inefficiency in transformer based LLMs is important to the community which focuses on data efficiency and reasoning, such as neuro-symbolic community.\n\nAs stated in the strengths above. One of the main issue is the clarity issue of the tasks. Besides \"what is the task\", I also want to understand \"why these two tasks are needed\". What do PEN and PERM these two datasets bring?\n\nQ1. Is the PEN dataset only corresponding to one underlying program? \nQ2. What are the insights of PEN and PERM these two datasets?"
},
{
"confidence": 4,
"rating": 7,
"review_id": "Q2YOdZspie",
"review_text": "This paper evaluates the compositional learning abilities of Transformer-based models with LLaMA-like architecture on tasks requiring the composition of several discrete sub-tasks. To this end, the paper reuses two existing compositional algorithmic tasks and introduces two new ones, focusing on how many samples are needed for models to learn to compose the sub-tasks compared to the sample efficiency of learning the sub-tasks themselves. The study measures the efficiency of models when trained from scratch and the effectiveness of prompting the pretrained language models GPT-4 and Gemini. The experiments suggest that hypotheses that compositional learning requires no more samples than the sum of samples needed for each subtasks should be rejected. The paper also performs few-shot prompting with GPT-4 and Gemini with different prompting techniques to investigate their ability to learn to compose or decompose algorithms in-context and find that they are unreliable for executing sub-tasks or correcting errors in multi-round code generation. Finally, the paper uses complexity theory to support these findings, suggesting that when training feedforward models to memorize information with gradient descent, the sample inefficiency is inevitable.\n\n1. Aside from achieving state-of-the-art performance on many academic benchmarks, transformer-based language models are the undisputed workhorse for numerous real-world applications. However, their training does not necessitate compositional learning explicitly, while many of the tasks they are tasked at solving do require such capability. As such, understanding the limits and requirements for these models to learn to compose independent skills is key to drive our understanding of these foundational models and to improve them. \n2. The analyzed tasks in the paper are very well defined to verify that a model that learns a task must know how to perform the subtasks, and that given capability to solve the subtasks, a model must only learn to compose these abilities to solve the task itself. Creating such settings is not trivial, and goes a long way to enhance our understanding of the compositional learning abilities of transformer models.\n3. The paper provides a very thorough literature review and contextualizes the work around prior work very well.\n4. The presentation of the paper is generally very nice, making a technical read easier and fluent.\n\n1. The authors correctly identify tokenization as a possible negative confounder in the defined testbed, and thus use character-based tokenization for the training experiments. However, the same care is not taken when investigating the abilities of GPT4 and gemini to perform in-context-learning. Namely, given the highly synthetic nature of the inputs, it is highly possible that both the out-of-domain distribution of these inputs (deviating from natural texts) as well as differences in how inputs are tokenized (for example, one key can be tokenized with a single token, another by three tokens, and a third be tokenized along with parts of the corresponding value) confounds and skew the results, hindering their usefulness.\n2. Moreover, while the authors study many prompting techniques to evaluate GPT4 and Gemini, they use a constant 8-shot prompting. It is known that these models can benefit greatly from increased number of demonstrations, and has been shown that for out-of-domain tasks, one has to prompt the model with significantly more than 8 prompts to perform well (e.g. 
Levy et al., 2022, Bertsch et al. 2024, Agarwal et al. 2024)\n3. The proposed testbed is very well defined, but a single transformer based model is being studied. Understanding the contextualization of the results given difference in the model and tasks properties (for example width-depth ratio, scaling behavior with respect to parameter count or effect length of the inputs) would be very beneficial.ֿ\n\n1. Can you imagine similar experiments with more natural compositional tasks that can be learnt, and then used to benchmark SOTA LLMs in the settings they were designed and trained for? For example, do you think it is possible to use something similar to the unobserved local structures as proposed by Bogin et al. (2022) to create such settings?\n2. Can you try repeating the experiments with clear instruction for GPT4/Gemini but using as keys only tokens in their respective vocabularies separated by e.g ‘-‘ so that the tokenization and highly synthetic nature of the task would have less detrimental effect on the results?\n3. What are the exact specifications of the trained model? You mentioned it is 150M parameters, does that mean it is a 12-layer decoder only model? What was the training procedure in terms of hyperparameters? Do you see any scaling behaviors a-la Kaplan et al., 2020? For example, does using a larger model decrease the amount of samples needed? Also, does the depth of the model affect the ability to learn to compose subtasks?\n4. Presentation suggestion - in a few places in the text, it would be very helpful for the reader to be able to read the descriptions if they were coupled with examples.\n 1. In section 2.2 you define sub-tasks in the computation graphs. I think including a small demonstration of such a computational graph, along with toy examples of sub-tasks and tasks would go a long way to make this section clearer and improve the smoothness of the reading.\n 2. In Section 3 you define and explain the different tasks and subtasks of your testbed. While Figure 1 is very nice and contributes a lot, it does not explicitly show the different sub-tasks or the procedures involved in deriving the output from each input, and in some cases (for example the counts in PERM) it may take time for the reader to understand what are the exact tasks. I think a more explicit demonstration of the procedure would be very helpful. It can either be a step-by-step demonstration of the procedure for each task (added in the appendix for brevity), or even a textual derivation of the procedure applied to clarify the operations being performed at every step.\n 3. In section 4, it will be very useful to add a table with an example input and output used for each task and their statistics (e.g. length, histogram on number of neighbor steps needed etc). If the inputs and outputs in Figure 1 are representative, you can also say that directly and point there."
},
{
"confidence": 2,
"rating": 4,
"review_id": "iOUHVqAS2M",
"review_text": "The paper investigates the capabilities of Transformer-based language models in learning compositional discrete tasks. The authors evaluate both training LLaMA models and prompting GPT-4 and Gemini-Pro on tasks that require the learning of compositions of several discrete sub-tasks. The results indicate that these models exhibit significant sample inefficiency: LLaMA models require more data to learn compositional tasks than to relearn all sub-tasks from scratch, and in-context prompting with GPT-4 and Gemini is unreliable and often fails in multi-round code generation. The findings are supported by a theoretical analysis showing the sample inefficiency of gradient descent in memorizing feedforward models.\n\n- The paper evaluates both training from scratch and in-context prompting methods, providing a thorough analysis of the models' capabilities.\n- The authors introduce new algorithmic tasks designed to test compositional learning and providing a theoretical framework to support the empirical findings.\n- The study offers a deep dive into the limitations of current LLMs, supported by both empirical data and theoretical arguments, which can guide future research in learning compositional tasks.\n\n- The tasks and settings used in the experiments may not cover the full range of real-world applications, limiting the generalizability of the findings. \n- The performance and conclusions drawn are heavily dependent on the specific tasks designed by the authors, which might not fully represent other compositional learning scenarios.\n- Personally, it took me a while to understand how the given algorithmic tasks are designed and how they relate to the broader context of compositional learning. For instance, the `PERM' problem was not immediately intuitive to me.\n\n- How well do the findings translate to practical, real-world applications beyond the synthetic tasks used in the experiments? Any specific reason to introduce new algorithmic tasks for evaluation?\n- Would the models perform differently on a broader variety of compositional tasks, particularly those that are more complex or domain-specific?\n- What specific modifications to model architecture or training strategies could be employed to enhance the sample efficiency of Transformer models in compositional learning?"
}
] | |
x69O84Df2G | Multi-Reward Best Policy Identification | Rewards are a critical aspect of formulating Reinforcement Learning (RL) problems; often, one may be interested in testing multiple reward functions, or the problem may naturally involve multiple rewards.
In this study, we investigate the _Multi-Reward Best Policy Identification_ (MR-BPI) problem, where the goal is to determine the best policy for all rewards in a given set $\mathcal{R}$ with minimal sample complexity and a prescribed confidence level. We derive a fundamental instance-specific lower bound on the sample complexity required by any Probably Correct (PC) algorithm in this setting. This bound guides the design of an optimal exploration policy attaining minimal sample complexity. However, this lower bound involves solving a hard non-convex optimization problem. We address this challenge by devising a convex approximation, enabling the design of sample-efficient algorithms. We propose MR-NaS, a PC algorithm with competitive performance on hard-exploration tabular environments. Extending this approach to Deep RL (DRL), we also introduce DBMR-BPI, an efficient algorithm for model-free exploration in multi-reward settings. | https://openreview.net/pdf/5486797112e04e32f08ece8809e3e0a9d0845d17.pdf | [
{
"confidence": 2,
"rating": 7,
"review_id": "2tHVj2bkPc",
"review_text": "The present article extends the track-and-stop approach of Garivier et al. to a multi-reward MDP setup. Given an MDP problem with a finite number of reward functions the aim is to develop an algorithm that learns optimal policies for all reward functions simultaneously. Under (drastic) assumptions the authors present a sample based variant of track-and-stop in the multi reward setup. The algorithm is based on replacing the theoretical model complexity $T^*$ (in a multi reward variant) from Garivier et al. by an upper bound $U^*$ that can be estimated. Estimating $U^*$ during the exploration phase results in a practically implementable termination condition that results in worse complexity than the theoretical termination condition. The algorithm is tested on a few (very similar) tabular examples and compared to simple algorithms. An educated guess is performed to design a deep variant of the algorithm that is tested on a multi reward card pole variant and deep sea. Results beat easily the (terrible) results of the benchmark algorithms used.\n\nThe article is extremely dense with information, perhaps it would have been better to split the article in 2 or 3. The theoretical development is an extension of several earlier articles of the authors with very detailed mathematics. While the tabular examples are of limited practical interest (the assumptions are just too strong) the deep variant is interesting. While I am not so sure about the relevance of the tabular setting, the relevance in the deep setting is obvious (and was realised of the authors for cardpole). The ability of generalisation of NN makes it interesting to train the network simultaneously on perturbations of the true reward function in order to learn a more stable policy that might work well for even more reward settings.\n\nThe scientific quality of the article is very high. There is hardly any typo. I appreciate a lot the critic discussion of limitations, this is far above the rather low scientific standard in ML. I could not check the entire proofs of the appendix, what I read seemed solid. \n\nGood article! I am curious to follow up on the future development of the robust DQN variant.\n\n- I did not enjoy reading the article very much as I was pushed into first reading other articles to get a rough understanding of what is going on. Even reading the basis of the present article ([17], [40]) was not enough. To get an idea why $T^*$ is considered one needs to go all the way back to [20], and even further. It feels a bit like a community wants to stay among each other, the usual RL researcher is excluded by the style of writing. I would suggest to completely skip the discussion in the end (almost a page) and instead use the space to explain the reader what the technique is about and why the theoretical estimate naturally lead to Algorithm 1.\n- The assumptions are drastic and should be discussed more clearly. I guess the authors have bandit background and skip discussions of issues that are typical for bandits. In particular, assuming knowledge of rewards and/or the reward gaps. While this is not unusual in bandits it is very much in RL. I am perfectly fine with such theoretical results in particular, as the authors implemented an educated guess algorithm in the deep setting that addresses this issue with the reward gaps.\n\nHere is a number of questions and comments.\n\n- $\\alpha$ should also be an input for the algorithm 1. 
Why is there no dependence on $\\alpha$ in the theoretical results?\n- How do you compute U in Algorithm 1 without knowing the rewards gaps? You estimate M, but what about the gaps? They are estimated in the deep setup, why not in the tabular setting?\n- I think the algorithms are not sufficiently explained. In the tabular case $M_t$ should be explained in more detail. What shall the reader get from \"use estimate $M_t$ of the model\", \"update $M_t$\" without knowing $M_t$? In the present form the article is hardly comprehensible (even for me as a Mathematician). In that generality (what estimate is used?) I am a bit sceptical about the validity of Theorem 3.3, but as said, I could not try to understand all proofs.\n- The use of Borel-Cantelli in the proofs is not correct as it is. The events are not independent and this direction of Borel-Cantelli requires (pairwise) independence. I am pretty sure the SARSA paper contains the details on how to use a conditional independence version of Borel-Cantelli.\n- I am a bit puzzled about the connection of Theorem 3.1 and 3.3. If I am not wrong, Theorem 3.1 is a bandit theorem, requiring to play independent \"arms\" $(s,a)$ while Theorem 3.3 (the algorithm) is an RL theorem requiring rollouts $(s_0,a_0, s_1,a_1,...)$. The latter is obviously much harder and depends a lot on the exploration method. Unfortunately, I have six 50pages NeurIPS papers on the table and cannot go fully through every proof. Could you add two sentences why the theorem should hold? For instance, SARSA convergence holds by rewriting SARSA as Q-learning plus an error that goes to zero by the exploration condition. Here, the situation is much harder.\n- A crucial point is to replace $T$ by $U$. In line 169 you mention an upper bound for $U$, which has an additional $1/\\Delta^2$ compared to the theoretical bound. Is there a better theoretical bound? The authors should at least discuss briefly in the main text that there is something non-trivial going on (as in the SARSA paper where on-policy exploration is essentially as good as off-policy exploration). As it is it one quickly overlooks the difference in the algorithm to the \"allocation\".\n- I cannot see a scaling in the number of rewards. Is there any theoretical understanding on how $\\tau$ scales in the number of rewards?\n- How does your algorithm compare to just train for one reward after the other? Does that make sense? The training time for card pole seems quite high but I might be wrong. Typically card pole plots show the number of episode (ca. 200-300), not the number of steps. Is it clear your algorithm is faster than just training 5x card pole? How about comparing your algorithm to training 5x card pole with your different reward settings but keeping the neurone network from the reward before (and using some replay buffer to avoid overfitting)? My feeling is that such a naive approach might be better than the benchmarks used. \n- Similar question: Comparing your deep algorithm to performing several deep trainings for different rewards. Is your approach particularly suitable to avoid overfitting? Does your algorithm help for generalisation (as you see in the cardpole example?)."
},
{
"confidence": 3,
"rating": 4,
"review_id": "yYAxJt2TEf",
"review_text": "This paper studies the problem of best policy identification for RL with multiple rewards. The goal is to efficiently identify the best policy for given rewards with a high-level confidence. Authors provide an instance-dependent lower bound for the studied problem and introduce a provably-correct algorithm for a convex approximation of the original problem in the tabular setting. Extensions to deep RL is discussed with numerical results.\n\nThe authors demonstrate how to efficiently identify optimal policies across multiple rewards. The strengths of this work are summarized as follows:\n\n1. The studied setting is interesting and of practical concern when we seek to optimize the performance across a set of rewards.\n\n2. An instance-dependent lower bound is identified for any Probably-Correct algorithm in the setting of multi-reward best policy identification problem.\n\nWhile this paper clearly articulates the idea of performing, here are some problems and potential improvements that authors are suggested to address and consider. More specifically,\n\n1. Environments for the deep RL part is too simple. The Cartpole swing-up and DeepSea environments cannot fully demonstrate the performance of the proposed algorithm in more complex, real-world scenarios. It would be beneficial to include experiments on more challenging benchmark environments for better assessment of scalability and practical applicability.\n\n2. The policy optimality is only considered and defined for stationary deterministic policy (as in definition 2.1), which can be too restrictive. It is not clear when considering the set of Markovian policies (which can be stochastic), whether the proposed lower bound still holds, and whether the performance of the algorithm is still optimal. \n\n3. Theoretical guarantees for the deep RL extension is unavailable. Sample complexity bounds are only provided for tabular settings, which leads to the dependencies on the cardinality of state and action space. And empirical studies for the deep RL settings are not sufficiently convincing due to the simplicity of the environments.\n\n4. In terms of the theoretical results of the lower bound, the proof structure closely follows the prior results (e.g. Marjani et al. 2021, Taupin et al. 2022) in single-reward RL. It is not quite obvious what are the main technical challenges and novelties of extending the prior results (e.g. Marjani et al. 2021) from single-reward RL to multiple-reward RL.\n\n5. While the studied setting can be interesting, the relationship between the MR-BPI problem and reward-free RL as well as multi-objective RL somewhat remains unclear throughout the context. Though discussion has been provided in Section 1 and 5, I am not fully convinced that reward-free RL cannot solve the concerned practical scenario in multi-reward best policy identification problem. Indeed, reward-free RL assumes rewards are unknown, whereas the studied settings assume the knowledge of rewards. As a result, it is not surprising to see that properly utilizing the knowledge of rewards can lead to better performance as shown in the numerical results: the proposed algorithm (MR-NaS) significantly outperforms RF exploration strategies (RF-UCRL, ID3AL). However, reward-free RL is a more general type of algorithms and can be particularly useful in practice when it is hard to accurately learn rewards or when rewards are sparse. As such, the emphasis of the two settings are rather different. 
It might not be a fair comparison, and it is desirable to provide the fundamental reasons that can explain such performance improvement in numerical experiments. Therefore, more thoughtful insights should be provided to clearly explain the difference and relationship between these settings.\n\n6. Some minor aspects:\n - Grammatical errors need to be addressed, e.g. Line 354, line 91.\n - In line 75-80, if only deterministic policies are been considered, it is more appropriate to write $a_t = \\pi(\\cdot|s_t)$ etc. Do not use the probability simplex notation for policies.\n\n1. Do we treat MR-BPI as dedicated exploration strategies for multi-objective RL?\n\n2. Most reward-free RL focuses on episodic settings, whereas this paper studies discounted settings for multi-reward MDPs. Is there any particular reason for choosing this (simpler) setting? Do you foresee any technical challenges that can be difficult to resolve in episodic multi-reward settings?\n\n3. In Line 81 - 86 (Set of rewards), for each reward vector, each coordinate $i$ represents the reward for the $i$-th state-action pair, which is a scalar, why do we need the canonical basis $\\mathcal{R_{canonical}}$ of rewards, where each element is a vector? (When you write $\\mathbb{R}^{SA}$, I assume each element is a real number, not a vector). Do you assume for each $(s, a)$, there is a different reward function? Could you provide a concrete example of your definition of \"set of rewards\" with your notation? I assume you intended to say there exists $m$ (global) reward functions, and this set of functions $\\mathcal{R}$ is thus in $\\mathbb{R}^{m \\times SA}$."
},
{
"confidence": 3,
"rating": 6,
"review_id": "Dl914ZVzvZ",
"review_text": "The paper addresses the challenge of identifying the best policy in RL when there are multiple rewards. The authors get a lower bound on the sample complexity and design an optimal exploration policy. The authors propose two algorithms: MR-NaS for tabular environments and DBMR-BPI for Deep RL. These algorithms efficiently identify the best policy across multiple rewards in RL.\n\nThe paper presents a comprehensive and well-balanced analysis of theoretical and empirical results. The appendix provides supplementary evidence that strengthens the authors' arguments.\n\n1. The paper is inspired by [17], but it would be better to explicitly acknowledge this inspiration in the main part, rather than mention it at Remark C.2. Furthermore, a more in-depth discussion of the challenges in proof caused by the novel forcing policy would strengthen the paper's contribution.\n\n2. Even though more details is covered in the appendix, the paper should provide some details on the convex optimization. For example, a discussion of the computational costs associated with these methods would provide valuable context for readers. Sometimes, the computational cost of convex optimization methods are high. \n\n\n3. The abstract would be better if providing a more impressive motivation for multi-reward RL, emphasizing its significance and potential impact.\n\n4. Didn’t put some innovative aspects in the main text, leaving vital details to be discovered in the appendix. This can lead to important contributions being overlooked.\n\nPlease address the issues mentioned in the weaknesses."
}
] | |
x4Kk4FxLs3 | Pard: Permutation-Invariant Autoregressive Diffusion for Graph Generation | Graph generation has been dominated by autoregressive models due to their simplicity and effectiveness, despite their sensitivity to ordering. Yet diffusion models have garnered increasing attention, as they offer comparable performance while being permutation-invariant. Current graph diffusion models generate graphs in a one-shot fashion, but they require extra features and thousands of denoising steps to achieve optimal performance. We introduce PARD, a Permutation-invariant Auto Regressive Diffusion model that integrates diffusion models with autoregressive methods. PARD harnesses the effectiveness and efficiency of the autoregressive model while maintaining permutation invariance without ordering sensitivity. Specifically, we show that contrary to sets, elements in a graph are not entirely unordered and there is a unique partial order for nodes and edges. With this partial order, PARD generates a graph in a block-by-block, autoregressive fashion, where each block’s probability is conditionally modeled by a shared diffusion model with an equivariant network. To ensure efficiency while being expressive, we further propose a higher-order graph transformer, which integrates transformer with PPGN (Maron et al., 2019). Like GPT, we extend the higher-order graph transformer to support parallel training of all blocks. Without any extra features, PARD achieves state-of-the-art performance on molecular and non-molecular datasets, and scales to large datasets like MOSES containing 1.9M molecules. | https://openreview.net/pdf/c1a62f1c53db519f90d14aed3ca68d4ed80a9146.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "K6QTFVHQWF",
"review_text": "The paper introduces PARD, a graph generation model that combines autoregressive and diffusion models. Traditional autoregressive models are effective but sensitive to order, while diffusion models are permutation-invariant but need many denoising steps and extra features. PARD overcomes these issues by generating graphs block-by-block using a partial order of nodes and edges. It employs a shared diffusion model with an equivariant network and a higher-order graph transformer, supporting parallel training like GPT. PARD achieves state-of-the-art performance on various datasets, including large ones like MOSES, without extra features.\n\n1. This work presents a successful showcase of the combination of autoregressive modeling with diffusion model on graph.\n2. The proposed partial order ensures the permutation-invariance in the autoregressive generation process.\n3. Impressive experimental results show the effectiveness and efficiency.\n\n1. It is unclear how the diffusion model is employed in PARD. Sec 3.1 and the second part of Eq. 6 are not quite relevant to each other. Can you elaborate on that?\n2. Please provide proof or reference for some statements. e.g.: 2-FWL expressivity for the proposed higher-order transformer\n3. Please provide the results of other baselines on QM9 if possible.\n\n1. Seems that in Fig. 2 the prior of the diffusion process is not the same as the second part of Eq. 6. What is the choice of the graph distribution at timestep T?\n2. The total number of diffusion steps is directly related to number of blocks. I anticipate the generic graphs will have more blocks than QM9. Can you show the total number of diffusion steps for other datasets?\n3. Some of the discussions are redundant can be moved to appendix, like the comparison to DiGress and GRAN in Sec. 3.1 and Sec 3.2, the energy view in Sec. 3.4. The running time comparison can be moved to the experiment section."
},
{
"confidence": 3,
"rating": 6,
"review_id": "vvo8lV2C8W",
"review_text": "This paper proposes a graph generation method that combines AutoRegressive (AR) models and diffusion models. By utilizing a unique partial order, it addresses the issue of non-exchangeable probabilities in AR models and the efficiency problem in diffusion models.\n\n1. The proposed block-wise AR diffusion model in this paper offers a new idea for graph generation, particularly by introducing the use of weight-degree to differentiate blocks.\n2. The limitations of equivariant networks demonstrated in this paper also hold value for further exploration and resolution within the community.\n3. The overall structure and writing of the paper are relatively clear.\n\n1. There is a part in the paper that I believe needs to be clarified more clearly to ensure logical coherence. Why does diffusion based on equivariant network solve the flaw in equivariant modeling? I think besides the analogy of tempering iron (or higher/lower energy), more mathematical proofs are needed.\n\n2. Ablation of PPGN is necessary to demonstrate its effectiveness.\n\n3. Following the experimental settings of GDSS, NSPDK is also an important metric for QM9 and ZINC250K.\n\n1.\tIs there a reasonable explanation for the significant improvement of FCD on ZINC250K compared to other baselines? Similarly, why is there such a large difference in performance between Scaf. and baseline methods on MOSES?"
},
{
"confidence": 4,
"rating": 3,
"review_id": "o2kx5C8BuN",
"review_text": "The work proposes a new graph generative model based on an autoregressive procedure. It proposes an approach to deciding a partial order of graph nodes according to their degrees in a node-removal procedure. Based on the partial order, the work devises a new graph generative model.\n\nThe graph algorithm of deciding a partial order of graph nodes would be interesting if such an algorithm does not exist in the literature of graph theory.\n\nThe work lacks justification. As the field has moved to generative methods with discrete-diffusion models, which are already permutation-invariant, it is less clear about the advantage of designing a complex autoregressive model to satisfy the permutation-invariant property. \n\nThe advantage of the model is not obvious even considering only autoregressive models. Note that Chen et al. [9] have an approach of \"optimizing\" node orders for the generative model and show that the likelihood calculation is more accurate with their approach than a pre-determined order. How does the work justify its advantage over such an approach?\n\nThe analysis in 3.3 does not seem to be reasonable. The **probability calculations** are indeed the same for nodes in the same orbit, but they may get different connections in the sampling procedure and then break the symmetry. The analysis in 3.3 is well known, and it is not a concern for generative models. In some diffusion-based generative models, the starting graph is a graph with no edges, then all nodes are in the same orbit, but it is not an issue at all because the edge sampling process will break the symmetry. \n\nWithout clear justification, I don't know where performance improvements are from (maybe architecture improvement?). I feel that the work should have a thorough investigation of the model.\n\nHow do you justify the advantage of using an autoregressive model with partial order?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "qX9D32cSku",
"review_text": "This paper proposes to integrate autoregression models with diffusion models seamlessly to harnesses the effectiveness and efficiency of the autoregressive model while maintaining permutation invariance without order sensitivity. It also proposes architectural improvement to make the model and algorithm efficient and scalable. The presentation is smooth and the experimental results on both molecular and general graph generation demonstrate its effectiveness.\n\nIt proposes a novel graph decomposition method considering not individual node and its degree but subsets of nodes with structual similarity. In this way, it removes node order sensitivity in the graph but only needs to maintain the order of the blocks. Within each block, the diffusion model focuses on a much smaller graph and thus has the efficiency to generate a denoised graph.\n\nIt would be better if the authors can provide some insights about the hyperparameter the maximum of hops $K_h$.\n\nIt would be better if the authors can provide some insights about the hyperparameter the maximum of hops $K_h$."
}
] | |
x4HMnqs6IE | $\text{ID}^3$: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition | Synthetic face recognition (SFR) aims to generate synthetic face datasets that mimic the distribution of real face data, which allows for training face recognition models in a privacy-preserving manner. Despite the remarkable potential of diffusion models in image generation, current diffusion-based SFR models struggle with generalization to real-world faces. To address this limitation, we outline three key objectives for SFR: (1) promoting diversity across identities (inter-class diversity), (2) ensuring diversity within each identity by injecting various facial attributes (intra-class diversity), and (3) maintaining identity consistency within each identity group (intra-class identity preservation). Inspired by these goals, we introduce a diffusion-fueled SFR model termed $\text{ID}^3$. $\text{ID}^3$ employs an ID-preserving loss to generate diverse yet identity-consistent facial appearances. Theoretically, we show that minimizing this loss is equivalent to maximizing the lower bound of an adjusted conditional log-likelihood over ID-preserving data. This equivalence motivates an ID-preserving sampling algorithm, which operates over an adjusted gradient vector field, enabling the generation of fake face recognition datasets that approximate the distribution of real-world faces. Extensive experiments across five challenging benchmarks validate the advantages of $\text{ID}^3$. | https://openreview.net/pdf/773ad46986854b909ee32c632457744c279a0962.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "w1NbBGyF67",
"review_text": "This paper presents a method called $\\text{ID}^3$ for the task of synthetic face recognition. The authors highlight that the accuracy of face recognition using generated data still lags behind that of training directly on real face data. They propose optimizing the generation process from the perspectives of diversity and consistency.\n\n- Clear explanation of formulas and algorithm flow.\n- Achieved SOTA results compared to methods from the past two years.\n\n- There has been extensive research on ID preserving, and recent models based on LDM (e.g., Face0, PhotoMaker, FaceStudio, InstantID) can also be used for synthetic face recognition. The paper lacks analysis and comparative experiments on these models.\n- The Face Attribute Conditioning Signal includes age and pose (pose angle range: [-90°, 90°]). However, the visual results in the paper do not reflect these attributes. The variation in pose is minimal, and there is no demonstration of different levels of age (which you mentioned as [0-100]).\n- The paper devotes too much space to mathematical derivations and lacks intuitive visual results. For example, using different attributes and ID information to guide the model could be visualized by showing how the various layers of the Unet perceive this information.\n\n- What is the resolution of the training and generated images? \n- How long does the training process take using 8 NVIDIA Tesla V100 GPUs? \n- What is the image generation speed during inference? \n- How much GPU memory is required for inference?\n- How do the ID information and attribute information affect the Unet in the network structure? Is it through cross-attention or along with the timestep information?"
},
{
"confidence": 4,
"rating": 4,
"review_id": "dqrDEbXjU1",
"review_text": "This paper focuses on synthetic face recognition and proposes to concentrate on three aspects: inter-class diversity, intra-class diversity, and intra-class identity preservation. Based on those, an ID-preserving loss is employed to generate diverse but identity-preserving facial images. This paper also demonstrates the proposed loss is equal to lower bound of an adjusted conditional log-likelihood over ID-preserving data.\n\n1. This work is well-written and well-organized. It brings some insights for SFR.\n2. The idea of 3 aspects is good, and quite general for SFR\n3. The proposed method shows advances when using the FFHQ dataset\n\nHere are several concerns regarding this work:\n1. The idea of Attribute Conditioning Signal is not fit for synthetic face recognition tasks, because factors contributing to solid FR training cannot be determined by simply adjusting face attributes. One reason is that the attribute network (e.g., pose, age) is not generalized enough, as the pre-trained models are obtained from relatively small-scale datasets compared to FR datasets. Additionally, the authors have not addressed which attributes are effective for FR, leaving this important question unanswered.\n\n2. The performance trained on FFHQ dataset appears good; however FFHQ dataset has explicitly banned its use for face recognition applications. Furthermore, FFHQ is relatively small(210k images) which doesn’t contain enough diversity, that’s the reason facial attributes can be of improvement in this experiment. For more details on FFHQ please refer to: https://github.com/NVlabs/ffhq-dataset\n\n3. When it comes to the relatively large dataset CASIA-WebFace, the improvement over DCFace is marginal. One problem is that DCFace is trained with CASIA-WebFace only, not the FFHQ+CASIA mentioned by the author.\n\n4. Experiments are not sufficient. For example, DCFace provides experiment results on 3 data volumes: 500k, 1M and 1.2M. These are not included in this paper.\n\n5. There are some typos, for example, Y_i should be given in line 194\n\nBased on Algorithm 3, the pipeline would be: firstly, generate multiple embeddings close to the anchor; Then use the diffusion model to synthesize the images. The questions are:\n1. Given the unpack(Yi) and generate different attributes, the generated identity image would be affected by the attributes. Do the authors test whether the identities change given the different face attributes on the test? How to make sure the generated identity aligns with the input embedding in the generation phase? \n2. How to make sure the diffusion model can generalize(recognize) this specific input embedding, considering the training embedding only covers a small range of the available embedding(training set)?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "1wixPqUuNZ",
"review_text": "This paper proposes ID3, an identity-preserving-yet-diversified diffusion model for generating synthetic face data for face recognition. ID3 leverages identity embeddings and facial attributes to control inter-class and intra-class diversity of generated faces while preserving intra-class identity consistency, demonstrating state-of-the-art performance on multiple synthetic face recognition benchmarks.\n\nSee questions section in detail.\n\nSee questions section in detail.\n\nThe paper addresses a well-motivated and important problem in the field of synthetic face recognition. The proposed ID3 model demonstrates significant innovation in leveraging diffusion models conditioned on identity embeddings and facial attributes to generate diverse yet identity-consistent synthetic faces. Moreover, the theoretical analysis provided in the paper, which proves the equivalence between minimizing the proposed loss function and maximizing a lower bound on an adjusted data likelihood, lends credibility and rigor to the proposed approach.\n\nHowever, there is room for improvement in the presentation and writing of the manuscript. One area that could benefit from further clarification is the explanation of notations and symbols used in the mathematical formulas. For instance, the meaning of the variable d in S\nd−1 is not clearly defined, which may lead to confusion for readers. Additionally, the formatting and typesetting of some equations, such as Equation 3, could be enhanced to improve readability and aesthetic appeal."
},
{
"confidence": 5,
"rating": 8,
"review_id": "vXzxZxa68p",
"review_text": "The paper \"ID3: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition\" introduces a novel synthetic face recognition (SFR) approach using diffusion models. It focuses on maintaining identity consistency while providing high diversity in generated face images. The proposed ID3 model leverages identity-preserving losses and a structured sampling algorithm that respects identity characteristics. This effectively addresses the common pitfalls of existing SFR approaches that lead to poor generalization on real-world data.\n\n* **Originality**: The paper presents an innovative use of diffusion models tailored to synthetic face recognition, emphasizing identity preservation.\n* **Quality**: Demonstrated improvement over state-of-the-art models through extensive benchmarking.\n* **Clarity**: Exceptionally clear presentation and thorough explanation of the methodology and results.\n* **Significance**: This paper addresses significant challenges in synthetic data generation and offers substantial benefits for training more robust and generalizable face recognition systems.\n\n* **Generalization**: Additional tests on further diversified real-world datasets could strengthen the generalization claims.\n* **Complexity**: It would be beneficial to have details on the computational demands and scalability of the model when deployed in practical, real-world scenarios.\n\n1. What measures have been taken to ensure the model's robustness against diverse ethnic and age groups, given the model's reliance on identity embeddings?\n2. Are there potential improvements or variations in the diffusion model that could further enhance identity preservation without sacrificing diversity?\n3. How does the model perform under constrained computational resources, and are there any strategies for optimizing its efficiency?"
}
] | |
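As an editorial aside on the pipeline the first reviewer questions above (sample identity embeddings near an anchor on the unit hypersphere S^{d-1}, then condition the diffusion model on them), the sampling step can be sketched in a few lines of Python. This is not the authors' code: the noise scale `sigma`, the cosine floor `min_cos`, and the rejection-sampling loop are illustrative assumptions about one way such sampling could work.

```python
# Hypothetical sketch: draw unit-norm identity embeddings close to an anchor
# on S^{d-1}, keeping only draws whose cosine similarity to the anchor stays
# above a floor (a crude stand-in for "identity-preserving").
import numpy as np

def sample_near_anchor(anchor, n_samples, sigma=0.1, min_cos=0.7, seed=0):
    rng = np.random.default_rng(seed)
    anchor = anchor / np.linalg.norm(anchor)
    samples = []
    while len(samples) < n_samples:
        z = anchor + sigma * rng.standard_normal(anchor.shape)
        z /= np.linalg.norm(z)            # project back onto the hypersphere
        if z @ anchor >= min_cos:         # reject draws that drift too far
            samples.append(z)
    return np.stack(samples)

anchor = np.ones(512) / np.sqrt(512)
ids = sample_near_anchor(anchor, n_samples=4)
print(ids.shape, ids @ anchor)            # (4, 512), all cosines >= 0.7
```

The reviewer's open question remains whatever the sampler looks like: nothing in this step by itself guarantees that the diffusion model maps nearby embeddings to the same perceived identity.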
x4EoTQW7ka | DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation | Large language models (LLMs) have achieved significant success across various domains. However, training these LLMs typically involves substantial memory and computational costs during both forward and backward propagation. While parameter-efficient fine-tuning (PEFT) considerably reduces the training memory associated with parameters, it does not address the significant computational costs and activation memory. In this paper, we propose Dropping Backward Propagation (DropBP), a novel approach designed to reduce computational costs and activation memory while maintaining accuracy. DropBP randomly drops layers during backward propagation, which is essentially equivalent to training shallow submodules generated by undropped layers and residual connections. Additionally, DropBP calculates the sensitivity of each layer to assign an appropriate drop rate, thereby stabilizing the training process. DropBP is not only applicable to full fine-tuning but can also be orthogonally integrated with all types of PEFT by dropping layers during backward propagation. Specifically, DropBP can reduce training time by 44% with comparable accuracy to the baseline, accelerate convergence to the same perplexity by 1.5$\times$, and enable training with a sequence length 6.2$\times$ larger on a single NVIDIA-A100 GPU. Furthermore, our DropBP enabled a throughput increase of 79% on a NVIDIA A100 GPU and 117% on an Intel Gaudi2 HPU. The code is available at [https://github.com/WooSunghyeon/dropbp](https://github.com/WooSunghyeon/dropbp). | https://openreview.net/pdf/796111fc2cfe878a800d38ba648d7c6104be3373.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "pSderOOBkZ",
"review_text": "The paper introduces DropBP, an innovative approach to accelerate the fine-tuning of Large Language Models (LLMs) by selectively dropping layers during backward propagation. This method is presented as a means to reduce computational costs and activation memory, significant challenges in the efficient fine-tuning of LLMs. The authors have provided a clear implementation of DropBP as a PyTorch extension and demonstrated its effectiveness through experiments on various LLMs and datasets.\n\n- The concept of dropping backward propagation layers to reduce computational overhead is differential from previous work and addresses an important issue in training large models.\n\n- The paper includes extensive experiments that validate the effectiveness of DropBP in reducing training time and memory usage while maintaining accuracy.\n\n- The development of a PyTorch extension for DropBP facilitates easy integration with existing training codes, enhancing the practical applicability of the method.\n\n- The motivation is not well illustrated. I agree with that dropping sublayers could lead to training efficiency as the model turns to a shallower counterpart. However, I mean, pervious work like LayerDrop and others omit the layer computation in the forward pass. Then the computation could be removed in the subsequent backward computation with essential engineering efforts. Thus it lacks a clear distinction in terms of technical innovation compared to these previous works.\n\n- While the paper proposes omitting sublayer computation in the backward pass, it's unclear why the forward pass computation remains unchanged. Justifying this choice or exploring alternatives would strengthen the contribution.\n\n- The faster convergence observed in Figure 5 with DropBP compared to the vanilla model is counterintuitive. The observation here quite confuses me since the backward pass optimizes a partial computation graph, concerns regarding overfitting arise. The paper would benefit from a discussion on potential regularization techniques employed to address this, and a comparison with related work (e.g., [1]) that utilizes sublayer dropping for regularization in training a deep Transformer model. \n\n\n \n [1] Li et al., 2021 (AAAI) Learning Light-Weight Translation Models from Deep Transformer\n\nsome typos\n\n- Line 49: As a results -> As a result\n- Line 62: a effective -> an effective"
},
{
"confidence": 4,
"rating": 6,
"review_id": "ZcNgB8amun",
"review_text": "The paper proposes a novel method to reduce the computational and memory costs associated with fine-tuning large language models (LLMs). The authors introduce DropBP, a technique that randomly drops layers during backward propagation, effectively reducing the computational operations (FLOPs) and activation memory needed. This method assigns drop rates based on the sensitivity of each layer to ensure stable training. The approach is applicable to both full fine-tuning and parameter-efficient fine-tuning (PEFT) methods. The paper reports significant improvements in training time, convergence speed, and maximum sequence length when fine-tuning LLaMA2 models with DropBP.\n\n- DropBP introduces a novel method for reducing the computational and memory costs associated with fine-tuning LLMs. This is an important contribution to the field, given the increasing size and complexity of these models.\n\n- The paper provides empirical evidence that DropBP significantly reduces training time (by 44%), accelerates convergence (1.5× faster), and increases the maximum sequence length (up to 6.2×) on a single NVIDIA A100 GPU. These results demonstrate the effectiveness of the approach. The authors conduct thorough experiments on multiple datasets and models, providing a robust evaluation of DropBP's performance across different scenarios.\n\n- The paper mentions that the sensitivity calculation is done only once and has negligible overhead. However, more details on this process and its potential impact on training time would provide a clearer understanding of any trade-offs involved.\n\n- The paper could benefit from a more detailed theoretical analysis of why DropBP works as effectively as it does. This would strengthen the paper by providing a deeper understanding of the underlying principles.\n\n- Can you provide more details on the sensitivity calculation process? Specifically, how is the sensitivity of each layer computed, and what is the computational overhead associated with this step?\n\n- What are the best practices for tuning the drop rates in DropBP? Are there guidelines or heuristics that practitioners can follow to optimize performance for their specific use cases?\n\n- How well does DropBP integrate with other recent advancements in efficient training techniques, such as mixed precision training or distributed training frameworks? Have you explored these combinations in your experiments?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "DwLTvyVPUV",
"review_text": "The paper proposed to drop layers during backward prop (BP) based on layer sensitivity. The method aims to reduce the cost for gradient computation and storage for intermediate activation in full BP.\n\n1. Reducing the cost of full BP in PEFT has been an important challenge. \n2. The method is simple and is easy to integrate to either full fine-tuning or PEFT. \n3. Experiments demonstrate that DropBP can speed up the training while retaining the accuracy. The resulting memory reduction makes longer sequence modeling accessible.\n\n1. The idea of optimizing NNs with sparse gradient is not new. This paper needs to add more discussion and comparison with related works in sparse learning e.g., [1-3]\n2. Table 1 only shows results on two datasets and limited benchmark. It is unclear if the method works well for generation tasks and domain-specific transfer learning.\n3. It is unclear which algorithm is used to solve the constraint minimization problem, i.e., to determine the layer-specific rates based on sensitivity, and its extra computational cost.\n4. (Minor) In fine-tuning, DropBP drops a set of layers. However, the sensitivity of a set of layers may not be accurately represented by the direct summation of the sensitivities of individual layers in the set.\n\n[1] Sun, Xu, et al. \"meprop: Sparsified back propagation for accelerated deep learning with reduced overfitting.\"\n[2] Sung, Yi-Lin, Varun Nair, and Colin A. Raffel. \"Training neural networks with fixed sparse masks.\" \n[3] Brock, Andrew, et al. \"Freezeout: Accelerate training by progressively freezing layers.\"\n\n1. What is the long context modeling performance after applying DropBP?\n2. Could the authors present Figure 5 with # of steps as the x-axis to demonstrate faster convergence?\n3. I wonder if the sensitivities would evolve, and the drop rate needs to be re-allocated through training."
}
] | |
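The mechanism the abstract and reviews describe, keeping a residual sublayer's forward output while skipping its backward pass, can be sketched as follows. This is a minimal illustration of the idea, not the paper's released PyTorch extension; the per-step Bernoulli drop and the `no_grad` strategy are assumptions about one reasonable way to realize it.

```python
# Illustrative sketch (not DropBP's actual implementation): for a residual
# block y = x + f(x), "dropping backward" means computing f(x) without an
# autograd graph, so no backward FLOPs or activation storage are spent on f,
# while the gradient still flows through the residual (identity) path.
import torch
import torch.nn as nn

class DropBPBlock(nn.Module):
    def __init__(self, sublayer: nn.Module, drop_rate: float = 0.5):
        super().__init__()
        self.sublayer = sublayer
        self.drop_rate = drop_rate  # per-layer rate; DropBP sets this via sensitivity

    def forward(self, x):
        if self.training and torch.rand(()) < self.drop_rate:
            with torch.no_grad():            # no graph for the sublayer:
                delta = self.sublayer(x)     # saves backward compute + activations
            return x + delta                 # gradient flows only via the residual
        return x + self.sublayer(x)          # normal path: full backward through f

block = DropBPBlock(nn.Sequential(nn.Linear(16, 16), nn.GELU(), nn.Linear(16, 16)))
y = block(torch.randn(2, 16, requires_grad=True))
y.sum().backward()                           # works whether or not f was dropped
```

This also makes the first reviewer's point concrete: the forward pass through `sublayer` still runs in full, which is exactly the asymmetry they ask the authors to justify.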
x33oWJQyH0 | Unsupervised Object Detection with Theoretical Guarantees | Unsupervised object detection using deep neural networks is typically a difficult problem with few to no guarantees about the learned representation. In this work we present the first unsupervised object detection method that is theoretically guaranteed to recover the true object positions up to quantifiable small shifts. We develop an unsupervised object detection architecture and prove that the learned variables correspond to the true object positions up to small shifts related to the encoder and decoder receptive field sizes, the object sizes, and the widths of the Gaussians used in the rendering process. We perform detailed analysis of how the error depends on each of these variables and perform synthetic experiments validating our theoretical predictions up to a precision of individual pixels. We also perform experiments on CLEVR-based data and show that, unlike current SOTA object detection methods (SAM, CutLER), our method's prediction errors always lie within our theoretical bounds. We hope that this work helps open up an avenue of research into object detection methods with theoretical guarantees. | https://openreview.net/pdf/7878a3bbb19a093b6e8f4e67ce9a0e0e1dfa65b6.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "6lJvjUapel",
"review_text": "This paper proposes an autoencoder based object detection model that makes predictions about object positions in an unsupervised manner. Imporantly, the authors can provide theoretical guarantees/ bounds about the degree of the model's detection error.\n\nThe paper is well written and it is easy to follow the authors motivation and structure of the work. I also find the idea very valuable to investigate the theoretical bounds of such models.\n\nI have summarized my questions and issues and limitations that I see here: \n\nIn the context of the CLEVR experiments, I am wondering why the authors don’t evaluate concerning the Gaussian standard deviation as they did for the first dataset?\n\nThe authors claim the method requires dynamic objects, but they never mention this in main text. Can the authors provide more justification of this? I.e. why this is/ what part of the approach requires this?\n\nI don’t understand how the decoder can learn to reconstruct complex shapes other than spheres (due to the Gaussian assumption). Also the authors mainly evalaute on data with objects that are spherical. Thus, is it possible to evaluate on other shape forms? If so what is the error here compared to spherical shapes? I do not mention this as a limitation, but it seems quite important to put the method into context. What would be potential ideas to mitigate handling more complex objects?\n\nI do not have enough knowledge about the details of the CutLer and SAM models, but why should the theoretical bound of this work hold for these works as well (the authors compare these in Fig. 6)? Specifically, the authors state \"only for our\nmethod are the position errors always guaranteed to be within our theoretical bound.\" so my question is: why should the other methods lie within this theoretical guarantee?\n\nI am a little confused by the related works section. The authors discuss object-centric representation methods whose goal, unlike that of their method, is to learn a full representation of an object. This includes much more information than just position. In other words, it seems the method of this paper focuses “only” on the learning the position of an object. While this does not diminish the significance of the work, I think the work could benefit from discussing more on this difference between these works, to make the comparisons more fair and also focus more on works that focus on unsupervisedly localising object in images (i.e., works that only focus on position and not on the other aspects of object reperesentations), e.g., [1,2]. So in the end I am also wondering if the authors should actually narrow down the title/contribution claims to \"Unsupervised Object Localisation with Theoretical Guarantees\"?\n\nIf the authors can remark on these issues above, I am happy to consider raising my score.\n\n[1] Siméoni, Oriane, Chloé Sekkat, Gilles Puy, Antonín Vobecký, Éloi Zablocki, and Patrick Pérez. \"Unsupervised object localization: Observing the background to discover objects.\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3176-3186. 2023.\n[2] https://github.com/valeoai/Awesome-Unsupervised-Object-Localization\n\nsee above"
},
{
"confidence": 4,
"rating": 6,
"review_id": "v1fpXhZeoS",
"review_text": "This paper explores Unsupervised Object Detection with Theoretical Guarantees. This method is a significant advancement in the field of object detection as it provides theoretical guarantees on the accuracy of the detected object positions. By introducing a new approach that ensures reliable object localization, the research contributes to enhancing the robustness and accuracy of unsupervised object detection systems.\n\nThe method provides theoretical guarantees on recovering true object positions up to small shifts, which is a significant strength compared to traditional empirical approaches in object detection. The ability to interpret the latent variables as object positions enhances the interpretability of the model and facilitates understanding of the learned representations. The use of an autoencoder with a convolutional neural network (CNN) encoder and decoder, modified to be translationally equivariant, offers a unique and innovative approach to unsupervised object detection.\n\nThis work explores the unsupervised object detection, and theoretical analysis. However, the dataset for the experiment is not common, and few comparative experiments with common SOTA object detection model. Besides, although this work provides the theoretical guarantees to recover the true object positions up to quantifiable small shifts, there is no analysis whether it only exists in the unsupervised domain,.or can be adopted in the supervised domain.\n\n1. In the experiments, the datasets for evaluation is the CLEVR data, please explain why choose it, not other popular object detection datasets?\n2. This work validate the theoretical results using lots of experimental results, however only few experiments are carried out for the comparison with SOTA.\n3. In object detection, the popular model is about YOLO, and also the metric including accuracy, mAP, and IoU, etc are also the common in supervised object detection."
},
{
"confidence": 5,
"rating": 5,
"review_id": "CPS2DRGmxz",
"review_text": "The paper proposes a new idea for unsupervised object detection where an CNN based auto-encoder architecture is employed and the latent representation is trained to learn position of objects in images. They further provide theoretical analysis of the proposed idea under strong assumption about the input data and model characteristics. Results from on synthetic data experiments is also provided\n\nThe idea presented in the paper is interesting as it tries to solve the object detection problem in an unsupervised manner by modeling the latent space such that it explicitly learns object positions.\n\nThe paper lacks results and discussion on the experimental details on how the idea can be effectively implemented. This is particularly important to understand the merits of the proposed idea as it has strong assumption on model architecture and input data (e,g, size of objects). For example, it is not clear how the authors processes input data during training, how the min-batch sampling is done, what input-target pairs are?, what regulations are important to use if at all, how the over-fitting is prevented given the very simplified experimental setting.\nFurthermore, it is not clear from the paper how the latent space can learn any semantic information to reconstruct the images as it modeled to learn the position of the objects.\n\n- Can the authors provide more clarification on the training procedure and the important aspects that are necessary for the model work?\n- Given that the latent space is learning the position encoding for the object, how is it possible to learn to model semantics for reconstruction loss?\n- how does the model performance change relative to diversity of object shape and appearance in a single image?\n- why is it important to use positional encoding?\n- how the reconstruction quality can be guaranteed, especially in a realistic setting?"
},
{
"confidence": 5,
"rating": 6,
"review_id": "KIEbGd1MQV",
"review_text": "This paper presents the first unsupervised object detection approach that is theoretically shown to recover the true object positions up to quantifiable small deviations that are related to the encoder and decoder receptive field sizes, the object sizes, and the widths of the Gaussians used in the rendering process. The authors conduct a thorough analysis of how the error depends on each of these variables and conduct synthetic experiments that validate our theoretical predictions up to a precision of individual pixels. \nOn a high level, their architecture is based on an autoencoder that is fully equivariant to translations, which they achieve by making the encoder consist of a CNN followed by a soft argmax function to extract object positions, and making the decoder consist of a Gaussian rendering function followed by another CNN to reconstruct an image from the object positions. \nThe authors also conducted synthetic experiments, CLEVR-based experiments, and real video experiments that validated their theoretical findings up to a precision of individual pixels.\n\nI do like the analysis of the current state-of-the-art detection models SAM and CutLER and it is interesting to find that in some cases their errors are much higher than the bound derived by this method.\n\nThis paper is well-written and easy to follow.\n\n1. It is interesting to learn that SAM and CutLER's errors are sometimes much higher than the bound derived by the proposed method. I would be interested to hear from the authors if they have any insights on how this finding could be used to improve these methods, especially CutLER, which is also an unsupervised object detection and instance segmentation model.\n\n2. The majority of the experiments in this paper are conducted on synthetic datasets, and it is questionable whether the findings can be generalized to real images and videos. Could the authors provide some experiments on real images or videos? \n\n3. Continuing on the previous point, most objects in the synthetic datasets are rigid and have very consistent shapes. However, the challenges in object detection are often in detecting the non-rigid objects or partially occluded objects. I am curious to see if the proposed method can be used to handle these cases.\n\nPlease check the weakness section"
}
] | |
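The reviews above repeatedly reference the encoder design: a CNN heatmap followed by a soft argmax that yields differentiable object positions. A minimal sketch of that step follows; it is an editorial illustration rather than the paper's code, and the heatmap shape and temperature are assumed.

```python
# Toy soft-argmax over a 2D heatmap: the expected (x, y) coordinate under a
# softmax distribution, which stays differentiable (unlike a hard argmax).
import torch

def soft_argmax_2d(heatmap, temperature=1.0):
    """heatmap: (B, H, W) -> positions: (B, 2) in pixel coordinates."""
    B, H, W = heatmap.shape
    probs = torch.softmax(heatmap.view(B, -1) / temperature, dim=-1).view(B, H, W)
    ys = torch.arange(H, dtype=probs.dtype)
    xs = torch.arange(W, dtype=probs.dtype)
    y = (probs.sum(dim=2) * ys).sum(dim=1)   # expected row index
    x = (probs.sum(dim=1) * xs).sum(dim=1)   # expected column index
    return torch.stack([x, y], dim=-1)

pos = soft_argmax_2d(torch.randn(4, 64, 64))
print(pos.shape)  # torch.Size([4, 2])
```

Because the output is an expectation over the heatmap, a lower temperature sharpens the distribution toward the peak; this trade-off is one plausible source of the small, quantifiable position shifts the paper's bounds are about.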
x2zY4hZcmg | Dynamic Model Predictive Shielding for Provably Safe Reinforcement Learning | Among approaches for provably safe reinforcement learning, Model Predictive Shielding (MPS) has proven effective at complex tasks in continuous, high-dimensional state spaces, by leveraging a *backup policy* to ensure safety when the learned policy attempts to take risky actions. However, while MPS can ensure safety both during and after training, it often hinders task progress due to the conservative and task-oblivious nature of backup policies.
This paper introduces *Dynamic Model Predictive Shielding* (DMPS), which optimizes reinforcement learning objectives while maintaining provable safety. DMPS employs a local planner to dynamically select safe recovery actions that maximize both short-term progress as well as long-term rewards. Crucially, the planner and the neural policy play a synergistic role in DMPS. When planning recovery actions for ensuring safety, the planner utilizes the neural policy to estimate long-term rewards, allowing it to *observe* beyond its short-term planning horizon.
Conversely, the neural policy under training learns from the recovery plans proposed by the planner, converging to policies that are both *high-performing* and *safe* in practice.
This approach guarantees safety during and after training, with bounded recovery regret that decreases exponentially with planning horizon depth. Experimental results demonstrate that DMPS converges to policies that rarely require shield interventions after training and achieve higher rewards compared to several state-of-the-art baselines. | https://openreview.net/pdf/c706ae06d8c7d1a093c6b0c855ff445166fb6629.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "sFToyBBhSi",
"review_text": "Naive model predictive shielding may overly restrict exploration thereby preventing an RL agent from learning a policy with good performance. In order to prevent this, the authors propose a method to optimise a backup policy that is provably safe using an online planner. An approximate model such as double integrator or differential drive is used for planning. Improvements are demonstrated on five benchmarks that involve static or dynamic obstacle avoidance as compared to provably safe and approximately safe RL methods. A provable guarantee is provided for recovery regret.\n\nPresentation is clear with backing proofs and demonstrable results\n\nProblem that is being solved is clearly delineated and addressed using sound techniques\n\nExperimental comparisons are performed rigorously with attention to detail\n\nLiterature review and comparisons are partial to the RL literature. There is a long-standing literature in control [A, B, C] to use an approximate model to plan using predictive control. A whole host of methods to learn a safety-filter/shielding on the fly has been explored with robust optimization-based offline and online control techniques. Most of these methods would implicitly solve the problem this paper is trying to address. However, it is interesting that the paper uses the Q function in the online optimization. This aspect is novel and unique to this paper.\n\nIt is unclear how much computation and time it takes to run MCTS online at each time in order to do dynamic shielding at runtime.\n\nDynamics model such as double integrator and differential drive are too simple. It would be interesting to see how well these would work with more complicated and/or higher-dimensional dynamics.\n\n[A] Breeden, Joseph, and Dimitra Panagou. \"Predictive control barrier functions for online safety critical control.\" 2022 IEEE 61st Conference on Decision and Control (CDC). IEEE, 2022.\n\n[B] Wabersich, Kim P., and Melanie N. Zeilinger. \"Predictive control barrier functions: Enhanced safety mechanisms for learning-based control.\" IEEE Transactions on Automatic Control 68.5 (2022): 2638-2651.\n\n[C] Wabersich, Kim P., et al. \"Data-driven safety filters: Hamilton-jacobi reachability, control barrier functions, and predictive methods for uncertain systems.\" IEEE Control Systems Magazine 43.5 (2023): 137-177.\n\nHave the authors explored the space of RL training algorithms and methods to test this approach?\n\nAre there any advantages of using DMPS if the performance policy is not using RL and uses imitation learning or no learning at all? Exploration is important only for RL."
},
{
"confidence": 3,
"rating": 7,
"review_id": "etJk49jzQH",
"review_text": "This paper proposes a new method for safety shielding. More precisely, the authors extend Model Predictive Shielding (MPS), where an agent reverts to a safe backup policy if, for the next predicted state, this policy would not be able to guarantee safety anymore. MPS is often overly conservative, particularly in cases where the backup policy is very different from the final policy (for example, it may only consider breaking, while the final policy may be able to steer around an object). To improve this, whenever an action is potentially unsafe, the agent first uses a short-horizon planner to see if there exists some safe action that may be better than the one of the backup policy (i.e., one for which the backup policy could still recover in the future, but for which our learned agent predicts a higher reward). The authors formalize this framework and show recovery regret for this framework diminishes exponentially with the horizon. Next, they show that an implementation of this framework outperforms prior methods, both in terms of performance and the number of required shield invocations.\n\nThe topic of the paper, safety shielding, is relevant and significant. Safe RL (and particularly, safety shielding) is a promising line of research but is often overly conservative in practice: the methods proposed in this paper take a step toward reducing this problem while still giving formal guarantees about safety. The topic is relevant for the NeurIPS community (particularly those interested in RL), both as a method that could immediately be used or to extend the method to more complex settings (i.e., with a stochastic/unknown model).\n\nThe paper is well-written and easy to read: the intuition behind the method is clear, and the analysis of the results is easy to follow. The framework is well formalized (using visualizations where helpful), and the given pseudo-code helps with reproducibility. The experiments are extensive and convincingly show the advantages of the proposed method.\n\nApart from some minor remarks that I add below, this paper has one main weakness: it does not clearly indicate the computational complexity of its method nor the scalability. The results do not show computation times, and (as far as I could tell) no mention is made of either the average planning time or some time limit for this planning phase. From some ball-parking, the additional time required for this method may be significant (solving up to millions of short-horizon planning problems), so a quantification of this computational cost should be provided.\n\nSome more minor remarks:\n* The paper only mentions how the framework is implemented (i.e., what RL & planning method it uses) in the appendix: it would be nice to (briefly) mention this in the results section as well;\n* In Table 2, the results of CPO and TD3 are not bold, even though some are equal to those of the best frameworks: this should be fixed;\n* One limitation of the proposed framework is that it assumes the environment is deterministic: it would be nice to mention this in the limitations section.\n\nAs mentioned in the 'weaknesses' section, I have one main question: how does the computational complexity of your method compare to those of the benchmarks, particularly MPS? I will change my rating if this question is not adequately answered."
},
{
"confidence": 5,
"rating": 7,
"review_id": "0DJo7Vrlze",
"review_text": "The authors introduce Dynamic Model Predictive Shielding (DMPS) an extension of Model Predictive Sheilding (MPS) that adress some of its key limitations, such as overconservatism when deploying the backup policy which consequently hinders exploration of the neural 'task' agent and slows down conergence. The key innovation of DMPS is that it incoropoates a local planner for dynamic recovery that leverages both a pre-computed backup policy and the neural 'task' policies Q values for planning for short and long horizon returns while maintaining safety of the system. DMPS is a provably safe reinforcement learning (PSRL) method, meaning that it guarantees safety of the system (regardless of the underlying neural 'task' policy) by only exploring within the set of safe and recoverable states defined by the backup policy. This realised by planning for a fixed n step trajectory and checking whether the system can be recovered by the backup policy given the action proposed by the agent. The authors demonstrate that DMPS outperforms MPS and other baselines in terms of performance and safety in various benchmarks. It also emphasizes the importance of aligning reinforcement learning agents with real-world safety requirements, while discussing some of the limitations of their approach.\n\nThe paper has several strengths: I find that the paper is very well written and easy to follow, with sufficient details in necessary places and abstractions in other places where the details may not immediately matter, as such, it is a very nice read. The theoretical analysis of the recovery regret is convincing and interesting. Furthermore, the overall framework is attractive from the point of view that it is provably safe, something I personally find is crucial for deploying RL in the real world, rather than safe at convergence or in expectation like a lot safe RL methods. I find that the dynamic planning module is an innovative solution to the intuitive issue faced by most shielding methods (Figure 2) and I feel that this work constitutes a step in the right direction for improving shielding methods and making them more practical. The experimental evaluation I feel is strong and thorough as in most cases DMPS clearly outperforms MPS and REVEL, although I think it is missing something (see weaknesses).\n\nThe key weakness of the PSRL framework is the reliance on a perfect (or sufficiently accurate) dynamics model of the environment, the safety performance of the backup policy and the computation of the safe invariant set. In contrast to the first shielding approaches for RL [1], which operate primarly on discrete state and actions spaces, DMPS does not need to compute the shield before learning can begining which significantly reduces the engineering overhead before training. This of course comes at a cost, in practice the shields in [1] are substatially more lightweight during \"inference\", (although in theory there could be exponential blow up) in part due to only operating on discrete or discretized state/action spaces but also because a lot of the necessary computation is done before hand. This is a key limitation of DMPS as it relys on planning at each timestep which might be costly and infeasible for on-board computation or edge devices. 
Fruthermore, it seems that there is still a significant amount of prior knowledge required for DMPS to work effectively, first we have to have a \"perfect\" dynamics model (for provable guarantees) secondly I presume we need to handcraft a backup policy and then compute its invariant safe set so as to plan for recoverability. The first limitation is mentioned in the paper but not really discussed in much detail, the second limitation is find is crucial and I don't think is really mentioned in the paper. In particular it is a non-trivial challenge to come up with a backup policy that has a maximal safe invariant set, perhaps for the environments the authors consider it is easy (just decelerate) but for more dynamics environments and in general this is not the case and I feel like more discussion about both these limitations (i.e. the limitations of the PSRL setting) is needed. \n\nWhile I find the experimental evaluation compelling I feel it is slightly misleading and it is missing something. In Table 2 CPO and TD3 score the same or higher in a few of the static benchmarks but there scores are not in bold, is there a reason for this that I am missing? I also feel like a comparison to PPO-Lag or DDPG-Lag would really help make the results that bit more convincing.\n\nAll that being said, in principle I advocate for acceptance of this paper.\n\n[1] Alshiekh, Mohammed, et al. \"Safe reinforcement learning via shielding.\" Proceedings of the AAAI conference on artificial intelligence. Vol. 32. No. 1. 2018.\n\nMost of my questions are technical:\n\nFor each of the enviroments you consider how are the backup policies constructed and how are the invariat safe sets determined?\n\nFor each of the environments what are the maxmimum number of steps needed to deccelerate the vehicle to zero or avoid obstacles and is your choice of n=5 sufficient?\n\nWhat would be suitable ways of modelling the environment from experience to obtain uncertainty estimates, for example would Gaussian Process modelling suffice?\n\nDo you assume any observation noise or just perfect access to the current state, if not how would you incorporate this into your framework?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "xVqcTwIpOt",
"review_text": "The approach called dynamic model-predictive shielding for safe reinforcement learning is proposed as an improvement over its static counterpart. The main idea is to optimize for expected return on action with respect to the reinforcement-learning task when choosing a shielding backup action, and to incorporate planning horizon prediction into learning for the policy to learn to avoid unsafe actions. This dynamic version is evaluated on several static and dynamic obstacle-avoidance benchmarks and compared to static model-predictive shielding and three more planning-based approaches.\n\nThe core idea of the approach is interesting and potentially valuable: to achieve synergy between safety and optimal performance in model-predictive shielding via incorporating planning into policy learning and taking expected performance into account during backup planning. Similar attempts have been done previously. In comparison, this work proposes a novel notion of \"recovery regret\" as a heuristic to guide mutual integration of planning and reinforcement learning. \n\nThe strength of the paper is in extensive evaluation and comparison to multiple approaches. The notion of recovery regret can also be of independent interest for model-predictive shielding research. Dynamic shielding outperforms other approaches in the evaluation in terms of the number of shielding invocations, which indicates synergy between planning and learning over time.\n\nPotential weaknesses of the approach are in scalability of the planner and tightness of the probabilistic bounds on safety.\n\nMinor:\n- \"more optimal recovery approach\" --> an optimal/a better\n\nQuestions\n1. In Figure 1, what are green and red lines, and a blue blob?\n2. How does the local planner scale with respect to the look-ahead?\n3. Does the local planner have to recompute the look-ahead prediction every time it is invoked or does it reuse previous results if the agent continues along the same trajectory?\n4. What is the overhead of the planner's computations?\n5. How does the planning limit affect safety and optimality guarantees?\n6. MCTS typically struggles to plan for overly constrained problems and complex planning tasks. How does the approach scale with respect to the planning task complexity?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "bUuTH1ZONd",
"review_text": "The paper seeks to address provably safe RL problems where safety must be ensured even during training. It proposed DMPS, which enhances prior Model Predictive Shielding approach, to dynamically select safe actions when danger is imminent. DMPS employs local planner to plan for recovery actions and the planner objective consists of both short-term and long-term rewards. Feedback from the planner can then be used to incrementally train the neural policy to guide it towards safe policy set.\n\n1. Quality\n* Overall, the approach described in the paper is sound and it combines many established components (e.g. backup policy, local planner, estimate future reward using model unrolling and Q-estimate) to facilitate safe RL. \n* The paper provides theoretical bound on the recovery regret as the sampling limit in the local planner approaches inifinity.\n2. Clarity\n* The paper is written in a clear and lucid manner. The figures, algorithm and equations are structured in a way that is easily understandable to the readers.\n\n1. Originality\n* The main difference between DMPS and MPS is the use of local planner when backup policy is triggered. The technical approach used in DMPS is not particularly new as there are already some similar approaches of estimating a safety Q-value and perform planning based on the Q-value [1, 2].\n2. Significance\n* The only difference between DMPS and the prior MPS seems to be the local planner and (as discussed in point 1) this local planner is not particularly novel. Having said that, I do agree that the proposed DMPS does show improvement over MPS in some experiment scenarios.\n* While the paper mentions a small planning horizon is sufficient for the local planner to plan safe yet rewarding actions, I feel that this may not be true in most cases. To steer the agent back to safety (and yet rewarding), a long sequence of actions may be required. If the planning horizon is set too small, then DMPS falls back to backup policy and the performance would be the same as MPS. In this case, I guess the only solution is to increase in planning horizon and in turn increase the computational overhead of DMPS exponentially.\n* The local planner requires perfect information of the transition and the transition must be deterministic. This may restrict its applicability, especially given that there're prior work on model-based RL where transition can be stochastic and learned instead. \n\nReferences \n[1] Clavera, I., Fu, Y. and Abbeel, P., Model-Augmented Actor-Critic: Backpropagating through Paths. In International Conference on Learning Representations. \n[2] Thomas, G., Luo, Y. and Ma, T., 2021. Safe reinforcement learning by imagining the near future. Advances in Neural Information Processing Systems, 34, pp.13859-13869. \n[3] Nagabandi, A., Kahn, G., Fearing, R.S. and Levine, S., 2018, May. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In 2018 IEEE international conference on robotics and automation (ICRA) (pp. 7559-7566). IEEE. \n[4] Chua, K., Calandra, R., McAllister, R., and Levine, S. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems. 2018. \n[5] Janner, M., Fu, J., Zhang, M. and Levine, S., 2019. When to trust your model: Model-based policy optimization. Advances in neural information processing systems, 32.\n\n1. 
In the first sentence of Section 6.2, you mentioned that the total return is averaged over the last 10 training episodes. Are you evaluating them using the same (single) random seed and only use the last 10 training episodes for evaluation? \n\n2. Given that both the states and actions are continuous, how do you apply MCTS to the local planner? \n\n3. TD3 only maximizes a single reward objective. In your experiments, I guess you performed some sort of reward shaping for it to balance between safety and reward. Can you elaborate how do you incorporate safety into its objective and is there any weighting used? \n\n4. Similarly for CPO, how do you incorporate safety into it? Do you specify a safety violation constraint? \n\n5. (Related to Qn 3 & 4) I am surprised that TD3 and CPO rapidly overfits to conservative policy in dynamic environment. What do you think is the reason and is the weighting between safety and reward dynamically tuned?"
}
] | |
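A common thread across these reviews is the shielding loop itself: execute the learned action only if the backup policy can still recover from the predicted next state; otherwise fall back (MPS) or plan a recovery (DMPS). A minimal sketch of that control flow follows, where `dynamics`, `is_recoverable`, `backup`, and `planner` are assumed stand-ins for components the paper defines, not its actual implementation.

```python
def shielded_action(state, policy, backup, dynamics, is_recoverable, planner=None):
    """One step of the shielding loop sketched in these reviews (MPS-style);
    DMPS swaps the bare fallback for a short-horizon, reward-aware planner."""
    a = policy(state)
    if is_recoverable(dynamics(state, a)):   # deterministic model assumed
        return a                             # learned action keeps us recoverable
    if planner is not None:
        return planner(state)                # DMPS: planned recovery action
    return backup(state)                     # MPS: task-oblivious backup action

# Toy 1-D demo: stay within |x| <= 1 by braking (action 0) when needed.
act = shielded_action(
    state=0.9,
    policy=lambda s: +0.3,                   # risky learned action
    backup=lambda s: 0.0,                    # brake
    dynamics=lambda s, a: s + a,
    is_recoverable=lambda s: abs(s) <= 1.0,
)
print(act)  # 0.0: the risky action would leave the recoverable set
```

The reviewers' computational-cost questions attach to the `planner` branch: in DMPS it is invoked online (e.g., via MCTS) every time the shield triggers, so its per-step latency matters.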
x2780VcMOI | A Polar coordinate system represents syntax in large language models | Originally formalized with symbolic representations, syntactic trees may also be effectively represented in the activations of large language models (LLMs). Indeed, a ''Structural Probe'' can find a subspace of neural activations, where syntactically-related words are relatively close to one-another. However, this syntactic code remains incomplete: the distance between the Structural Probe word embeddings can represent the \emph{existence} but not the type and direction of syntactic relations. Here, we hypothesize that syntactic relations are, in fact, coded by the relative direction between nearby embeddings. To test this hypothesis, we introduce a ''Polar Probe'' trained to read syntactic relations from both the distance and the direction between word embeddings. Our approach reveals three main findings. First, our Polar Probe successfully recovers the type and direction of syntactic relations, and substantially outperforms the Structural Probe by nearly two folds. Second, we confirm that this polar coordinate system exists in a low-dimensional subspace of the intermediate layers of many LLMs and becomes increasingly precise in the latest frontier models. Third, we demonstrate with a new benchmark that similar syntactic relations are coded similarly across the nested levels of syntactic trees. Overall, this work shows that LLMs spontaneously learn a geometry of neural activations that explicitly represents the main symbolic structures of linguistic theory. | https://openreview.net/pdf/4e0896e6752ad21f70d6149f2f3582b264bfeedc.pdf | [
{
"confidence": 4,
"rating": 8,
"review_id": "LlRzQBPXjp",
"review_text": "This paper proposes polar probes, a kind of structural probe that learns a distance and rotation function that can more accurately classify syntactic structure from language model representations than previous approaches. In particular, the question of whether direction can represent the type of syntactic relationship is answered. The authors find that their polar probe outperforms previous methods quite significantly and show\n\nThis paper is very well presented and a pleasure to read. The empirical findings are strong and clearly support the hypothesis that the direction, as well as the angle of the representations of an LM projected on a plane represent the syntactic relationships encoded by the model. The authors show that this interpretation is able to much more strongly reconstruct ground truth syntactic parses from hidden state representations than structured probes. The controlled dataset provides a clean comparison in a challenging setting and is a useful resource for future work. A major finding of this work is that it vastly raises the upper bar for how well we should consider syntax to be encoded by language models.\n\nWeaknesses like the focus on dependency parses and drawbacks of current tokenizers are addressed in limitations, but are still weaknesses nonetheless.\n\nPlease include UUAS, LAS and Balanced Accuracy for the evaluation on the controlled dataset separately for comparison.\n\nAs thorough as this paper is, I think it could go deeper on the model analysis. It's nice that the layer-wise analysis is consistent with previous work, but this would be mostly expected. For example, could the authors show that models of different sizes capture more/less syntactic complexity? Is there a critical point where syntax becomes well represented and gains are diminishing after more scaling? Do larger models capture more of the \"tail\" of rare syntactic constructions? This could be carried out on the GPT2 or Pythia family of models.\n\nNits:\n\n- please make the text in the legend/axis labels for figure 3 bigger\n\n- Typo L36: \"proposed settle\"\n\nN/a"
},
{
"confidence": 2,
"rating": 5,
"review_id": "JtI5pA6U8Y",
"review_text": "Whereas prior work (Hewitt and Manning 2018) probed syntactic distance and depth, this work proposed to push that forward by also probing headedness and dependency type. Specifically, this doesn't separately probe those three, but aims for a single vector space where euclidean distance defines syntactic distance but the difference vector maps to a UD dependency label (optimized with a contrastive learning objective).\n\nIt is a pretty well-written paper, and the framing of the angular probe idea seems well explained and to have some elegance to it (in aiming for a single underlying representation); parts of the implementation seem well-considered to get that single representation.\n\n- If viewed merely as a combination of probing structure and labeling, it is very similar to a work like Muller-Eberstein et al. 2022. The advantage of this paper - having more of a shared representation -- is appealing, but I wish the consequences of that shared space were better explored.\n- Analysis was somewhat lacking: for a probing paper, there were relatively work showing what this tells us about the syntactic behavior of models.\n\n- Have the authors looked at extended dependencies? The notion of differences as dependency type seems more specific (and to imply more interesting consequences) if there are multiple \"paths\" through a tree."
},
{
"confidence": 3,
"rating": 5,
"review_id": "RelYgyPWeD",
"review_text": "Previous work introduced linear probes to explore how syntactic relationships are encoded in LLM embeddings. This work aims to take it a step further and examine how types of syntactic relationships are encoded in the LLMs. They introduce a polar probe that when optimized can predict the type of syntactic relations via the angle between them. In a multi-faceted evaluation, the model outperforms baselines (which are essentially ablations of the model) in terms of stronger cosine similarity between the same relations, and in terms of tree accuracy (UAS/LAS).\n\n- An interesting paper with a clear contribution, building on existing probing work while asking a couple new research questions\n\n- The results appear convincing\n\n- The potential to explore syntax through the lens of LLMs, especially when LLMs can be easily trained on unlabeled text, or especially when LLMs are increasingly multilingual, points to some exciting future directions.\n\n- The evaluation also includes some linguistically interesting example cases. Essentially exactly what I would have asked for (in addition to the larger corpora studies)\n\n- I find the distinction between probing and parsing to be not entirely clear. At the point where the evaluation is in terms of UAS/LAS, could this not be compared directly to parsers on the same data (especially since building on top of LLM embeddings would be the most performant solution)? And where would the discrepancies be, and what would that mean? Do LLMs not encode those relationships?\n\n- In general the paper seems to suffer from a lack of convincing baselines. The baselines presented -- the structural or angular probe, are steps along the path to the polar probe.\n\n- Cosine similarity between identical syntactic categories is surprisingly low (to me). The ranking of categories in terms of the strength of that correlation is also surprising, ith things like 'case' being quite strong. In general there are many \"odd\" patterns that I don't have an intuitive explanation for why they occur, and aren't discussed in detail in this work.\n\n- There is no dedicated related work. I do think the parsing literature, and especially the parsing-on-top-of-LLMs literature is relevant.\n\nSuggestions / Questions:\n\nQ1 - How the trees are predicted is not clear / whether these are a series of independent predictions or whether they are processed sequentially or decoded jointly?\n\nQ2 Do the same syntactic relations that occur at different levels of the sentence have distinct embeddings? I think the authors were setting up to explore some questions about the meaningfulness in the \"hierarchy\" of the tree, especially with the controlled sentence dataset, but then I never saw these really come to fruition. Especially the talk of short/relative/long-nested partitions -- where are these discussed? Fig. 5 is mentioned (L241) but Fig 5 is fluff.\n\nL33: \"and its neuronal substrate.\" What? It's just a model.\n\nL36: \"to settle\"? 
Though the sentence is bizarre regardless\n\nL24-L253, improper citation formats almost everywhere\n\n\"According to linguistics, sentences can be described as linear sequences of words connected by a dependency tree\", there isn't a \"linguistics\" -- there are many competing syntactic frameworks and heated debate as to the pros/cons of each framework, but at the end of the day, these are merely formalisms\n\nL83: Squared euclidean distances (between two word embeddings) cannot trivially represent the presence\n84 of dependency relations and their types and directions simultaneously.\n\nWhy not? If these are represented in separate subspaces, why is it not possible to represent these three concepts in a vector space?"
}
] | |
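To make the probe geometry under discussion concrete, here is a toy sketch of the two components: the structural probe's squared distance under a learned projection B (after Hewitt & Manning), plus the difference-direction classification that the Polar Probe adds on top. The tensor sizes and the linear label head `W` are illustrative assumptions, not the paper's exact parameterization or training objective.

```python
# Toy probe geometry: distance encodes tree distance; the normalized
# difference vector is classified into a dependency relation type.
import torch

d_model, k, n_labels = 768, 64, 40                    # assumed sizes
B = torch.randn(k, d_model, requires_grad=True)       # learned probe projection
W = torch.randn(n_labels, k, requires_grad=True)      # direction -> relation type

def probe(h_i, h_j):
    u, v = B @ h_i, B @ h_j
    dist2 = ((u - v) ** 2).sum()          # structural probe: syntactic distance
    direction = (u - v) / (u - v).norm()  # polar probe: relation type + direction
    logits = W @ direction
    return dist2, logits

dist2, logits = probe(torch.randn(d_model), torch.randn(d_model))
print(float(dist2), logits.shape)         # scalar distance, torch.Size([40])
```

The reviewers' "single shared space" point corresponds to `B` being common to both outputs: distance and direction are read from the same projected embeddings rather than from two separately trained probes.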
wzof7Y66xs | Hierarchical Selective Classification | Deploying deep neural networks for risk-sensitive tasks necessitates an uncertainty estimation mechanism. This paper introduces *hierarchical selective classification*, extending selective classification to a hierarchical setting. Our approach leverages the inherent structure of class relationships, enabling models to reduce the specificity of their predictions when faced with uncertainty. In this paper, we first formalize hierarchical risk and coverage, and introduce hierarchical risk-coverage curves. Next, we develop algorithms for hierarchical selective classification (which we refer to as "inference rules"), and propose an efficient algorithm that guarantees a target accuracy constraint with high probability. Lastly, we conduct extensive empirical studies on over a thousand ImageNet classifiers, revealing that training regimes such as CLIP, pretraining on ImageNet21k and knowledge distillation boost hierarchical selective performance. | https://openreview.net/pdf/c1f9ac8af6d8f4be4a306ca805ddb73cdadf977f.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "NrxnE8rFWs",
"review_text": "The authors propose hierarchical selective classification, a method that selects the hierarchical granularity of its prediction based on uncertainty.\n\n* The paper is well-written, and the proposed method is quite intuitive.\n* I like the idea that if uncertain, it makes sense to predict at a higher level of granularity.\n* The theoretical results & statements are sound.\n* Extensive experimental results showing the superiority of the proposed method to DARTS and a non-hierarchical baseline.\n* Applicability to pre-trained models\n\n* My biggest uncertainty is the similarity of this work to conformal prediction. To me, it seems that this method is very similar to conformal prediction, where the set of possible prediction sets is restricted via this pre-defined hierarchy. While, as far as I know, it has not been explored, it decreases the perceived novelty. \n* A weakness of the setting rather than the method is that it assumes the knowledge of the underlying hierarchy. As such, the applicability is somewhat limited. The paper would benefit from a way to unsupervisedly learn this hierarchy, e.g. based on classes whose predicted probabilities are positively correlated.\n* As also touched upon in the concluding remarks, the method is post-hoc rather than being optimized during training, thus, likely not performing up to the highest possible level.\n* Minor: Line 158-159 is a worded badly, similar to \"... thus, we do A. Unlike others that do A, we do A+B\".\n\n* Could the authors please comment on the similarity to conformal prediction? \n* As such, how is [1] related and/or different?\n\n[1] Tyagi, Chhavi, and Wenge Guo. \"Multi-label Classification under Uncertainty: A Tree-based Conformal Prediction Approach.\" Conformal and Probabilistic Prediction with Applications. PMLR, 2023."
},
{
"confidence": 4,
"rating": 5,
"review_id": "JiOwUK4W8Q",
"review_text": "The paper introduces a hierarchical selective classification technique that incorporates hierarchical risk and coverage. The authors additionally proposed an algorithm that guarantees target accuracy. Experimental results demonstrate the method's effectiveness.\n\nHierarchical selective classification is a new area and therefore the current method is one of the first techniques to deal with such problem. Its application to critical settings can be substantial.\n\n•\tThe need of a prior tree among classes can limit its usage for complex scenarios. The construction of such tree can be a non-trivial step for the applicability of the approach. \n\n•\tThe main contribution looks an extension of previous methods for the hierarchical case.\n\n•\tRegarding results like in Table 2, would be possible to calibrate the coverage (as done on selective networks) for fair comparison? \n\n•\tHave the authors thought about the building the hierarchical tree structure as a pre-processing step? I asked that because such prior is key for wide applicability. \n\n•\tA well know problem of selective approaches, exposed on [1] is that given the non-differentiability of selection mechanism the binary function g is replaced by a relaxed function g: X → [0, 1], that way not performing selection during training, but instead assigning a soft instance weight to each training sample. The same effect is observed in the proposed method? \n\n[1] Gumbel-Softmax Selective Networks, https://arxiv.org/pdf/2211.10564."
},
{
"confidence": 3,
"rating": 6,
"review_id": "mEzuDOqGZG",
"review_text": "The paper introduces a new framework for selective classification called hierarchical selective classification. In a setting where a hierarchy in the classification task is present, the authors devise a selection strategy that considers confidence at different levels of the classification hierarchy. Extensive experimental analysis is performed over 1115 ImageNet classifiers.\n\nThe main strengths of the paper are:\n\n1. the idea of applying selective classification in a hierarchical setting is novel;\n2. the theoretical analysis relies on conformal prediction, which guarantees the soundness of the results;\n3. the proposed framework can impact high-risk settings, as shown in the healthcare example.\n\nOverall, I think the paper is solid. My main concern is that the empirical evaluation could be improved, especially regarding motivations and attention to detail. \nA few examples:\n* I do not fully understand why the authors focus so much on showing how different training regimes affect HSC performance. I guess this improves the overall predictive performance of the (hierarchical) classifier, which is expected to impact the HSC task positively.\n* As the authors correctly claim, the training regimes were not optimized for hierarchical selective classification. Despite the clear computation-wise motivation, I argue that including regimes optimized for HSC would make the empirical evaluation sounder.\n* a few lines are off: for instance, I would argue that line 279, i.e.,\n>CLIP achieves an exceptional improvement, surpassing 40%\n>\n does not match what is shown in Figure 4 (which shows an improvement below 40%).\n\nI have a few questions/remarks regarding the paper.\n\n* Q1. I think the authors are not discussing an implicit (and, in my opinion, quite relevant) assumption of their strategy, i.e. the errors at different levels of the hierarchy are assumed to be the same. However, I argue this is not exactly the case in real life. For example, failing to distinguish a golden retriever from a labrador differs from failing to distinguish a dog from a spider. Can the authors elaborate on this point?\n* Q2. Can the authors discuss the points I highlighted as the main weakness?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "lXDdGY3vsq",
"review_text": "The paper proposes an extension of selective classification following a class hierarchy to reduce the specificity of model prediction when there is a high uncertainty. In particular, if the prediction confidence of a class is smaller than a predefined threshold, the proposed algorithm would proceed towards a higher class level in the hierarchical structure (the parent node), until the confidence of the considering node exceeds that threshold. The paper also formulises hierarchical risk and coverage, so that the area under curve can be used as a metric to benchmark different selective classification methods. An extensive number of pretrained classifiers on ImageNet dataset are then used to evaluate the proposed method and show promising results. The paper also include a PAC-like theoretical result, so that when finding the optimal threshold, one can select appropriate hyper-parameters to achieve their desired outcome with certain confidence level.\n\nThe paper goes into details to provide an adequate background about selective classification, the definition of heirarchical risk and coverage as well as its area under curve as a metric to quantify the performance of hierarchical-based selective classification. It also links to previous studies in the same subfield. In general, the paper is well written and easy to follow.\n\nThe paper also includes a theoretical result on the guarantee of the learning algorithm when one wants to find the optimal thresholding value for their hierarchical selective classification. This simple theoretical results does strengthen the paper.\n\nThe paper also include an extensive number of experiments and ablation studies to provide insights into the newly-proposed method.\n\nThe paper relies on the setting with the following assumptions:\n- It is an inference rule. This means that the algorithm is used at test time only. If this could be even integrated into training is a plus.\n- It needs a validation set to find the optimal hyper-parameter $\\theta$, or the threshold (partly mentioned in the conclusion). It is understandable because there is no training involve here, so there is a need for that. However, in some cases, there may not be additional data available.\n\nCould the authors clarify if it can also be integrated into training a whole model to perform hierarchical selective classification?"
}
] | |
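The inference rule described in review lXDdGY3vsq just above (if the confidence at the current node is below a predefined threshold, move to the parent node until the threshold is exceeded) can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: the `parent` and `leaves_under` maps and the choice of node confidence as the summed leaf probabilities are hypothetical, not the paper's exact implementation.

```python
def hsc_predict(probs, leaf, parent, leaves_under, theta):
    """Hierarchical selective prediction sketch.

    probs: dict mapping each leaf class to its softmax probability.
    leaf: the argmax leaf prediction of the base classifier.
    parent: dict mapping each node to its parent (root maps to None).
    leaves_under: dict mapping each node to the set of leaves below it.
    theta: confidence threshold in [0, 1].
    """
    def conf(n):
        # Confidence of a node: total probability mass of the leaves below it.
        return sum(probs[l] for l in leaves_under[n])

    node = leaf
    # Retreat to less specific (parent) nodes until confident enough.
    while conf(node) < theta and parent[node] is not None:
        node = parent[node]
    return node  # returning the root amounts to full abstention
```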
wz2KvvEk44 | Focus On What Matters: Separated Models For Visual-Based RL Generalization | A primary challenge for visual-based Reinforcement Learning (RL) is to generalize effectively across unseen environments. Although previous studies have explored different auxiliary tasks to enhance generalization, few adopt image reconstruction due to concerns about exacerbating overfitting to task-irrelevant features during training. Perceiving the pre-eminence of image reconstruction in representation learning, we propose SMG (\blue{S}eparated \blue{M}odels for \blue{G}eneralization), a novel approach that exploits image reconstruction for generalization. SMG introduces two model branches to extract task-relevant and task-irrelevant representations separately from visual observations via cooperative reconstruction. Built upon this architecture, we further emphasize the importance of task-relevant features for generalization. Specifically, SMG incorporates two additional consistency losses to guide the agent's focus toward task-relevant areas across different scenarios, thereby achieving free from overfitting. Extensive experiments in DMC demonstrate the SOTA performance of SMG in generalization, particularly excelling in video-background settings. Evaluations on robotic manipulation tasks further confirm the robustness of SMG in real-world applications. Source code is available at \url{https://anonymous.4open.science/r/SMG/}. | https://openreview.net/pdf/4038405ec2477f5290f6738f4c80053e969f1bfe.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "u7SuIhCH92",
"review_text": "Visual-based Reinforcement Learning (RL) often fails to generalize across unseen environments. This work proposes SMG (Separated Models for Generalization) to improve the generalization in VRL by introducing two models to separately extract task-relevant and task-irrelevant representations through image reconstruction. Specifically, SMG proposes two additional consistency losses on relevant features, improving generalization. Extensive experiments, including video-hard DMC, color-hard DMC, and manipulation tasks, show SMG excels in diverse settings and tasks, demonstrating robust performance.\n\n- Separating foreground and background for reconstruction makes sense for improving the generalization in VRL.\n\n- Extensive experiments in various experimental settings demonstrate the effectiveness of SMG.\n\n- The learned mask looks very effective (Fig. 3 and Fig. 7).\n\n- Distinguishing between controllable and uncontrollable parts for learning a mask model has been widely discussed in the community, like TIA [1], Denoised MDP [2], ISO-Dream [3] and so on. Although I appreciate authors' efforts to discuss its difference against TIA (appendix E.2), I think the novelty of learning mask models to distinguish noise from the environment is limited. Nevertheless, I believe that this paper has made contributions in applying mask models to the field of visual RL generalization.\n\n- I'm curious about the performance of the proposed method in some more challenging settings, like RL-Vigen [4].\n\n- As there are many losses, it is better to add a detailed pseudo code about how to calculate all these losses, which can make the paper more readable.\n\n- This proposed SGM is considered to be seamlessly combined with any existing off-policy RL algorithms. As the experiments mainly consider SAC as the RL backbone, I'm curious about its performance with other methods, like DrQ or SVEA.\n\n- The related work part only discusses observation generalization in RL and some other types of generalization also should be discussed, like dynamic generalization [5,6] and task generalization [7,8].\n\nOverall, I lean toward boardline of this work. I will participate in subsequent discussions and would like to adjust my scores, especially for the response to my concerns about experiments.\n\n[1] Learning Task Informed Abstractions\n\n[2] Denoised MDPs: Learning World Models Better Than the World Itself\n\n[3] Iso-Dream: Isolating and Leveraging Noncontrollable Visual Dynamics in World Models\n\n[4] RL-ViGen: A Reinforcement Learning Benchmark for Visual Generalization\n\n[5] Context-aware dynamics model for generalization in model-based reinforcement learning\n\n[6] Why generalization in rl is difficult: Epistemic pomdps and implicit partial observability\n\n[7] Zero-shot task generalization with multi-task deep reinforcement learning\n\n[8] Task Aware Dreamer for Task Generalization in Reinforcement Learning\n\n- How do you determine the hyperparameter $\\rho$? In Fig.9, this paper shows different results about $\\rho$ in walker-walk. Are there more results to show the relationship between SGM and $\\rho$?\n\n- Why do reconstruction-based methods benefit generalization? Are there any explanations?\n\n\n---------------\n\n**After reading the authors' response and other reviewers' comments, I have raised my scores from 4 to 5.**"
},
{
"confidence": 4,
"rating": 7,
"review_id": "xc1ivg0InA",
"review_text": "This paper presents a novel method that utilizes two model branches to extract task-relevant and task-irrelevant representations separately from visual observations, aiming to enhance the zero-shot generalization ability of RL agents. The approach introduces four additional loss terms and two consistency losses to guide the agent's focus towards task-relevant areas across different scenarios. The proposed method can be seamlessly integrated into existing standard off-policy RL algorithms as a plug-and-play module. Experimental results demonstrate the effectiveness of the proposed model on two environments, surpassing previous benchmarks such as SAC and DrQ.\n\n1. This paper is clearly written and easy to follow.\n2. Based on the separated models architecture, this paper proposes multiple effective loss functions to focus on task-relevant features in visual-based RL generalization.\n3. The authors provide detailed validations on the DMC environment and robotic manipulation tasks. They demonstrate the advantages of the proposed loss terms across multiple tasks in DMC (Table 3) and showcase the state-of-the-art performance of SMG (Table 1, 2).\n\n1. While the paper compares the performance with model-free RL methods, it would be beneficial to also include a comparison with model-based RL methods. Previous works such as DreamerPro [1], Iso-Dream [2], and Denoised-MDP [3] have addressed visual distractions to enhance the generalization ability of RL agents.\n\n[1] Dreamerpro: Reconstruction-free model-based reinforcement learning with prototypical representations.\n\n[2] Iso-Dream: Isolating Noncontrollable Visual Dynamics in World Models.\n\n[3] Denoised mdps: Learning world models better than the world itself.\n\n2. The paper lacks sufficient discussion and analysis of its limitations. \n3. The serial numbers in some figures appear to be somewhat disorganized.\n\nPlease see the weaknesses section."
},
{
"confidence": 4,
"rating": 5,
"review_id": "15tbehfjAC",
"review_text": "This paper presents a novel approach called SMG (Separated Models for Generalization) to improve generalization in visual-based reinforcement learning (RL). The approach works by using separate foreground and background encoders/decoders and employing a mask to isolate task-relevant regions. In addition, it also applies four additional losses(mask ratio, background, Q-value and empowerment losses) to to enhance the model’s ability to distinguish between two types of representations. To make the learned models generalize to different visual styles, it introduces attribution augmentation and consistency losses. The authors position this as a plug-and-play method that can enhance existing RL algorithms' generalization capabilities.\n\nExperiments show SMG outperforms baseline methods, particularly in video-background settings where it maintains performance even with significant visual changes. Ablation studies validate the importance of each component.\n\nThe main contributions are:\n\n- SMG: A separated model architecture with two branches to extract task-relevant and task-irrelevant representations from visual observations.\n\n- Two consistency losses to guide the agent's focus on task-relevant areas across different scenarios.\n\n- Strong performance on DMControl benchmark tasks, especially in video-background settings.\n\nThis paper has several strengths:\n\n- SMG achieves state-of-the-art performance on the DMControl Generalization Benchmark, particularly excelling in the challenging video-background settings. This demonstrates the practical effectiveness of the approach.\n\n- SMG is a plug-and-play method that can enhance existing RL algorithms' generalization capabilities. It is designed to be easily integrated with existing off-policy RL algorithms, enhancing its practical value and potential for wide adoption.\n\n- This paper includes detailed ablation studies that validate the importance of each component in the SMG architecture, providing insights into the method's workings.\n\n- This paper is well-written. And it also provides clear visualizations of the reconstruction process, helping readers understand how SMG extracts and utilizes task-relevant features.\n\nThis paper has several weaknesses:\n\n- My major concern is the overclaim made by this paper. While it claims to address the generalization gap in visual-based reinforcement learning, the method proposed primarily tackles scenarios where only the backgrounds differ. However, visual generalization challenges are more diverse and include variations such as different lighting conditions and textures, which are common in real-world robotics applications. These scenarios appear to be overlooked in this paper.\n\n- SMG introduces a lot of loss terms and associated hyperparameters, which could complicate tuning in practical applications.\n - Specifically, the mask ratio $\\rho$ appears to be crucial for performance, as it is the sole factor preventing the model from classifying everything as foreground. Given that $\\rho$ represents the ratio between the foreground and the entire image, it likely necessitates per-task tuning, which could prove to be challenging and not scalable.\n\n- The foreground consistency loss, as discussed in Section 3.3, heavily depends on the predicted mask to construct the augmented observation. During the initial stages of training, this process relies on potentially inaccurate mask predictions and attributions. 
Although the authors describe this as a bootstrapping process, further analysis regarding its stability and potential failure modes would be beneficial.\n\n- The paper could be strengthened by considering a broader range of baselines. For example:\n - Recent studies [1] suggest that visual encoders pre-trained on large-scale image datasets can improve the visual robustness of a policy. This paper does not make any comparisons with visual pre-training methods.\n - Large vision foundation models like SAM [2] could potentially be utilized to provide supervision for generating foreground masks. Would this approach be more effective than training a mask predictor from scratch?\n\n\n- The additional computation overhead introduced by the extra modules is concerning.\n - The architecture, which involves separate models, essentially doubles the number of parameters compared to baseline methods. Although the authors argue that the performance improvements are due to the novel architecture rather than the increased number of parameters, this could still be problematic for practical applications with limited computational resources.\n - Training time: The reported wall time for SMG is significantly longer than that of the baseline methods (22 hours versus 8-13 hours for 500,000 steps).\n\n[1] Hansen, Nicklas, et al. \"On pre-training for visuo-motor control: Revisiting a learning-from-scratch baseline.\" arXiv preprint arXiv:2212.05749 (2022).\n\n[2] Kirillov, Alexander, et al. \"Segment anything.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\n\nSee the \"Weaknesses\" section. Some additional questions are noted below:\n\n- The terminology used for \"foreground\" and \"background\" is somewhat confusing. To clarify, \"foreground\" actually refers to the task-relevant parts of the image, while \"background\" refers to the task-irrelevant parts, correct?\n- The necessity for background reconstruction is unclear. The authors claim that \"improving the precision of background prediction can consequently enhance the foreground as well,\" but a more detailed explanation of this assertion would be beneficial."
},
{
"confidence": 3,
"rating": 7,
"review_id": "wOjwxilLJq",
"review_text": "The authors propose a novel objective to improve robustness of the visual encoder in RL to background noise and to color perturbations. First, the authors split the visual encoder into two models: background encoder/decoder and foreground encoder/decoder. The proposed training objective contains multiple components: \n- overall reconstruction loss that combines outputs of the background and the foreground decoders modulated by a mask;\n- mask ratio loss that prevents the foreground mask from taking up too much of the image;\n- background reconstruction loss that uses the learned mask to generate a new data sample;\n- q-value loss that makes the foreground representation capture value information;\n- empowerment loss that makes the foreground representations capture the agent actions;\n- foreground and q-value consistency losses that make sure that changing the background (using the learned mask) doesn't change the foreground features and q-values\n\nThe method is tested on DMC generalization benchmark and on robotic manipulation tasks.\n\n- The method performs really well with various distractors;\n- The idea of re-using the learned masks for augmentations is interesting and, as far as I can tell, novel;\n\n- The writing is a bit sloppy, with many typos and confusing sentences;\n- The resulting objective is too complex and has too many terms;\n- No comparison to TIA, although the presented method is quite similar. Was that because you only compare to model-free methods?\n\nTypos (some of them, I didn't write down all of them, please run a spell checker on the text):\nline 13: achieving free from overfitting : not clear what this means\nline 38: further strengths -> further strengthens\nline 100: focused in -> focused on\nline 536: Comparision -> comparison\n\n- In your objectives, you're maximizing the mutual information between foreground representations of two consecutive states and the action that was taken between them. Have you tried minimizing MI between background representations and actions and or rewards? This could be done with information bottleneck method for example. If you haven't tried this, do you this can help?"
}
] | |
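The reviews above paraphrase SMG's core mechanism: a learned mask combines a foreground (task-relevant) reconstruction with a background (task-irrelevant) one, and a mask-ratio term keeps the foreground from covering the whole image. The PyTorch sketch below shows one plausible form of these two terms; the exact loss weights, shapes, and the L1 form of the ratio penalty are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def smg_reconstruction_losses(obs, fg_out, bg_out, mask, rho):
    """Sketch of SMG-style cooperative reconstruction.

    obs, fg_out, bg_out: tensors of shape (B, C, H, W);
    mask: tensor of shape (B, 1, H, W) with values in [0, 1];
    rho: target foreground-to-image area ratio.
    """
    # Mask-modulated combination of the two decoder outputs.
    recon = mask * fg_out + (1.0 - mask) * bg_out
    recon_loss = F.mse_loss(recon, obs)
    # Mask-ratio term: keep the expected foreground area near rho, preventing
    # the trivial solution of labeling everything as foreground.
    ratio_loss = (mask.mean() - rho).abs()
    return recon_loss, ratio_loss
```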
wyYsCI3K7U | LoRANN: Low-Rank Matrix Factorization for Approximate Nearest Neighbor Search | Approximate nearest neighbor (ANN) search is a key component in many modern machine learning pipelines; recent use cases include retrieval-augmented generation (RAG) and vector databases. Clustering-based ANN algorithms, which use score computation methods based on product quantization (PQ), are often used in industrial-scale applications due to their scalability and suitability for distributed and disk-based implementations. However, they have slower query times than the leading graph-based ANN algorithms. In this work, we propose a new supervised score computation method based on the observation that inner product approximation is a multivariate (multi-output) regression problem that can be solved efficiently by reduced-rank regression. Our experiments show that on modern high-dimensional data sets, the proposed reduced-rank regression (RRR) method is superior to PQ in both query latency and memory usage. We also introduce LoRANN, a clustering-based ANN library that leverages the proposed score computation method. LoRANN is competitive with the leading graph-based algorithms and outperforms the state-of-the-art GPU ANN methods on high-dimensional data sets. | https://openreview.net/pdf/818415f345eae75fa997bddb07da900aca844fb4.pdf | [
{
"confidence": 5,
"rating": 7,
"review_id": "hYN2ViwRzs",
"review_text": "This paper investigates approximate nearest neighbor (ANN) search, where, given a collection $\\mathcal{X}$ of points in $\\mathbb{R}^d$, the task is to find the top $k$ data points that are closest to a query point $q$ according to some similarity or dissimilarity measure (denoted by $\\delta(\\cdot, \\cdot)$), such as inner product. There are many classes of algorithms in existence[1], with this particular work falling into the clustering-based paradigm.\n\nIn clustering-based (aka Inverted File or IVF) ANN search, $\\mathcal{X}$ is partitioned into a set number of clusters, often using a geometric clustering algorithm such as (spherical) KMeans, with every cluster represented using some sketch of the cluster such as its mean. When presented with $q$, the algorithm first identifies $\\texttt{nprobe}$ clusters to search by ranking the clusters according to the distance between $q$ and their representative points. It then computes $\\delta(q, \\cdot)$ with points within the $\\texttt{nprobe}$ clusters, and returns the top $k$ points from that set.\n\nThis work concerns the second step. Typically the computation of $\\delta(q, \\cdot)$ uses Product Quantization (PQ) to reduce memory consumption and to perform the distance computation efficiently. Instead, this work reduces the dimensionality of the data matrix within each cluster using its low-rank approximation. The key insight is that, the low-rank approximation is constrained to the space of rank $r$ matrices that predict the inner products well on a specific query distribution.\n\n\n[1] \"Foundations of Vector Retrieval\" by S. Bruch. Springer.\n\n* The proposed method relies on a very simple yet effective method for supervised dimensionality reduction in the context of ANN search\n* The paper is easy to read and arguments straightforward to follow\n* Results are encouraging\n\nPost-discussion Update: The authors have addressed my concerns around the experimental setup, and have expressed interest in adopting a more clear narrative and framing of their contributions.\n\n-----------------\n\n* Presentation:\n - I think the authors can shed quite a bit of fluff by positioning the work as I did in my summary. This work's contribution is very much in the speeding up and improving the accuracy of the score computation phase in clustering-based/IVF ANN search. Presented that way, the authors can immediately focus on the regression problem instead, and not introduce distractions such as the details of clustering, the importance of MIPS (section 2.1), and more. It'd make for a cleaner presentation of your idea, and lets your readers understand the scope of your contributions more clearly.\n - As a minor point, Theorem 1 is a vacuous statement. It's neither necessary to explain the findings of the paper, nor is it insightful enough to birth new research directions. Perhaps you can move it entirely to the appendix if you insist on including it in the work.\n - It must be noted that the method presented in this work is supervised. That is a critical differentiating factor between LoRANN and existing methods such as PQ and Scann.\n\n* Methodology: One of the interesting insights that led to Scann is that not all inner products are equally important. For a quantization method to be successful, it needs to preserve the inner product between $q$ and high-ranking data points better than inner product between $q$ and low-ranking points. 
In your work, you model the problem as regression, and attempt to minimize the error of inner production approximation equally for all data points. What motivates this uniform weighting? Have you considered a ranking formulation of the problem rather than regression? There is a vast literature on learning-to-rank which, in fact, is very relevant to your idea, but where Scann's insight is baked into its machinery/objective.\n\n* Experiments: Because the methodology is very straightforward and the novelty is minimal, I expect a much stronger experimental evaluation of the proposed method. Here are a few points to consider:\n - Your main experiments conflate two orthogonal axes of evaluation: effect of clustering vs effect of score computation; this I believe stems from the way you present your work. Your contribution, as I noted above, is to the score computation phase of IVF-based ANN search. To evaluate your contributions fairly against SOTA IVF methods, you must partition the data once. Given this fixed set of partitions, you can directly compare the efficacy of LoRANN against PQ and Scann's quantization protocol. By running each method independently as you do now, such that each produces its own partitioning of the data separately, you run the risk of conflating the effect of clustering on IVF's accuracy with the effect of the specific choice of dimensionality reduction/quantization. As it stands, I cannot deduce the exact reason why your method should work better.\n - You are also comparing a supervised method that adapts to a query distribution, with unsupervised baselines. Not only is it not a fair comparison, your results are also not informative. It is not surprising that your method does well: you give it an unfair advantage (as confirmed by Figure 1 - left) by finding a matrix that can predict inner products on *a specific query distribution*. A more reasonable experiment would be to (a) compare a variant of LoRANN that's trained on the data points only (i.e., without training queries) with other IVF methods, and (b) incorporating the query distribution into Scann (its objective can use information about the query distribution). There are other methods that can use a query set to improve quantization ([2,3] are a couple of examples).\n - As a very simple baseline, consider partitioning the data using centroids obtained from a partitioning of queries!\n - Frankly, a comparison with graph methods is nice, but is rather tangential. I encourage you to contrast your method with other IVF methods first, focus your discussion to justifying your proposal against SOTA IVF methods, and then conclude your work with a comparison with graph methods for completeness.\n\n\n[2] \"Query-Aware Quantization for Maximum Inner Product Search\" by Zhang et al. AAAI 2023.\n \n[3] \"A Learning-to-Rank Formulation of Clustering-Based Approximate Nearest Neighbor Search\" by Vecchiato et al. SIGIR 2024.\n\nMy questions mainly concern your experimental evaluation:\n* Setup: What's the training set used to train LoRANN? You have kindly given statistics about each dataset, but sadly did not include any information about the size of the query sets.\n* Figure 1: What is the size of the initial candidate set in the right figure, where reranking is enabled? 
It is important to know this because the size of the initial set can explain the small difference between the different curves (e.g., if you retrieve a very large set followed by re-ranking, the accuracy of each method pre-reranking becomes less and less important)\n* PQ is obviously sensitive to the bitrate, a hyper-parameter. Can you elaborate how LoRANN holds up against PQ as you sweep the rank parameter and PQ's code size, in terms of speed and memory usage?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "7hkuCLAse0",
"review_text": "This paper introduces a new method for the nearest neighbor search problem. Leveraging the low-rank assumption, the authors combine low-rank matrix factorization, clustering, and quantization to enhance the speed of nearest neighbor search. The authors conducted extensive experiments to demonstrate the advantages of their method over numerous baselines.\n\n1. The authors conducted extensive experiments to compare their methods with other baselines.\n2. The method proposed by the authors is easy to follow and implement.\n\n1. It seems that all the techniques mentioned in this paper have already known to be useful for nearest neighbor search.\n2. As shown in Figure 2, all the components contribute to the final results. I don't see any reason why any component applied there is unique to the new algorithm. For example, the clustering and 8-bit quantization techniques appear to be applicable to any existing nearest neighbor search algorithm or library. Thus, I question whether it is fair to employ too many techniques when comparing with other standard nearest neighbor search libraries.\n\n1. Regarding the low-rank approximation, I don't understand why this method is fundamentally different from first performing dimension reduction on the dataset and then applying any standard nearest neighbor search algorithm."
},
{
"confidence": 1,
"rating": 8,
"review_id": "1BMtdAST72",
"review_text": "The paper describes a method for computing approximate nearest neighbors in\nhigh dimensions. Computing nearest neighbors is a classical problem in\ncomputational geometry, with applications in many areas of computer science.\nThe classical solutions in low dimensions do not generalize to high dimensions.\nThe approach in the paper has two main ideas: the first is performing k-means\nclustering, computing nearest neighbors on the means, and then computing \nmore accurate nearest neighbors inside the cluster.\nThe second is reducing the computation in each cluster to multivariate\nregression which can be solved approximately by low rank matrix factorization.\n\nThe result appears to be very useful in many applications.\n\nUnfortunately I am not an expert in this field and cannot comment on how this\nresult compares to the current state of the art.\n\nN/A"
},
{
"confidence": 3,
"rating": 6,
"review_id": "wJ3ZNB3B3q",
"review_text": "The paper presents LoRANN, a novel algorithm for Approximate Nearest Neighbor (ANN) search that leverages low-rank matrix factorization and k-means clustering. The core idea is to approximate the ordinary least squares solution of the inner product computation via reduced-rank regression. The authors also introduce a quantized 8-bit version of LoRANN, which is memory efficient and performs well on high-dimensional data. The experiments demonstrate that LoRANN outperforms existing methods on both CPU and GPU.\n\nThe authors provide extensive experimental results, reporting that their method outperforms leading product quantization-based algorithms and has faster query times than graph-based methods at certain recall levels.\n\nThere exists room for improvement in the visual presentation in this paper. Additionally, it is best to keep the starting or ending points consistent to better compare all methods.\n\n At different recall levels, LoRANN is sometimes faster, and sometimes slower compared to other methods (GLASS, CAGRA). The authors should analyze the reasons that lead to this phenomenon\n\nDoes LoRANN provide any theoretical guarantees on approximation quality or search time?"
}
] | |
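The reviews above describe LoRANN's score computation as reduced-rank regression: per cluster, fit a rank-$r$ predictor of the inner products on a training query distribution, then score only the probed clusters at query time. A minimal NumPy sketch of classical reduced-rank regression in this role follows; the variable names, the plain lstsq/SVD solver, and the brute-force centroid ranking are illustrative assumptions, not the library's actual code.

```python
import numpy as np

def fit_rrr_scorer(Q, Xc, r):
    """Fit a rank-r predictor of inner products for one cluster.

    Q:  (m, d) training queries; Xc: (n_c, d) points in the cluster.
    Returns U (d, r) and W (r, n_c) such that scores(q) ~= (q @ U) @ W.
    """
    Y = Q @ Xc.T                                   # targets: true inner products
    B_ols, *_ = np.linalg.lstsq(Q, Y, rcond=None)  # multivariate OLS, (d, n_c)
    # Reduced-rank step: project the fitted values onto their top-r right
    # singular vectors (the classical closed-form RRR solution).
    _, _, Vt = np.linalg.svd(Q @ B_ols, full_matrices=False)
    V_r = Vt[:r].T                                 # (n_c, r)
    return B_ols @ V_r, V_r.T

def ivf_query(q, centroids, factors, ids, nprobe, k):
    """IVF-style query: score only the nprobe nearest clusters."""
    near = np.argsort(((centroids - q) ** 2).sum(axis=1))[:nprobe]
    cand, scores = [], []
    for c in near:
        U, W = factors[c]
        scores.append((q @ U) @ W)                 # O(r(d + n_c)) per cluster
        cand.append(ids[c])
    scores, cand = np.concatenate(scores), np.concatenate(cand)
    return cand[np.argsort(-scores)[:k]]
```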
ww62xltEfB | A provable control of sensitivity of neural networks through a direct parameterization of the overall bi-Lipschitzness | While neural networks can enjoy an outstanding flexibility and exhibit unprecedented performance, the mechanism behind their behavior is still not well-understood. To tackle this fundamental challenge, researchers have tried to restrict and manipulate some of their properties in order to gain new insights and better control on them. Especially, throughout the past few years, the concept of *bi-Lipschitzness* has been proved as a beneficial inductive bias in many areas. However, due to its complexity, the design and control of bi-Lipschitz architectures are falling behind, and a model that is precisely designed for bi-Lipschitzness realizing a direct and simple control of the constants along with solid theoretical analysis is lacking. In this work, we investigate and propose a novel framework for bi-Lipschitzness that can achieve such a clear and tight control based on convex neural networks and the Legendre-Fenchel duality. Its desirable properties are demonstrated with concrete experiments that illustrate its broad range of applications. | https://openreview.net/pdf/3b2ddda7e4b50b265094c5f987886a84be0b3958.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "QCWTSBhfci",
"review_text": "This paper investigates and proposes a novel bi-Lipschitz neural network architecture. This architecture provides a simple, direct and tight control of the Lipschitz and inverse Lipschitz constants through the use of two parameters, the ideal minimum, equipped with theoretical guarantees. To devise their architecture the authors exploit convex neural networks and the Legendre-Fenchel duality. The authors also propose a variant of their bi-Lipschitz architecture that is more scalable by exploiting partially input convex neural networks. Finally, the authors propose a set of experiments to showcase the utility of our model in concrete machine learning applications, namely, uncertainty estimation and monotone problem settings and show that it can improve previous methods.\n\n- The paper is well written. After writing a clearly structured related work (with an extensive background and related work proposed in Appendix A), the authors propose their new design and explicitly explain how the forward pass of their network is computed as well as the expressivity and how the backpropagation can be done. \n- The authors acknowledge that the computational cost of their approach can pose serious limitation and propose to overcome this problem with partially input convex neural networks.\n\n- It would be interesting of the authors could provide experiments with both their architectures with respect to computational cost, and highlight time of training etc.\n\nNA."
},
{
"confidence": 3,
"rating": 6,
"review_id": "5WRSfjcRzm",
"review_text": "This paper proposes a novel neural network architecture called BLNN (Bi-Lipschitz Neural Network) that allows direct control and parameterization of the overall bi-Lipschitzness of the network. The main contributions include: i) a framework that allows tight control of Lipschitz and inverse Lipschitz constants of networks via using convex neural networks and the Legendre-Fenchel transformation, ii) comprehensive theoretical analysis, iii) empirical evaluation showing the nice performance of BLNN on tasks like function fitting, out-of-distribution detection, and monotone regression.\n\n**Originality:**\n\nThe paper presents a novel approach to constructing bi-Lipschitz neural networks that is distinctly different from existing methods. The use of convex neural networks and Legendre-Fenchel transformation to directly parameterize overall bi-Lipschitzness is quite novel. The extension (e.g. partially bi-Lipschitz networks, etc) is also new.\n\n\n**Quality:**\n\nThe quality of the paper is good. The authors provide detailed proofs and analyses for their key claims, including the bi-Lipschitz properties of their construction and the expressive power of the resulting networks. The experiments cover various scenarios, from simple function fitting to uncertainty estimation and monotone regression. The results are quite competitive.\n\n\n**Clarity:**\n\nThe paper is generally well-structured and clearly written. However, given the technical nature and the length of the paper, understanding the paper fully is still a tough task.\n\n**Significance:**\n\nThe paper's contributions are significant in its solid theoretical developments. The significance is further underscored by the improved performance on tasks like out-of-distribution detection and monotone function learning. In conclusion, this paper presents a novel approach to an important problem in deep learning theory and practice.\n\n1. Computational Complexity: A detailed analysis of time and space complexity compared to traditional networks can be helpful.\n\n2. Scalability and Practical Implications: There's insufficient exploration of how the method scales to very large networks or complex datasets (e.g. TinyImageNet).\n\n3. Hyperparameter Sensitivity: More discussions on this issue will be beneficial.\n\n4. The paper could be more explicit about scenarios where the theoretical guarantees might not hold, and could explore potential extensions to other network architectures beyond feedforward networks.\n\n1. How does the proposed method perform on larger, more complex datasets like TinyImageNet or ImageNet?\n\n2. Can the authors clarify the computational complexity of their approach?\n\n3. Can the authors provide a more comprehensive study on hyperparameter sensitivity?\n\n4. Can the authors comment on other network structures (e.g. implicit models, DEQs, etc)?"
},
{
"confidence": 4,
"rating": 7,
"review_id": "wDIICu5jAD",
"review_text": "This paper proposes to control the bi-Lipschitzness of a neural-network by parameterizing the output by the Legendre-Fenchel-Dual. This involves parameterizing a strongly convex function and computing the minimum of that function in the forward pass. Several benchmarks are studied in simple regression tasks and uncertainty quantification.\n\n-The framework is interesting because it parameterizes bi-Lipschitz networks in a way that is not layer-wise and instead takes advantage of the Legendre-Fenchel transform LFT / convex conjugate of parameterized strongly-convex functions (ICNN), which only modifies the output of the output.\n\n-Computing the LFT of a given function can be costly, however the paper offers a non-asymptotic bound for the Lipschitz constant and tractable gradient.\n\n-The experimental results show a considerable improvement in tightness and regularity over other Lipschitz controlled networks like spectral normalization, AOL and Sandwich layers on small regression tasks. In particular BiLipNet behaves a lot better when the Lipschitz constant is overestimated in existing parameterizations.\n\n-Computing the LFT seems to be quite expensive, hence why the experiments are only on simple 2d problems and fashion-MNIST. For this reason I'm doubtful that it will be used for any large-scale network training pipelines where tight Lipschitz control and estimation is challenging.\n\n-The provable approximation class is limited to alpha-strongly monotone functions and is the derivative of some function almost everywhere. Lipschitz layers like AOL, SLL and Sandwich layer are all solutions to the LipSDP framework which only requires the activations themselves to be alpha-strongly monotone for alpha >= 0 (Fazlyab et al., 2019).\n\n-Is it also necessary that BLNN is a strongly monotone function? It seems that many of the regression experiments involve monotone target functions (figure 2 and 3), but I'm not sure if that is because BLNN is not capable of representing monotone functions or just not a great representer due to your approximation theorem. If it can represent non-monotone functions, it would be interesting to see a simple regression comparison to SLL, AOL, etc. Answering this question will greatly help my evaluation.\n\n-The Lipschitz parameterization of SLL, AOL, and Sandwich layers commonly uses compositions of 1-Lipschitz layers for the application of certified robustness. How would the BLNN parameterization compare to existing 1-Lipschitz layer networks in the certified robustness setting? I’d imagine BLNN might be much more expressive than composed 1-Lipschitz layers which could have a big impact.\n\n-Have you considered amortizing the LFT computation as done in this paper? https://arxiv.org/abs/2210.12153 \n\n-I'm curious if there is any possibility of extending BLNN to convolutional layers? These settings are interesting for larger image classification problems like CIFAR10 and Imagenet."
}
] | |
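The mechanism the reviews of this paper describe (a bi-Lipschitz map obtained by differentiating the Legendre-Fenchel conjugate of a parameterized strongly convex function, i.e., the argmin computed in the forward pass) rests on a standard convex-duality fact. The block below is textbook convex analysis, given here only for orientation, and is not a reproduction of the paper's own theorems or constants.

```latex
% Legendre-Fenchel conjugate of a convex f : R^d -> R
f^*(y) = \sup_{x}\; \langle x, y \rangle - f(x)
% If f is \alpha-strongly convex with \beta-Lipschitz gradient, the map
% y \mapsto \nabla f^*(y) = \arg\max_x \langle x, y \rangle - f(x)
% is bi-Lipschitz with explicit constants:
\tfrac{1}{\beta}\,\lVert y_1 - y_2 \rVert
  \;\le\; \lVert \nabla f^*(y_1) - \nabla f^*(y_2) \rVert
  \;\le\; \tfrac{1}{\alpha}\,\lVert y_1 - y_2 \rVert .
```

This is consistent with the reviewers' summaries: controlling the strong-convexity and gradient-smoothness parameters of the inner convex network directly controls both Lipschitz constants of the overall map.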
wvQHQgnpGN | Solving Zero-Sum Markov Games with Continuous State via Spectral Dynamic Embedding | In this paper, we propose a provably efficient natural policy gradient algorithm called Spectral Dynamic Embedding Policy Optimization (\SDEPO) for two-player zero-sum stochastic Markov games with continuous state space and finite action space. In the policy evaluation procedure of our algorithm, a novel kernel embedding method is employed to construct a finite-dimensional linear approximation to the state-action value function. We explicitly analyze the approximation error in policy evaluation, and show that \SDEPO\ achieves an $\tilde{O}(\frac{1}{(1-\gamma)^3\epsilon})$ last-iterate convergence to the $\epsilon$-optimal Nash equilibrium, which is independent of the cardinality of the state space. The complexity result matches the best-known results for global convergence of policy gradient algorithms in the single-agent setting. Moreover, we also propose a practical variant of \SDEPO\ to deal with continuous action spaces, and empirical results demonstrate the practical superiority of the proposed method. | https://openreview.net/pdf/12158a5941ae353d09451219c72139c623a117bb.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "gCsg2TAdgK",
"review_text": "This paper studies the two-player zero-sum stochastic Markov games (2p0s-MGs) with large scale or continuous state spaces. These problems have a large cardinality and function approximation methods are needed. The paper consider a spectral dynamic embedding method and proposed SDEPO. This methods utilized the transition dynamics in the construction of the state-space value function. SDEPO is able to converge with order $1/\\epsilon$, which matches the optimal rate in single agent RL.\nTheorems are provided for the last iterate convergence of the SDEPO algorithm. The effectiveness of the algorithm has been verified in games against baseline methods.\n\nGenerally the paper is well structured, but section 3-4 should be better explained while section 5 and 6 focused on the \"practical algorithms\" is stretched quite far from the analytical results in the previous sections.\n\nThe function approximation approach for markov games is a necessity for those problems with large/infinite state cardinality. It is indeed true that the dynamics of the problem was not utilized in previous methods. This algorithm seems to be the first work addressing this.\n\nThis work adapted the spectral dynamic embedding for stochastic nonlinear control problems and proposed SDEPO, the motivation is clear and well stated.\n\nThe only evaluation for the proposed algorithm is a rate of success in playing games with baseline algorithms. To the reviewer, this seems to be very limited. For a submission with strong theoretical focus, current result fails to validate the convergence properties of the proposed algorithm.\n\nThe sections for main results are not very well-written and is a bit difficult to read, more explanation would be appreciated. Although this could be due to the page limit.\n\nAssumption 5 is in fact quite strong, a brief discussion on the impact and reasoning should be provided.\n\nSection 5 and 6 seems a bit rushed and is intended to bring out the neural networks, the prior sections discussed the setting with tabular actions, where in these sections the action space is seen as continuous and more algorithms have been added, with no analytical results. I suggest the authors focusing on the existing setting with better presentation, explanation and more experiments.\n\nAnother problem this paper did not address is what are the current existing algorithms involving dynamics and function approximation in the single agent setting. The single agent RL with function approximation literature should be somewhat addressed in general.\n\nWhat is the effect of truncation w.r.t. the later regularity condition assumptions? Does a more limiting truncation negatively impact these assumptions on the problem?\n\nWhat is the reason for the consideration of one-sided $\\epsilon$−optimal Nash equilibrium? The author stated that many existing works also consider this, but an explanation would be appreciated."
},
{
"confidence": 3,
"rating": 4,
"review_id": "aTX1g2NnQ4",
"review_text": "This paper proposes a new algorithm named Spectral Dynamic Embedding Policy Optimization (SDEPO) to solve the zero-sum Markov games with continous state and finite actions. The convergence analysis indicates that the proposed method achieves the best-known sample complexity as the case of finite-state space; this paper is the first theoretical result in handling the continuous state space with known dynamic and infinite horizon.\n\nThis paper is the first result for solving the NE of infinite-horizon two-player zero-sum Markov games with continuous state space when the dynamic is known. Moreover, this paper resents sufficient introduction on the technical backgrounds and preliminaries. All assumptions are clearly listed. Lastly, the theoretical results are verified using emprical experiments.\n\n1. The Assumption 1 is not reasonable. It says, whatever the state $s$ and the action $(a,b)$ are, the agent can move to any state $s'$ with a positive probability. Please correct me if I am wrong.\n\n2. I am confused about what is new in the Spectral Dynamic Embedding method. It seems that both Bochner and Mercer theorem are well-known. This paper simply applies them to represent the transition probability and the reward using some kernels. Then everything is the same as traditional method in RL. \n\n3. A mild comment on Assumption 3: Since the optimal policy might be deterministic, it means that $\\pi(a|s)$ is likely to be zero for some $a$. During the training, the policy $\\pi_k$ will tend to the optimal policy; the mass at non-optimal action will also approach to $0$. It means if $\\underline{c}$ is larger than $\\epsilon$, then $\\pi_k$ will never converge to the optimal action in the sense of $L_\\infty$ norm. From my understanding, the author needs to set $\\underline{c}$ to be $\\epsilon$ and it won't affect the complexity.\n\nQ1: Can the author justify the use of Assumption 1? It seems to be unrealistic in RL. \n\nQ2: This paper seems to be a simple combination of linear MDP + policy gradient. What is the novel part of this work? I feel hard to consider representing $\\mathbb{P}$ and $r$ using kernel methods as a new thing."
},
{
"confidence": 2,
"rating": 7,
"review_id": "Bg6fXGFWFy",
"review_text": "The authors introduce an innovative approach to solving 2p0s-MGs with continuous state spaces, providing both theoretical guarantees and practical improvements over existing methods. The SDEPO algorithm and its variants offer efficient and scalable solutions for complex Markov games, potentially applicable to various domains in reinforcement learning.\n\n1. This paper proposes a new Spectral Dynamic Embedding Policy Optimization algorithm that effectively addresses two-player zero-sum Markov games with continuous state space and finite action space.\n2. To handle the finite action spaces, a practical variant of SDEPO is proposed to manage continuous action spaces, with empirical results showcasing its superior performance.\n3. The complexity result of SDEPO matches the best-known results for policy gradient algorithms in the single-agent setting, proving its efficiency.\n\n1. The spectral embedding methods can be computationally intensive in practice due to the complexity of handling spectral dynamic embeddings.\n2. Why were these specific feature generation methods chosen? Is the proposed method sensitive to feature generation methods?\n3. The experiments are somewhat limited, expanding the empirical section to include more complex and diverse scenarios would significantly strengthen the paper.\n\nPlease see the weakness part."
}
] |
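Review aTX1g2NnQ4 above questions the novelty of representing the transition kernel and reward via the Bochner and Mercer theorems. For readers unfamiliar with the construction, here is a generic random Fourier feature sketch of the Bochner-side embedding. This is the standard Rahimi-Recht RFF recipe for a Gaussian kernel, shown only as background; the paper's actual spectral dynamic embedding and its truncation are not reproduced here.

```python
import numpy as np

def random_fourier_features(X, num_features, sigma, seed=0):
    """Standard RFF sketch: for a shift-invariant kernel (here Gaussian with
    bandwidth sigma), Bochner's theorem gives k(x, x') ~= phi(x) @ phi(x')."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Sample frequencies from the kernel's spectral density N(0, I / sigma^2).
    W = rng.normal(0.0, 1.0 / sigma, size=(d, num_features))
    # Random phases make the single-cosine estimator unbiased.
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)
```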