paper_id | title | abstract | pdf_url | reviews |
---|---|---|---|---|
wsqDJHPUHN | On the Ability of Developers' Training Data Preservation of Learnware | The learnware paradigm aims to enable users to leverage numerous existing well-trained models instead of building machine learning models from scratch. In this paradigm, developers worldwide can submit their well-trained models spontaneously into a learnware dock system, and the system helps developers generate a specification for each model to form a learnware. As the key component, a specification should characterize the capabilities of the model, enabling it to be adequately identified and reused, while preserving the developer's original data. Recently, the RKME (Reduced Kernel Mean Embedding) specification was proposed and is now the most commonly utilized. This paper provides a theoretical analysis of the RKME specification's ability to preserve the developer's training data. By modeling it as a geometric problem on manifolds and utilizing tools from geometric analysis, we prove that the RKME specification discloses none of the developer's original data and possesses a robust defense against common inference attacks, while preserving sufficient information for effective learnware identification. | https://openreview.net/pdf/ca2805a8aaa53ff63c596616b50b597d8d958618.pdf | [
{
"confidence": 2,
"rating": 7,
"review_id": "6oh9D3njlw",
"review_text": "The authors theoretically analyze the properties of the learnware paradigm. In the learnware paradigm, a model developer can provide their trained models for other developers to use. To enable re-use, along with the model the developer provides a model specification that adequately represents the model's training data. This allows developers looking for models to find those that are most useful for their tasks of interest. Importantly, this specification should preserve the privacy of the original training data of the model.\n\nWhile, the reduced kernel mean embedding specification has been proposed in the literature, a theoretical analysis that guarantees the protection of the model's training data is missing. The authors prove that RKME can simultaneously have the following three desirable properties:\n* It does not contain any of the original training data points\n* It is robust against inference attacks\n* It still preserves sufficeint information about the training data for effective use as a learnware specification.\n\nTo the best of my knowledge, the results provided by the authors are novel. While I am not an expert neither in learnware systems nor in reproducing kernels, the results provided and the tools used in the analysis are non trivial. The result should be of high significance to the learnware community, especially since disclosing a model specification may carry risk if there are no formal guarantees. In terms of writing, the authors introduce the learnware problem and their contribution in a clear manner in Sections 1 and 2. The figures presenting the trade-offs between the different choices of $m$ are also very helpful for readers who may not be able to follow the theoretical results.\n\nI think the clarity of Sections 3 and 4 can be significantly improved, so that they can be more approachable to a broader audience. \n\nFor Section 3, the authors present core results that are the basis of Theorems 3.4 and 3.5 but the connection to these theorems is not particularly clear. I would advise the authors to first explain the proof sketch and then present the key lemmas and how they connect to the proof sketch. See also the questions section for more.\n\nFor Section 4, I understand that the setting is even more complicated compared to Section 3 but providing some more intuition behind Theorem 4.2, especially the parts that are not already covered in Section 3, would also be helpful.\n\nI am a bit confused with regards to Theorem 3.4: Is Theorem 3.4 proven for the $\\delta=0$ case or is it proven for a specific $\\delta$? Intuitively, the overlap of a continuous distribution and a discrete distribution of synthetic data should be zero for $\\delta=0$ regardless of how the number of discrete synthetic points. So I feel I am missing something.\n\nAlso I am still not sure which is the $\\delta$ chosen for Theorem 3.5. Can you explain this more?\n\nI am a bit confused about the linkability privacy game. It seems like the game can be technically split in two games, one when $b=0$ and one when $b=1$. Given that the adversary knows $b$, these two subgames are completely independent. In addition, the subgame of $b=0$ is trivial because the adversary trivially knows the answer. I am thus unsure what is the value of having the $b=0$ subgame at all. I guess my question here, is why is the random $b$ introduced in the game?"
},
{
"confidence": 2,
"rating": 6,
"review_id": "rtMAmAZIE6",
"review_text": "The paper presents the \"Reduced Kernel Mean Embedding (RKME)\" specification, which represents a model's capabilities while ideally preserving the privacy of the original training data. The paper provides a theoretical analysis and proves that the RKME specification can protect the training data against common inference attacks and maintain its utility in the learnware search process.\n\n* This paper aims to resolve the crucial data privacy challenge while enabling the effective reuse of pre-trained models under the learnware setting. \n* This paper provides a comprehensive theoretical framework to prove the efficacy of RKME in preserving privacy. The proofs are detailed and robust, offering a strong theoretical foundation for the claims about data privacy and security against inference attacks.\n* The paper also discusses the practical implementation of the RKME specification in learnware systems.\n\n* The paper focuses on theoretical proofs and lacks extensive empirical evidence to support the effectiveness of the RKME specification in real-world scenarios.\n* The analysis primarily hinges on the assumption that the RKME specification works optimally with certain types of data distributions and kernel functions.\n\nI do not have particular questions as I am unfamiliar with the field."
},
{
"confidence": 2,
"rating": 6,
"review_id": "eIkZpt6Nbg",
"review_text": "The paper analyzes the data preserving properties of Learnware, wan interesting idea involving a marketplace of pretrained ML models. In Learnware, new inference tasks are matched to ML models capable of solving that task without any raw data being shared. Rather, the method leverages RKME to construct a smaller, synthetic representation of the model's distribution over inputs and outputs. In this work, the paper explores whether Learnware is secure against data privacy attacks (linkage, attribute inference) when using the Gaussian kernel and various assumptions on the data. More compact representations are shown to be harder to attack. However, this reduces model search (retrieval) quality inducing a tradeoff.\n\n+ Analysis of the ability of Learnware to resist privacy attacks against the dataset used to train the model makes the Learnware ecosystem more robust. Demonstrating the tradeoff between privacy and search (retrieval) quality is an intuitively clear result.\n\n+ The theoretical results and analyses seem novel to me, as far as I can tell. A brief search didn't turn up anything relevant. (However, this is outside my area of expertise so I'm unable to assess validity.)\n\n- The paper analyzes the privacy-preserving properties of Learnware. However, I remain unsure about the benefits of the core Learnware system. Reading through the recent references (\"Learnware: Small models do big\"), I'm left with many questions which are not really addressed in any of the papers. I don't see how Learnware is better than the existing model sharing infrastructure (model hub, data and model cards, benchmark results, open-source training and inference code). The existing ML model sharing infrastructure is widely used already and doesn't require the new user to even label any data first. Please see the questions below.\n\n- The Learnware ecosystem seems like a very niche area. Without additional details of system usage, it becomes difficult to assess the impact of contributions in this paper.\n\n- I'm not really equipped to comment on the quality of the theoretical analyses. That said, the paper could do a better job of describing how the analyses build on and fit into the larger body of work on related tasks.\n\n- Experiments exploring the tradeoff between data linkage protection and search performance would have been nice to have. Without these, I'm again left wondering if the existing ML model sharing infrastructure (which does not have this issue) is indeed better.\n\n- Is the Learnware market currently operational in the wild? Please provide additional details into which parts of the Learnware ecosystem are actually in use at this time vs hypothesized. For example, scale of daily uploads & downloads?\n\n- Please describe how the Learnware approach outperforms existing ML model-sharing infrastructure (model hubs, data cards, model cards, benchmarks, open source training and inference routines). For example, why is downloading an image segmentation model checkpoint off a model hub after reading through its data and model cards, benchmark and performance reviews, and trying it out in the online UI insufficient? How well does Learnware's \"anchor learnwares\" mechanism work in this situation compared to the approach above?\n\n- Which of the theoretical analyses or results included in this paper are novel compared to prior geometric analyses or privacy works? 
Please include references, if any, for the analytical techniques in the paper.\n\n- Is it possible to experimentally explore the data linkage vs search quality tradeoffs? How does the search quality degradation affect the user experience of trying to find an appropriate model?"
}
] |
wsHMb4J2o9 | The Feature Speed Formula: a flexible approach to scale hyper-parameters of deep neural networks | Deep learning succeeds by doing hierarchical feature learning, yet tuning hyper-parameters (HP) such as initialization scales, learning rates etc., only give indirect control over this behavior. In this paper, we introduce a key notion to predict and control feature learning: the angle $\theta_\ell$ between the feature updates and the backward pass (at layer index $\ell$). We show that the magnitude of feature updates after one GD step, at any training time, can be expressed via a simple and general *feature speed formula* in terms of this angle $\theta_\ell$, the loss decay, and the magnitude of the backward pass. This angle $\theta_\ell$ is controlled by the conditioning of the layer-to-layer Jacobians and at random initialization, it is determined by the spectrum of a certain kernel, which coincides with the Neural Tangent Kernel when $\ell=\text{depth}$. Given $\theta_\ell$, the feature speed formula provides us with rules to adjust HPs (scales and learning rates) so as to satisfy certain dynamical properties, such as feature learning and loss decay. We investigate the implications of our approach for ReLU MLPs and ResNets in the large width-then-depth limit. Relying on prior work, we show that in ReLU MLPs with iid initialization, the angle degenerates with depth as $\cos(\theta_\ell)=\Theta(1/\sqrt{\ell})$. In contrast, ResNets with branch scale $O(1/\sqrt{\text{depth}})$ maintain a non-degenerate angle $\cos(\theta_\ell)=\Theta(1)$. We use these insights to recover key properties of known HP scalings (such as $\mu$P), and also introduce a new HP scaling for large depth ReLU MLPs with favorable theoretical properties. | https://openreview.net/pdf/b9078a3029a8ba5f5d9f97f0236d84340733175b.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "CYk0INQOVj",
"review_text": "The paper introduces the BFA - a novel quantity to predict and control feature learning in DNNs, as well as the feature speed formula which allows expressing the magnitude of feature updates after one GD step. The paper recovers key properties of known HP scalings, and also extends these results by introducing a new HP scaling for large depth ReLU MLPs.\n\n1. The BFA and BFK are interesting objects to study and the geometrical picture that arises (mentioned in the introduction) gives a nice intuition.\n\n2. The main results (Thms 2.1 and 3.2) are clearly stated and the proofs are straightforward. \n\n3. The contributions are clearly stated and the relation to previous work distinguishes these contributions. \n\n4. Earlier results are recovered here with a transparent derivation, but Ref [1] also provided quite an intuitive derivation, as you mentioned. \n\n\n\n\n\n[1] https://arxiv.org/abs/2310.17813\n\nDespite the strengths mentioned above, I did not give a higher score for the following reasons: \n\n1. Novelty for HP scaling: \nAs far as I can see, the main takeaway regarding HP scaling is the extension of known results, such as muP, to the limit of large width-then-depth. While this is indeed new, this is a somewhat limited contribution. \n\n2. Applicability of results: \nWhile some of the results are rather general (like Thm 2.1), some other parts of the results seem to apply only under rather limited conditions, e.g. only a single input. \n\n3. Experimental findings: \nI found issues with some of the experimental findings: I did not find a mention of what is assumed about the data: is it synthetic, random, from some known benchmark etc. Also, by inspecting Fig 2b I was not convinced that the output sensitivity is bounded away from zero. \n\n4. I feel that the paper could be made less technical and more readable by delegating some of the proofs to the Appendix and using the space for some visualizations. \n\n\n\n\ntypos: \n- Fig1 caption 1st line: witdh -> width\n\n1. in line 194 - BC condition - is this a strict equality? or is there some tolerance? \n\n2. In the Introduction you use the term \"hierarchical features\" - can you give a definition for that? \n\n3. in the BFK definition (eq. 5) - is this for multiple inputs? the NTK is defined for $x, x'$.. Is $m_v$ here the product of batch-size with input dimension to the layer?"
},
{
"confidence": 4,
"rating": 8,
"review_id": "L8DSAtaRCj",
"review_text": "The paper presents a novel perspective on infinite width and depth feature learning networks. It introduces the backward-to-feature kernel (BFK) as a central quantity determining the evolution of the intermediate layer features. The paper shows that the movement of the hidden layer features can be exactly related to an angle $\\theta_\\ell$ between the backward pass and the feature velocity, and uses insights on the scaling of the cosine of this angle with width to recover essentially all known infinite width and depth feature learning limits, as well as a novel large depth MLP limit.\n\nThe paper studies an extremely important topic in deep learning theory. Given the technically challenging nature of the study of large width and depth limits, the paper is superbly well-written and accessible. Prior papers by Yang et al and Bordelon et al have done important work in developing the study of large width and depth limits, but their derivations are either very dense or otherwise rely on non-rigorous methods to derive the scalings. This paper manages to both rigorously motivate feature learning at infinite width and depth while simultaneously making the paper short and accessible. This is a major strength and no easy feat. I commend the authors on it. \n\nBeyond this, there are several results of strong technical merit that will be of value for researchers studying infinite width and depth limits. The infinite depth MLP and scale invariant learning rate discussions are particularly interesting. The authors do a good job placing their work in context by presenting tables comparing their parameterization to others. \n\nUltimately, I believe that this paper is not only technically novel and sound, but is also a service to the community. I strongly recommend it for acceptance.\n\nThere are no technical weaknesses that I have found, and I have gone through the derivations in detail. My only comment is expository:\nIn equation 1, the definition of the forward pass $T_{\\ell}(f_{\\ell-1}, w_\\ell)$ as well as its discussion in terms of selection derivatives is quite technical and may confuse readers from outside sub-communities in machine learning. I recommend stating more clearly that this includes a simple forward pass such as $W_{\\ell} \\cdot f_{\\ell-1}$ and perhaps adding a footnote to make this first paragraph a bit more readable.\n\nAs a simple clarification, I want to confirm that the ResNet scalings found precisely reproduce those predicted by Bordelon et al and Yang et al. Are there any additional ResNet scalings that have not been studied in prior work that this paper finds? \n\nIn the second paragraph of the conclusion section \"it can only quantify feature speed for (S)DG (and does not apply to variants, a priori) and at “cut nodes” in the NN architecture, where all the signal goes through (in particular, it does not apply inside the blocks of a\n290 ResNet)\" \n\nI assume you mean \"(S)GD\". Can you please elaborate a bit more on what you mean by cut nodes? Is this like a residual block with many layers? It will be interesting if you can derive a similar feature formula for more general differentiable circuits with branching."
},
{
"confidence": 4,
"rating": 3,
"review_id": "zQ9Hz9oNCQ",
"review_text": "The authors propose a technical strategy for deriving neural net parameterizations that relies on controlling the angle between the activation gradient and the feature update. The authors derive various theoretical results about this quantity, including a formula for computing it, and some analyses in the context of MLPs and ResNets. The authors claim to use this principle to derive new parameterizations, but crucially they never test them in a real learning problem.\n\n- the authors propose an interesting notion and derive interesting analyses surrounding it\n- the parts of the math I checked seem rigorous and sound\n- the authors do a good job of connecting their work to related work\n- the ideas are quite creative\n\nI need to preface this review by saying that this feedback is intended to be constructive and to help you improve the paper. My current impression is that the paper is not ready for publication. I strongly encourage you to keep working in this direction, and I hope this feedback will be useful for that.\n\nWith that said, the main issues I see with the paper are:\n\n### **No real experimental evaluation**\n\nMy understanding is that the main practical outcome of your work and theoretical analysis is a new parameterization for training neural networks. I feel that it is really important for you to test this parameterization to check that it is useful, or at least to see what its properties are in an actual training situation. It's so easy to come by free cloud compute (e.g. Google Colab) that I can't really see a reason for not doing this.\n\nI don't feel that the experiments in Figures 1 and 2 are enough to convince me of the utility of your framework. Also I'm not sure how to reproduce these experiments. For example, what dataset did you use? What is the loss function?\n\nAs a side note, I'm also a bit doubtful that you can even train MLPs effectively beyond depth 20 or so. I read the Jelassi et al paper (https://arxiv.org/abs/2305.07810) and noticed they don't test their parameterization either. I may be wrong here, but I don't think you can hope for some engineer or experimentalist to pick up the paper and implement things for you. I think you have to be proactive here.\n\n### **Doesn't go that far beyond existing ideas**\nA lot of the paper focuses on dealing with analyzing or re-deriving existing parameterizations---e.g. muP or the 1/sqrt(L) depth scaling in ResNets. But this is not so interesting because it has already been done and there are already ways to analyze these things. What does your analysis offer that prior analyses do not? I also want to point out that concurrent works to this paper go beyond 1/sqrt(L) depth scaling rules. For example arxiv.org/abs/2405.15712 and arxiv.org/abs/2405.14813. And these papers actually experimentally test these deviations. Clearly these are concurrent works, but I just mention it to demonstrate that there is more out there.\n\n### **Paper only seems to focus on batch size one**\n\nIn my opinion, doing things only at batch size one is a bit toy, and it would be better to directly analyze larger batch sizes.\n\nPlease see the weaknesses section"
},
{
"confidence": 4,
"rating": 3,
"review_id": "gSuvCHRD0D",
"review_text": "This paper studies the feature learning speed of the layers of Neural Networks (NNs). Specifically, it proposes to measure it through the quantity *Backward-Feature Angle* (BFA), denoted by $\\theta_l$ for a layer $l$. This quantity is directly related to the layer-wise decomposition of the Neural Tangent Kernel (NTK). In practice, the BFA is measured experimentally and several properties (feature learning, signal propagation, etc.) can be related to the BFA.\n\n# Originality\n\nThis paper tackles an important problem: the relation between the optimal hyperparameters of a neural network and its architecture.\n\n# Clarity\n\nThis paper is easy to read and the statements are clear.\n\n# Originality\n\nThe BFA is closely related to the layer-wise decomposition of the NTK, which is already widely used in the NN optimization literature [1, 2, 3, 4]. Overall, the BFA does not contain any information that is not already available with previous objects.\n\n# Significance\n\nThe benefits and the properties of the BFA are still unclear.\n\nFor instance, Section 5 proposes a new scaling of the hyperparameters, that is not clearly related to the BFA. Besides, the experimental validation of this new scaling is not provided.\n\n# Quality\n\nThe contribution of this paper is unclear. The usefulness of the BFA, either theoretical or experimental, is still unclear, and the proposed hyperparameter scaling is not tested experimentally.\n\n# EDIT: References\n\n[1] Gradient descent provably optimizes over-parameterized neural networks (2018), Du et al.\n\n[2] Gradient descent finds global minima of deep neural networks (2019), Du et al.\n\n[3] A convergence theory for deep learning via over-parameterization (2019), Allen-Zhu et al.\n\n[4] Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks (2020), Zou et al.\n\nHow does the \"ours\" hyperparameter scaling compare to the others (usual, muP or NTK)?"
}
] |
wsGzvhnoaX | Quantum Algorithms for Non-smooth Non-convex Optimization | This paper considers the problem of finding the $(\delta,\epsilon)$-Goldstein stationary point of a Lipschitz continuous objective, a rich function class that covers a great number of important applications.
We construct a novel zeroth-order quantum estimator for the gradient of the smoothed surrogate.
Based on this estimator, we propose a novel quantum algorithm that achieves a query complexity of $\tilde{\mathcal{O}}(d^{3/2}\delta^{-1}\epsilon^{-3})$ on the stochastic function value oracle, where $d$ is the dimension of the problem.
We also enhance the query complexity to $\tilde{\mathcal{O}}(d^{3/2}\delta^{-1}\epsilon^{-7/3})$ by introducing a variance reduction variant.
Our findings demonstrate the clear advantages of utilizing quantum techniques for non-convex non-smooth optimization, as they outperform the optimal classical methods on the dependency of $\epsilon$ by a factor of $\epsilon^{-2/3}$. | https://openreview.net/pdf/364623854979ad37fb6af986b474c827d812af53.pdf | [
{
"confidence": 1,
"rating": 6,
"review_id": "p1CBiZGVb4",
"review_text": "This paper considers using quantum methods for stochastic optimization, using zeroth order queries. It looks like the main idea is that using quantum methods, one can summarize over finite difference calculations quickly and efficiently, to arrive at approximate subgradients efficiently; this would usually be very inefficient for classical methods. Overall, they are able to show speedup, from $O(\\epsilon^{-4})$ to $O(\\epsilon^{-3})$.\n\nThe problem is well contained and the premise is believable. \n\nThe classical optimization bits looks reasonable, and the results make sense. \n\nI skimmed through the appendix, and the classical optimization parts are reasonable.\n\nSection 3 is a bit hard to follow. The specific speedup offered by the quantum method is not entirely clear, though it is likely coming from Theorem B1. Perhaps a deeper discussion of this, and why this quantum speedup exists (e.g. is it a consequence of Deusch Josza? Can you provide a more complete argument for where the speedup appears? )\n\n(minor) Why do you say that finding a Clarke subdifferential is harder than finding a smooth differential? Generally speaking the complexities are comparable."
},
{
"confidence": 4,
"rating": 6,
"review_id": "8H1f3ExorZ",
"review_text": "This paper investigates quantum algorithms for finding the $(\\delta,\\epsilon)$-Goldstein stationary point of a potentially nonconvex and nonsmooth objective function $f$. Utilizing quantum variance reduction techniques as outlined in [42], the authors have developed a zeroth-order quantum estimator for the gradient of the smoothed surrogate of $f$. The stationary point of this smoothed surrogate is also the Goldstein stationary point of $f$ when using an appropriate smoothing parameter $\\delta$. Leveraging this zeroth-order quantum estimator, the authors propose two algorithms, QGFM and QGFM+, to find the Goldstein stationary point, achieving a quantum speedup on the order of $\\epsilon^{-2/3}$. Additionally, the QGFM+ framework adjusts the variance level during each variance reduction step, providing further acceleration to the Q-SPIDER algorithm described in [42] for smooth nonconvex optimization.\n\nThis paper initiates the study of quantum algorithms for finding Goldstein stationary points, a significant problem in continuous optimization. Additionally, the authors present an explicit construction of the quantum sampling oracle using the quantum zeroth-order oracle, including a detailed discussion on the number of qubits required.\n\nDespite the detailed implementation and calculations, the overall technical approach remains relatively straightforward. The zeroth-order quantum estimator combines the classical stochastic gradient estimator for the smoothed surrogate with the quantum variance reduction algorithm in [42]. The quantum algorithms for finding the Goldstein stationary point are obtained by replacing the classical estimators with quantum estimators. Moreover, the narrative is somewhat incomplete due to the absence of lower bound results.\n\nIs it possible to improve the $\\delta$ dependence using quantum algorithms?\n\nMinor issues:\n\n1. Consistency of big-O notation. For example, $O$ is used in line 139 and $\\mathcal{O}$ in line 183. Similarly, there are consistency issues with the quantum oracle notation, where $\\mathcal{O}$ is used in line 168 and $\\mathbf{O}$ in line 184.\n\n2. Typo on the RHS of the inequality in line 125.\n\n3. The use of dashes '-' is a bit odd. For example, the dashes in line 139, line 210, and line 251 can be removed.\n\n4. The name initials in the citation format are not precise. For example, in entry [1], it should be 'G. Arfken' instead of 'G Arfken'.\n\n5. Line 310: \"adjust\" -> \"adjusts\". Line 311: \"fixed\" -> \"fixes\"."
},
{
"confidence": 3,
"rating": 7,
"review_id": "pLsAcppKbZ",
"review_text": "This paper studies quantum algorithm for non-smooth non-convex stochastic optimization with zeroth-order oracle. It introduces an effective quantum estimator that reduces the variance compared to classical zeroth-order estimators. Upon substituting this estimator into known zeroth-order non-smooth optimizers, namely GFM and GDM+, the resulting quantum optimizer achieves improved rate $\\tilde O(d^{3/2}\\delta^{-1}\\epsilon^{-3})$ and $\\tilde O(d^{3/2}\\delta^{-1}\\epsilon^{-7/3})$ respectively for finding a $(\\delta,\\epsilon)$-Goldstein stationary point. Notably, quantum speedup improves upon the classical lower bound $\\delta^{-1}\\epsilon^{-3}$ by a factor of $\\epsilon^{2/3}$. Moreover, a modified algorithm achieves $O(\\sqrt{d}\\epsilon^{-7/3})$ for smooth optimization, improving upon the best known rate.\n\nThis paper proposes a new zeroth-order quantum estimator. This leads to new quantum algorithms that solves zeroth-order non-smooth non-convex optimization problem, which is not well studied in the literature. Moreover, the proposed algorithms show quantum speedup compared to their classical (non-quantum) counterparts. Notably, it improves over the classical lower bound of $\\Omega(\\delta^{-1}\\epsilon^{-3})$ by a factor of $\\epsilon^{2/3}$. Overall, these results represent a significant contribution to the understanding of optimization with quantum oracles. Given my expertise lies primarily in optimization and not in quantum computation, I am only able to assess the optimization-related aspects of this work.\n\nAlthough the dependence on $\\delta,\\epsilon$ is improved, the dimension dependence is suboptimal. In particular, since GFM and GFM+ are known to have suboptimal dimension dependence $d^{3/2}$, so do QGFM and QGFM+. On the other hand, as observed by Kornowsky and Shamir [1], optimizing the random smoothing $f_\\delta$ with a non-smooth optimizer, such as online-to-non-convex (o2nc) [2], eliminates this $\\sqrt{d}$ factor and achieves $O(d)$ in dimension. Hence, my intuition suggests that upon substituting the quantum estimator into o2nc and following a similar approach to Kornowsky and Shamir, the authors might be able to recover $O(d)$ (or even better) dimension dependence. \n\n[1] Kornowski, G. and Shamir, O., “An Algorithm with Optimal Dimension-Dependence for Zero-Order Nonsmooth Nonconvex Stochastic Optimization”, 2023. doi:10.48550/arXiv.2307.04504.\n\n[2] Cutkosky, A., Mehta, H., and Orabona, F., “Optimal Stochastic Non-smooth Non-convex Optimization through Online-to-Non-convex Conversion”, 2023. doi:10.48550/arXiv.2302.03775.\n\n- As someone unfamiliar with quantum computation, I have a general question: Is the proposed quantum oracle practically feasible to implement, or is it purely theoretical?\n\n- line 87: does state $|i\\rangle$ denote the $i$-th orthonormal basis of $\\mathcal{H}^m$?\n\n- line 100: what does it mean by $|\\mathbf{x}\\rangle |q\\rangle$? Is it a shorthand for tensor product?\n\n- Thm 3.4 part 1: should it be $Var(\\hat g) \\le \\hat\\sigma_1^2$ instead of $\\hat \\sigma_1$? part 2: number of queries should be $\\frac{d^{3/2}L\\\\|y-x\\\\|}{\\delta \\hat\\sigma_2}$ (i.e., currently it's missing $1/\\delta$)? 
Since this theorem is the main result of the quantum oracle, I encourage the authors to carefully check its correctness.\n\n also in the proof (line 471): $\\sigma_1^2$ => $\\hat\\sigma_1^2$?\n\nMinor comments:\n\n- line 98: $C_{f(x)} = f(x)$ => $C_f(x) = f(x)$?\n- Proposition 2.1: the properties of smooth surrogate $f_\\delta$ are known in [1] and [2], and Lin et. al. and Chen et. al. are restating these results in their papers. Hence, these should be more appropriate references.\n\n[1] Yousefian, F., Nedić, A., and Shanbhag, U. V., “On Stochastic Gradient and Subgradient Methods with Adaptive Steplength Sequences”, 2011. doi:10.48550/arXiv.1105.4549.\n\n[2] Duchi, J. C., Bartlett, P. L., and Wainwright, M. J., “Randomized Smoothing for Stochastic Optimization”, 2011. doi:10.48550/arXiv.1103.4296."
},
{
"confidence": 4,
"rating": 6,
"review_id": "ZjYizIVqwL",
"review_text": "This paper introduces new quantum algorithms for non-smooth non-convex optimization problems. The authors propose a quantum gradient estimator for smoothed objectives and develop the Quantum Gradient-Free Method (QGFM) and its enhanced version, QGFM+, which achieve better query complexities than their classical counterparts. These complexities demonstrate a marked quantum speedup over classical counterparts, indicating the potential of quantum computing in optimizing complex functions more efficiently. The paper also discusses the construction of quantum oracles and the application of variance reduction techniques, paving the way for future research in quantum optimization.\n\n- The paper proposed new zeroth order quantum optimization algorithms achieving better computational complexities compared to classical methods for non-smooth and non-convex optimization.\n- Technically, they construct efficient quantum gradient estimators and quantum superpositions over required distributions as a key subroutine.\n- They also proposed a quantum algorithm for non-convex smooth problems with an adaptive variance level, accelerating prior quantum algorithms to get more speedups.\n\n- The assumptions of having a quantum stochastic function value oracle may be strong. Could the authors explain more about why it is reasonable and important to have such a function oracle?\n- The technical core for quantum speedups seems to be the quantum mean value estimation procedure, which is already used in many other optimization problems and scenarios. Could the authors explain more about the technical novelty of their work?\n\nBesides the questions raised in the weakness part, I have some minor issues with the submission as follows:\n\n- In line 89, the definition of the tensor product may be a little confusing.\n- In the explicit construction of quantum sampling oracles, it seems that the time complexity of the quantum algorithm may be much larger than the query complexity, due to the sampling on the unit sphere. However, for such optimization algorithms, time complexity may be more crucial in real-world applications. Could the authors state the actual time complexity of their algorithm in terms of gate counts?"
}
] |
wqs2RMq4CW | Corruption-Robust Linear Bandits: Minimax Optimality and Gap-Dependent Misspecification | In linear bandits, how can a learner effectively learn when facing corrupted rewards? While significant work has explored this question, a holistic understanding across different adversarial models and corruption measures is lacking, as is a full characterization of the minimax regret bounds. In this work, we compare two types of corruptions commonly considered: strong corruption, where the corruption level depends on the learner’s chosen action, and weak corruption, where the corruption level does not depend on the learner’s chosen action. We provide a unified framework to analyze these corruptions. For stochastic linear bandits, we fully characterize the gap between the minimax regret under strong and weak corruptions. We also initiate the study of corrupted adversarial linear bandits, obtaining upper and lower bounds with matching dependencies on the corruption level. Next, we reveal a connection between corruption-robust learning and learning with gap-dependent misspecification—a setting first studied by Liu et al. (2023a), where the misspecification level of an action or policy is proportional to its suboptimality. We present a general reduction that enables any corruption-robust algorithm to handle gap-dependent misspecification. This allows us to recover the results of Liu et al. (2023a) in a black-box manner and significantly generalize them to settings like linear MDPs, yielding the first results for gap-dependent misspecification in reinforcement learning. However, this general reduction does not attain the optimal rate for gap-dependent misspecification. Motivated by this, we develop a specialized algorithm that achieves optimal bounds for gap-dependent misspecification in linear bandits, thus answering an open question posed by Liu et al. (2023a). | https://openreview.net/pdf/d72fd0fc0d62a924ff97b58e851197a74e7f045b.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "eE05dTrizm",
"review_text": "This paper studies corruption-robust linear bandit optimization and characterizes the regret bound in terms of both weak and strong corruption measures. Under the stochastic setting, this paper proposes a phased elimination algorithm, and the regret bounds match the lower bound. Under the adversarial setting, the paper proposes two individual algorithms for the two corruption measures respectively. In addition, this paper studies gap-dependent misspecification setting through reduction, and discusses a use case for linear MDPs.\n\n- The regret bounds in terms of both corruption measures are provided, where the regret bound depending on $C_\\infty$ is first introduced in this paper.\n- The theoretical results are supported with detailed proof.\n- This paper is generally well-written.\n\n- The algorithms are efficient regarding regret bound, but the computational complexity is not discussed.\n- A conclusion section could be added.\n\nWhat is the computational cost of the proposed algorithms?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "6RsE2wCqwx",
"review_text": "In this work, the authors characterize the problem of learning the presence of reward corruption in the linear bandit setting. They provide matching upper and lower bounds in the corrupted stochastic setting, and initiate the study on the corrupted adversarial setting, for which they obtain optimal scaling in the corruption level.\n\nNot only that, the authors prove a general reduction that efficiently handles gap-dependent misspecification with corruption-robust algorithms. They were able to show that linear MDPs with gap-dependent misspecification are efficiently learnable. While this reduction is general, interestingly they denied the possibility to obtain the tightest rate for gap-dependent misspecification. This observation leads them to develop a specialized algorithm which, in the linear bandit setting, obtains the optimal rate. According to their argument, this resolves the open problem of Liu et al. (2023a).\n\n- Interesting results\n - Deterministic algorithm cannot avoid suboptimal regret (Proposition 1)\n - Matching upper and lower bound on the stochastic setting, by just changing deterministic sampling to stochastic.\n - Solving an open problem of instance-dependent misspecified setting.\n\n- Clearly state the limitations of previous works and their technical novelties.\n - Easy to understand their contributions.\n\n- (Minor) The algorithms are not seriously different from the previous works as they mentioned, but this is just a minor point - every theoretical improvement is important. \n\n- Not clear what they tried to say on page 9\n - Why Theorem 6.2 shows that $\\rho \\leq \\frac{1}{d}$ is not optimal?\n - Impossibility result (from line 304): so basically what authors are trying to say is, that 'their' reduction is not applicable for a tighter result, right? It is not about any reduction from corruption to misspecification. \n\n- No future works.\n\n- Unclear points in the weakness section.\n\n- It would be great if authors could explain why the gap-dependent misspecification assumption (Assumption 1) is necessary.\n\n### Minor\n\n- Theorem G.1 in line 296 - is it correct?\n\n- Corollary 6.2.1 in line 209 - it seems like it is the result for the MDP..."
},
{
"confidence": 3,
"rating": 6,
"review_id": "EOjQ7aQNxC",
"review_text": "This paper studied the corrupted linear bandits. The authors propose four different metrics to evaluate the total corruption in Eq. (1). Many settings are considered in this paper. For stochastic LB, the proposed algorithm achieves a regret bound of $d\\sqrt{T}+\\sqrt{d} C_{\\infty}$. For adversarial LB, the proposed algorithm achieves a regret bound in the order of $d\\sqrt{T}+\\sqrt{d} C_{\\infty}$ or $d^{3}\\sqrt{T}+d^{5/2} C$. The authors also consider the gap-dependent misspecification, where the misspecification level of an arm $a$ can be evaluated by $\\rho$ times the gap of arm $a$.\n\nSee summary.\n\n**Weaknesses and Questions:**\n1. At lines 107-109, the authors claim that the strong adversary is equivalent to the CM viewpoint. This doesn't seem right. For regret, the strong adversary is harder than the CM viewpoint. Thus, it is unfair and wrong to compare He et al. (2022) in the same way.\n2. At line 131, adversarial linear bandits are discussed. However, no problem definition of this problem is introduced before line 131.\n3. This paper studies the fixed action set, while the previous works He et al. (2022) and Foster et al. (2020) allow the action set to be chosen by an adaptive adversary, which is much harder than this paper. Table 1 is not fair. He et al. (2022) is for the adaptive adversarial viewpoint, which is totally different from the stochastic LB. For a fixed action set, the optimal regret without $C$ should be $\\sqrt{d T \\log k}$, where $k$ is the number of arms.\n4. Assumption 1 is not very reasonable.\n\nSee weaknesses."
}
] |
wqLC4G1GN3 | Solving Inverse Problems via Diffusion Optimal Control | Existing approaches to diffusion-based inverse problem solvers frame the signal recovery task as a probabilistic sampling episode, where the solution is drawn from the desired posterior distribution. This framework suffers from several critical drawbacks, including the intractability of the conditional likelihood function, strict dependence on the score network approximation, and poor $\mathbf{x}_0$ prediction quality. We demonstrate that these limitations can be sidestepped by reframing the generative process as a discrete optimal control episode. We derive a diffusion-based optimal controller inspired by the iterative Linear Quadratic Regulator (iLQR) algorithm. This framework is fully general and able to handle any differentiable forward measurement operator, including super-resolution, inpainting, Gaussian deblurring, nonlinear deblurring, and even highly nonlinear neural classifiers. Furthermore, we show that the idealized posterior sampling equation can be recovered as a special case of our algorithm. We then evaluate our method against a selection of neural inverse problem solvers, and establish a new baseline in image reconstruction with inverse problems. | https://openreview.net/pdf/05d77a296e09e3facb8599ac339e22b4399d0782.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "uzBjyGYbx6",
"review_text": "The paper addresses the limitations of existing diffusion-based inverse problem solvers, which typically frame signal recovery as a probabilistic sampling task. The authors propose a novel approach that redefines the generative process as a discrete optimal control task. Inspired by the iterative Linear Quadratic Regulator algorithm, this new framework named diffusion optimal control, can handle various differentiable forward measurement operators, including super-resolution, inpainting, and deblurring.\n\n1. The paper introduces a novel framework based on optimal control theory to solve diffusion-based inverse problems, moving away from the traditional probabilistic sampling approaches. This is a significant theoretical advancement.\n2. The framework addresses critical drawbacks of current methods, such as the intractability of the conditional likelihood function and dependence on score network approximations. This leads to more robust and potentially more accurate solutions.\n\n1. The method involves complex mathematical formulations and optimal control theory, which may pose challenges for implementation and understanding by practitioners who are not familiar with these concepts. The need to compute Jacobian and Hessian matrices, as well as the regularized inverses, may lead to significant computational demands, particularly in high-dimensional settings.\n\n2. Lacking of enough experiments, such as MRI reconstruction or other medical images. Including more diverse datasets and additional baseline methods would provide a more comprehensive evaluation.\n\n1. What is the purpose of injecting control vectors u_t into the reverse diffusion process, and how do they influence the terminal state of the system?\n\n2. How are the gradients V_x and Hessians V_xx of the value function used within the optimal control framework, and what is their significance?\n\n3. Please indicate what is output of Algorithm 1.\n\n4. Can you give some high level description about: How does using an adaptive optimizer for action updates improve the iLQR algorithm, and what impact does it have on the performance and efficiency of solving inverse problem tasks?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "h6DcTUtfHv",
"review_text": "This paper proposes diffusion optimal control that solves inverse problems via posterior sampling by combining the power of a pre-trained unconditional diffusion model and the iterative Linear Quadratic Regulator algorithm to produce optimal controls that steer the reverse diffusion process to correctly recover the original signal. \nThe framework is general and able to handle any differentiable forward measurement operator and establishes a new baseline in image reconstruction.\n\n* The idea of augmenting the reverse diffusion sampling process with a perturbation control is quite novel and general for arbitrary cost functions, although the paper focuses specifically on the cost for posterior sampling.\n* The writing is generally good (see more comments regarding writing in questions). It is concise and to the point with good intuitions provided.\n* Efforts (e.g. low-rank approximations) have been made to bring down the computational cost in iLQR for the high-dimensional image setting.\n* The empirical performance of the proposed method is strong and establishes new state-of-the-art results.\n\n* The runtime of the proposed method seems high and is not much discussed. iLQR is a global-in-time iterative method that could potentially require a large number of iterations to converge (and all nice things discussed in Section 4. rely on this convergence). On top of that, for each iteration, there needs to be $\\Omega(T)$ matrix solves which can be quite slow given the dimension of the images (even with techniques like low-rank approximation). It would be interesting to see ablation studies on the effect of num_iter in Algorithm 1. It would be also more convincing to report the runtime of each method in Table 1.\n* There is no analysis of the approximation error of iLQR (the first and second-order Taylor approximations) in the studied setting. Specifically, it seems to me that a lot of heavy lifting is done when the control $u_t$ is designed to be only a perturbation of the reverse diffusion step. For instance, does this imply that the value function is smoother (hence the Taylor approximation is more accurate) when parameterized by $u_t$?\n\n* What is the rough runtime of each method in Table 1?\n* Notations such as $p_t(x|y)$ are confusing to me. The subscript $t$ on $p$ suggests a family of distributions but I think that's not the case. My understanding of the randomness is the following. First, the random variable $x_0$ is drawn from the clean image distribution. Then $y = A(x_0) + \\eta$. In parallel, we also have random variables $x_t$ obtained deterministically from $x_0$ by evolving along the ODE of the forward diffusion. A better notation in my opinion is to put the subscript $t$ on $x$, like $p(x_t|y)$.\n* In (29), what is the meaning of $\\nabla_{x_t} \\log p(y|x_0)$?\n* Line 149, what do you mean by \"produces a feasible solution\"? What's the meaning of being feasible?\n* The texts around line 160 are hard to parse. What is the notation $p(x_0|x_t,x_0)$? What is the $x_0$-centered marginal?\n* Line 176, why is $\\log p(y|x_t) = \\log p(y|x_0)$ an assumption, not a consequence?\n* In the paragraph of Line 204, I'm confused about why the diffusion model can be taken to have randomly initialized weights. This does not seem to result in any meaningful application, since there is no information about the image distribution. 
For instance, in Figure 6, the produced result looks even worse than the input $y$.\n\nMinor comments\n* Line 82, what is $\\theta$?\n* In Section 2.3, it would be good to include the dimension of each variable. In (13), it would be good to clarify that $k, K$ all depend on $t$.\n* Algorithm 1 appears unreferenced in the main text. In the input, what is $x_T$? Is it just drawn from a Gaussian? What is the output?\n* Line 117, $\\ell_t(x_t,u_t) = 0$, not just not depend on $x_t$. It would also be good to say here what exactly is $\\ell_0$ for input/output perturbation controls instead of defining it later. \n* Theorem 4.1 is missing transitions for presenting the conclusion (29).\n* The values of $\\alpha$ in the two theorems appear to be missing a factor of $2$.\n* In (32), it should be $\\widetilde{x}_{t-1}$ on the left side of the equation.\n* Line 253, there is a parenthesis not closed."
},
{
"confidence": 4,
"rating": 7,
"review_id": "cDxOP39xPC",
"review_text": "This paper proposes a new approach to conditional generation tasks through score-based diffusion models, with a focus on inverse problems. As an alternative to using the likelihood $p(y | x_t)$ to guide the time-reversed SDE towards the posterior distribution, the authors reformulate this as an optimal control problem. Starting from the ODE-based time-reversed flow for the unconditioned prior, the authors derive a controller based on the iLQR algorithm to guide the particles towards high posterior probability regions. The authors provide theory to demonstrate that the optimal guidance coincides precisely with the desired conditional scores. They demonstrate the method on a number of benchmarks including image inpainting and other inverse problems.\n\nThe paper is well written and very clear. The method appears novel and addresses a legitimate challenge in conditional diffusion models. As the authors acknowledge: optimal control formulation of diffusions exist, but not (to my knowledge) in the context of guiding conditional diffusion models. The theoretical results provide a sound justification of the validity of the approach. The numerical results demonstrate that it is competitive in terms of accuracy compared to baseline, established methodology.\n\nThe main weakness is the sheer computational cost of the algorithm, the need to compute very expensive hessians drastically limits the practical use of this method. The authors suggest a number of low rank approximations to mitigate this, but it is unclear how much is lost by introducing them. One point of question is the interplay between the number of diffusion steps $m$ and $T$. As $m\\rightarrow \\infty$, for $T$ fixed and large we expect that the baseline conditional diffusion model will improve significantly in accuracy. Generally, I feel that the configuration of the baseline has not been explored (or if it has, it has not been reported carefully). Similarly, the authors claim that they have done equivalent budget analysis in the experiments -- I could not find the details of this: is it the case that the computational cost is the equivalent? Have the author really explored the hyper-parameter space for these methods.\n\nCan the authors provide some insight on how to choose the key parameters $(m, n, k)$? -- does the optimal control method allow substantially smaller $m$? When does one approach become more computationally effective than the other? I can imagine, when $m$ is sufficiently small, that PSLD, DPS will start to outperform this method with comparable computational cost. \n\nMinor comments: the metrics LPIP, PSNR, SSIM are reported, but at no point are these explained, or are references provided. These are well known in some communities, but not for the wider readership. Small typos around references were found."
},
{
"confidence": 3,
"rating": 5,
"review_id": "t31CKNl1by",
"review_text": "The paper uses the optimal control theory to solve the diffusion posterior sampling problem by iterative Linear Quadratic Regulator (iLQR) algorithm. The method could be utilized to solve both linear and nonlinear inverse problems. Experiments on MNIST and FFHQ demonstrate the outperformance of the proposed method.\n\n1. This paper is well-written, with a good summary of previous methods and their shortcomings. \n\n2. The proposed method that interprets the reverse diffusion process as an uncontrolled non-linear dynamical system is novel. Theoretical support is provided to verify the algorithm.\n\n1. The method is well-backed but might be computationally exhausting.\n\n2. The experiments are limited. Quantitative results on different datasets and nonlinear inverse problems are lacking.\n\n1. As shown in Algorithm 1, the method's time complexity is $O(T)$ in each iteration. Although the $T$ is relatively small $(=50)$ in the experiments, num_iters $\\times T$ would be large, e.g. $50\\times 50 = 2500$ as shown in Table 2 in the appendix. Also, the initialization of $\\{x_T'\\}$ requires $T$ NFEs (number of function evaluations). Is this correct? How about the computational efficiency of the proposed method? I would like to see a more detailed comparison of the method with other baselines like DPS in terms of time.\n\n2. More baselines need to be compared such as [1], [2], [3] and [4]. The settings in these works might be a bit different. Some settings might be different. Can you clarify the proposed method's advantage over these baselines?\n\n3. Previous methods like DPS have done extensive experiments on both linear and nonlinear inverse problems across both FFHQ and ImageNet datasets. However. the experiments in the paper seem to be somewhat limited. I have two questions about the experiments. 1) Since there are only quantitative results for linear inverse problems (note that the results in Table 1 are all linear), can you clarify the proposed method's advantages in nonlinear problems such as phase retrieval, nonlinear deblurring, and so on? 2) Can you show more results on ImageNet, which is a broader dataset that contains more than one domain, such as faces in FFHQ?\n\n[1] Zehao Dou, and Yang Song. Diffusion Posterior Sampling for Linear Inverse Problem Solving: A Filtering Perspective, ICLR 2024\n\n[2] Morteza Mardani, Jiaming Song, Jan Kautz, Arash Vahdat. A Variational Perspective on Solving Inverse Problems with Diffusion Models. ICLR 2024\n\n[3] Jiaming Song, Arash Vahdat, Morteza Mardani, Jan Kautz. Pseudoinverse-Guided Diffusion Models for Inverse Problems. ICLR 2023\n\n[4] Zhu, Y., Zhang, K., Liang, J., Cao, J., Wen, B., Timofte, R., and Van Gool, L. Denoising diffusion models for plug-and-play image restoration. CVPR 2023"
},
{
"confidence": 3,
"rating": 6,
"review_id": "gqiErJAYCq",
"review_text": "This paper tackles inverse problem via the perspective of optimal control. By treating the diffusion process (ODE) as a non-linear dynamic system and the extra guidance term as control signal, the authors manage to optimize the diffusion trajectory via the iterative Linear Quadratic Regulator (iLQR) algorithm. Several techniques are used to make the iLQR algorithm more efficient. This paper show good results on FFHQ dataset.\n\nThe idea is interesting and reasonable. Using optimal control to solve the inverse problem enables us to optimize the whole sampling trajectory and avoid the error for estimating $x_0$ via Tweedie's formula. And the results on FFHQ dataset looks good.\n\n1. High computation cost: Despite the advantages mentioned above, one obvious drawback of this method is the potential high computation cost. This includes: \n\n a. Computing and storing the Jacobian matrices, which can be of very high dimension, can be very costly. Although the authors \n further propose some techniques to reducing the cost, these methods might also bring extra approximation error as well as more hyper-parameters to tune; \n\n b. Optimizing the the whole trajectory requires evaluating the whole trajectory for many times and do iterative updates. This requires more computation. Thus, though in Table 1, the authors denoted their methods as $T=50$ and $T=20$, considering the iterative update nature over the whole trajectory, this might not be directly comparable (and might actually need more computation) to other methods, which are denoted as $T=1000$. And the authors might have to greatly reduce the timesteps to make the whole algorithm affordable, this might also bring extra approximation error.\n\n\n2. Lack of more complex dataset: Though the authors achieve good performance on FFHQ dataset, considering the human face data is relatively easy (aligned, not very multimodal), it is still not very clear to me how the proposed method can work on more complex dataset, for example, on ImageNet. From my own experience, the ImageNet data can be much harder than the FFHQ human face data in the inverse problem. And considering the approximation error introduced in iLQR algorithm, computing the Jacobian matrices as well as using less timesteps, it might raise concerning regarding whether the proposed algorithm can work well on more complex dataset.\n\n3. Minor suggestion: I think it might be better for the authors to add more introduction for the optimal control part in the main paper. Or at least give more clear introduction for the notation used in 2.3. Currently, I find it not very clear to people without much background in optimal control.\n\n1. Following my first point in weakness, can the author provide a comparison in sampling time (e.g. second, or NFE) of different methods. Only comparing diffusion timesteps are not very fair considering the proposed method needs to iteratively update over the whole trajectory for many times.\n\n2. Under different initializations, can the proposed algorithm always be able to find a good solution? And will the optimized results look same or different?"
},
{
"confidence": 3,
"rating": 4,
"review_id": "Rrw304IrPq",
"review_text": "The paper uses tools from optimal control to introduce a novel approach for solving inverse problems with diffusion models. The authors propose reframing the generative process of diffusion models as a discrete optimal control problem allowing to leverage the iterative Linear Quadratic Regulator (iLQR) algorithm. Tackling limitations of existing probabilistic sampling methods, the resulting method demonstrates promising performance for inverse problems on FFHQ, such as super-resolution, inpainting, and deblurring.\n\nWhile many connections between optimal control and diffusion models have been established, the proposed algorithm leverages variants of iLQR to provide a fresh perspective on training-free posterior sampling with diffusion models. The paper provides additional theoretical guarantees as well as multiple modifications (randomized low-rank approximations, matrix-free evaluations, and adaptive optimizers) to reduce computational costs. Finally, several ablations are presented for the proposed method.\n\n1. The claims of the paper are not sufficiently supported by experiments and/or theory (the first statements in the following are just examples---similar statements can be found throughout the paper):\n\t* \"dependence on the approximation quality of the underlying terms in the diffusion process\": only a result for a single image is provided (Fig. 6). The current paper also does not seem to provide theoretical results for such robustness as claimed in \"reconstruction performance is theoretically and empirically robust to the accuracy of the approximated prior score\".\n\t* \"its sensitivity to the temporal discretization scheme\": for the baselines, again only a result for a single image is provided (Fig. 3). Moreover, the number of steps is typically reduced to accelerate the algorithm. Accordingly, methods should be compared in terms of performance vs. runtime/flops and not the number of diffusion steps. It seems that the proposed method is significantly more expensive than competing methods (in particular, since `num_iters>=50` full simulations are used).\n\t* \"its inherent inaccuracy due to the intractability of the conditional score function\": The conditional score function remains intractable, one just obtains an approximation via iLQR, since the obtained $x_0$'s obtained from the iLQR iterations only *approximately* converge to the posterior distribution *in the limit*. Statements like \"Moreover, our model always estimates $x_0$ exactly, rather than forming an approximation $\\hat{x}_0 \\approx x_0$\" sound misleading. Using iLQRs, we simulate \"nominal\" trajectories and thus iteratively obtain an approximate candidate for $x_0$ which will be used for the refinement of the control. In a similar (however, useless) fashion one could also use, e.g., DPS to obtain an estimate of $x_0$ and then run a probability flow ODE simulation where the scores are conditioned on $x_0$ (instead of $x_t$) to have a \"method [that] produces controls that coincide precisely with the desired conditional scores\". However, the advantage of DPS lies in the fact that only a single simulation is needed.\n\t* \"on several inverse problem tasks across several datasets\": Apart from a single figure on MNIST (without metrics and for only a single baseline and task), results are only provided for FFHQ.\n\n2. 
Moreover, several of the mentioned limitations have been already tackled by alternative approaches to posterior sampling with diffusion models, e.g., variational approaches (https://arxiv.org/abs/2305.04391) or resampling strategies (https://arxiv.org/abs/2307.08123).\n3. Finally, the appendix could provide further details on\n\t* hyperparameter choices and optimization for the baselines.\n\t* precise assumptions for the theorems.\n\nSee \"weaknesses\" above."
}
] |
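Editorial note: the reviews in the row above repeatedly refer to the iterative Linear Quadratic Regulator (iLQR) without spelling out the underlying control recursion. As background only — this is not the reviewed paper's algorithm — the sketch below shows the finite-horizon LQR backward (Riccati) pass that iLQR re-solves at every iteration around a linearized, quadraticized nominal trajectory; the dynamics and cost matrices here are illustrative assumptions.

```python
import numpy as np

def lqr_backward(A, B, Q, R, Qf, T):
    """Finite-horizon discrete LQR: return feedback gains K_t for t = 0..T-1."""
    P = Qf
    gains = [None] * T
    for t in reversed(range(T)):
        # K_t = (R + B^T P B)^{-1} B^T P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati recursion for the quadratic cost-to-go matrix
        P = Q + A.T @ P @ (A - B @ K)
        gains[t] = K
    return gains

def rollout(A, B, gains, x0):
    """Apply u_t = -K_t x_t from the initial state and return the state trajectory."""
    xs, x = [x0], x0
    for K in gains:
        u = -K @ x
        x = A @ x + B @ u
        xs.append(x)
    return np.array(xs)

if __name__ == "__main__":
    A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double-integrator-like toy dynamics (assumed)
    B = np.array([[0.0], [0.1]])
    Q = np.eye(2)                             # state cost
    R = 0.1 * np.eye(1)                       # control cost
    gains = lqr_backward(A, B, Q, R, Qf=10 * np.eye(2), T=50)
    traj = rollout(A, B, gains, x0=np.array([1.0, 0.0]))
    print("final state:", traj[-1])           # should be driven close to the origin
```

In iLQR this same recursion is applied to time-varying linearizations of nonlinear dynamics (here, a discretized reverse diffusion ODE), and the resulting controls are used to roll out a new nominal trajectory — which is why the reviewers stress that the cost scales with num_iters times the horizon.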
wpGJ2AX6SZ | Human Expertise in Algorithmic Prediction | We introduce a novel framework for incorporating human expertise into algorithmic predictions. Our approach leverages human judgment to distinguish inputs which are *algorithmically indistinguishable*, or "look the same" to predictive algorithms. We argue that this framing clarifies the problem of human-AI collaboration in prediction tasks, as experts often form judgments by drawing on information which is not encoded in an algorithm's training data. Algorithmic indistinguishability yields a natural test for assessing whether experts incorporate this kind of "side information", and further provides a simple but principled method for selectively incorporating human feedback into algorithmic predictions. We show that this method provably improves the performance of any feasible algorithmic predictor and precisely quantify this improvement. We find empirically that although algorithms often outperform their human counterparts *on average*, human judgment can improve algorithmic predictions on *specific* instances (which can be identified ex-ante). In an X-ray classification task, we find that this subset constitutes nearly 30% of the patient population. Our approach provides a natural way of uncovering this heterogeneity and thus enabling effective human-AI collaboration. | https://openreview.net/pdf/4f5dc6075a84c5c600343c682e95020208b5f943.pdf | [
{
"confidence": 4,
"rating": 8,
"review_id": "m5a4VS2kWi",
"review_text": "This paper introduces a new framework into algorithmic predictions. The paper asks and answers the question \"how can we incorporate human input into the prediction algorithm, which may not even be captured in the training data\"? The authors develop a method that first runs the predictor, and then runs a second predictor using the human input. The authors show that even a simple instantiation of their method can outperform existing predictors. They use the X-ray classification task as experimental datasets.\n\nThe paper is written very clearly, and offers a novel method to incorporate human input into algorithmic prediction. Both theoretical derivations and experiment results are sound. The contributions of this paper is significant, and I believe this paper deserves to be accepted in its current form.\n\nThe paper would be even more satisfying if the method is presented as a framework rather than a specific instantiation. In addition, it would be great if the authors can discuss potential ways to improve on the method they propose, and what these methods mean in the broader context of incorporating human feedback into algorithmic predictions. Nevertheless, these small weaknesses does not diminish the significance and novelty of this paper.\n\nMy main comment is that the authors should comment more about the future work and implications of this method. Furthermore, I would be interested to hear what the authors think about a related paper [1], and how these papers might be related.\n\n[1] DEFINING EXPERTISE: APPLICATIONS TO TREATMENT EFFECT ESTIMATION (https://arxiv.org/pdf/2403.00694)"
},
{
"confidence": 4,
"rating": 7,
"review_id": "zvw9f7oaQJ",
"review_text": "The paper proposes a framework to incorporate human expert knowledge in algorithmic predictions. Under this framework, the authors introduce a meta-algorithm that uses a training dataset including human expert predictions together with a multi calibrated partition of the data; a partition of the dataset into bins, where each bin contains data that are indistinguishable to the predictive model. Using the data of each bin the meta-algorithm trains a regression algorithm to predict the true label from the human expert prediction. In this way, the authors aim to leverage the human expertise, that may be more accurate than the predictive algorithm on specific instances, to achieve complimentary—to achieve higher predictive accuracy through human AI collaboration than the performance of a human expert or AI in isolation.\n\nThe paper suggests an elegant method to improve algorithmic predictions in light of human expertise, that could have significant applications such as the medical domain, where the additional information of human experts may lead them to more accurate predictions on certain instances compared ot predictive models. \n\nThe paper is very well and clearly written, nicely motivated and follows a clear structure. There is a thorough and comprehensive discussion on related work as well as a comprehensive and clearly presented experimental evaluation.\n\nSince the theoretical results of section 6 complement the ones of section 4, it would be perhaps more natural to follow them, rather than placing them after the experimental evaluation, which appears a bit odd.\n\nN/A"
},
{
"confidence": 3,
"rating": 7,
"review_id": "YxNisJxtJ3",
"review_text": "The paper first presents some theory for the modelling of how to identify when human judgements may offer a better diagnosis - through access to additional information - than machine predictions, despite the latter typically being more accurate. This is followed by exploring how to integrate the human input with the algorithmic (model) input. Subsequently, the authors present some focussed experimental results using chest x-ray interpretation that support their proposition.\n\nOriginality: carefully drawn comparison with the literature, situates and differentiates the contribution.\n\nQuality + Clarity (addressed together):\n\nClear abstract and intro with well-defined contributions. Content offers a reasonable balance between technical and intuitive. Recognition of the value of the human contribution and seeking to integrate it in decision making.\n\nThe later mathematical results (section 4) have effective accompanying interpretations (see complementary point in weaknesses).\n\nEffective, selective presentation of results: choosing one and going into detail, while two other cases in the appendices support the same observation, rather than trying to squeeze them all into the paper body. Same applies to results in section 5.2.\n\nSignificance: provides a sound framework for a particular, amenable class of collaboration problems that allows for the proper incorporation of human prediction where machine prediction could fall short.\n\nClarity: Indistinguishability and multicalibration are critical elements to the contribution; it would be helpful if the interpretation of their definitions (3.1, 3.2) went into a bit more detail for accessibility.\n\nThis reader is not succeeding in following the argument about robustness (section 6).\n\nQ1. The case studies are retrospective so both machine and human outcomes are available to use in the analysis. How would the approach work in a live situation?"
},
{
"confidence": 3,
"rating": 8,
"review_id": "JoSG2x7gGC",
"review_text": "This paper introduces a framework for joint human-AI prediction, where human experts can augment AI predictions in particular ex ante identifiable subsets.\n\nThis paper makes a lot of interesting contributions. First, its scope is broad and important: it tackles the question of how and whether human judgment can improve the predictions of any learning algorithm. That is and will remain to be a very important question in our time. It contributes a very interesting framework, rooted in algorithmic indistinguishability and multicalibration, to find subsets in which no algorithm in a user-specified class has predictive power (because they are algorithmically indistinguishable) but human experts do (because they might have more access to the instances, such as doctors examining patients). It demonstrates that using this framework, we can find subsets of instances where human experts can outperform algorithms, and thus the combination of the two can outperform either alone. It applies this to an important medical problem and in another domain of making predictions from photos of people. It even extends the framework to apply to a setting with noncompliance. The community stands to learn a lot from this paper.\n\nAs the authors mention, the framework is dependent on minimizing mean squared error only.\n\nHow might you model deicision makers with richer preferences than mean squared error?"
}
] |
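Editorial note: the reviews above describe a meta-algorithm that partitions instances into bins that "look the same" to the algorithm and then regresses the true label on the human prediction within each bin. The sketch below is a crude stand-in for that idea, not the paper's procedure: it approximates the multicalibrated partition by simply binning on the algorithm's own prediction quantiles, and all function names, variable names, and the synthetic data are assumptions.

```python
import numpy as np

def fit_levelset_regressions(model_pred, human_pred, y, n_bins=10):
    """Bin instances by the algorithmic prediction (a rough proxy for a
    multicalibrated partition) and, within each bin, fit a 1-D least-squares
    map from the human prediction to the true label."""
    edges = np.quantile(model_pred, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.digitize(model_pred, edges[1:-1]), 0, n_bins - 1)
    coefs = {}
    for b in range(n_bins):
        mask = bins == b
        if mask.sum() < 2:
            coefs[b] = (0.0, y[mask].mean() if mask.any() else y.mean())
            continue
        X = np.stack([human_pred[mask], np.ones(mask.sum())], axis=1)
        slope, intercept = np.linalg.lstsq(X, y[mask], rcond=None)[0]
        coefs[b] = (slope, intercept)
    return edges, coefs

def predict(model_pred, human_pred, edges, coefs):
    """Combined prediction: apply the per-bin regression to the human input."""
    n_bins = len(coefs)
    bins = np.clip(np.digitize(model_pred, edges[1:-1]), 0, n_bins - 1)
    out = np.empty_like(human_pred, dtype=float)
    for b, (slope, intercept) in coefs.items():
        out[bins == b] = slope * human_pred[bins == b] + intercept
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    side = rng.normal(size=5000)              # information the algorithm never sees
    x = rng.normal(size=5000)
    y = x + side                              # label depends on both
    model_pred = x                            # algorithm only uses x
    human_pred = x + side + rng.normal(scale=0.5, size=5000)  # noisy, but uses side info
    edges, coefs = fit_levelset_regressions(model_pred, human_pred, y)
    combo = predict(model_pred, human_pred, edges, coefs)
    print("MSE model   :", np.mean((model_pred - y) ** 2))
    print("MSE combined:", np.mean((combo - y) ** 2))
```

On this toy data the combined predictor should beat the algorithm alone, mirroring the reviewers' point that human "side information" can help on instances the algorithm cannot distinguish.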
woRFmNJiLp | Alignment at Pre-training! Towards Native Alignment for Arabic LLMs | The alignment of large language models (LLMs) is critical for developing effective and safe language models. Traditional approaches focus on aligning models during the instruction tuning or reinforcement learning stages, referred to in this paper as `\textit{post alignment}'. We argue that alignment during the pre-training phase, which we term 'native alignment', warrants investigation. Native alignment aims to prevent unaligned content from the beginning, rather than relying on post-hoc processing. This approach leverages extensively aligned pre-training data to enhance the effectiveness and usability of pre-trained models. Our study specifically explores the application of native alignment in the context of Arabic LLMs. We conduct comprehensive experiments and ablation studies to evaluate the impact of native alignment on model performance and alignment stability. Additionally, we release open-source Arabic LLMs that demonstrate state-of-the-art performance on various benchmarks, providing significant benefits to the Arabic LLM community. | https://openreview.net/pdf/cbd79f21b25bc68a35292ca9eb5ce3ac4d6d318c.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "cRT8pYb66v",
"review_text": "This paper proposed a new method for LLM alignment during pre-training. The proposed method is call \"native alignment\". This method include three steps: pretrain date duplication, alignment rewriting, and model training. They trained small size alignment expert model for alignment rewriting and use the model to rewrite large-scale pre-training data. The rewriting process suppose to solve format issue, value/fairness issue, unsafe content in pre-training data. They experimented with Arabic data and LLMs. Their experiments shows that the proposed method can help LLMs be more safe and helpful.\n\n1. The paper proposed a new idea to align LLMs during pre-training. It seems an interesting topic. \n2. The paper writing is clear and well-organized.\n\n1. Lack of comparison to existing post-alignment methods. The proposed method is a \"native alignment\" during pre-training. I wonder if this method can outperform the post-alignment methods. While the author acknowledged this limitation, I still feel it is important for strengthening their claim. \n2. Need more analyses to better understand their method's potential trade-off. For example, I wonder if rewriting pre-training data undermines the LLM's capacity to understand and learn Arabic dialects. The rewriting process may convert Arabic dialects into MSA. I also wonder if the rewriting data inherited hallucinations from LLM and deteriorated the trained model. \n3. The paper needs more clarification on experiment details. For example, they exploit Arabic data to investigate their method, however, the evaluation dataset, BeaverTails dataset, is an English dataset. I wonder how they evaluate and if they translate the samples.\n\nPlease see the questions in weakness. \n* I wonder whether you continued training the LLaMA 3 model or newly initialized LLaMA-like model and trained it from scratch."
},
{
"confidence": 5,
"rating": 3,
"review_id": "3fMvJD3FdK",
"review_text": "The paper introduce a method called \"native alignment\", which is a set of procedures to create data and train an LLM to rewrite raw text into \"useful\" texts for pretraining. They apply this technique specifically for Arabic LLMs and conduct experiments to show that this pre-processing of pre-training data helps produce better Arabic LLM down the line. As bonus, they release open-source Arabic LLMs for the communities\n\n* The paper ideas are presented clearly and easy-to-understand\n\n* As a proclaimed novelty, the paper draws itself between pre-alignment and post-alignment, indicating that previous work only focus on post-alignment but not pre-alignment. However, I afraid the paper misunderstands the concept of post-alignment (RLHF) and fails make an accurate comparison. Post alignment (RLHF) is finetuning technique to train the models to reward good-vs-bad response according to human values, and train the policy models to lean on the good behavior and stay-away from the bad behaviors gradually, often with the existence of a reference model (DPO and RLHF).\n\nMeanwhile, the \"native alignment\" presented in the paper is a data-cleaning procedure, and it does not having any resemblance or contrast with \"post-alignment\". Furthermore, using LLMs or training LLMs to rewrite raw text to produce cleaner data is not new or novel, there are many techniques out there that do so, and there are abundant open-source data on huggingface which were produced in similar ways.\nThis confusion between data cleaning and alignment makes the paper less credible and the lack of novelty it the methodology itself, as a data cleaning method, is also troublesome.\n\nObviously as a result, the paper did not provide any necessary and required experimental comparisons with other data cleaning methods.\n\n* Though I do appreciate the paper's effort for Arabic community, the scope of only Arabic LLM is small and generally inconclusive, that such method is not shown to generalize to other languages, domains. Perhaps, thus, the work is really not suitable for NeurIPS but more suitable for CL-type venues\n\n* It is unclear from the writing whether the authors pretrained Llama-3 with Arabic from scratch or further finetune from Llama-3 checkpoint. In either case, there should be explanation and further ablation studies.\n\nDid the authors pretrain from scratch (with Llama-3 architecture) or from Llama-3 checkpoint"
},
{
"confidence": 4,
"rating": 7,
"review_id": "Pc4dfAbW2M",
"review_text": "This paper proposes a data augmentation pipeline which modifies the pre training data for large language models in key aspects such as formatting, values, content moderation and knowledge preservation. The resulting pipeline, termed native alignment, is applied Arabic LLMs due to the relatively small pretraining corpus available and the difference between Arabic and western culture. Experiments are conducted to test the performance on a few metrics including trustworthiness, knowledge, and Arabic localisation.\n\nThis is a well written paper targeting the important topic of llm alignment. It also addresses the relatively under explored sub question of how to improve alignment at pretraining. The resulting pipeline presents a reasonable idea, and the evaluations are clear and I find them comprehensive too. The author(s) should also be commended for their transparency regarding the limitations of the paper.\n\nAlthough this might have become the norm of recent LLM papers, I still think it is important to include a discussion of the metrics used to measure things like 'trustworthiness' and 'knowledge', as these are qualitative metrics, whereas in the paper, it seems like the authors just quoted some existing evaluation pipeline.\n\nStep 3 of the pipeline talks about training language models to act in place of the human experts. I may have missed this but I think the authors should explicate how exactly this is done in the experiment section - are the authors using already pre trained LLMs to finetune as experts? How would we know that these are aligned themselves? If we cannot trust the LLM experts and must resort to human experts, then it's unclear to me how this method should scale up. \n\nIn the experiment section the authors show that LLMs pertained on both the original pretraining data as well as native aligned data work better - how does one interpret this result? Since, if the original pretraining data contains harmful or value-misaligned data points, then it seems reasonable that the LLM does not learn from these at all."
},
{
"confidence": 4,
"rating": 7,
"review_id": "1VHCV0cLYi",
"review_text": "This paper focuses on alignment of LLMs to human preferences and suggests to shift the alignment step from instruction-tuning (post-alignment) to the earlier stage of continued pre-training (native alignment). For that end it proposes an approach to creating aligned pre-training data, consisting of three steps: (1) seed data cleanup and rewriting with humans'/LLM help, (2) training a supervised cleanup model on that seed set and (3) processing the final pre-training dataset with that cleanup model. Presented experiments show that alignment data results in higher final quality compared to unprocessed pre-training data and that the performance gain does not reach a plateau at 12B tokens, suggesting that the amount of alignment data should be limited by the budget allocated to train an LLM. Experiments are performed on Llama-3-8B and Llama-3-70B and the Arabic language.\n\n- A high-impact and efficient approach to pre-aligned model training is introduced\n\n- Two pre-aligned LLMs for Arabic are released openly based on the experiments in this paper\n\n- Related work is excellent, the paper is written very clearly and is easy to comprehend\n\n1. No direct comparison between native alignment and post-alignment is reported\n\n2. Minor text discrepancies are present:\n- rows 16-18: partial sentence \"while..\" is not finished\n- row 47: missing verb: \"LLaMA3-Tamed-8B could beneficial\" --> \"LLaMA3-Tamed-8B could be beneficial\"\n- row 326: typo: \"instruction tinning\" --> \"instruction tuning\"\n- row 150: \"pre-training\" should be called \"continued pre-training\" in this case\n\n3. The created seed data and cleanup models are not released\n\nQ1: In the description you juxtapose native alignment and post-alignment, yet there are no experiments comparing their effect directly. What is the basis for claiming that native alignment yields better results in terms of helpfulness, harmlessness or other metrics?\n\nQ2: Hypothetical question: should we as community not aspire to create top-performing models beating GPT4, not create the best models _under_ it, led by it?, more specifically, in your setup of experiments and model training, is the final result bound by GPT4's performance, or can it surpass it?, why hasn't it, according to Table 2?\n\nQ3: How much in your opinion does the choice of seed data and synthetically cleaned alignment data affect the results?, would you consider any approaches to select these sets non-randomly, either directly or via some version of active learning?\n\nQ4: Why not release your seed data, curated by GPT4?, perhaps also the cleanup models, or even the 12B set of generated alignment data?"
}
] |
woENr7FJaI | Automated Multi-level Preference for MLLMs | Current multimodal Large Language Models (MLLMs) suffer from ''hallucination'', occasionally generating responses that are not grounded in the input images. To tackle this challenge, one promising path is to utilize reinforcement learning from human feedback (RLHF), which steers MLLMs towards learning superior responses while avoiding inferior ones. We rethink the common practice of using binary preferences (*i.e.*, superior, inferior), and find that adopting multi-level preferences (*e.g.*, superior, medium, inferior) is better for two benefits: 1) It narrows the gap between adjacent levels, thereby encouraging MLLMs to discern subtle differences. 2) It further integrates cross-level comparisons (beyond adjacent-level comparisons), thus providing a broader range of comparisons with hallucination examples. To verify our viewpoint, we present the Automated Multi-level Preference (**AMP**) framework for MLLMs. To facilitate this framework, we first develop an automated dataset generation pipeline that provides high-quality multi-level preference datasets without any human annotators. Furthermore, we design the Multi-level Direct Preference Optimization (MDPO) algorithm to robustly conduct complex multi-level preference learning. Additionally, we propose a new hallucination benchmark, MRHal-Bench. Extensive experiments across public hallucination and general benchmarks, as well as our MRHal-Bench, demonstrate the effectiveness of our proposed method. Code is available at https://github.com/takomc/amp. | https://openreview.net/pdf/a5533caccb0d2513850f2e35a5cf67613481d4b0.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "7x5aC2hbg1",
"review_text": "This paper presents the Automated Multi-level Preference (AMP) framework for improving MLLMs by addressing hallucination issues. The framework introduces a multi-level preference system for RLHF, aiming to enhance the learning process by providing more granular feedback.\n\n- The introduction of multi-level preferences rather than binary ones narrows the gap between adjacent levels, enabling MLLMs to discern subtle differences and integrate cross-level comparisons.\n- The automated pipeline for generating high-quality multi-level preference datasets without human annotators is a significant contribution, potentially reducing bias and noise while saving time and resources.\n- Extensive experiments across multiple benchmarks demonstrate the effectiveness of the proposed method.\n\n- The contribution of the paper heavily relies on the preference fine-tuning algorithm, showing limited innovation beyond this aspect.\n- The method does not demonstrate significant improvements on the LLaVA-Bench benchmark.\n- The method's performance on the adversarial tasks of the POPE benchmark is moderate, suggesting a need to reconsider the impact of MDPO on model robustness and how to balance performance and robustness.\n\nSee weaknesses."
},
{
"confidence": 5,
"rating": 5,
"review_id": "PwdwYNP5SQ",
"review_text": "In this paper, the authors develop an automated dataset generation pipeline capable of producing multi-level preference datasets without the need for human annotators. This paper introduces a novel multi-round dialogues hallucination benchmark, MRHal-Bench. Additionally, the authors design the Multi-level Direct Preference Optimization (MDPO) algorithm, which employs a specifically crafted learning objective to facilitate multi-level preference learning. Extensive experiments conducted on both the hallucination benchmark and a general benchmark demonstrate the effectiveness of this method.\n\n1. To make the labeling of multi-level preference datasets cost-effective and efficient, this paper proposes an automated dataset generation pipeline capable of producing high-quality preference datasets.\n\n2. To narrow the gap between two preference samples in DPO and make the model more easily distinguish the differences between preference data, this paper proposes a multi-level DPO algorithm that use multi-level preference data to provide a broader range of comparisons with hallucination examples.\n\n1. It is recommended to provide more quantitative information on the preference dataset generated by the automated dataset generation pipeline. For instance, the authors could use a subset of the dataset to demonstrate the similarity results compared to human annotators.\n2. In this paper, the authors conduct experiments on three hallucination benchmarks and only one general benchmark. To verify the more general applicability of the method, additional experiments are needed on general benchmarks such as TextVQA, GQA, and IconQA.\n3. In Table 1, the authors compare several MLLMs and RLHF-based MLLMs across MMHal-Bench, MRHal-Bench and LLaVA-Bench. However, the baseline model should be more up-to-date. Could you compare it with more current models such as LLaVA-v1.6, DeepSeek-VL, or MiniCPM-V?\n\n1. Assume we have 3 preference samples: A, B, C. Using the MDPO algorithm, we need to calculate the loss for AB, AC, and BC and then update the parameters. However, why do we need to calculate the loss for BC? Sample B may contain hallucinations; does this affect the model's learning of correct preferences?\n2. This paper does not enhance the visual capabilities of the model. However, in the case study, several OCR tasks and the AMP-MEG model can successfully recognize. Can the authors explain why MDPO algorithm can improve this aspect of ability?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "g6ml3w5eTT",
"review_text": "This work aims to mitigate hallucinations in Multimodal Large Language Models through preference optimization. Motivated by two limitations of binary preferences widely used in existing work, authors proposed a multi-level preference framework. The framework consists of 1) an automated dataset generation pipeline that converts each image-text pair into an image with multiple text descriptions from superior to inferior quality 2) a Multi-level Direct Preference Optimization algorithm that enumerates over all preference pairs with the standard DPO objective. Additionally, authors introduce a new hallucination benchmark, MRHal-Bench. The proposed framework has been evaluated on three benchmarks: MMHal-Bench, LLaVA-Bench, and MRHal-Bench against 5 base models and 5 preference fine-tuned model. The proposed framework achieves best state-of-the-art on MMHal-Bench and MRHal-Bench, although only improved over the second best FGAIF by a small margin. Authors also include comprehensive ablation studies on the effects of multi-level preference.\n\n* The application of multi-level preference alignment to the problem of mitigating hallucination in multimodal LLMs is novel.\n* Conduct a comprehensive comparison with existing preference fine-tuned multimodal LLMs and baselines on three benchmarks. Improve over existing methods by a small margin.\n* Provide an extensive ablation study of the multi-level preference term.\n\nAdditionally, automating of the multi-level preference data generation could be a potential strength as well, but currently lacks evaluation to justify its quality (see weakness).\n\nI would like to see authors address the following weaknesses: \n\n* **Lack intrinsic evaluation of the automated multi-level preference dataset**. The quality is only implicitly justified by the improvement on the three final benchmarks (L258-L264), which makes it unclear what are the artifacts introduced in the automated data generation. Although human or GPT-4 annotation can be inconsistent sometimes, it is still good to collect some annotations to directly assess how the generated preferences align with the degree of hallucination. Similarly, the current auto-check mechanism is ad-hoc and introduces another component, i.e., CLIP, which could introduce additional errors into the system. It would be good to conduct some evaluation on the auto-check mechanism as well. \n* **Missing comparison with rank-based preference alignment approaches**: Despite being a novel application, non-binary preference alignment has been studied both theoretically and empirically in context other than hallucination in MLLMs, for example Zhu et al. 2023 [1], Brown et al. [2], Myers et al. [3], Song et al. [4]. It would be great if this work could engage with prior literature on non-binary preference alignment, for example, discussing how does the proposed objective compare with ranking-based approach in prior work?\n* **Missing results of FGAIF on MRHal-Bench** In Table 1, FGAIF has a performance that is considerably close to the proposed methods (-0.14, +0.05) on MMHal-Bench and outperform the proposed method on LLaVA-Bench, yet it's missing results MRHal-Bench. These missing numbers could affect the comparison between the two methods.\n\nReferences:\n* [1] Zhu et al. Principled Reinforcement Learning with Human Feedback from Pairwise or K-wise Comparisons.\n* [2] Brown et al. Safe imitation learning via fast bayesian reward inference from preferences.\n* [3] Myers et al. 
Learning Multimodal Rewards from Rankings.\n* [4] Song et al. Preference Ranking Optimization for Human Alignment.\n\n* **Artifacts of the using responses from different model size**: Authors mentioned that inconsistent language styles can introduce biases, how does this concern justify the choice of using various responses from models of different sizes in the same model family? Responses from smaller models clearly don't just change the factual information, but also introduce more repetition and incoherence issues (for example, see Li et al. 2023 [1]). Would some simple perturbation-based methods control style and other factors better? The questions on artifacts apply to varying dataset size as well, it would be great if authors can discuss potential artifacts.\n* **Why not use KL-Divergence for penalty** Author added a penalty term to avoid degrading the quality of the superior responses in formula (6), why use an entropy term instead of the standard KL-Divergence based shift penalty term in RLHF? Won't this penalty term allows reward hacking on the penalty?\n* Minor: in formula (5), maybe the outer loop should be 0 to k-2?\n\n[1] Li et al. Contrastive Decoding: Open-ended Text Generation as Optimization."
}
] |
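Editorial note: one review above summarizes the MDPO component as "enumerating over all preference pairs with the standard DPO objective". The toy sketch below illustrates that idea on scalar per-response log-probabilities; it is not the paper's exact objective (the reviews note an additional penalty term in formula (6)), and the beta value and example numbers are assumptions.

```python
import numpy as np

def multilevel_dpo_loss(policy_logps, ref_logps, beta=0.1):
    """Toy multi-level preference loss: given sequence log-probabilities for K
    responses ranked from best (index 0) to worst (index K-1) under the policy
    and a frozen reference model, sum the standard DPO logistic loss over every
    ordered pair (i preferred over j, i < j)."""
    policy_logps = np.asarray(policy_logps, dtype=float)
    ref_logps = np.asarray(ref_logps, dtype=float)
    rewards = beta * (policy_logps - ref_logps)   # implicit DPO rewards
    loss, n_pairs = 0.0, 0
    K = len(rewards)
    for i in range(K):
        for j in range(i + 1, K):
            margin = rewards[i] - rewards[j]      # preferred minus dispreferred
            loss += np.log1p(np.exp(-margin))     # -log sigmoid(margin)
            n_pairs += 1
    return loss / max(n_pairs, 1)

if __name__ == "__main__":
    # three responses ranked superior > medium > inferior (illustrative numbers)
    print(multilevel_dpo_loss(policy_logps=[-10.0, -12.0, -15.0],
                              ref_logps=[-11.0, -11.5, -13.0]))
```

The pairwise enumeration is what one reviewer questions when asking why the medium-vs-inferior pair (B over C) is included even though both may contain hallucinations.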
wnPlJNiqfA | KFNN: K-Free Nearest Neighbor For Crowdsourcing | To reduce annotation costs, it is common in crowdsourcing to collect only a few noisy labels from different crowd workers for each instance. However, the limited noisy labels restrict the performance of label integration algorithms in inferring the unknown true label for the instance. Recent works have shown that leveraging neighbor instances can help alleviate this problem. Yet, these works all assume that each instance has the same neighborhood size, which defies common sense. To address this gap, we propose a novel label integration algorithm called K-free nearest neighbor (KFNN). In KFNN, the neighborhood size of each instance is automatically determined based on its attributes and noisy labels. Specifically, KFNN initially estimates a Mahalanobis distance distribution from the attribute space to model the relationship between each instance and all classes. This distance distribution is then utilized to enhance the multiple noisy label distribution of each instance. Subsequently, a Kalman filter is designed to mitigate the impact of noise incurred by neighbor instances. Finally, KFNN determines the optimal neighborhood size by the max-margin learning. Extensive experimental results demonstrate that KFNN significantly outperforms all the other state-of-the-art algorithms and exhibits greater robustness in various crowdsourcing scenarios. | https://openreview.net/pdf/0b3a999c175feae55c108033441b1455e2a2d2d8.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "gI7xweG3qs",
"review_text": "This paper proposes a novel algorithm, KFNN (K-free Nearest Neighbor), which is specifically designed to enhance label integration for crowdsourcing. KFNN integrates two key components named label distribution enhancement and K-free optimization, which significantly contribute to improving the effectiveness and robustness of the label integration process. The idea of automatically determining the optimal neighborhood size for each instance is particularly innovative and well-executed. The experimental results further validate the effectiveness and robustness of the proposed algorithm.\n\n1.\tThe KFNN proposed in this paper is interesting and innovative. The authors reveal the limitations of fixed neighborhood sizes in existing label integration algorithms and propose an algorithm that automatically determines the optimal neighborhood size based on instance attributes and noisy labels. This algorithm significantly improves the robustness of label integration.\n2.\tThe paper provides a solid theoretical foundation for the proposed KFNN algorithm, followed by comprehensive experimental validation. The theoretical analysis is robust and convincingly demonstrates the expected performance improvements. The experiments are well-designed and cover a wide range of datasets, both simulated and real-world, to ensure the generalizability of the results. The experimental results, including comparisons with baseline algorithms, further validate the effectiveness and robustness of the proposed algorithm.\n3.\tThe paper is well-written and clearly presents the proposed methodology and findings. The structure of the paper is logical, making it easy to follow the complex concepts introduced. The use of figures and tables to illustrate key points is effective and aids in comprehension.\n\n1. While the paper provides strong theoretical and experimental results, there is limited discussion on the computational efficiency and scalability of the proposed KFNN algorithm. I suggest moving the algorithmic flow and time complexity analysis from Appendix A to the main text.\n2. There are some repetitive sentences and structures in this paper that should be further condensed. For example, Sections 5.1 and 5.2 should be merged and the repetitive statements in them should be deleted.\n3. The experiments are already comprehensive, but analysis and discussion of the optimal neighborhood size determined by KFNN could still be added, which would help to understand how the neighborhood size should be set. Moreover, according to the results presented in Tables 1-4, KFNN is generally highly effective. However, on few datasets, KFNN does not perform as well as MV. These anomalies are valuable for identifying deficiencies in KFNN and should be further investigated and discussed.\n\nPlease refer to the Weaknesses."
},
{
"confidence": 5,
"rating": 8,
"review_id": "RESF88VyTp",
"review_text": "The paper presents a novel label integration algorithm, KFNN (K-Free Nearest Neighbor), designed to enhance the performance of crowdsourcing platforms by intelligently determining the optimal neighborhood size for each instance based on its attributes and noisy labels. The authors propose a two-component solution involving label distribution enhancement and K-free optimization, which leverages the Mahalanobis distance and a Kalman filter to mitigate noise from neighbor instances. The paper's claims are well-aligned with the theoretical and experimental results, demonstrating the effectiveness and robustness of KFNN against existing state-of-the-art algorithms in various crowdsourcing scenarios.\n\n1.\tNovel contribution to an important problem\nThe innovative approach of highlighting the limitations caused by fixed neighborhood sizes in existing label integration algorithms, and using attributes and noisy labels to determine the neighborhood size for each instance automatically, is a significant contribution to crowdsourcing. \n\n2.\tComplete and rigorous theoretical proof\nThe theoretical underpinnings are sound, with clear assumptions and proofs provided for the proposed methods. The use of the Mahalanobis distance and the Kalman filter is well-justified.\n\n3.\tGood writing quality and clarity\nThis paper is well-written and enjoyable to read. The challenges are clearly stated and the contributions are easy to capture. \n\n4.\tReproducibility\nThe paper's open data and code policy is highly appreciated, promoting research transparency. Enhancing reproducibility with clear versioning and setup instructions would be a valuable addition, showcasing a strong commitment to open scientific practices.\n\n1.\tSimulation experiment results\nThe symbol • indicates that the algorithm in the row significantly outperforms the algorithm in the corresponding column. How is \"significantly outperforms\" defined for Macro-F1 score and integration accuracy?\n\n2.\tAblation experiment results\nSince this study focuses on automatically adjusting neighborhood sizes, how does the performance of this method compare with baselines that use fixed neighborhood sizes?\n\nsee weaknesses"
},
{
"confidence": 4,
"rating": 7,
"review_id": "LwE0tdFpTM",
"review_text": "This paper proposes a novel label integration approach KFNN by adaptively determining the optimal neighborhood size. KFNN utilizes a Mahalanobis distance distribution to model the relationship between each instance and all classes. The authors also provide adequate theoretical analysis to illustrate the effectiveness of the proposed method. Experiments demonstrate that the proposed method can achieve the state-of-the-art performance on simulation and real-world dataset. The paper is well-written and easy to follow. This idea is very intuitive and effective for crowdsourcing task. The paper proves the effectiveness of introducing Mahalanobis distance distribution for crowdsourcing from the perspective of methodology, theory and experiments.\n\n1. The paper is well-written and easy to follow. The logic of the whole paper is clear.\n2. The paper’s idea is very intuitive and effective for crowdsourcing task. The authors introduce the Mahalanobis distance distribution to model the relationship between each instance and all classes. Experiments verify that the proposed method can achieve the best performance compared with SOTAs.\n3. The authors provide adequate evidences to verify the effectiveness of the proposed method from the perspective of methodology, theory and experiments on simulation and real-world datasets.\n\n1. In section 2, the authors introduce two categories of label integration algorithms. And the proposed KFNN belongs to the algorithms which leverage neighbor instance. I suggest adding some discussion about the pros and cons of these two categories of approaches.\n2. In methodology part and theoretical analysis part, the authors discuss the superiority of Mahalanobis distance compared with Euclidean distance. Can the authors verify the difference between Mahalanobis distance and Euclidean distance on this task from an experimental perspective?\n3. In Table 3 and Table 4, why some results are missing? Appropriate explanation facilitates reading of the paper.\n\nPlease refer to weakness. My biggest concern is the experiments for the comparison between Mahalanobis distance and Euclidean distance."
},
{
"confidence": 4,
"rating": 3,
"review_id": "oXg2B0S9uw",
"review_text": "This paper introduces a new algorithm for label integration called KFNN. Existing methods related to KNN produce more noisy labels; however, they fix the neighborhood size, regardless of the fact that instances close to the center of classes should have more neighbors than instances close to the boundary of classes. To tackle this problem, KFNN estimates a Mahalanobis distance distribution between each instance and all classes to enhance the multiple noisy label distribution and utilizes a Kalman filter to mitigate the impact of noise. Finally, KFNN can automatically determine the optimal neighborhood size through max-margin learning.\n\nS1. The paper studies an important problem. \nS2. A new solution is proposed to tackle the problem. \nS3. Experiments are conducted on several datasets.\n\nW1. The motivations need more enhancements. \nW2. Some technical details require more explanations. \nW3. The application scope of the proposed method in crowdsourcing is limited. \nW4. The performance improvement of the proposed method is unsatisfactory. \nW5. Experiments are conducted in a simulation environment, which can be much simpler than a real-world crowdsourcing platform.\n\nD1. The paper focuses on the KNN-related methods for label integration. However, the introduction didn’t justify the motivation of this concentration with convincing proofs. For instance, the motivation is basically explained with the sentence, “to alleviate this problem, recent works have begun to focus on leveraging neighbor instances [1, 11, 12] …”. However, there are also alternatives for label integration, so why considers KNN-related instead of the other types of solutions? Besides, there are much more studies (eg [R1]) that also target on this problem, which should be carefully discussed their pros and cons. Otherwise, the motivation looks weak.\n\nD2. From the perspective of crowdsourcing, the studied problem is closely related to “truth inference”. However, in the references, there are only two papers on this topic: [25] (published in 2016) and [26] (published in 2023). More studies, which can be easily found in Google Scholar or DBLP, should be reviewed and compared (if possible).\n\nD3. It is a little unclear how the principle of employing the same neighborhood size can impact performance. Please give more explanations.\n\nD4. In addressing the question of fusing information from the attribute space and the multiple noisy label space, this paper tends to take an average between the multiple noisy label distribution and the potential label distribution. However, it might be worth exploring the possibility of introducing a tunable parameter to achieve a more optimal balance between these two distributions, rather than relying solely on an equal (50%) average. \n\nD5. The application scope of the proposed method in crowdsourcing is limited. In my opinion, the proposed KFNN can be only used in simple and micro tasks in crowdsourcing, there are many other kinds of tasks in a real-world crowdsourcing platform, such as ranking [R2], which is not considered in the problem setting. Yet, the title, “KFNN: K-Free Nearest Neighbor For Crowdsourcing”, is a little over-claimed. At least, the paper should explicitly define the application scope. More types of crowdsourcing task can be found in existing surveys [R3, R4] on crowdsourcing.\n\nD6. The performance improvement of the proposed method is unsatisfactory. 
\n(1) Although the average Macro-F1 score of KFNN is better than the compared baselines, it can be notably worse than some of the baselines in certain datasets (eg MNLDP on the anneal dataset). This pattern weakens the motivation, since it’s unclear whether the limitation of existing solutions has been well addressed or not. \n(2) In Table 2, the integration accuracy of KFNN is lower than that of MNLDP. Besides, it can be also notably worse than some of the baselines in terms of the integration accuracy (eg LAGNN and LAWMV on the breast-cancer dataset). \n(3) Based on the current experimental results, the effectiveness of the proposed solution KFNN is questionable.\n\nD7. Although several datasets are conducted in the experimental study, existing work on truth inference in crowdsourcing (eg [R2, R5]) usually deploys their solution in a real-world platform, such as AMT, to verify the performance. Therefore, the setup of the experimental study can be simplifier and less practical than the real-world scenario.\n\nReferences: \n[R1] Adaptive Integration of Partial Label Learning and Negative Learning for Enhanced Noisy Label Learning. AAAI 2024. \n[R2] Xi Chen et al. Pairwise ranking aggregation in a crowdsourced setting. WSDM 2013. \n[R3] Guoliang Li et al. Crowdsourced Data Management: A Survey. IEEE TKDE 2016. \n[R4] Hector Garcia-Molina et al. Challenges in Data Crowdsourcing. IEEE TKDE 2016. \n[R5] Yudian Zheng et al. Truth Inference in Crowdsourcing: Is the Problem Solved? VLDB 2017."
}
] |
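Editorial note: the reviews above refer to KFNN's first step — estimating a Mahalanobis distance distribution between an instance and all classes and blending it with the instance's noisy-label distribution — without giving details. The sketch below illustrates only that step under simplifying assumptions (a single shared covariance and the plain 50/50 average that point D4 questions); it omits the Kalman filter and the max-margin neighborhood-size selection, and it is not the paper's exact formulation.

```python
import numpy as np

def mahalanobis_class_distribution(x, X, Y_soft, eps=1e-6):
    """Score an instance x against every class by the Mahalanobis distance to a
    (softly weighted) class mean under a shared covariance, then map the
    distances to a probability distribution over classes."""
    cov = np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])
    cov_inv = np.linalg.inv(cov)
    # class means weighted by each training instance's (noisy) label distribution
    means = (Y_soft.T @ X) / (Y_soft.sum(axis=0)[:, None] + eps)
    d2 = np.array([(x - mu) @ cov_inv @ (x - mu) for mu in means])
    scores = np.exp(-0.5 * d2)                  # smaller distance -> larger score
    return scores / scores.sum()

def enhanced_label_distribution(x, noisy_counts, X, Y_soft, alpha=0.5):
    """Blend the attribute-based distribution with the instance's own normalized
    noisy-label distribution; alpha=0.5 mirrors the equal average one reviewer
    suggests replacing with a tunable parameter."""
    label_dist = noisy_counts / noisy_counts.sum()
    attr_dist = mahalanobis_class_distribution(x, X, Y_soft)
    return alpha * label_dist + (1 - alpha) * attr_dist
```

Here `X` is the attribute matrix of all training instances, `Y_soft` their per-instance noisy-label distributions, and `noisy_counts` the crowd-label counts for the instance being integrated — all names are assumptions introduced for this illustration.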
wm9JZq7RCe | An Analysis of Tokenization: Transformers under Markov Data | While there has been a large body of research attempting to circumvent tokenization for language modeling (Clark et al. 2022, Xue et al. 2022), the current consensus is that it is a necessary initial step for designing state-of-the-art performant language models. In this paper, we investigate tokenization from a theoretical point of view by studying the behavior of transformers on simple data generating processes. When trained on data drawn from certain simple $k^{\text{th}}$-order Markov processes for $k > 1$, transformers exhibit a surprising phenomenon - in the absence of tokenization, they empirically are incredibly slow or fail to learn the right distribution and predict characters according to a unigram model (Makkuva et al. 2024). With the addition of tokenization, however, we empirically observe that transformers break through this barrier and are able to model the probabilities of sequences drawn from the source near-optimally, achieving small cross-entropy loss. With this observation as starting point, we study the end-to-end cross-entropy loss achieved by transformers with and without tokenization. With the appropriate tokenization, we show that even the simplest unigram models (over tokens) learnt by transformers are able to model the probability of sequences drawn from $k^{\text{th}}$-order Markov sources near optimally. Our analysis provides a justification for the use of tokenization in practice through studying the behavior of transformers on Markovian data. | https://openreview.net/pdf/d6c78ee455f5fe6feda13258c2c22ffcc162624c.pdf | [
{
"confidence": 3,
"rating": 7,
"review_id": "xUCY5vH5zP",
"review_text": "This paper presents a study on tokenization by investigating the behavior of transformers on simple data generating processes . It shows that, in the absence of any tokenization, transformers trained on $k$th-order Markov processes predict characters according to a unigram model, which is quite problematic given how poor unigram models are at modeling Markovian data. Paradoxically, they observe that, even the simplest unigram model learnt by transformers *with the appropriate tokenization* is able to model the probability of sequences sampled from a $k$th-order Markov process.\n\n- The paper is well written, with empirical observation intermingled with theory, which I quite liked. The theory is also accompanied by a lot if intuition, insight and interpretation, which really helps drive the point home.\n\n- In section 3.2, the authors chose to focus on developing guarantees for a newly developed tokenizer, which, to my knowledge, is seldom used. It would've been maybe of greater use to the community to also, or instead, establish these guarantees for the more commonly-used tokenizers, such as BPE.\n\n- I appreciate that this is mostly a theoretical study of tokenizers, and while the observations put forward are valuable, I found myself wondering what practical takeaways this paper presents to improve current tokenizers. That is something I would love the authors to comment on.\n\nPlease see the Weaknesses section above"
},
{
"confidence": 2,
"rating": 6,
"review_id": "ZGbzbf0GLn",
"review_text": "The authors show that tokenization is a fundamental property of transformer-based models, in the sense that without it, it is hard (if not impossible) to achieve low cross-entropy loss on next-word prediction. They show that tokenization helps breaking the unigram barrier (i.e., the best loss a unigram model can achieve) and give a theoretical characterization of the information tokenizers provide in terms of statistics on token distribution.\n\nIn particular:\n\nSection 2.1 shows how models without tokenization cannot achieve the optimal cross-entropy loss, while when equipped with a tokenizer they break the unigram barrier.\n\nSection 3 studies tokenizers that assign all possible substrings (up to length r) as tokens in the dictionary and shows their theoretical optimality in learning processes ruled by k-Markov chains. A consequence is that unigram models can also do that, in the limit.\n\nOf course, this comes at the expense of the model's efficiency and potential attacks that one can run on an exponential number of tokens (i.e., the surface attack grows very large).\n\nFinally, the authors show that tokenizers can trade off the vocabulary size while maintaining low cross-entropy (i.e., they can behave like an optimal model).\n\nFinally, they extend the theoretical framework to LZW tokenizers.\n\nExperiments are conducted on tokenized vs. non-tokenized models on {k=1}-Markov models and then on some real datasets to show that tokenizers trade-off complexity and efficiency in learning an optimal representation of the characters (and their frequency) in the training distribution.\n\nThe article studies an important problem, and I think there is value in the paper.\nTo the best of my knowledge, comparing BPE to non-tokenized models is new, and the figures give some interesting insights (e.g., Figure 2).\nYour paper contains much theoretical work, contributing to its quality.\nThe results in the limit for unigram and BPE/LZW models are noticeable (Section 3 and Eq. 2).\n\nIn general, the results seem solid and are also interesting for linguists and NLP researchers. BPE and other tokenization methods find a trade-off between unigram models, as per Eq. 2, and the complexity of the resulting vocabulary (and model).\n\nOne of the main weaknesses of this work is how it is presented. \nMaybe it's me, but I found it quite hard to read. See questions.\n\nAnother concern is how theoretical results apply to real-world datasets. See questions, but Fig. 5 seems to mitigate the impact of your theoretical results.\nIn fact, for the vocabulary that grows larger, all the models have a similar value of cross entropy (for around ~50K tokens).\n\nThe article seems rushed, as there are many typos (I just listed some).\n- Line 150 “the a”\n- Line 173, “it make since” --> “sense”\n- Line 175, eq. and many others --> Eq. (it’s not wrong per-se, but you capitalize Fig, Example, etc.)\n- The Notation paragraph shouldn’t go with related works but should be in the next section.\n- Notation in 2 is a bit sloppy (this is a personal suggestion): you can use D() and E() for the decoder/encoder (and enclose them with \\mathcal).\n\nYou say that Transformers without tokenization fail at modelling simple k-order Markov models, while with BPE (and other techniques), they succeed. I would say that is simply because BPE injects statistical information in the data and splits it accordingly. 
BPE is \"trained\" to aggregate \"bytes\" according to their frequency in the training data, so it somehow informs a model with the stationary distribution of most-frequent contiguous characters.\nAm I missing any non-trivial observation here?\n\nThere is a reference in the Appendix to the model used (GPT-2), but nothing on the main paper. For example, I asked myself multiple times what models you used for the tokenized and non-tokenized networks.\n\nBy unigram models, do you mean those where the probability of each token approximates the inverse frequency of a character/token in the training data?\n\nIf I understand correctly, in Fig. 2 (a), models without tokenization fail at breaking the unigram barrier (so the best they can do is model the inverse character frequency). How does that connect to the relative success of character-level tokenization? There are plenty of methods that use character-level tokenization, and they probably work much better than unigrams. Is there anything I am missing here?\n\nIn Figure 2 you mention that plot (2b) has 70x less parameters, but you do not specify why (Is it to prove tokenization helps?).\nDo you use GPT-2, as mentioned in the Appendix? If so, do you use a smaller model for the figure on the left and a larger one for the one on the right?\n\nFig. 3 is hard to understand. I read it many times, but I still do not fully understand what it conveys. The heatmap goes from 0. to ~0.6, though it is unclear what the measure is (I guess it is the probability?)."
},
{
"confidence": 3,
"rating": 6,
"review_id": "3BtP2wkdy2",
"review_text": "This paper offers theoretical insights into the importance of tokenization in language models. Tokenization is ostensibly the artifact that makes training LMs not an end-to-end procedure. This design choice introduces biases, as it is not optimized for exactly the same criterion as the full model. Yet training without a tokenization step almost always leads to worse language models. This paper attempts to provide reasons based in probability theory for this phenomenon. The authors first explore a toy setting, in which transformer models are tasked with predicting distributions from kth order Markov processes. They offer a theoretical explanation for why the error of models is capped at that of a unigram model and how tokenization alleviates this issue. They then show that tokenization schemes with certain properties can achieve the optimal cross-entropy loss. The work offers some basic experimental results confirming their insights.\n\n* Tokenization is a core part NLP pipelines yet it still needs to be better understood from a theoretical perspective. The questions that this paper tries to answer are very relevant for both model interpretability and further development\n* The theory is presented in an understandable manner and results for specific popular tokenization schemes are provided.\n\n* The theory presented in this work is for a specific type of data-generating distribution (kth order Markov) and we can’t directly extrapolate these results to linguistic distributions, which do not necessarily follow such a distribution. There is minimal discussion about the relationship between kth order Markov and linguistic distributions, which leaves the reader questioning how relevant these results actually are.\n* Ultimately, the results are limited; they essentially show an expected result (the existence of an optimal unigram language model as the dictionary size grows to infinity). While some intuition can be gained from these results, the theoretical implications are limited.\n* There is minimal discussion of the empirical results and what conclusions can be drawn from them. Given how much of the theory is not directly applicable to real language modeling settings, it feels like such a discussion should be very important\n\n* In the kth-order Markov processes studied, are nth state distributions dependent only on the n-kth state? I may be misunderstanding the caption in figure 1\n* The results are applicable to all language models, not just large ones. If anything, they are arguably more relevant for smaller language models. Perhaps consider changing the title\n* How does the work differ from/build on Edelman et. al. 2024 and Makkuva et. al. 2024?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "RrekiZEVk3",
"review_text": "This paper investigates the learning dynamics of unigram language models on top of tokenised vs non-tokenised data, comparing these models’ expected cross-entropy to the distribution’s entropy. The paper performs this analysis while considering different data generating distributions (mainly focusing on relatively simple markov chains), and different tokenisation methods.\n\nThis paper tackles an interesting and timely topic: how tokenisation enables language modeling. \n\nThis paper provides an interesting theoretical analysis of the effect of tokenisation on unigram language modeling.\n\nThis paper also provides a couple of empirical analyses of how unigram models perform on real data.\n\nThe paper is relatively easy to follow, even though some of the mathematical results could be spelled out a bit more clearly to make it easier for a reader to follow them.\n\nThis paper’s framing, in my opinion, significantly over-claims its results:\n* The title “Toward a Theory of Tokenization in LLMs” is very broad for the current contributions. A more appropriate title, in my opinion, would be “Analysing tokenisation’s effect on unigram distributions”, or something analogous to it. There is no “theory of tokenisation” being proposed here, but a theoretical analysis of how tokenisation affects a simple model’s cross-entropy.\n* The abstract and introduction also significantly overclaim results, with statements such as “we study the end-to-end cross-entropy loss achieved by transformers with and without tokenization” while focusing on unigram cross-entropies. Transformers may serve as motivation to this work (as they initially learn unigram statistics), but are not in fact analysed here.\n\nI think the paper would also be significantly more straightforward to read if the framing was fixed and it was clear from the start that the paper's analyses would focus on unigram models.\n\nMy current score is mostly based on my current understand that this paper overclaims its results. I'm open to increasing my score if the authors either tone down the paper contributions' rhetoric, or make a convincing argument of why the current framing is appropriate.\n\n> we study the end-to-end cross-entropy loss achieved by transformers with and without tokenization\n\nI’d argue this paper does not actually study a transformer’s cross-entropy with and without tokenization, but a unigram model’s instead. Even if transformers learn unigram distributions early on (and in some tasks are never able to learn more than that), this is still a strong over-statement in my opinion.\n\n> the models initially predict tokens according to a unigram model (in context unigrams), which delays learning the optimal solution. This phenomenon was also observed in Makkuva et al. (2024).\n\n\nThis phenomenon was previously shown by Chang et al., 2022; 2023.\n\n> Line 115. Q(t1, t2, · · · , tj ) = Q#(j) Qji=1 Qtok(ti)\n\nWhat does Q_{#}(j) represent?\n\n> Figure 2a\n\nWhat happens if models are trained for more iterations?\n\n> Figure 3\n\nI found this figure confusing. I don’t fully understand what is being presented here.\n\n#### `References`\n\n* Chang et al., 2022. Word Acquisition in Neural Language Models\n* Chang et al., 2023. Characterizing Learning Curves During Language Model Pre-Training: Learning, Forgetting, and Stability"
}
] |
wlqfOvlTQz | Reinforcement Learning with Lookahead Information | We study reinforcement learning (RL) problems in which agents observe the reward or transition realizations at their current state _before deciding which action to take_. Such observations are available in many applications, including transactions, navigation and more. When the environment is known, previous work shows that this lookahead information can drastically increase the collected reward. However, outside of specific applications, existing approaches for interacting with unknown environments are not well-adapted to these observations. In this work, we close this gap and design provably-efficient learning algorithms able to incorporate lookahead information. To achieve this, we perform planning using the empirical distribution of the reward and transition observations, in contrast to vanilla approaches that only rely on estimated expectations. We prove that our algorithms achieve tight regret versus a baseline that also has access to lookahead information -- linearly increasing the amount of collected reward compared to agents that cannot handle lookahead information. | https://openreview.net/pdf/ad7a5666a27d4242faa064f772f46ff2791265c1.pdf | [
{
"confidence": 1,
"rating": 6,
"review_id": "cCJAz5kTVu",
"review_text": "This paper introduces reinforcement learning (RL) problems where agents observe one-step lookahead information (either rewards or transitions) before choosing actions in episodic tabular MDPs. Two relevant lines of work exist: the control literature, which studies a similar lookahead concept in the continuous state-space scenario, and the RL planning community, which commonly obtains lookahead information from learned transition models. However, this paper assumes the reward/transition information to be available before selecting an action. The core contributions are:\n\n1) Formalising the look-ahead setting for the reward and transition in an episodic MDP setting.\n2) Derivation of the Bellman equations in the original space by setting up an equivalence with an equivalent new MDP.\n3) Development of two algorithms for reward (MVP-RL) and transition lookahead ( MVP-TL).\n4) First sub-linear regret bound win the lookahead setting.\n\nThis paper is the first to provide regret bound on the lookahead learning setting. This encompass a somewhat broad spectrum of problems that were independently studied such as the Canadian traveler problem and the prophet inequalities. \n\nThey paper is well written and easy to follow for non-expert in learning theory. It presents the core ideas in an understandable way in the main paper and use the appendix for technical proofs.\n\nThe paper could be strengthened by adding experimental results studying the difference in behaviour and performance between standard RL algorithm, MVP and the proposed solution MVP-RL. More specifically, I would be interested in understanding the difference in behaviour when changing the tails of the reward/transition distributions.\n\nHow applicable is the theoretical argument you used to a model-free version of MVP-RL/TL?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "FV8PRErH0T",
"review_text": "The authors proposed new forms of Bellman equations for environments where the agent knows the reward or transition outcomes one step ahead (without knowing the full model).\n\nWhile previous papers (e.g., Boutilier et al. 2018) discussed utilizing lookahead information (and proved convergence), the authors claim they are the first to present regret results.\n\nWhile the theoretical contribution is clear, the authors must also provide practical validation.\n\n- Perform experimental validation to illustrate the practical performance. \n For instance, it would be necessary to check the learning curves and the resulting performance. \n The authors should also discuss the practical implementation. \n\n- H is the important parameter to be determined. Provide a practical guide line for choosing H. \n\n- can this method be extended to off-policy learning ?\n Is the method data-efficient ?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "HqIO6ZNliA",
"review_text": "This manuscript proposes the RL method with lookahead information. The authors discuss two scenarios: reward lookahead and transition lookahead. Under such scenarios, the proposed method estimates the reward distribution and transition distribution, respectively. Then the monotonic value propagation skill is applied to calculate the value function. The authors show that the proposed method has strong theoretical properties and the reward regret is strongly bounded under two circumstances.\n\nThe manuscript is well organized, and the structure is clear. The authors shows very promising bound for both reward lookahead and transition lookahead scenarios.\n\nThis is a theoretical paper, however, the authors miss to deliver some numerical or empirical studies. It is suggested to add some empirical experiments, at least with simulated data. \n\nAlgorithm 1&2 shows the procedure for training, I am confused about the inference process. How to select the action give certain state in inference? The authors are suggested to give some explanations in the Algorithm 1&2.\n\nLine 150, the sentence should be \"in this way\"?\n\nThe estimated reward/transition distribution $\\hat{R}^k_h$ and $\\hat{P}^k_h$ are key components for the proposed method, it is suggested to give more details on the distribution estimation part. \n\nFor both cases, the authors proposes the bonuses, I am confused why do we need the bonus? only for calculating the variance value? However, even without the bonus value, we can also update the value function $\\bar{V}^k_h(s)$, right?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "MuxkR9UsfU",
"review_text": "The paper considers the setting where the agent can see the possible next rewards and next states without assuming a prior knowledge of the environment dynamics. The predicted next rewards and next states are estimated by empirical distribution. The paper considers extending Monotonic Value Propagation to such a setting and proves that the proposed algorithms can achieve tight regret bounds.\n\n- A tight regret bound is proved for the proposed algorithm, establishing theoretical justification for lookahead information and advantages of planning in RL in general.\n- The paper does not assume known environment dynamics as in most previous works, which makes the algorithm applicable to standard RL settings. The lack of known environment dynamics may bring various challenges to planning, such as agents not relying on the lookahead information when the estimated environment dynamics are still far from the true one in the early stages. The paper shows that the lookahead information can still be very beneficial despite such challenges.\n- The paper is well-written, and the proof is easy to follow.\n\nEven though a tight regret bound has been proved, empirical experiments with examples showing how the agent uses the lookahead information will strengthen the paper.\n\nThere has been prior work in deep RL that makes use of lookahead information even when the environment dynamic is unknown. One example is the Thinker algorithm [1], which allows agents to select an imaginary action trajectory to collect n-step lookahead information before selecting an action in each step (the environment dynamics are also assumed to be unknown). The related work section should be updated to reflect this (e.g. line 73-79). However, as these works are mostly empirical without proving the regret bound, I still recommend that the paper be accepted, given its theoretical significance. \n\n[1] Chung, Stephen, Ivan Anokhin, and David Krueger. \"Thinker: learning to plan and act.\" Advances in Neural Information Processing Systems 36 (2024)."
},
{
"confidence": 4,
"rating": 5,
"review_id": "Fd9g1CgGfS",
"review_text": "The paper studies an RL problem with a special setting, called one-step lookahead, where the agent can observe the reward or the state at the next step before the current action is taken. The paper focuses on the problem with an unknown environment (transition function). The authors proposed an efficient algorithms leveraging the empirical distribution of the lookahead information and claimed that the algorithms achieve tight regret against a strong baseline.\n\n1. The paper studies an interesting RL problem where one-step lookahead information is available to the agent while the environment is unknown. \n\n2. The paper clearly presents the problem, the solution, and a comparison between the proposed algorithm and the baseline in terms of regret bound.\n\n3. The paper offers explanation of the terms in the regret bounds and justified its explanation.\n\n1. One concern is the application of such a lookahead setting. The agents during training and running needs to know what will be realized in order to make actions at the current state. Not sure what real-world scenarios this setting can be applicable to.\n\n\n2. RL with lookahead information has been investigated before from a theoretical point of view. See [R1, p64]. [R2] [R3]. [R1] discusses the lookahead in the approximation of the bellman function. [R2-R3] considers controlled lookahead where the agents decide the step of lookahead as a strategy. It is not straightforward to see in this paper how the lookahead studied in this paper different from those references. \n\n[R1] Bertsekas, Dimitri. Reinforcement learning and optimal control. Vol. 1. Athena Scientific, 2019.\n[R2] Biedenkapp, André, et al. \"TempoRL: Learning when to act.\" International Conference on Machine Learning. PMLR, 2021.\n[R3] Huang, Yunhan, Veeraruna Kavitha, and Quanyan Zhu. \"Continuous-time markov decision processes with controlled observations.\" 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2019.\n\n\n3. It is not clear the source of the baseline mentioning in the paper. For example, \"compared to a stronger baseline that also has access to lookahead information\". The paper should includes the reference whenever the baseline is compared with the proposed solution.\n\n1. When the agents have access to both reward lookahead and transition lookahead, how would the regret bound be different?\n\n2. Why doesn't the paper present a case study that illustrates how would the agent behave differently between a normal RL setting and a lookahead setting?"
}
] |
wlcm21C4nk | Advancing Training Efficiency of Deep Spiking Neural Networks through Rate-based Backpropagation | Recent insights have revealed that rate-coding is a primary form of information representation captured by surrogate-gradient-based Backpropagation Through Time (BPTT) in training deep Spiking Neural Networks (SNNs). Motivated by these findings, we propose rate-based backpropagation, a training strategy specifically designed to exploit rate-based representations to reduce the complexity of BPTT. Our method minimizes reliance on detailed temporal derivatives by focusing on averaged dynamics, streamlining the computational graph to reduce memory and computational demands of SNNs training. We substantiate the rationality of the gradient approximation between BPTT and the proposed method through both theoretical analysis and empirical observations. Comprehensive experiments on CIFAR-10, CIFAR-100, ImageNet, and CIFAR10-DVS validate that our method achieves comparable performance to BPTT counterparts, and surpasses state-of-the-art efficient training techniques. By leveraging the inherent benefits of rate-coding, this work sets the stage for more scalable and efficient SNNs training within resource-constrained environments. | https://openreview.net/pdf/a4eb38a11be001248c145a0cd2381f9d6503b19c.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "11AIRtSmav",
"review_text": "Recent research indicates that rate-coding is crucial for information representation in deep Spiking Neural Networks (SNNs) trained via Backpropagation Through Time (BPTT). Building on this insight, a new training strategy called rate-based backpropagation has been developed to leverage rate-based representations, reducing the complexity of BPTT. This approach focuses on averaged dynamics to simplify the computational graph, thereby lowering memory and computational requirements. Theoretical and empirical analyses demonstrate that this method closely approximates BPTT's gradient optimization, maintaining comparable performance while surpassing other efficient training techniques. This advancement is poised to enable more scalable and resource-efficient SNN training, particularly in environments with limited resources.\n\n1.\tThe paper is very well written and documented.\n2.\tThe contributions have been discussed comprehensively.\n3.\tThe experiments have been conducted on multiple benchmarks.\n\nSome important details (such as the top-level algorithm of the proposed rate-based backpropagation method and details of the experimental setup) are reported in the appendix, while, due to their importance, they should be moved to the main manuscript.\n\n1.\tCan the proposed rate-based backpropagation be implemented on existing neuromorphic chips with learning capabilities?\n2.\tLooking at the results in Fig.4, the impact of the number of timesteps on the time and memory looks constant. How have specific numbers of timesteps been selected for each dataset?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "COMieNpRAB",
"review_text": "This paper presents a novel rate-based backpropagation method for spiking neural network (SNNS) training, which effectively separates the time-dependent backpropagation (BPTT) process and thus reduces computational and memory costs. The method employs a rate-encoded approximation to capture the basic information and is validated by empirical experiments on various datasets, demonstrating that it is superior in terms of training efficiency and accuracy when compared to the traditional BPTT.\n\n1. Empirical results on multiple datasets (CIFAR-10, CIFAR-100, ImageNet, CIFAR10-DVS) support the theoretical claims and ensure accuracy while reducing memory and time costs.\n2. The paper is well-written, clearly explaining the proposed method, theoretical underpinnings, and experimental validation.\n\n1.\tIn lines 53-55, this paper mentions that the proposed method reduces training time, but there is no relevant experimental proof in the experiments section.\n\n1. In lines 223-234, the reference to 'when cosine similarity close to 1 is interpreted as a high degree of consistency in the direction of the variable', does it take into account the effects of data distribution and noise, which may also occur in the case of uneven data distribution. Can additional experiments or theories be added to rule out the effect of data distribution and noise on the hypothesis presented in lines 223-234?\n2. The approach proposed in the paper seems to be very similar to the one described in reference [1]. Although the general direction of the two is different, the core idea seems to be the same. Could you please explain the difference between your approach and the one outlined in reference [1]?\n3. In Section 5.3, in the experiments evaluating the effect of time step on accuracy and training, only one dataset, CIFAR-10, was used. Could the experiment be supplemented with experiments using other datasets to demonstrate the scalability of the proposed method for larger values of T?\n4. In the caption of Fig. 3, is the placeholder '#' missing from T{timesteps}?\n\nReference:\n[1] Bu, Tong, et al. \"Rate gradient approximation attack threats deep spiking neural networks.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023."
},
{
"confidence": 4,
"rating": 5,
"review_id": "WBvXqoWskK",
"review_text": "This work falls into the category of efficient SNN training methods. This paper proposes a reduced computational graph to reduce the memory and computational demands of SNNs training. This work has the potential to train SNNs on resource-limited devices. The paper evaluates the methods on CIFAR-10, CIFAR-100, ImageNet, and other datasets.\n\nThis paper addresses the issue of high time and memory costs in training spiking neural networks. \n\nThis paper provides solid theoretical insights into the error bound and its relation to SNN BPTT training. \n\nThe results of this work are comparable to the performance of the BPTT counterpart.\n\nNot a clear comparison of the differences with existing e-prop methods in terms of methodology. \n\nNo generalization results on hyperparameters (e.g., $\\lambda$) are presented in this work. I raise this question because most work on SNNs uses large values of $\\lambda$, but this work used 0.2 as $\\lambda$.\n\nWhy did the authors approximate the spiking rate directly with the mean over timesteps, instead of using a running mean with a decay parameter $\\lambda$, which would more closely approximate the rate in the leaky integration mode?\n\nIn Line 151, page 5, what does 𝑑 represent in $\\frac{\\partial I}{partial c} = Id$?\n\nPlease elaborate further on the differences between rateM and rateS. The authors state that 'rateM represents the multi-step training mode where T loops are embedded within layers, while rateS refers to the single-step training mode with T loops outside the layer.'\n\nRegarding robustness to $\\lambda$ In the paper, the neuronal parameter $\\lambda$ is set to 0.2. Can you provide experiments with other values of $\\lambda$, such as 0.4, 0.6, 0.8, and 1.0?\n\nI believe that the training cost in Fig. 4 should encompass not only the backward process but also the forward iteration process (which also contributes to the cost)."
},
{
"confidence": 5,
"rating": 6,
"review_id": "bUtFAKKx5c",
"review_text": "This paper proposes a rate-based SNN training method, which can effectively reduce memory and time cost during training. They proved the efficiency of the rate-based back-propagation training and demonstrate that the rate-based training outperforms other back-propagation methods.\n\nThe rate-based method achieves better performance and uses less computing resource compared with BPTT, which is impressive.\n\nThis paper is well-written and well-organized.\n\nThe novelty is weak. There are two previous works that share similar idea with this paper, since they all use rate-based backpropagation [1,2]. The author needs to briefly explain the differences between these papers.\n\nThe rate-based backpropagation is not suitable for sequential tasks.\n\n[1] Li, Yuhang, et al. \"Differentiable spike: Rethinking gradient-descent for training spiking neural networks.\" Advances in Neural Information Processing Systems (2021).\n[2] Bu, Tong, et al. \"Rate gradient approximation attack threats deep spiking neural networks.\" Computer Vision and Pattern Recognition (2023).\n\nThe authors introduce the rate-coding approximation forward propagation in Section 4.1. Is this forward propagation method also used during inference?\n\nWhat is the performance of rate$_s$ on ImageNet dataset?"
}
] |
wlLjYl0Gi6 | Efficient LLM Scheduling by Learning to Rank | In Large Language Model (LLM) inference, the output length of an LLM request is typically regarded as not known a priori. Consequently, most LLM serving systems employ a simple First-come-first-serve (FCFS) scheduling strategy, leading to Head-Of-Line (HOL) blocking and reduced throughput and service quality.
In this paper, we reexamine this assumption -- we show that, although predicting the exact generation length of each request is infeasible, it is possible to predict the relative ranks of output lengths in a batch of requests, using learning to rank. The ranking information offers valuable guidance for scheduling requests. Building on this insight, we develop a novel scheduler for LLM inference and serving that can approximate the shortest-job-first (SJF) schedule better than existing approaches. We integrate this scheduler with the state-of-the-art LLM serving system and show significant performance improvement in several important applications: 2.8x lower latency in chatbot serving and 6.5x higher throughput in synthetic data generation. Our code is available at https://github.com/hao-ai-lab/vllm-ltr.git | https://openreview.net/pdf/ef9ade264c14ae815c219f762df83610938eb101.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "vdPGmOC8b7",
"review_text": "This paper proposes a learning-based rank predictor for scheduling LLM inference to reduce Head-of-Line (HoL) blocking issues, which significantly outperforms state-of-the-art LLM serving systems.\n\n1. This paper addresses an important question in LLM serving.\n2. This paper is easy to follow with a good presentation.\n3. The evaluation results are comprehensive and solid.\n\n1. One potential issue with preemptive scheduling for LLM inference is the accumulated unused KV cache. How do you handle them when the GPU reaches the maximum memory limit? \n\n2. How much does the ranking model (OPT) size affect the prediction and throughput performance? For example, what if I use a smaller auxiliary model (OPT-125M) for a larger LLM (LLaMA-70B)?\n\n3. How much is the performance gap between the ranking-based method and Oracle? It would be better if the authors could add such results to provide a performance upper bound.\n\nPlease see the weaknesses above."
},
{
"confidence": 3,
"rating": 6,
"review_id": "negzB006px",
"review_text": "This paper proposes an approach for optimizing scheduling in LLM serving by learning a generated token length ranking model. The authors demonstrate that understanding the relative order of generation lengths can effectively guide the scheduling process, specifically through the use of SJF/ SRTF scheduling strategies.\n\n1. The paper is well-written, making the methodology and results clear and easy to understand.\n2. The experiments are well-designed and convincingly demonstrate the benefits of the proposed approach.\n3. The proposed method has shown practical improvements when integrated with current serving techniques.\n\n1. While the approach is effective, it builds upon existing work that has already identified the benefits of SJF/SRTF scheduling for LLMs[1][2]. The novelty is somewhat limited to the application of ranking loss instead of classification loss.\n2. If we directly predict the token length, it could potentially offer advantages such as improved memory allocation and cache strategy adjustments, which are also crucial for optimizing LLM serving. In contrast, using relative order may not provide these benefits.\n3. The paper lacks a thorough discussion of some related work, such as [1][2]\n\n[1] Efficient Interactive LLM Serving with Proxy Model-based Sequence Length Prediction\n\n[2] Power-aware Deep Learning Model Serving with µ-Serve\n\nThis paper only considers the generated length, which may affect the execution time for each query. However, prompt length also influences execution time. Wouldn't it be more reasonable to also take prompt length into consideration?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "YsfJfUAGp3",
"review_text": "This paper reveals the Head-of-Line (HOL) blocking problems caused by the first-come-first-serve (FCFS) scheduling strategy in LLM services. To alleviate these problems, the authors train an OPT model to generate scores for evaluating the relative text length of given prompts. Based on these scores, the authors develop a novel scheduler for LLM inference and serving. Experimental results demonstrate the effectiveness of the proposed method, significantly outperforming the baseline method.\n\n1. The proposed method is efficient and effective. Training a small language model (i.e., a 125M OPT model) is cheap, and the resulting latency gains are substantial.\n2. This paper is novel. Unlike traditional methods that predict the real generation length, predicting the relative ordering between request lengths is sufficient for ranking.\n\n1. Since the request queue Q is re-ranked after each batch of data is scored, the ranking scheduler may be sensitive to the batch size.\n\n1. Could you give a more detailed analysis of the relationship between ListMLE loss and Kendall’s Tau coefficient?\n2. Are all the last hidden states of the OPT model used to map to a score, or are only specific hidden states of a token used? Using a decoder-only model to extract features of a text seems unusual."
},
{
"confidence": 4,
"rating": 4,
"review_id": "ZKJWazxLKR",
"review_text": "The paper addresses the inefficiencies in scheduling LLM inference requests, which often use a first-come-first-serve (FCFS) strategy, leading to Head-Of-Line (HOL) blocking and reduced throughput. The authors propose a novel scheduling method based on predicting the relative ranks of output lengths in a batch of requests, rather than attempting to predict exact generation lengths. This prediction helps in approximating the shortest-job-first (SJF) schedule, which is known to minimize average latency.\n\nThe paper employs a straightforward but effective scheduling algorithm that approximates the shortest job first (SJF) strategies. This approach effectively reduces response latency and improves throughput. The authors have tackled the challenge of accurately approximating SJF. The empirical results demonstrate significant improvements in both latency and throughput, highlighting the effectiveness of their approach. The paper introduces interesting metrics to determine the relative range of output lengths. \n\nThe paper addresses a crucial issue in LLM workload scheduling. By focusing on reducing response latency and enhancing throughput, it tackles a significant problem that is highly relevant to the efficiency and performance of LLM servingsystems.\n\n- The current scheduling approach only considers output length. Would you also consider other dimensions, such as prompt length? Longer prompt lengths can consume more memory and increase token latency, impacting overall response latency and throughput. Additionally, would you consider implementing preemptive scheduling to correct any mispredictions dynamically?\n\n- Your predictor is trained using 10k traces from ShareGPT and LM-SYS. However, these traces are primarily from GPT-4 and other models. Have you considered that different models Llama3 might behave differently, with varying verbosity and output lengths even for the same prompts? If the predictor cannot be reused across different models, you might need to account for the overhead of retraining the model to maintain accuracy.\n\n- You should discuss Andes [1], which also propose a request scheduling strategy to improve quality of experience. \n[1] Andes: Defining and Enhancing Quality-of-Experience in LLM-Based Text Streaming Services \n\n- SJF scheduling inherently risks starving requests with longer response length, as these jobs can be indefinitely delayed. How do you address this issue to ensure that longer requests are also processed in a timely manner?\n\n1. Why is the improvement on dataset Sharegpt and lmsys different, as shown in table3."
}
] |
wl44W8xpc7 | Learning Infinitesimal Generators of Continuous Symmetries from Data | Exploiting symmetry inherent in data can significantly improve the sample efficiency of a learning procedure and the generalization of learned models. When data clearly reveals underlying symmetry, leveraging this symmetry can naturally inform the design of model architectures or learning strategies. Yet, in numerous real-world scenarios, identifying the specific symmetry within a given data distribution often proves ambiguous. To tackle this, some existing works learn symmetry in a data-driven manner, parameterizing and learning expected symmetry through data. However, these methods often rely on explicit knowledge, such as pre-defined Lie groups, which are typically restricted to linear or affine transformations. In this paper, we propose a novel symmetry learning algorithm based on transformations defined with one-parameter groups, continuously parameterized transformations flowing along the directions of vector fields called infinitesimal generators. Our method is built upon minimal inductive biases, encompassing not only commonly utilized symmetries rooted in Lie groups but also extending to symmetries derived from nonlinear generators. To learn these symmetries, we introduce a notion of a validity score that examine whether the transformed data is still valid for the given task. The validity score is designed to be fully differentiable and easily computable, enabling effective searches for transformations that achieve symmetries innate to the data. We apply our method mainly in two domains: image data and partial differential equations, and demonstrate its advantages. Our codes are available at \url{https://github.com/kogyeonghoon/learning-symmetry-from-scratch.git}. | https://openreview.net/pdf/e3d85d3bc33df6728f45d00227c30fe131b595d9.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "uISaER07fo",
"review_text": "This paper proposes using neural ODEs to parameterize symmetries by viewing the ODEs flow as an element of a one-parameter group. They show that by learning the parameters of the neural ODEs, they are able to recover ground truth symmetries in image classification and PDE tasks.\n\nThe paper is easy to read. The proposed ideas are clear, and appear to be mostly novel.\n\nSee questions below.\n\n1. Line 174: Do you also need \"and for all $s\\in[-\\sigma,\\sigma]$\" in addition to the \"for all $f\\in\\mathcal{D}$\"?\n2. Section 4.1: According to your definition of a symmetry $\\vartheta$ (there exists $\\sigma>0$ such that $\\vartheta_s(f)$ is \"valid\" for all $f\\in\\mathcal{D}$ and all $s\\in[-\\sigma,\\sigma]$), it seems like solving for $\\vartheta^*$ from (4) does not guarantee that $\\vartheta^*$ is actually a symmetry, even if the optimal value of (4) is less than $C$. That is, if the optimal value of (4) is less than $C$, then you only know that the validity score is less than the threshold on average, not for all data $f$ and all transformation scales $s$. Can you please clarify this aspect?\n3. Line 217: Something is strange with the definition of $\\vartheta_{\\mathcal{X}}(f)$. In particular, what is $T_{\\mathcal{X}}$, and why does $f$ not appear in the right-hand expression? It seems to me like perhaps you meant to define $\\vartheta_{\\mathcal{X}}(f)(x) = f(\\vartheta_{\\mathcal{X}}^{-1}(x))$, and hence that you need to assume the transformation $\\vartheta_{\\mathcal{X}}$ on $\\mathcal{X}$ to be invertible. Can you please clarify whether this is a typo, or if I am missing something? Also, it is unclear what $\\vartheta_{\\mathcal{Y}}(f)$ is doing or why it is needed. Indeed, as written, you are defining two different transformations of $f$ in (8) (resulting in two different functions of $x$).\n4. Line 226: How reasonable is it to assume that you know the actual number of possible symmetries for a given task? Is it possible for you to select $N_{\\textup{sym}}$ to be too small, and then to miss out on learning some of the most important symmetries of a task?\n5. Line 232: Typo (\"orthonomral\").\n6. Line 230: In practice, are symmetries actually mutually orthogonal (as vector fields) to one another? It seems like this is not always the case per your Table 1, which shows that uniform scaling is not orthogonal to $x_1$-axis scaling. A comment on the validity of the orthogonality assumption underlying the use of this regularizer would be appreciated.\n7. Line 243: Typo (\"undersirable\").\n8. Line 246: Typo (\"Lipshictz\").\n9. Line 280: \"...the two translations are the most invariant one-parameter group...\" Can you please explain in further detail how you come to this conclusion?\n10. Table 2 and Figure 5: I suggest moving these to be centered with the rest of the text; it looks poorly placed in the right-hand margin as is, and causes Figure 5 values to be too small and hard to read.\n11. Line 322: \"We found the ground truth symmetries in all the experiments as in Figure 5...\" It looks like you did not recover the true symmetries in the cKdV setting, as there are non-negligible components coming from each of the ground truth symmetries that appear in the learned symmetries.\n12. Did you verify in your experiments that the learned \"symmetries\" are actually symmetries according to your definition using the validity score threshold? 
Overall, your experimental results in Figures 4, 5, and 10 seem to me like you are indeed learning orthogonal vector fields, and that sometimes the vector fields end up being equal to one of the ground truth vector fields, but other times not (turns out to be linear combination of multiple ground truths). This does not seem super convincing that you are reliably learning/recovering the ground truth symmetries.\n13. Why did you introduce (8)? I don't see this transformation being used anywhere else in the paper."
},
{
"confidence": 4,
"rating": 6,
"review_id": "MpPukWRBnG",
"review_text": "The paper pertains to the topic of data-driven symmetry discovery. The authors propose a method allowing symmetry discovery beyond pre-defined Lie groups, by learning to transform datapoints, potentially in a non-affine manner, via a learned ODE (referred to as the *one-parameter group*, where the single parameter is the time variable in the ODE), the velocity field of which is typically parametrised by an MLP. \n\nCrucially, to optimise this, the authors choose an objective - *validity score*, which is a predefined measure of the extent to which a transformed datapoint is symmetric to the input one (in their examples: for images, they use the cosine similarity between features extracted by a pretrained NN, while for PDEs, they measure the value – error – of the PDE for the transformed datapoint). Additional regularisers are used to ensure the diversity of the symmetries learned (orthogonality between different learned velocity fields) and smoothness (minimisation of an estimate of their local Lipschitz constants). Experimentally, the method is tested on image classification (CIFAR10) and PDE solving (KdV, KS, Burger’s equations) showing that known symmetries are retrieved along with additional approximate symmetries, while the learned symmetries are subsequently used for data augmentation showing competitive results to methods using pre-defined augmentations.\n\n**Significance** . The paper studies an important problem (*data-driven symmetry discovery*) in machine learning, but also physical sciences where symmetries are abundant but potentially unknown. Identifying unknown symmetries and incorporating them in downstream ML models (e.g. via augmentation) can improve generalisation, especially in low-data regimes, while additionally, it can potentially provide novel insights about the task at hand.\n\n**Novelty/Generality**\n- The presented methodology has the capacity to recover symmetries arising from *non-affine* data transformations. This is contrary to prior work, where mostly linear/affine transformations are dealt with.\n- Additionally, this method does not require making assumptions about the structure of the target group. This is common in prior art, where typically a subgroup of a predefined group is learnt.\n- The authors take advantage of well-established concepts that are underexplored by the ML community (e.g. modelling transformations via the one-parameter group) - this helps to broaden the available toolbox in the field of ML & symmetries/ equivariant ML.\n\n**Execution/Implementation**\n- Although the proposed method has multiple complicated components (NeuralODEs, difficult objective to optimise for), it is nonetheless well-executed yielding competitive results and recovering known symmetries in popular testbeds.\n\n**Applicability and scope**. Perhaps the biggest limitation of the proposed method is the *reliance on the validity score*. Although the authors claim to be able to learn symmetries by making as few assumptions as possible (see strengths), this seems to be contradicted by the need to manually design a validity score. Moreover, I have the impression that the validity score is not merely a hyperparameter, but it is decisive for the symmetries that will be learnt (basically it is the objective function of the optimisation problem). \n- For example, in the case of images, the choice seems ad hoc (closeness in the representation space of a pre-trained encoder NN). 
What leads the authors to believe that the features of equivalent (symmetric) images extracted from the pre-trained NN should be close? Have the authors tried to verify this assumption? I think the empirical validation is insufficient here (section 5.1.), so I am not entirely convinced.\n- In general, I do not see a generic way to define validity scores and perhaps the authors have slightly overclaimed in that respect. I would like to read the authors' viewpoints on that. For PDEs, the validity scores are indeed reasonable and generic, so perhaps, they would like to put more emphasis on this perspective.\n\nFurthermore, the authors introduce the concept of learning symmetries via the one-parameter group, claiming that it is more general than prior parametrisations that can only learn linear/affine groups. However, it is unclear what the present parameterisation can express, e.g. does it allow learning any continuous group or implicit assumptions are made here as well? \n- Additionally, could the authors discuss if it would be possible to learn finite groups with this method as well and if not, how could those be incorporated?\n\n\n**Related Work/Comparisons**. The work of Forestano et al., MLST’2023 is quite relevant to the present manuscript, with the main difference being that in that work, the transformations are parameterised by an MLP instead of a NeuralODE (the oracle used in this work seems similar to the validity score used here). Since the two works have many similarities, I think that the authors should discuss in more detail their differences and the advantages of their work (e.g. as far as I understand the MLP cannot guarantee that the transformations form a group). Note that modelling transformation via an MLP (or any NN in general) instead of a NeuralODE seems more straightforward and probably easier to train and more computationally friendly.\n\n**Experiments**. I believe some additional empirical evidence would strengthen the authors' claims.\n- Most importantly, an experimental comparison against the type of parameterisation used in Forestano et al. (MLP) should be provided, to verify if NeuralODEs are indeed a more appropriate parameterisation.\n- Moreover, baselines are mostly missing, e.g. comparing against other methods for data-driven symmetry discovery (I am not super familiar with these works, but if I am not mistaken LieGAN by Yang et al., ICML’23 is a recent example). \n- The reported results after augmenting with the learned symmetries do not seem to improve significantly compared to known/default augmentations. Can the authors discuss why this might be the case? This is important since it might undermine the necessity of the proposed approach. To be more convincing, perhaps the authors should perform experiments on problems where the symmetries are not known a priori.\n- Additionally, ablation studies seem to provide important insights but are only discussed in the appendix. I would recommend being more upfront in the main paper and discussing in the rebuttal the following: sensitivity to hyperparameters (multiple are needed: loss coefficients, $\\sigma$ and $\\tau$), the method for choosing them, the difficulty of optimisation (3 terms are used in the objective) and if all losses are optimised in a balanced manner. Similarly, for the parameter $N_sym$, which is now chosen based on prior knowledge of the number of existing symmetries.\n\n**Presentation/Exposition**. 
(disclaimer - this is a minor weakness) Given that the notions discussed here are probably not widely known across the ML community, I believe that the authors should aim to provide more in-depth explanations to make their work more accessible. For example,\n- Multiple group theory/symmetry concepts are discussed without definitions (group generators, Lie group, Lie algebra, Lie bracket etc.). Additional examples that are not well-contextualised include in section 2 the one-parameter group discussion, the Lie algebra of the affine group and the discussion on the PDE symmetries (Lie point symmetries etc.). Adding some references here and providing some examples for the mentioned PDE symmetries would help.\n- In section 5.2., some concepts regarding the experimental details are mentioned without appropriate explanations, while others are only mentioned in the appendix, although it appears that they are crucial for the method. Perhaps the authors should be more upfront and explanatory regarding the aforementioned.\n\n- How is the weight function in L234 defined, and how important is this for optimisation? Are different weight functions ablated?\n- It’s unclear why the stop-gradient is needed in L237. I did not find the justification fully convincing. Could the authors elaborate? What happens experimentally if one does not use stop-gradient?\n- Although intuitive, it’s unclear why the inline Eq. in L212 holds for negative $\alpha$.\n\n**Suggestion**.\nIn case the authors do want to present their method as generic, I think a deeper experimental evaluation in data beyond PDEs would help a lot (e.g. testing on other image datasets, ablating different validity scores etc).\n\n**Minor**:\n- What if the chosen validity score is not differentiable?\n- *Notation*.\n - The notation $\theta^V_s(x)$ is a bit dense (why use V as a superscript?). Eq (1) is a bit confusing. The inline Eq in line 83 seems clearer to me. \n - Notation in sec. 4 could also be simplified a bit or made more accessible with examples (e.g. 4.1: give an example for $\mathcal{A}$, i.e. the set of all natural images)."
},
{
"confidence": 3,
"rating": 6,
"review_id": "JinE3zCbPw",
"review_text": "This paper proposes a symmetry learning algorithm based on transformations defined via infinitesimal generators. Using Neural ODE, an infinitesimal generator is learned that is capable of producing a sequence of transformed data through ODE integration. Validity score has been defined to check if the transformed data is valid wrt to a given task. For images, the validity score is picked to be cosine similarity while for PDEs the validity score is defined as the numerical errors in the original equations after the transformation. In addition to symmetry loss, two regularizations, orthonormality loss, and Lipschitz loss have been added, to remove trivial solutions. The authors present experiments on CIFAR10 and KdV equation and Burgers' equation in 1D for PDE.\n\n1. The paper motivates the need for learning continuous symmetries well. \n2. The idea presented in the paper is very neat and shows potential beyond the presented experiments. \n3. This approach can learn both affine and non-affine symmetries in image classification and PDE tasks as shown in the experiments section.\n\nThe discussion on compute and model parameters comparisons with baseline missing. No other method was shown as a baseline in either of the experiments. There is some ambiguity in how exactly the validity score is used and in some cases, can be defined if the given task is equivariant.\n\n1. How does the validity score take into consideration, invariant tasks vs equivariant tasks? \n2. Using cosine similarity between the extracted features for validity score, can be a problem for extreme transformations (as cosine values do not linearly change with parameter). How does this affect the learning of symmetry build on validity scores?\n3. In Table 2, it is unclear how the validity score players out in learning the symmetries. Especially for default augmentation case. Could you please elaborate on this?\n4. Total loss and loss-scale section: How is the true scale learned, if the inner product is normalized? Additionally if all the terms are affected by a data-dependent scale, shouldn't $w_{Lips}$ affect the overall weights?\n\n### Clarifications\n1. The figures can be made a little bigger. \n2. In line 16, 'whether the transformed data are still valid for the given task'. This phrasing is confusing.\n3. In line 172, what does valid mean? simply put; does it comprise invariant transformations and approximately equivariant up to the threshold C?\n4. Intuition on how to locate transformed data on grid is missing."
}
] |
wkwGedn19x | Scaling White-Box Transformers for Vision | CRATE, a white-box transformer architecture designed to learn compressed and sparse representations, offers an intriguing alternative to standard vision transformers (ViTs) due to its inherent mathematical interpretability. Despite extensive investigations into the scaling behaviors of language and vision transformers, the scalability of CRATE remains an open question which this paper aims to address.
Specifically, we propose CRATE-$\alpha$, featuring strategic yet minimal modifications to the sparse coding block in the CRATE architecture design, and a light training recipe designed to improve the scalability of CRATE.
Through extensive experiments, we demonstrate that CRATE-$\alpha$ can effectively scale with larger model sizes and datasets.
For example, our CRATE-$\alpha$-B substantially outperforms the prior best CRATE-B model accuracy on ImageNet classification by 3.7%, achieving an accuracy of 83.2%. Meanwhile, when scaling further, our CRATE-$\alpha$-L obtains an ImageNet classification accuracy of 85.1%. More notably, these model performance improvements are achieved while preserving, and potentially even enhancing the interpretability of learned CRATE models, as we demonstrate through showing that the learned token representations of increasingly larger trained CRATE-$\alpha$ models yield increasingly higher-quality unsupervised object segmentation of images. | https://openreview.net/pdf/640c6525356d48f59d0459992a3f5c3432d7955b.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "BpLwbrZzBm",
"review_text": "This paper introduces CRATE-α, an enhanced variant of the CRATE (Coding RATE Transformer) architecture, designed to scale efficiently while maintaining mathematical interpretability. The authors address the open question of CRATE's scalability by proposing strategic modifications to the sparse coding block and a refined training recipe. Extensive experiments demonstrate CRATE-α's effectiveness, showcasing improved performance on ImageNet classification tasks compared to the original CRATE model. Notably, the CRATE-α-B model achieved an 83.2% accuracy rate, a significant improvement over the previous best CRATE-B model.\n\nThe paper presents a novel architecture, CRATE-α, that builds upon the existing CRATE model with minimal yet strategic modifications, enhancing scalability without compromising interpretability.\n\nThe authors provide a wealth of empirical evidence supporting the effectiveness of CRATE-α, including comparative results on ImageNet classification tasks and a detailed analysis of training behaviors across different model scales.\n\nA key strength is the paper's focus on maintaining the interpretability of the model, which is often a trade-off in scaling deep learning models. The authors demonstrate that CRATE-α models retain high-quality unsupervised object segmentation capabilities.\n\nThe paper includes a thorough exploration of scaling behaviors, from Base to Large to Huge model sizes, using both supervised learning on ImageNet and vision-language pre-training with contrastive learning on DataComp1B.\n\nCould the proposed architecture work well on other tasks like NLP?\n\n\nWhile the paper provides a detailed analysis of the model's performance on ImageNet, there might be a need for more discussion on how these results generalize to other datasets and real-world applications.\n\nSee the weakness."
},
{
"confidence": 4,
"rating": 6,
"review_id": "jxtodcLWcI",
"review_text": "This paper explores how to train white-box Transformers at scale for visual tasks. The authors propose a new model architecture called CRATE-$\\alpha$, which extends the sparse coding block of the original CRATE model. A series of CRATE-$\\alpha$ models were trained with varying model sizes, data sizes, and patch sizes using optimized training recipes. The main experiments focus on supervised classification and contrastive CLIP learning, with additional demonstrations of unsupervised semantic segmentation capability.\n\n**Originality:** The paper continues the white-box design philosophy of the original CRATE model while integrating advanced techniques such as overparameterized sparse coding, decoupled dictionary, and residual connections. Although some of these techniques have been previously validated, successfully combining them with a white-box Transformer is a noteworthy achievement. The integration not only works effectively but also yields commendable results.\n\n**Quality:** The paper is technically sound overall, employing rigorous notation and formula definitions to elucidate the design principles. The proposed models demonstrate significant improvements compared to the previous generation of CRATE models. Additionally, the authors are careful and honest in evaluating the weaknesses and limitations of their work.\n\n**Clarity:**\n- The paper is heavily symbolized, relying extensively on intricate mathematical formulations rather than clear diagrams and straightforward language. Although this maintains academic rigor and professionalism, it severely hampers understanding of the paper's details and the broader dissemination of the model. Incorporating corresponding illustrations to explain the three modifications and comparing them with the standard Transformer structure would be beneficial.\n- The organization of Section 4 is not concise, making it easy for readers to lose track.\n - The distinction between the paragraphs \"Dataset and Evaluation\" and \"Training & Fine-tuning\" is not well-defined, especially with the scattered descriptions of the data used.\n - The frequent interleaving of experimental setup descriptions with the presentation of experimental results disrupts the flow and coherence of the narrative.\n\n**Significance:** \n- Although CRATE-$\\alpha$ shows significant improvements over the original CRATE model, it still lags behind the state-of-the-art. For example, in the right side of Figure 1, CRATE-$\\alpha$ typically requires nearly double the training FLOPs to achieve the same accuracy as ViT. \n- If the scalability and interpretability of a white-box Transformer architecture does not offer substantial insights and improvements, practitioners might prefer models with stronger performance but lower interpretability.\n\n1. As previously mentioned, as shown on the right side of Figure 1, CRATE-$\\alpha$ usually requires approximately twice the FLOPs to reach the performance level of ViT, putting it at a noticeable disadvantage.\n \n2. How does the performance improvement of CRATE-$\\alpha$ compare to the original CRATE? Neither the CRATE models in Table 1 nor Figure 1 were pretrained on ImageNet-21K. Why was this not included for a fair comparison?\n \n3. Lines 232-233 and Figure 3 describe the model’s **training loss** as predictable. Why not the **validation loss**, which is the primary concern in scaling laws and practical applications?\n \n4. Table 2 only shows the compute requirements for the **pre-training stage**. 
Why does it not include the **fine-tuning** stage? Considering the total computational effort, I would like to see a comparison of the two scaling strategies: *CRATE-$\\alpha$-L/32 + CRATE-$\\alpha$-L/8* versus *CRATE-$\\alpha$-L/14 + CRATE-$\\alpha$-L/14*.\n \n5. How was the amount of training data determined? Was there a specific standard or a FLOPs constraint? For example:\n - In Section 4.1, for training models from Base to Large, both pre-training and fine-tuning were conducted for a total of **91** epochs.\n - In Section 4.1, for training models from Large to Huge, there were **2.56** billion and **512** million training samples, respectively."
},
{
"confidence": 4,
"rating": 5,
"review_id": "nLdISuoKMG",
"review_text": "This paper studies the scalability problem of white-box transformer CRATE and proposes CRATE-$\\alpha$ to enhance the scaling ability of CRATE. To be specific, the authors propose three strategic but minimal modifications for the CRATE model architecture: Overparameterized sparse coding block, Decoupled dictionary, and Residual connection. Extensive experiments across different datasets and settings demonstrate the effectiveness of the proposed approach.\n\n1. It is quite meaningful to study white-box transformers and try to increase their scalability which promises its application in potential usage.\n\n2. Comprehensive evaluation. The proposed method is validated on multiple datasets and tasks which demonstrate the scalability of CRATE-$\\alpha$.\n\n3. The presentation is clear. Overall, the paper is well-organized and the method is easy to follow.\n\n1. Performance gaps with vanilla ViT. As shown in Figure 1, CRATE-$\\alpha$ still lags behind vanilla ViT across different scales remarkably which may limit its application in real scenarios. Besides, it is suggested to compare with vanilla ViT in computational costs, number of parameters, and inference speed as well.\n\n2. According to the model configuration, the number of parameters of CRATE-$\\alpha$ is almost four times as CRATE and it is strange to consider those as the same scale models. Moreover, how do the proposed new modules contribute to the performance gain of CRATE-$\\alpha$? Is it simply because of larger models?\n\n3. Although the authors made lots of efforts in scaling CRATE to CRATE-$\\alpha$, they only spent limited space in the paper to discuss the interpretability of the proposed method. This short paragraph may not be enough to justify why the authors are motivated to study the white-box transformers.\n\nApart from the questions in weakness above, another question is:\n\nwhy the performance in dense prediction tasks is so bad?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "BkVqYtV1dW",
"review_text": "This paper aims to train CRATE at a large scale for vision tasks. The contribution includes an architecture modification to the sparse coding block and a light training recipe. The new model, called CRATE-alpha, shows large improvements compared with the previous CRATE model. The experiments also show promising results on unsupervised object segmentation.\n\n- The paper presents a careful study to enhance the performance of CRATE. The paper introduces key modifications to the existing CRATE, including the sparse coding block, decoupled dictionary, and residual connection. \n- The paper investigates its scaling behavior and shows promising improvements of the newly introduced CRATE-alpha.\n- The paper presents in-depth experiments, such as the scaling analysis on ImageNet. The paper also shows improvements for semantic interpretability. \n- The figures and model architecture are well-illustrated.\n\nOverall I find the paper is well-presented and solid. Below are my minor concerns for this paper:\n- The paper is highly centered on improving CRATE. Most of the findings might not be transferable to other models. This may limit its impact to the general audience in NuerIPS community.\n- It would be interesting to further understand its potential downstream applications (not only vision but also language data)\n\nsee weakness"
}
] |
wjbTHLUSzU | TSDS: Data Selection for Task-Specific Model Finetuning | Finetuning foundation models for specific tasks is an emerging paradigm in modern machine learning. The efficacy of task-specific finetuning largely depends on the selection of appropriate training data. We present TSDS (Task-Specific Data Selection), a framework to select data for task-specific model finetuning, guided by a small but representative set of examples from the target task. To do so, we formulate data selection for task-specific finetuning as an optimization problem with a distribution alignment loss based on optimal transport to capture the discrepancy between the selected data and the target distribution. In addition, we add a regularizer to encourage the diversity of the selected data and incorporate kernel density estimation into the regularizer to reduce the negative effects of near-duplicates among the candidate data. We connect our optimization problem to nearest neighbor search and design efficient algorithms to compute the optimal solution based on approximate nearest neighbor search techniques. We evaluate our method on data selection for both continued pretraining and instruction tuning of language models. We show that instruction tuning using data selected by our method with a 1\% selection ratio often outperforms using the full dataset and beats the baseline selection methods by 1.5 points in F1 score on average. | https://openreview.net/pdf/dd38be7e2aeaa0617ca80c18333fe34e51c4dcb6.pdf | [
{
"confidence": 5,
"rating": 3,
"review_id": "PMLLSprfoj",
"review_text": "This paper proposes a method for data selection in foundation model fine-tuning. The proposal contains a distribution alignment loss based on optimal transport to capture the discrepancy between the selected data and the target distribution, a regularizer to encourage the diversity of the selected data, and kernel density estimation to reduce the negative effects of near-duplicates among the candidate data. Experimental on fine-tuning the language model are reported.\n\n1. The proposal studied in this paper is interesting since how to select data, and how to improve the data quality is important for the training and fine-tuning of the foundation model.\n2. The proposed method which considers distribution discrepancy minimization, diversity, and near-duplicates, is technically sound.\n\n1. The novelty and contribution of the proposed method are limited. Data selection is important and well-studied in the machine learning community. For example, in active learning, we need to select examples to label according to some metrics; in domain adaptation, we need to select data to help the model reuse. Some widely adopted methods can be applied to the problem of data selection for the foundation model and the authors didn't provide a comprehensive study and comparison. Moreover, the techniques adopted in the proposal are also widely used techniques. \n\n2. For the experiments, the authors only conduct experiments on the language model, can the proposal be applied to other foundation models, such as the vision-language model?\n\n3. It seems that the random selection method can also achieve good performance. So I am wondering about the difficulty of the problem, maybe we can improve the performance using some trivial techniques.\n\n4. The time cost between fine-tuning with the full dataset and the selected dataset should be reported.\n\nAs discussed in the Weakness part."
},
{
"confidence": 3,
"rating": 5,
"review_id": "p1fzqZDlPz",
"review_text": "This paper formulates data selection for task-specific fine-tuning as an optimization problem based on optimal transport for distribution alignment. It proposes two KNN-based implementation methods and evaluates them on datasets for task-specific instruction fine-tuning and domain-specific continued pretraining. The experimental results demonstrate that their methods are more effective than the baseline systems (LESS and DSIR).\n\n1. The paper formulates data selection as an optimal transport problem, providing a detailed problem definition and a closed-form solution. Additionally, it proposes using Kernel Density Estimation to address the issue of near-duplicates.\n\n2. The authors introduce KNN-Uniform and KNN-KDE algorithms for data selection, showing that their performance is superior to the baseline systems in both task-specific instruction fine-tuning and domain-specific continued pretraining experimental setups.\n\n**Regarding the methodology:**\n\n1. The connection between data selection and the optimal transport problem is not clearly established. Despite mentioning it in lines 114-115 of Section 3, it remains unclear why data selection can be considered an optimal transport problem.\n\n2. Much of the paper is based on LESS, including the representation of samples and the task definition. However, there is minimal mention of LESS, making it challenging to understand without prior knowledge of LESS.\n\n3. The method still relies on M query samples for a specific task, which poses certain limitations.\n\n**Regarding the experimental section:**\n\n4. The experimental section contains too many specific settings. For instance, special settings mentioned in lines 248 and 286 make it difficult to determine how these parameters were chosen, even after reviewing the appendix.\n\n5. The two experimental setups are inconsistent in task-specific instruction fine-tuning and domain-specific continued pretraining. In Table 2, the Ratio of 0.5%-5% can be understood as the number of data samples selected. However, in Table 4, the 1K, 3K, and 10K seem to refer to the number of query samples, but there is no comparison of the number of selected samples.\n\n6. There is a lack of comparison with various baseline systems. Only one baseline system is used for comparison, and its performance differs from that reported in the original paper.\n\nSee weakness."
},
{
"confidence": 2,
"rating": 8,
"review_id": "BpXxpjjV1U",
"review_text": "This paper presents a method for data selection for task-specific model finetuning. The method relies on a small, representative sample of data from the target task to select matching, relevant data from a corresponding corpus. The method relies on framing this task as an optimization problem, utilizing an optimal-transport-based distribution alignment loss and a KDE-based regularizer to avoid oversampling (near-)duplicates. \nThe authors show this method to be highly scalable and data efficient, being competitive with, and often outperforming state-of-the-art methods for domain-specific continued pretraining and task-specific instruction tuning.\n\n- The paper rigorously presents and tests the proposed method, with a detailed theoretical motivation. \n- Sections 2-4 are well structured and introduce the method in a clear, progressive way. \n- Performance results, especially for very small sample sizes, strongly support the utility of this method.\n\n- Section 5.1: Given how different some of the performances are between llama and mistral, including other LLMs may give a more complete picture of the efficacy of this method.\n- Section 5: Efficiency claims would benefit from context. How does the 28 hour initialization time compare to other SOTA methods on this dataset? How does it scale after initialization is done when repeatedly drawing task-specific samples compared to other methods?\n\nAgain, comparing efficiency with other SOTA methods on the datasets used in this paper would be helpful in better contextualizing the performance presented."
},
{
"confidence": 3,
"rating": 5,
"review_id": "OA81V3eCbo",
"review_text": "This paper proposes task-specific training data selection for language model fine-tuning. Given a (small) set of representative examples for a task and a large set $D$ of possible training examples, the proposed method uses (regularized) optimal transport to assign a probability distribution over $D$ that matches the distribution of representative examples while also encouraging diversity among the elements of $D$ assigned a nonzero probability.\nThe authors prove that with a certain choice of regularization function, this is equivalent to (an adaptive version of) $k$-nearest neighbor selection of candidate data similar to the representative examples. Since $k$NN treats near-duplicates as distinct examples (which would decrease diversity of the selected data), the paper additionally introduces another regularization term based on kernel density estimation; the optimal transport with this regularization is a weighted $k$NN that intuitively accounts for the frequency of near-duplicates for each example.\n\n- Good data selection is an important problem given that today's models are both expensive to fine-tune and very sample-efficient *if* they are given the \"correct\" high-quality fine-tuning data [1]. Most high-performing efforts still tweak the composition of these small task-specific datasets by hand. This paper has an interesting new take on framing task-specific data selection as an optimal transport problem between representative task examples and a large pool of candidate training data.\n\n- Theorems 3.1 and 3.2 shows that with certain regularization terms, the optimal transport selection procedure is equivalent to certain variations of $k$-nearest-neighbor. This allows for efficient computation of the optimal data selection under this objective.\n\n- The proposed approach can naturally be combined with approximate nearest neighbor search methods for efficiency.\n\n- Strong empirical results showing that the proposed selection procedure can even outperform tuning with the full candidate dataset.\n\n- The experiments include standard deviations across three runs, giving a sense of how big the gains are compared to noise.\n\n[1] Zhou, Chunting, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma et al. \"LIMA: less is more for alignment.\" In Proceedings of the 37th International Conference on Neural Information Processing Systems, pp. 55006-55021. 2023.\n\n- Missing an ablation using embeddings instead of gradients, or any other distance function for the examples.\n\n- Missing several relevant data selection baselines that also encourage diversity, e.g. Deita [1], QDIT [2], and methods based on DPPs [3].\n\n- Changing the data mix changes the optimal learning rate (e.g., since it changes the scale of the loss function at initialization). The paper compares models trained on different data mixes with the same learning rate, but the fair comparison is optimal : optimal. It's not clear from the experiments whether the reported gains are due to the learning rate being more optimal for the selected data mix, especially since the metric used to select the data is based on the gradients of a model.\n\n[1] Liu, W., Zeng, W., He, K., Jiang, Y., & He, J. What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning. In The Twelfth International Conference on Learning Representations.\n\n[2] Bukharin, Alexander, and Tuo Zhao. 
\"Data diversity matters for robust instruction tuning.\" arXiv preprint arXiv:2311.14736 (2023).\n\n[3] Wang, P., Shen, Y., Guo, Z., Stallone, M., Kim, Y., Golland, P., & Panda, R. (2024). Diversity Measurement and Subset Selection for Instruction Tuning Datasets. arXiv preprint arXiv:2402.02318.\n\n- It might be helpful to have some (simple) intuition after L140 explaining why regularizing distance to the uniform transport encourages diversity.\n\n- If the optimal transport formulation is equivalent to a certain type of $k$NN, why not just present the method as a type of $k$NN? $k$NN has a long history in data selection going back to at least Wilson (1972). It's not clear what the optimal transport discussion buys other than added complexity.\n\n- Given the optimal transport framing, I think there should be some discussion of other (efficient) regularized optimal transport algorithms, such as Sinkhorn? [2]\n\n- If I understand L252--L254 correctly, the effective dataset size for the proposed method is actually up to 4x the reported size, because the data are resampled from the computed distribution each epoch. Does LESS (the baseline) get the same advantage? I.e., does LESS get to use 4x the data or do some kind of resampling?\n\n[1] Wilson, Dennis L. \"Asymptotic properties of nearest neighbor rules using edited data.\" IEEE Transactions on Systems, Man, and Cybernetics 3 (1972): 408-421.'\n\n[2] Cuturi, Marco. \"Sinkhorn distances: Lightspeed computation of optimal transport.\" Advances in neural information processing systems 26 (2013)."
}
] |
wiMaws0FWB | Implicit Bias of Mirror Flow on Separable Data | We examine the continuous-time counterpart of mirror descent, namely mirror flow, on classification problems which are linearly separable. Such problems are minimised ‘at infinity’ and have many possible solutions; we study which solution is preferred by the algorithm depending on the mirror potential. For exponential tailed losses and under mild assumptions on the potential, we show that the iterates converge in direction towards a $\phi_\infty$-maximum margin classifier. The function $\phi_\infty$ is the horizon function of the mirror potential and characterises its shape ‘at infinity’. When the potential is separable, a simple formula allows to compute this function. We analyse several examples of potentials and provide numerical experiments highlighting our results. | https://openreview.net/pdf/eca2a88950a4c4612135cbb0fbad857b5b6af6af.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "Mo0ZEMbJVn",
"review_text": "In this paper, authors study the implicit bias of the mirror descent algorithm, from the perspective of the optimization trajectory of the continuous flow version. They propose the conceptions of horizon shape and horizon function $\\phi_\\infty$ to help characterize the properties of mirror flow at infinity. Since $\\phi_\\infty$ defines a norm, they prove that the mirror flow will eventually converge in direction to a max $\\phi_\\infty$-margin solution under the linear exponential-tail classification setting. Their findings contribute to a deeper understanding of the inherent characteristics of mirror descent across a wide range of potential functions.\n\n1. The result of this work is solid, containing the general class of potential functions, and the authors derive a calculation rule of $\\phi_\\infty$ for a general class of potential functions.\n2. The paper is informative and well-structured, particularly in section 3. By using the example of gradient descent within the framework established by the preceding lemmas, which is a special case of mirror descent, the authors clearly outline the reasons why mirror flow will eventually converge in direction without complicated formulas.\n\n1. Since mirror descent is not so popular in the practice of machine learning problems, there could be more discussion about the implications of their results. For example, Figure 1 is really interesting as it reveals that the mirror descent shares the same structure of implicit bias with the steepest descent [1], what is the essence of such similarities?\n2. The setting of an infinitely small learning rate, i.e., optimization flow, might be a little strong under a simple linear classification problem compared to the previous works. I suggest the authors state the technical challenges of the discrete optimization process of mirror descent.\n3. I might be wrong, but it seems not strict to apply the Bolzano–Weierstrass theorem to an uncountably infinite set at page 5, line 160 and 61.\n\n[1] Suriya Gunasekar, Jason Lee, Daniel Soudry, and Nathan Srebro. Characterizing implicit bias in terms of optimization geometry. In International Conference on Machine Learning, pages 1832–1841. PMLR, 2018.\n\n1. I'm really curious about whether this result could be extended to the q-homogenous neural networks like [1], cause it seems this result was also derived from the KKT condition of the lagrangian function.\n2. I guess there might be no minus in line 461 for the first equality.\n3. Could the author explain why they need to prove the uniqueness of the flow solution in Lemma 2, since this was not covered in the previous work, like [1][2]. Moreover, why do the authors need to prove that $\\int_0^t a(\\beta_s)ds \\to \\infty$ for Lemma 1.\n4. What is the definition of $h_\\infty$ in line 535 and $h(\\cdot)$? Moreover, could the author explain more about how to derive formula (8) ?\n\n[1] Kaifeng Lyu and Jian Li. Gradient descent maximizes the margin of homogeneous neural networks. In International Conference on Learning Representations, 2020.\n\n[2] B.Wang, Q. Meng,W. Chen, and T.-Y. Liu. The implicit bias for adaptive optimization algorithms on homogeneous\nneural networks. In International Conference on Machine Learning, pages 10849–10858. PMLR, 2021."
},
{
"confidence": 3,
"rating": 5,
"review_id": "07N7ePCKz3",
"review_text": "This paper examines the implicit bias of mirror flow (the continuous-time counterpart of mirror descent) in the context of binary classification for linearly separable data. Given that the problem has infinitely many solutions, obtained at infinity, the authors aim to identify which solution is achieved through mirror flow. Assuming an exponential tail on the loss function, the authors demonstrate that mirror flow converges directionally to a maximum margin classifier, where the margin is characterized by a horizon function of the mirror potential. This result extends many existing works, and numerical experiments are provided to verify the findings.\n\n1. The paper is well-written and well-organized. Despite the technical nature of the analysis and results, the paper is relatively easy to follow.\n2. The paper is also well-motivated. Although mirror descent is not commonly used as an algorithm for training neural networks, analyzing its convergence is valuable for understanding the implicit bias of gradient descent in various neural network architectures.\n3. I did not verify the details of the proof. However, the paper provides several motivating examples, including the quadratic potential corresponding to gradient flow, which makes the results quite convincing.\n4. The main results extend several prior works.\n\nAlthough the authors have stated that the convergence rate is left for future study, it would be beneficial to provide at least empirical evidence of the convergence rate. The authors mentioned in line 294 that the convergence rate varies across different potentials.\n\n1. In the main result, Theorem 2, the conclusion that the normalized mirror flow converges to a vector \\(\\bar{\\beta}_{\\infty}\\) is drawn even when the \\(\\phi_{\\infty}\\)-max-margin problem does not necessarily have a unique solution. Can the authors provide more insight on this? If I understand correctly, in the motivating example, \\(\\bar{\\beta}_{\\infty}\\) is a subsequence limit. If the \\(\\phi_{\\infty}\\)-max-margin problem does not have a unique solution, this result cannot be extended to the whole limit, unlike in the gradient flow case with quadratic potential.\n2. More explanation could be provided for Figure 1. In particular, it would be interesting to see the trajectories of the mirror flows on the plane, rather than only showing the limit."
},
{
"confidence": 4,
"rating": 7,
"review_id": "3v3Hdqiyj9",
"review_text": "This manuscript examines the implicit bias of mirror descent on a classification problem when the dataset is linearly separable. Assuming a coercive gradient, it demonstrates that the implicit bias is characterized by the shape of the level set of the mirror potential near infinity. Their analysis successfully recovers existing results for p-norm potentials and identifies the implicit bias of the potentials emerging in the analysis of linear neural networks. Additionally, it leaves the characterization of the implicit bias when the gradient is not coercive as an interesting open problem.\n\nI think the paper is very well-written and has a solid contribution.\n\nIt addresses an important problem, aiming to understand the implicit bias of neural networks. Prior work has shown that the dynamics of linear networks can be characterized by mirror descent, highlighting the relevance of this study.\n\nNA\n\nIn line 123, the authors suggested that the logistic loss satisfies the conditions in Assumption 1. However, it is clear that the logistic loss does not have an exponential tail. Could they clarify whether this is a mistake or if there is an underlying argument supporting their claim?"
},
{
"confidence": 2,
"rating": 7,
"review_id": "5az731BP58",
"review_text": "This paper considers the asymptotic behaviour of the mirror descent (MD) algorithm for a linear classification task. It is shown that the classifier (hyperplane orthogonal to $\\beta$) will be a max-margin classifer, where the margin is determined by some unknown horizon function $\\phi_\\infty$. This works extend prior work which consider $\\ell_p$ and homogeneous potential functions for MD, and shows this result for very general $\\phi$.\n\nThe paper makes an interesting statement about behaviour of mirror descent on classification tasks, will minimal assumptions. In doing so, it takes a big step and extends previous work to the cover general potential functions.\nThe paper is well written and the figures help with understanding the concepts of convergence.\n\n---\n*While I could understand the paper, this is not my area of research. I do not find myself fit to evaluate the paper on soundness, relevance to the sub-field and importance of contributions.*\n\n- The paper does not characterize $\\phi_\\infty$ in terms of the bregman potential $\\phi$ (and other relevant entities). \nThe main result expresses that there exists some function, that is minimized by $\\bar \\beta_\\infty$, the direction of the classifier as $t\\rightarrow \\infty$. \nI think this limits the relevance and strength of the result. For instance, this does not help with interpretability compared to the case where we can prove the optimization algorithm converging to a max-margin classifier wrt the $\\ell_2$ norm.\n\n- I am not sure about relevance and use-cases of the mirror descent algorithm with very general potentials. As far as I know, typically, a small set of norm-based or entropic (neg-ent, tsallis, etc) are used within applications of ML. So while the theorem makes an interesting statement for an optimization standpoint, I'm not sure how relevant it is for the ML community. The theorem is also not entirely relevant to the pure optimization community since it's for the specific case of linear classification with finite data.\n---\n*While I could understand the paper, this is not my area of research. I do not find myself fit to evaluate the paper on soundness, relevance to the sub-field and importance of contributions.*\n\n- Are there any classes of potential functions (other than the norms and $L$-homogeneous ones), for which $\\phi_\\infty$ may be calculated or approximated?\n\n- Beyond gradient descent, is there any work that quantifies the rates of convergence to max-margin classifiers? Is this even possible?"
}
] |
wiK6bwuxjE | MonoMAE: Enhancing Monocular 3D Detection through Depth-Aware Masked Autoencoders | Monocular 3D object detection aims for precise 3D localization and identification of objects from a single-view image. Despite its recent progress, it often struggles while handling pervasive object occlusions that tend to complicate and degrade the prediction of object dimensions, depths, and orientations. We design MonoMAE, a monocular 3D detector inspired by Masked Autoencoders that addresses the object occlusion issue by masking and reconstructing objects in the feature space. MonoMAE consists of two novel designs. The first is depth-aware masking that selectively masks certain parts of non-occluded object queries in the feature space for simulating occluded object queries for network training. It masks non-occluded object queries by balancing the masked and preserved query portions adaptively according to the depth information. The second is lightweight query completion that works with the depth-aware masking to learn to reconstruct and complete the masked object queries. With the proposed feature-space occlusion and completion, MonoMAE learns enriched 3D representations that achieve superior monocular 3D detection performance qualitatively and quantitatively for both occluded and non-occluded objects. Additionally, MonoMAE learns generalizable representations that can work well in new domains. | https://openreview.net/pdf/525c91ccdb8e051ae4ee3dc5b9a7bbac283b9be6.pdf | [
{
"confidence": 5,
"rating": 6,
"review_id": "HjirQrSA9G",
"review_text": "This paper proposes a monocular 3D detection framework inspired by Masked Autoencoders (MAE), designed to address the challenge of object occlusions in 3D object detection. It utilizes a unique depth-aware masking module that simulates occlusions by adaptively masking non-occluded object features based on depth information, coupled with a lightweight completion network that reconstructs these masked features to learn occlusion-tolerant representations. It generates training pairs of non-occluded and occluded object representations directly, enhancing its capability to handle occlusions effectively. The framework is optimized for low computational overhead during inference, as it does not require object masking at this stage.\n\n1. The proposed method outperforms the conventional methods across various datasets such as KITTI and Nuscenes. It demonstrates the effectiveness of the proposed method. Moreover, the proposed method achieves real-time inference time.\n2. An extensive ablation study is proven to demonstrate the impact of the proposed module. \n3. The idea is simple yet effective.\n\n1. The performance improvement is marginal, especially on the cross-validation in Table 6. \n2. Missing evaluation on the Waymo dataset\n\n1. Many recent 3D object detection studies have utilized the Waymo dataset for evaluation. Could you explain why your experiments were limited to KITTI and nuScenes?\n2. There appears to be a performance drop in the nuScenes dataset at distances beyond 40 meters. Could you provide insights into what causes this decline?\n3. There is a slight difference in inference time between 'Ours*' (36ms) and 'Ours' (38ms), with significant performance differences noted in Table 2. Could you elaborate on the role of the completion network (CN) given these differences?\n4. The mask ratio r varies with scale parameters and maximum depth. How sensitive is your method to changes in the mask ratio?"
},
{
"confidence": 5,
"rating": 6,
"review_id": "SLIqNwuuOC",
"review_text": "This paper introduces a novel framework for improving monocular 3D object detection, particularly in handling object occlusions. The proposed MonoMAE leverages depth-aware masking to simulate occlusions in the feature space and employs a lightweight completion network to reconstruct occluded object regions, thereby learning occlusion-tolerant representations. Experiments show that this learning stratgy helps to improve the performance of monocular 3D object detection.\n\n1. This paper is well-structured, with a clear problem statement, methodology, experiments, and ablation studies that substantiate the contributions and effectiveness of MonoMAE.\n2. This paper addresses a significant challenge in monocular 3D object detection, object occlusion, with a novel approach using depth-aware masked autoencoders.\n\n1. The reliance on depth-aware masking to simulate occlusions may not perfectly replicate natural occlusion patterns, potentially affecting the model's reconstruction accuracy. The gap between synthetically masked and naturally occluded object queries could limit the model's robustness in real-world scenarios.\n2. While this paper claims generalizability, the lack of extensive cross-dataset validation leaves the true scope of its generalization capability somewhat unproven.\n\n1. All the experimental results presented in this paper are about vehicle detection. Does MonoMAE also work for more difficult cases like pedestrain and cyclist detection? \n2. The paper suggests investigating generative approaches for simulating natural occlusion patterns. Can you elaborate on what this might entail and how it could further improve monocular 3D detection?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "WuSFpi1PaF",
"review_text": "This paper applies Masked Autoencoder to 3D object detection. It distinguishes object queries into occluded and non-occluded categories, and during training, it applies depth-aware masking to the non-occluded queries and learns by completing them. At test time, the completion is applied to the occluded queries.\n\n- It achieved state-of-the-art performance on the KITTI 3D dataset.\n- The idea of interpreting occluded queries as masked queries to solve the problem is interesting.\n- The training and test times are illustrated clearly in figures.\n\n- As stated in the limitations section, occlusion at the image level and masking at the feature level of object queries are not the same. Further analysis is needed to understand the actual implications of masking in object queries.\n- If masking serves the role of occlusion at the image level, there should be no reason for the mask ratio to vary with depth, yet depth-aware masking is highly beneficial. An analysis is needed to understand why depth-aware masking works well compared to random masking.\n- In my opinion, the performance of the Non-Occluded Query Grouping classification is crucial for the framework to function properly. Although classification accuracy is provided in the supplementary material, it would be helpful to include various metrics such as precision, recall, and F1-score. If the results of the Non-Occluded Query Grouping classification are biased, it might be interesting to apply completion not only to the occluded queries but also to the non-occluded queries at test time.\n\nPlease refer to the weaknesses."
},
{
"confidence": 4,
"rating": 6,
"review_id": "ybDgkZbpLi",
"review_text": "This paper introduces MonoMAE, a novel monocular 3D object detection framework designed to improve detection performance in the presence of object occlusions. MonoMAE leverages the concept of Masked Autoencoders, treating object occlusions as natural masking and training the network to complete occluded regions. This innovative approach addresses the pervasive issue of object occlusions in monocular 3D detection, leading to superior detection performance. Extensive experiments on datasets like KITTI 3D and nuScenes show that MonoMAE outperforms state-of-the-art methods in both qualitative and quantitative measures.\n\n1. The introduction of depth-aware masking to simulate occlusions and the use of a lightweight query completion network are innovative and address a significant challenge in monocular 3D detection.\n2. MonoMAE improves detection performance without the need for additional training data or annotations, making it a practical solution for real-world applications like autonomous driving and robotics.\n3. The framework demonstrates superior performance on benchmark datasets (KITTI 3D and nuScenes), outperforming existing state-of-the-art methods in both occluded and non-occluded scenarios.\n4. MonoMAE shows strong generalization capabilities to new domains, which is critical for deploying models in diverse environments.\n\n1. In many datasets and methods, objects are not merely labeled as \"occluded\" or \"non-occluded.\" Instead, they may be assigned occlusion levels or degrees that quantify the extent to which an object is occluded. These levels provide more granularity and can influence how models are trained and evaluated. It would be beneficial to specify how occlusion levels are defined and used. Clarifying whether discrete or continuous levels are employed and how these influence the labeling, training, and evaluation processes will provide a clearer understanding of the methodology and its robustness in handling occlusions.\n2. The paper does not provide explicit details about the accuracy of the occlusion classification network or how this accuracy influences the overall 3D object detection network. This information appears to be missing.\n3. The paper does not explicitly report the performance or accuracy of the query completion network. Including a report on the performance of this network, such as quantitative results or visualization of the reconstructed queries, would be valuable. It would demonstrate whether the query completion network is learning meaningful features and contributing effectively to the overall 3D object detection performance.\n\n1. What is the accuracy of the occlusion classification network? How does the accuracy influence the whole 3D object detection network?\n2. What is the accuracy of the query completion network?"
}
] |
wiEHZSV15I | Parsimony or Capability? Decomposition Delivers Both in Long-term Time Series Forecasting | Long-term time series forecasting (LTSF) represents a critical frontier in time series analysis, characterized by extensive input sequences, as opposed to the shorter spans typical of traditional approaches. While longer sequences inherently offer richer information for enhanced predictive precision, prevailing studies often respond by escalating model complexity. These intricate models can inflate into millions of parameters, resulting in prohibitive parameter scales. Our study demonstrates, through both theoretical and empirical evidence, that decomposition is key to containing excessive model inflation while achieving uniformly superior and robust results across various datasets. Remarkably, by tailoring decomposition to the intrinsic dynamics of time series data, our proposed model outperforms existing benchmarks, using over 99\% fewer parameters than the majority of competing methods. Through this work, we aim to unleash the power of a restricted set of parameters by capitalizing on domain characteristics—a timely reminder that in the realm of LTSF, bigger is not invariably better. The code is available at \url{https://anonymous.4open.science/r/SSCNN-321D/}. | https://openreview.net/pdf/948d95efe5e6afae5d29dfdcaa09d2ce2b04a7bc.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "ZST3gA2RiF",
"review_text": "This paper proposes a Selective Structured Components-based Neural Network for Long-term Time Series Forecasting\n\n1. This paper demonstrates originality by addressing a crucial limitation in existing SOTA methods, maintains high quality through thorough experimentation and clear presentation, offers significant advancements to the field of time series forecasting, and ensures clarity that aids in understanding and reproduction of the work.\n\n2. The motivation of this paper is intuitive and compelling. Given the large model sizes of current SOTA methods like PatchTST, the idea of using a smaller model to achieve comparable or better performance is highly attractive. \n\n3. The experiments are thorough, and the proposed method achieves state-of-the-art performance. The effectiveness of each sub-module is demonstrated through detailed ablation studies. \n\n4. The code is open source and reproducible, with a straightforward and clear usage process.\n\n1. In the experiment section, it is noted that most papers use the ETT dataset for ablation studies, likely due to its smaller size, which allows for quicker results. However, you chose the ECL and Traffic datasets instead of ETT, which is a more comprehensive and reliable approach. While this choice is commendable, there is no explanation provided for not using the ETT dataset. \n\n2.It would be more informative to report the model size directly in Table 1. Including the model size would provide a clearer comparison with other SOTA methods and highlight the efficiency of your proposed model. \n\n3.Baselines: Some MTSF models based on LLM have been widely applied [1]. If the authors can demonstrate that SSCNN has advantages in both performance and efficiency, this paper will be more convincing.\n\n4.Some extremely lightweight models have also been proven to have satisfactory performance [2] . Compared to these methods, what are the main advantages of SSCNN? \n\n[1]One Fits All:Power General Time Series Analysis by Pretrained LM\n\n[2]FITS: Modeling Time Series with 10k Parameters\n\nSee weakness"
},
{
"confidence": 4,
"rating": 6,
"review_id": "igVDERDp1U",
"review_text": "This paper identifies data decomposition as a core bottleneck in time series forecasting and proposes a novel model named SSCNN, a decomposition-based model innovatively enhanced with a selection mechanism. SSCNN is specifically designed to adeptly capture complex regularities in data while maintaining a minimal parameter scale. This paper also provides an in-depth comparison between decomposition and patching, examining both capability and parsimony. Comprehensive experiments show the superior performance of SSCNN.\n\nStrong Points:\n\n1. The insight of this paper is attractive and compelling. One of the most crucial characteristics of time series is that they can be viewed as composed of components with different natures, e.g., season, trend, and residual. However, this characteristic has been rarely utilized in related works, or it has been implemented in trivial ways. This paper identifies data decomposition as a core bottleneck in time series forecasting and proves its effectiveness. By decomposing complex data into more learnable components, SSCNN achieves state-of-the-art performance with a minimal number of parameters.\n\n2. The writing of this paper is very clear. I can easily follow the author's logic and understand their points.\n\n3. The experimental results are extensive, including overall performance results, ablation studies of each component, hyperparameter experiments, etc., which validate the effectiveness of SSCNN.\n\n4. The code is reproducible and well documented. I have successfully replicated the authors' results.\n\n5. The authors also provide an in-depth comparative analysis and experimental results between patching and decomposition, which help readers understand the advantages of SSCNN’s insights.\n\nThis paper emphasizes the importance of decomposition in long-term time series forecasting, addressing the analytical gap in feature decomposition for the first time and providing insights into its rationale for capability and parsimony compared to patching.\n\nI have some minor questions and suggestions. If the author addresses the following points, I will increase my score.\n\nWeak Points:\n\nExperimental Setting: Most works using the Time-Series-Library repository predict up to 720 steps, yet your results do not include this prediction horizon. It would be beneficial to explain why 720-step predictions were not included.\n\nFigures: I suggest the authors add more explanatory information to Figure 1 to help readers grasp the main architecture of SSCNN from the figure and its caption alone. Moreover, some font styles (italic) in Figure 1 seem different from the character styles in the main text. I recommend unifying the styles.\n\nMinor Issues: The operator $\\lfloor \\cdot \\rfloor$ is used in the paper but not explained. In Figure 3(a), if I understand correctly, “HDformer” should be replaced by “SSCNN.”\n\nFigures: The text size of the legends in the figures is too small, making them difficult to read. Adjusting the text size to be consistent with the main text would enhance the readability of the figures and improve the overall presentation quality of the paper.\n\nSee Weakness."
},
{
"confidence": 4,
"rating": 7,
"review_id": "QTBEilvyCO",
"review_text": "This paper addresses long-term time series forecasting and critiques the reliance on complex models with extensive parameters. It proposes a decomposition method specifically designed for time series dynamics, achieving better forecasting performance across various datasets. Remarkably, the new model uses over 99% fewer parameters than other methods, highlighting the efficiency of domain-specific approaches. This research calls for a move away from complexity in LTSF, showcasing the effectiveness of focused decomposition techniques rather than relying on large-scale models.\n\n1.\tThe paper is praiseworthy for its intuitive approach. It tackles a significant problem by proposing a method that matches or surpasses current state-of-the-art models like PatchTST while using a smaller model footprint. The experimental results strongly validate this approach.\n\n2.\tThe model consistently performs well under various experimental conditions, including different input window sizes and hyperparameter settings. Statistical tests demonstrate the reliability of the results across multiple initializations, strengthening the study's credibility.\n\n3.\tThe authors provide a thorough comparison between decomposition and patching in terms of effectiveness and simplicity, demonstrating the superior benefits of decomposition over patching.\n\n1.\tThe clarity of the methodology could be improved with further elaboration.\n2. The evaluation could be strengthened by including comparisons with LLM-based models, such as:\n\n [1] Jin, Ming, et al. \"Time-LLM: Time Series Forecasting by Reprogramming Large Language Models.\" The Twelfth International Conference on Learning Representations.\n\n[2] Bian, Yuxuan, et al. \"Multi-patch prediction: Adapting llms for time series representation learning.\" arXiv preprint arXiv:2402.04852 (2024).\n\n1.\tThe methodology would benefit from additional explanation regarding the structure and rationale of the Polynomial Regression layer depicted in Figure 1. If this layer represents a standard approach, including references would enhance clarity.\n\n2.\tClarifying the decision to exclude an attention mechanism from the long-term component, despite its presence in other components like seasonal, short-term, and spatial, would strengthen the methodological coherence and aid reader comprehension.\n\n3.\tFigure 1 requires clarification on several elements. Specifically, the purpose of the 4x4 blocks and addressing inconsistent text formatting (e.g., $\\mathcal{E}$) compared to the main text would improve comprehensibility."
},
{
"confidence": 4,
"rating": 7,
"review_id": "OCBTbuYeuc",
"review_text": "This study unveils a groundbreaking approach to time series forecasting, notable for its minimal parameter count. It stands as the first model to consistently outperform state-of-the-art (SOTA) techniques while remaining compact. Unlike prevalent methods such as PatchTST and iTransformer, which are powerful but cumbersome, and emerging methods like TimeMixer and SCNN, which are lightweight yet inadequate for complex tasks, this model achieves superior performance without the associated heft.\n\n1. The model consistently delivers superior accuracy compared to state-of-the-art (SOTA) methods while maintaining a minimal model size. This accomplishment distinguishes it from other methods.\n\n2. The framework unifies the ability to capture various patterns in time series data, offering a streamlined and enhanced alternative to existing models built with MLPs or Transformers.\n\n3. The authors conduct extensive experiments, showcasing the model's strong performance compared to selected SOTA models, which are sufficiently representative of the latest advancements in the field.\n\n1. There is a gap between the introduction and Section 3 regarding the decomposition of the time series into four components. The authors do not explain why these four components are sufficient. For longer sequences, is there a need for more components? Are there references that support this approach? This discussion should be included at the beginning of Section 3.\n\n2. Manually disabling the spatial component for certain datasets appears suboptimal. It would be more effective if the algorithm could automatically determine whether including the spatial component is beneficial for each dataset.\n\n3. The paper's formatting needs improvements. It seems the authors may have additional content to include. Although the figures in the methodology section are clear and informative, resizing and rearranging them could provide more space for adding valuable content to the main text.\n\nIf the authors can address the weak points, I would reconsider the score."
},
{
"confidence": 4,
"rating": 7,
"review_id": "RA1LHzsE2L",
"review_text": "Title: Parsimony or Capability? Decomposition delivers both in long term time series forecasting.\n\nLong term time series forecasting has been an important research problem which applies to different problem domains. This paper proposes a decomposition method which shows significant performance on the benchmarks with less parameters. This method been evaluated extensively on the various datasets and been competitive to existing models. With such approach models can be enhanced to adapt domain characteristics more effectively in various time series applications.\n\n1. SSCNN reduces the parameter count substantially compared to traditional models, holding onto less than 1% of the parameters while still performing well across different datasets.\n2. The model captures complex data patterns effectively using fewer parameters, utilizing a structured component-based approach with a selection mechanism to improve prediction accuracy.\n3. SSCNN excels in time series forecasting, managing diverse data types and confirming its effectiveness through thorough experimentation.\n4. SSCNN improves plain feature decomposition by incorporating a selection mechanism. This allows the model to identify fine-grained dependencies at each time step, which is essential for enhancing the accuracy of the decomposed structured components and, consequently, the overall prediction accuracy.\n5. Extensive analysis has been performed to validate the method on existing benchmarks and compared with state-of-the-art methods.\n6. Supplementary materials are satisfactory and provide explanation about the dataset and the implementation.\n\n1. Figures lack captions.\n2. Include some limitations of the model as well.\n3. second contribution and third one looks quite similar.\n\n1. Please rewrite the contributions to make them clearer (2nd and 3rd).\n2. Add descriptions in figure 2 and 3\n3. In section D) Implications of Decomposition on Spatial-Temporal Correlations, please correct the captions of figures in temporal and spatial recovery."
},
{
"confidence": 2,
"rating": 6,
"review_id": "nFHJz7uCKV",
"review_text": "The paper approaches the problem of long term time series forecasting (LTSF) using a compositional technique to reduce the model size without compromising the quality of solution. The proposed technique is a transformer based architecture with a lower number of parameters, and delivers similar performance as state of the art models for LTSF.\n\nThe limitation of existing approaches, such as data patching, is that they fail to take into account the spatio-temporal dependencies, and end up with a blow up in the number of latent variables. This results only in a very small improvement even if the model size is increased substantially. The proposed technique in the paper is based on a inference step and an extrapolation step without any information loss.\n\nThe paper evaluates the proposed approach, called SSCNN, with seven datasets, which has a combination of regular and volatile patterns. The baseline and state of the art approaches compared against include iTransformer, TimeMixer, and PatchTST. SSCNN consistently achieves the best scores, with respect to MSE and MAE. The paper also conducts ablation studies to show that each new component in the architecture is vital to the performance.\n\nThe work studies an important and hard problem in time series forecasting which is the problem of efficient and accurate long term forecasting. Compositional techniques have been successful in other areas of AI including reinforcement learning, planning, and finite state controller synthesis. So, it makes sense to apply similar ideas in the space of long term time series forecasting.\n\nWhile the high level message is presented well, I found the details of the proposed method and experiments are hard to follow. A running example with the explanation of the new layers will be useful.\n\nThe main contribution with respect to results is somewhat hard to grasp and align with the theoretical claims of the paper. Overall, I think there is room for improvement in the presentation of experimental results. I found some missing details in the experimental section that include:\n\n1. Why is SSCNN missing Figure 3(a)?\n2. What is the value of T_{out} in Figure 2?\n3. What is the forward window size in Figure 3?\n\nIn figure 2, it would be useful to move some of the methods to the appendix, and keep only the critical ones in the main body of the paper. Same is true for Figure 3. It is hard to go back and forth between figures 1, and figures2&3.\n\nMinor:\n\n1. I would suggest providing some more details about the experimental results in point 3 of the contributions (lines 80-82)\n2. Figure 3 is hard to read in print.\n3. Having only one legend for all the subplots (Figure 2(a)-(d) and Figure 3(a)-(d)) will better than repeating the legends in all subplots.\n\n1. Are there any non neural network based time series forecasting models which make use of compositionality?\n2. Do any of the introduced layers (temporal, seasonal, short-term, etc) have similarities with any existing literature? What I mean is that, is the novelty in getting the layers to work together, or, also in defining the individual layers?\n3. In order to better study the computational cost, can you share the total/average running time for each method for each dataset?\n4. Why is SSCNN missing Figure 3(a)?\n5. What is the value of T_{out} in Figure 2?\n6. What is the forward window size in Figure 3?"
}
] |
wgpmDyJgsg | Sparse-view Pose Estimation and Reconstruction via Analysis by Generative Synthesis | Inferring the 3D structure underlying a set of multi-view images typically requires solving two co-dependent tasks -- accurate 3D reconstruction requires precise camera poses, and predicting camera poses relies on (implicitly or explicitly) modeling the underlying 3D. The classical framework of analysis by synthesis casts this inference as a joint optimization seeking to explain the observed pixels, and recent instantiations learn expressive 3D representations (e.g., Neural Fields) with gradient-descent-based pose refinement of initial pose estimates. However, given a sparse set of observed views, the observations may not provide sufficient direct evidence to obtain complete and accurate 3D. Moreover, large errors in pose estimation may not be easily corrected and can further degrade the inferred 3D. To allow robust 3D reconstruction and pose estimation in this challenging setup, we propose SparseAGS, a method that adapts this analysis-by-synthesis approach by: a) including novel-view-synthesis-based generative priors in conjunction with photometric objectives to improve the quality of the inferred 3D, and b) explicitly reasoning about outliers and using a discrete search with a continuous optimization-based strategy to correct them. We validate our framework across real-world and synthetic datasets in combination with several off-the-shelf pose estimation systems as initialization. We find that it significantly improves the base systems' pose accuracy while yielding high-quality 3D reconstructions that outperform the results from current multi-view reconstruction baselines. | https://openreview.net/pdf/9deb0cefa84b633dd45b98a2c28dfa4cb9a5847d.pdf | [
{
"confidence": 5,
"rating": 6,
"review_id": "f8pa41zXMR",
"review_text": "The paper introduces a novel optimization-based method for sparse-view 3D reconstruction from unposed images. The method uses off-the-shelf pose estimator to get pose initialization, then it uses rendering loss and generative priors to optimize the pose and 3D reconstruction. In detail, the generative priors involve a multi-view SDS loss on generated novel views using Zero123. The method demonstrates satisfying results on the evaluation data, and the ablation study shows the effectiveness of each proposed technique.\n\n- Good performance. The reconstruction quality and pose estimation accuracy are satisfying.\n- The paper is well-written and is easy to follow.\n- The idea of rejecting images with large pose error is interesting.\n- The technical part of the paper is solid.\n\n- Missing baseline. For the reconstruction methods, the only baseline is LEAP, which is a feedforward method. In contrast, the proposed method is an optimization-based method, which introduces pose-processing to estimated poses. I would suggest adding baseline of SPARF [1] and using the same pose initialization. Moreover, why not comparing with UpFusion?\n- Unknown inference speed. Will the joint optimization of pose and shape be slow? Could you provide a analysis of inference time?\n- Related work. One related work is iFusion [2], which uses generative priors for pose estimation and is very relevant to the philosophy of the proposed method. Another related work is FORGE [3], which introduces pose optimization for sparse view reconstruction. Moreover, the authors should discuss the prior sparse-view reconstruction from unposed images works with more details, the authors should provide more comparison and contrast with prior work. The current discussion is too short (Line 90-92).\n- Ablation study. The ablation study is performed with the Ray Diffusion pose initialization. How will it look like using Dust3r initialization? This is important as the ablation should be performed with the best base model.\n\n\n[1] Truong, Prune, et al. \"Sparf: Neural radiance fields from sparse and noisy poses.\" CVPR 2023.\n[2] Wu, Chin-Hsuan et al. “iFusion: Inverting Diffusion for Pose-Free Reconstruction from Sparse Views.” ArXiv 2023.\n[3] Jiang, Hanwen et al. “Few-View Object Reconstruction with Unknown Categories and Camera Poses.” 3DV 2024.\n\n- The introduction spends a lot space discussing the chicken-and-egg problem of pose estimation and reconstruction. However, I don't think it is quite related to the technical part, as the proposed method still need pose initialization using off-the-shelf methods. The method doesn't provide a novel perspective regarding how to solve the chicken-and-egg problem, and using pose initialization is quite common in prior works, e.g., SPARF, FORGE, and FvOR or even traditional SfM methods. Why the authors want to emphasize this?\n- Is it possible to evaluate the outlier removal method? For example, the authors can evaluate the correlation between the removal and the pose error. If the proposed method works well, they should have strong correlations. Moreover, it will be good to provide any statistics on the outlier removal method, e.g., how many images are removed in average."
},
{
"confidence": 4,
"rating": 6,
"review_id": "Na2Dv7a3OL",
"review_text": "This paper proposes a framework for joint 3D reconstruction and pose refinement. Specifically, given estimated camera poses from off-the-shelf models, the proposed method first leverages diffusion priors and rendering loss for 3D reconstruction. The 3D reconstruction is further used to refine the current pose parameters. The 3D reconstruction and pose refinement are conducted in an alternative way. An outlier identification and correction strategy is also introduced to make full use of the given image while mitigating the adverse effect of noisy camera estimations at the same time. Experimental comparison with several pose estimation baselines shows that the proposed method can refine inaccurate pose estimation effectively.\n\n1. The paper tackles a practical problem in real-world scenarios, where ground truth camera poses are not always available.\n2. The proposed method is shown to be effective when applying to different pose estimation baselines.\n3. The proposed outlier removal and correction is effective from the ablation study results in Table 4.\n\n1. The proposed method is compared with SPARF only in the setting of using pose from different pose estimation baselines. However, it would be more convincing to also present the results using the same setting of SPARF, which adds noise into the GT camera pose. This will be a direct comparison with SPARF’s original results reported in their paper.\n2. The proposed method is compared with LEAP for 3D reconstruction results. However, the comparison is a bit unfair since LEAP does not require any initial camera poses. \n3. The description of how to effectively detect the outliers (line 212 - line 214) is not very clear. Similarly, the procedure of how to correct the outlier poses (line 223 - line 225) is not very clear either. How the MSE and LPIPS are computed and compared since there is no correspondence?\n\n1. The proposed method is evaluated on the NAVI dataset. It seems that the dataset is quite simple as shown in Fig. 3 and Fig. 4. The reviewer is wondering about the performance of the proposed method on more complex scenes?\n2. The reviewer is wondering about the separate ablation results on the outlier removal and correction."
},
{
"confidence": 4,
"rating": 6,
"review_id": "Bfe3uEMTe6",
"review_text": "This paper proposes a method for the joint reconstruction of camera poses and 3D objects given sparse input views. The core idea is to use a pose-conditioned diffusion model (Zero-123) as a prior, impose the SDS loss, and jointly optimize the poses and objects, similar to the approach in ID-pose. To improve the robustness and quality of the optimization, the authors made several modifications: (1) Using a 6 DoF pose-conditioned diffusion model instead of a 3 DoF model. (2) Adding strategies for outlier detection and correction. (Although somewhat empirical, it proves effective.)\n\nThis approach requires initial camera poses (from methods such as RelPose++, RayDiffusion, etc.) and is not capable of reconstructing poses from scratch (e.g., purely random camera poses). Experimental results demonstrate that, compared to SPARF and ID-pose, the proposed method achieves better pose estimation quality. Additionally, it provides better object reconstruction in terms of novel view synthesis quality compared to LEAP.\n\n(1) The approach is technically sound, and I believe the reported results are reproducible.\n\n(2) The reconstructed results look good and represent the state-of-the-art in object-level pose-free reconstruction.\n\n(3) The paper is well-written, making it easy to read and understand.\n\n(1) This optimization-based method requires more time compared to a feed-forward model, taking about 5-10 minutes. Additionally, the writing discussing this aspect is somewhat unclear: the paper states, “with increased inference time depending on the number of outliers.” Could this statement be more specific? How much does the time increase with the number of outliers? The correction of outliers may be time-consuming as it requires dense searches of initial camera poses.\n\n(2) (Minor) The method focuses only on object-level reconstruction, which makes the scope seem narrow.\n\n(3) The authors do not sufficiently discuss experiments in a more “standard” sparse-view setting, such as using 3 or 4 views. The reported experiments use at least 6 views, which is not a particularly small number.\n\n(1) A related work is lacking in discussion: Sun, Yujing, et al. \"Extreme Two-View Geometry From Object Poses with Diffusion Models.\" arXiv preprint arXiv:2402.02800 (2024).\n\n(2) Is the testing data included in the training set for fine-tuning the 6-DoF diffusion model?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "VO6TAj1KwL",
"review_text": "This paper presents a method named MV-DreamGaussian for tackling the problem of 3D reconstruction from sparse multi-view inputs. In particular, the paper extends the DreamGaussian work to use multi-view images as the inputs and proposes a scheme to optimize the inaccurate camera poses of the multi-view images.\n\n- This paper is well written and I can follow smoothly.\n- The authors proposed a finetuned version of Zero-1-to-3 with 6 DoF camera parametrization which shows an advantage over 3 DoF camera parameterization in the original paper.\n- The proposed pose refinement scheme is novel and very effective according to the authors' experiments compared with SPARF as well as the ablation study which shows that adding the proposed pose refinement improves the pose accuracy and reconstruction quality significantly. The design of the outlier removal based on photometric error ranking and discrete search is empirical but works quite well.\n\n- This paper presents very limited novelty in the reconstruction part with a trivial extension to DreamGaussian to use multi-view images, which is already implemented in a public repository [stable-dreamfusion](https://github.com/ashawkey/stable-dreamfusion).\n- The major weakness of the paper is the lack of fair comparisons in terms of the 3D reconstruction. The authors only compared with LEAP for the 3D reconstruction. However, LEAP is a work that **does not require any pose inputs**, whereas the proposed work needs relatively good pose initialization (e.g., Dust3r) and conduct refinement on it. In addition, the underlying 3D representation is different, too: LEAP uses NeRF while the proposed work uses 3D Gaussian. I'm confused as to why the authors did not compare with SPARF for the reconstruction quality too since SPARF shares the same input setup as the proposed work. Besides, the very recent work DMV3D would also be a good method to compare with.\n\n- I'm quite curious what the reconstruction quality the method can achieve without 3D generative prior but with the proposed refinement. Namely the combination of (1) and (4) in Table 4.\n- How are the poses used for generative prior sampled in addition to the input views?\n- How are the thresholds for pose outlier removal tuned?"
}
] |
wfU2CdgmWt | Stochastic Optimal Control Matching | Stochastic optimal control, which has the goal of driving the behavior of noisy systems, is broadly applicable in science, engineering and artificial intelligence. Our work introduces Stochastic Optimal Control Matching (SOCM), a novel Iterative Diffusion Optimization (IDO) technique for stochastic optimal control that stems from the same philosophy as the conditional score matching loss for diffusion models. That is, the control is learned via a least squares problem by trying to fit a matching vector field. The training loss, which is closely connected to the cross-entropy loss, is optimized with respect to both the control function and a family of reparameterization matrices which appear in the matching vector field. The optimization with respect to the reparameterization matrices aims at minimizing the variance of the matching vector field. Experimentally, our algorithm achieves lower error than all the existing IDO techniques for stochastic optimal control for three out of four control problems, in some cases by an order of magnitude. The key idea underlying SOCM is the path-wise reparameterization trick, a novel technique that may be of independent interest. | https://openreview.net/pdf/05e54e1a0fe05f44b274a196c29c8e974d51908d.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "kPzoZv19ow",
"review_text": "In this paper, the authors propose a novel learning algorithm, Stochastic Optimal Control Matching (SOCM), to numerically solve general formulations of Stochastic Optimal Control (SOC) problems involving affined-controlled diffusion processes. They build upon the Iterative Diffusion Optimization (IDO) [1] framework, which consists in iteratively refining a parametric controlled diffusion process by minimizing at each iteration (with stochastic gradient descent) a specific objective function with respect to the parametric control. Previous works had for instance considered the relative entropy loss, the cross-entropy loss, the log-variance loss or the moment loss as the objective function. In SOCM, it is a least-squares regression loss (with multiplicative importance weight) which aims at fitting the parametric control to a vector field which depends on a family of *reparameterization matrices* (also optimized). The design of this objective function relies on standard tools from SOC theory, as well as an original contribution, the *path-wise reparameterization trick*, to compute gradients of conditional expectations of a functional applied on a random process. The authors show that the SOCM loss can be decomposed as the sum of a bias term, that is linked to the cross-entropy loss, and variance term, which is only affected by the *reparameterization matrices*. Hence, these extra parameters can be seen as a way to reduce the variance of the SOCM loss, which motivates their introduction. Moreover, this loss has the computational advantage to avoid computing gradients of the control along the path measure (which is the main drawback of the relative-entropy loss). Finally, the authors conduct numerical experiments to compare their approach to existing designs of IDO losses. They consider four different settings ($d\\in \\{10,20\\}$) with access to the ground-truth control, which allows them to compute the $L^2$ error on the control. Using this metric, their results indicate better performance on most of the settings while maintaining a certain training stability.\n\n[1] Solving high-dimensional Hamilton–Jacobi–Bellman pdes using neural networks: perspectives from the theory of controlled diffusions and measures on path space. Nüsken et al. 2021\n\n- The paper is very well-written, it is a pleasure to read it. In particular, the authors pay attention to define their notation, introduce with clarity the SOC framework, state clear mathematical statements, recall (and prove) standard results from SOC theory, provide intuition on theoretical results (with details on the proof and meaningful comments). This is really good work.\n- The relation to prior work is clearly well established: in particular, the comparison between the SOCM loss and the previous IDO losses (presented in Section 2.2) is well highlighted. \n- This papers introduces an interesting contribution that may be applied beyond this framework (in particular, in the generative community where terms involving gradients of expectations often appear) : this is the path-wise reparameterization trick, which is proved to be decisive in the numerics.\n\n- In my opinion, the major weakness of this paper is the lack of an additional numerical experiment, which represents a \"realistic\" setting (for instance, where the expression of the control is not known). For instance (as mentioned by the authors), a significant line of recent research has considered the sampling problem via a SOC perspective, see for example [1,2,3]. 
I am convinced that the SOCM contribution would have more impact with additional sampling numerics comparing SOCM, relative entropy [1,2] and log-variance [3] losses, for challenging distributions (namely, multi-modal distributions in relatively high dimension). In this case, the quality of the methods would be assessed with sampling metrics. This weakness explains my current score (although I really appreciate the paper).\n- I find that the complexity/accuracy tradeoff of the IDO losses (including SOCM) is not well highlighted to me. The table provided by the authors only considers one setting. To have a full picture, it should be given for all settings.\n\n[1] Path Integral Sampler. Zhang et al. 2022.\n\n[2] Denoising Diffusion Sampler. Vargas et al. 2023\n\n[3] Improving sampling via learned diffusions. Richter et al. 2023\n\n- Have you tried to restrict the optimization of the reparameterization matrices to scalar matrices ? I have the feeling that this choice may align a low computational budget with a good expressivity.\n- Have you tried another parameterization of these matrices ? In particular, have you considered simple form such as $M_{w}(t,s)=I_d + \\gamma(s-t)\\tilde{M}_{\\tilde{w}}(s,t)$ where $w=(\\gamma, \\tilde{w})$ and $\\gamma(0)=0$ ? Otherwise, why the choice of the sigmoid ?\n- I find that the warm-start strategy for the optimization of the control is a very good idea, as it benefits from the tractability of the stochastic interpolants with the lightness of the spline formulation. However, I have the feeling that this strategy works in the presented settings since they are \"kind of Gaussian\". Do you think it may still be of interest for general SOC problems ?\n- Could you explain why the second Ornstein-Uhlenbeck setting is called 'hard' ?\n- Could you provide the results on $L^2$ error of the control without EMA, as it is presented in [1] ?\n- I am quite surprised of the relatively computational budget induced by the use of the log-variance loss, could you comment on this ?\n\n[1] Solving high-dimensional Hamilton–Jacobi–Bellman pdes using neural networks: perspectives from the theory of controlled diffusions and measures on path space. Nüsken et al. 2021"
},
{
"confidence": 3,
"rating": 5,
"review_id": "CJBNWrtFax",
"review_text": "This paper proposes a novel algorithm for approximating the solution to the Hamilton-Jacobi-Bellman (HJB) equation with a neural network control policy. Rather than backpropagating through rollouts of the dynamics, the authors develop a least-squares objective which resembles the score-matching loss used in diffusion models. However, this requires computing gradients of a diffusion process with respect to its initial condition. To address this, the authors develop a novel path-wise reparameterization trick which relies on a family of reparameterization matrices. They show how to optimize these matrices to reduce the variance of the objective estimate. They demonstrate that their method obtains a lower error with respect to the ground-truth control on toy problems, sometimes by an order of magnitude.\n\n- The proposed objective function acts as a form of variance reduction for the cross-entropy loss when solving stochastic optimal control problems.\n- The novel reparameterization trick for estimating gradients of diffusion processes with respect to its initial condition may be more broadly applicable.\n- On toy problems, their method appears to generally outperform other approaches in solving the HJB equation for the optimal controls.\n- The paper is well organized and overall written well. It provides a thorough related work section and does a good job explaining the novelty and results.\n\n- The evaluations only consider simple toy problems. Moreover, they only plot the L2 error with respect to the optimal control. However, this does not necessarily tell us about the actual task performance due to compounding errors.\n- On the Double Well system, there is not a clear advantage compared to the variance loss and adjoint method. However, the authors do discuss how their method appears more stable than the adjoint-based ablation.\n\n- How do all the methods compare in terms of actual task performance?\n- How do these methods perform on more realistic control problems?\n- Why does the proposed method not work as well on the Double Well system compared to the variance baseline?"
},
{
"confidence": 1,
"rating": 7,
"review_id": "rknhJZVyH4",
"review_text": "This paper presents stochastic optimal control matching (SOCM), which is an iterative diffusion optimization for optimal control aiming to fit a matching vector field. The authors introduce a new loss function and address the analysis and design of a learning-based control method.\n\nThe work is nicely motivated in Introduction, showing the drawbacks of traditional works. The proposed control method is supported by the uniqueness analysis of the control logic (Theorem 1) and the sophisticated design methods (Propositions 1 and 2). In the reviewer's understanding, they are technically correct.\n\nAs stated in Algorithm 2 below, reducing noise in the gradient is crucial for the presented algorithm. This weakness is addressed by Lemma 1 and extensions.\n\nAs stated in Introduction, the work is motivated by stabilizing the unstable training of conventional IDO, which comes from the non-convexity of the loss. Could the authors comment and/or perform some motivating experiments to show the stability of the training by SOCM? They can emphasize the contribution of this paper."
},
{
"confidence": 3,
"rating": 5,
"review_id": "PQTkGwx9k6",
"review_text": "**Summary**\n\nThis paper introduces Stochastic Optimal Control Matching (SOCM), a novel algorithm for solving stochastic optimal control problems. Key contributions include:\n\n1. SOCM algorithm, adapting ideas from conditional score matching in diffusion models\n2. A new \"path-wise reparameterization trick\" for gradient estimation\n3. Theoretical analysis including a bias-variance decomposition\n4. Empirical evaluation showing superior performance on 3 out of 4 benchmarks\n\nSOCM learns a control function by fitting a matching vector field via least squares, while optimizing reparameterization matrices to reduce variance. The method is currently limited to linear Gaussian models and requires knowledge of certain parameters. Experiments demonstrate SOCM's effectiveness on theoretical benchmarks, outperforming existing methods in most cases. The paper provides a solid theoretical foundation but lacks exploration of real-world applications or non-linear systems.\n\nThe paper introduces Stochastic Optimal Control Matching (SOCM), a novel algorithm for solving stochastic optimal control problems. Its originality lies in adapting ideas from conditional score matching in diffusion models to the domain of optimal control. This creative combination represents an interesting cross-pollination between two active areas of research.\n\nThe quality of the theoretical work is notable. The authors provide a comprehensive mathematical foundation for their method, including detailed proofs and a novel \"path-wise reparameterization trick\". This theoretical rigor is a significant strength of the paper.\n\nIn terms of clarity, the paper is well-structured and clearly written. The authors effectively guide the reader from the problem formulation through the theoretical development to the empirical results. The use of illustrative examples and detailed appendices aids in understanding the complex mathematical concepts presented.\n\nThe significance of this work lies in its potential to improve the efficiency of solving stochastic optimal control problems. The empirical results, showing improved performance over existing methods on multiple benchmarks, underscore the practical impact of this approach. However, the significance is somewhat limited by the current restrictions to linear Gaussian models.\n\nThe primary weakness of this paper is its limited scope and applicability. The method is currently restricted to linear Gaussian models and requires knowledge of certain model parameters. This significantly narrows its potential impact on the broader field of stochastic optimal control. The authors should discuss potential approaches to extend SOCM to more general settings, such as nonlinear or non-Gaussian systems.\n\nWhile the empirical results are promising, they are limited to theoretical benchmarks. The paper would be strengthened by including experiments on real-world problems or more complex simulated environments. This would help demonstrate the method's practical utility and potential for broader impact.\n\nThe scalability of the method is not thoroughly addressed. As the dimensionality of the problem increases, how does the computational complexity of SOCM compare to existing methods? A more detailed analysis of computational requirements and scaling properties would be valuable.\n\nThe comparison with existing methods, while showing SOCM's superior performance, could be more comprehensive. 
Including comparisons with the most recent state-of-the-art methods would provide a clearer picture of SOCM's relative performance in the current landscape of stochastic optimal control algorithms.\n\n1. How might SOCM be extended to handle nonlinear or non-Gaussian systems? Are there specific challenges you foresee in this extension?\n\n2. The paper focuses on theoretical benchmarks. Have you considered applying SOCM to any real-world stochastic optimal control problems? If so, what challenges did you encounter or do you anticipate?\n\n3. How does the computational complexity of SOCM scale with the dimensionality of the problem? Could you provide a more detailed comparison of computational requirements with existing methods?\n\n4. The path-wise reparameterization trick is an interesting contribution. Could you elaborate on potential applications of this technique outside of stochastic optimal control?\n\n5. The paper mentions that SOCM requires knowledge of certain model parameters. In practical scenarios where these parameters might not be known precisely, how sensitive is SOCM to parameter misspecification?\n\n6. Have you explored the performance of SOCM in settings with sparse or noisy rewards, which are common challenges in reinforcement learning?"
}
] |
weemASPtzg | Linear Causal Representation Learning from Unknown Multi-node Interventions | Despite the multifaceted recent advances in interventional causal representation learning (CRL), they primarily focus on the stylized assumption of single-node interventions. This assumption is not valid in a wide range of applications, and generally, the subset of nodes intervened in an interventional environment is *fully unknown*. This paper focuses on interventional CRL under unknown multi-node (UMN) interventional environments and establishes the first identifiability results for *general* latent causal models (parametric or nonparametric) under stochastic interventions (soft or hard) and linear transformation from the latent to observed space. Specifically, it is established that given sufficiently diverse interventional environments, (i) identifiability *up to ancestors* is possible using only *soft* interventions, and (ii) *perfect* identifiability is possible using *hard* interventions. Remarkably, these guarantees match the best-known results for more restrictive single-node interventions. Furthermore, CRL algorithms are also provided that achieve the identifiability guarantees. A central step in designing these algorithms is establishing the relationships between UMN interventional CRL and score functions associated with the statistical models of different interventional environments. Establishing these relationships also serves as constructive proof of the identifiability guarantees. | https://openreview.net/pdf/97134ff9ae5f0c497e5980844484a73aa21a38ba.pdf | [
{
"confidence": 3,
"rating": 7,
"review_id": "aXPSL6PhGh",
"review_text": "This paper studies identifiability under unknown muilti-node interventions (soft/hard), with general models (parametrtic/nonparametric) and **linear** mixing functions. This work provides both detailed proof which justifies the main theoretical statement, and a step-by-step algorithm which guides how to achieve identifiability in practice.\nOverall, I find this work serves as an important step for interventional CRL towards more realistic settings.\n\n \n\n### References\n\n[1] Burak Varıcı, Emre Acartürk, Karthikeyan Shanmugam, Abhishek Kumar, and Ali Tajer. Score- based causal representation learning with interventions. arXiv:2301.08230, 2023.\n\n[2] Burak Varıcı, Emre Acartürk, Karthikeyan Shanmugam, Abhishek Kumar, and Ali Tajer. Score- based causal representation learning: Linear and general transformations. arXiv:2402.00849, 2024.\n\n[3] Julius von Kügelgen, Michel Besserve, Wendong Liang, Luigi Gresele, Armin Kekic ́, Elias Bareinboim, David M Blei, and Bernhard Schölkopf. Nonparametric identifiability of causal rep- resentations from unknown interventions. In Proc. Advances in Neural Information Processing Systems, New Orleans, LA, December 2023.\n\nThis paper is extremely well written and clearly structured: it communicates clearly motivations, formulation, technical details, and theoretical implications. The experimental results adequately validate the theory in case of a linear causal model.\n\n1. The proposed UMNI-CRL algorithm is claimed to work with *general* non-parametric causal models; however, the simulation experiment only showed results on *linear* structural equation model. It would be great if the authors could report further experimental results on non-parametric causal models, to align with the theoretical claims. If there is a valid reason why it cannot be done, I am also very happy to hear.\n\n2. Following the previous point, since this approach requires density estimation, it might not be scalable on nonparametric models. But to be fair, this seems to be a common limitation in many interventional CRL works [1, 2, 3].\n\n3. Linearity assumption on the mixing function is restrictive, but the authors have acknowledged it and discussed possible future directions to overcome this limitation (sec. 6).\n\nSee the first point in **weakness** section. I am very happy to raise my rating if this issue is resolved."
},
{
"confidence": 2,
"rating": 7,
"review_id": "95yHyFudZf",
"review_text": "This paper advances Causal Representation Learning (CRL) by addressing the challenge of using unknown multi-node (UMN) interventions to identify latent causal variables and their structures. The authors develop a score-based CRL algorithm that leverages UMN interventions to guarantee identifiability of latent variables and their causal graphs under both hard and soft interventions, achieving perfect identifiability with hard interventions and identifiability up to ancestors with soft interventions. Their method outperforms existing single-node approaches by ensuring robust recovery of causal structures in more complex, multi-intervention environments.\n\n* Extending the causal representation learning to unknown multi-node interventions\n\n* Proofs are provided \n\n* Pseudocode is provided\n\n* Computational complexity is discussed\n\n* Limitations are clearly stated\n\n* The paper primarily focuses on causal models with linear transformations. This limits its applicability in many real scenarios\n\n* The applicability of the assumptions in real scenarios was not discussed\n\n* The method was not applied on real world-data\n\n* Can you please elaborate on the computational complexity and on why it is dominated by step 2?\n\n* Can you please discuss the applicability of the assumptions in real scenarios?\n\n* I think that adding some real world application can increase the impact of this paper. Is it possible to find such an application?"
},
{
"confidence": 4,
"rating": 8,
"review_id": "9KTqx3TsPE",
"review_text": "This work studies interventional causal representation learning, where one has access to interventional data, to identify latent causal factors and latent DAG in the unknown multi-node interventions regime. The authors consider a setting where the mixing function is linear and the latent causal model is nonparametric. Under the assumption of sufficient interventional diversity, the authors use score function arguments to show that the underlying causal factors of variation (and DAG) can be recovered (1) up to permutation and scaling from stochastic hard interventions and (2) up to ancestors from soft interventions. The authors propose a score-based framework (UMNI-CRL) and evaluate it on synthetic data generated from Erdős–Rényi random graph model.\n\n- This work provides significant results in the unknown multi-node intervention setting, which is much more realistic than the common single-node intervention regime. As opposed to other works, this work studies CRL from a more general class of multi-node interventions (stochastic hard and soft).\n- The paper is well-written, the concepts are explained well, and the theoretical identifiability results add a lot of value to the current CRL literature.\n- The use of score functions and score differences in the observation space to estimate the unmixing function, especially for the UMN setting, is a novel and interesting approach for CRL.\n- This work is the first to establish latent DAG recovery in the UMN setting under any type of multi-node intervention for arbitrary nonparametric latent causal models.\n\nAlthough the theoretical contribution of this work is strong, the empirical evaluation is quite weak compared to other works in CRL. There are only experiments for n=4 causal variables. There is also no baseline comparison of the proposed framework with other methods in the UMN setting (e.g., [1]). Also, some discussions are a bit abridged and could use more elaboration in the paper (see below for details).\n\n[1] Bing et al. “Identifying Linearly-Mixed Causal Representations from Multi-Node Interventions” CLeaR 2024.\n\n- I would like some clarification on the intervention regularity condition. Specifically, why does the additional term ensure that multi-node interventions have a different effect on different nodes? It would be good to elaborate on this condition when introduced since it is a central assumption that needs to be satisfied for the results to hold.\n- How do you obtain $\\Lambda$ in Eq. (14)? It seems that this matrix encodes the summands with the latent space score differences. However, since the distribution of the latents is unknown, how would you go about estimating $\\Lambda$ and score differences $\\Delta S_X$ in general cases of nonparametric distributions?\n- How do you learn the integer-valued vectors $\\mathbf{w}$ in Stage 2 of the algorithm? From Eq (18), it seems that $\\mathcal{W}$ is a fixed predefined set and you choose the vectors $\\mathbf{w} \\in \\mathcal{W}$ that satisfy a specific condition in the algorithm. To my understanding, this is central to recovering the approximate unmixing $\\mathbf{H}^*$ up to a combination of the rows of the true unmixing $\\mathbf{G}^{\\dagger}$. I would appreciate it if the authors could elaborate on how this procedure was done.\n- From Appendix A.8, it seems that $\\kappa$ is determined by the number of causal variables $n$. 
Could the authors give some more intuition on what $\\kappa$ represents in Stage 2 with respect to how the unmixing is recovered?\n- Are there any distributional assumptions on the exogenous noise in the latent additive noise causal model?\n- It seems that the UMN hard intervention result (Theorem 1) requires a latent model with additive noise. Would perfect recovery still be possible for latent models with non-additive noise under UMN hard interventions?\n- The empirical results suggest that increasing sample size improves DAG recovery, which is intuitive. However, what do the results look like as the number of causal variables scales up? Currently, the authors only show results for n=4 latent causal variables. I only offer this as a suggestion due to the short rebuttal period.\n- How would the assumptions made need to change to be applied to general mixing functions? I know that generality in one aspect of the model (i.e., general SCM) may require other aspects to take some parametric form (i.e., linear mixing) for identifiability guarantees, but do the authors have any intuition on how to achieve identifiability results for the UMN setting in a completely nonparametric setup?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "fwEEZYb5am",
"review_text": "This paper extends previous results on using score function for causal representation learning to the settings with unknown multi-node interventions. This new setting poses significant new challenges as opposed to the single node intervention case. The author first present theoretical identifiability result on hard interventions with latent additive noise model and on soft interventions. They then propose an algorithm called (UMNI)-CRL and test it on synthetic linear Gaussian dataset.\n\nThe paper is clearly written, easy to follow and with good motivations.\n\n1. The transformation from latent to observed is noiseless, which could be a limitation. \n2. Line 199 says that: “This regularity condition ensures that the effect of a multi-node intervention is not the same on different nodes”. But how realistic or neccessary is this condition? It seems like it is very possible that an intervention can cause two downstream nodes to have the same effect although these two nodes is not influenced the same by all type of interventions. \n3. The experiments are only on synthetic dataset but I don’t think that is a big issue. \n4. Some potential missing citations\n \n [1] Kumar, Abhinav, and Gaurav Sinha. \"Disentangling mixtures of unknown causal interventions.\" *Uncertainty in Artificial Intelligence*. PMLR, 2021.\n \n [2] Jiang, Yibo, and Bryon Aragam. \"Learning nonparametric latent causal graphs with unknown interventions.\" *Advances in Neural Information Processing Systems* 36 (2024).\n\n1. (UMNI)-CRL requires estimating the score function. How do you ensure a good estimate of the score function to unsure that the algorithm is useful in practice?\n2. One small question: on line 141-143, it is mentioned that if a node is not intervened on, perfect identifiability is not possible. But there are cases like A→B where I don’t need to intervene on A?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "APqzS9xzbA",
"review_text": "This paper introduces new identifiability results for CRL in environments with unknown multi-node interventions. It shows that, with sufficiently diverse interventional environments, one can achieve identifiability up to ancestors using soft interventions and perfect identifiability using hard interventions. The paper also provides an algorithm with identifiability guarantees.\n\n- The paper tackles the complex and underexplored multi-node intervention setting. The established identifiability can be crucial for extending current CRL theories into more practical contexts.\n- The introduced algorithm that leverages score functions with different interventional environments is also interesting and insightful.\n- The paper is well-motivated and articulated with high clarity.\n\n- The proposed algorithm, while theoretically sound, seems computationally demanding. In fact, even a 4-node low-dimensional case requires a large number of environments and samples. The paper could benefit from a deeper discussion on the scalability of the algorithm.\n- The current evaluation of the algorithm is limited to synthetic simulations. Expanding it to more realistic datasets would substantively improve its practical significance.\n\nHow effectively does the proposed algorithm scale to more nodes and higher dimensions?"
}
] |
wduRaBDRBS | Video Token Merging for Long Video Understanding | As the scale of data and models for video understanding rapidly expand, handling long-form video input in transformer-based models presents a practical challenge. Rather than resorting to input sampling or token dropping, which may result in information loss, token merging shows promising results when used in collaboration with transformers. However, the application of token merging for long-form video processing is not trivial. We begin with the premise that token merging should not rely solely on the similarity of video tokens; the saliency of tokens should also be considered. To address this, we explore various video token merging strategies for long-form video classification, starting with a simple extension of image token merging, moving to region-concentrated merging, and finally proposing a learnable video token merging (VTM) algorithm that dynamically merges tokens based on their saliency. Extensive experimental results show that we achieve better or comparable performances on the LVU, COIN, and Breakfast datasets. Moreover, our approach significantly reduces memory costs by 84% and boosts throughput by approximately 6.89 times compared to baseline algorithms. | https://openreview.net/pdf/02ca725ea4f0f64f7c82a9d5389359bf6b97e1bf.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "BL9xhf7PFw",
"review_text": "- This paper carries out an analysis of token merging[1] in the context of long-form video understanding and proposes learnable video token merging (VTM) to select semantics/saliency guided tokens for merging. \n- In token merging, at each layer, the tokens are divided into two sets source S and target T through uniform sampling. Tokens in S are matched to T based on similarity and merged (usually by average pooling). This paper compares this naive VTM with two other variants where selection of T is guided by informed heuristics: (1) region VTM where tokens at the center of each frame are more likely to be retained, (2) motion VTM where tokens with high motion are more likely to be retained. Through this analysis, authors argue that the strategy to select T plays an important role in the final performance.\n- Motivated by this, authors propose a learnable VTM where it first predicts a saliency score for each input token. The target set T is sampled according to the probability distribution defined by saliency score. Since this partition operation is not differentiable, authors propose a novel training architecture where a parallel auxiliary network is trained alongside. The saliency scores are used to bias the attention score of the aux network, thereby supervising the saliency prediction to focus on important tokens. Aux network can be discarded at test time.\n- Authors carry out a fair evaluation of the learnable VTM on LVU, Breakfast and COIN datasets by comparing against several baselines including ViS4mer, S5, D-sprv. Learnable VTM performs better than baselines in almost all evaluation tasks with low GPU memory usage and high throughput.\n\n[1] Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Christoph Feichtenhofer, and Judy Hoffman. Token Merging: Your ViT but faster. In ICLR, 2022.\n\n- In learnable VTM, the idea of learnable and auxiliary path is interesting. There is no way to directly supervise the token saliency prediction of the main path because partition operation is non-differentiable. Hence the attention in auxiliary path is influenced by saliency scores of the main path, which encourages the saliency prediction to focus on important tokens.\n- The evaluation is fair and consistent. The authors use the same backbone as the prior works to encode input video frames, thereby ensuring a fair evaluation.\n- The results of learnable VTM on LVU dataset and Breakfast is noticeably better than baselines with less GPU memory usage. However, on COIN dataset, it doesn't perform better than S5 baseline.\n\n### Major weaknesses\n- One of the cited contribution is the exploration of region based and motion based VTM (Section 3.3) but it seems trivial. The effectiveness of token selection is already shown in learnable VTM. In light of that, there is an unreasonable focus section 3.3 which is unnecessary.\n- Section 3.4 explains little about the details of learnable VTM, how it is trained, how the gradients flow in the presence of non-differentiable partition function, etc.\n- There are some stretched claims based on qualitative and quantitative results. For example,\n - In Line 174, authors claim that center VTM performs better than naive VTM. However, according to Table 1, the results are mixed at best.\n - In Fig 5, authors also claim that the visualization of merged tokens show saliency based merging. However, the figure doesn't support the claim. 
There are many merged tokens on important salient features and some background tokens are not merged.\n\n### Minor issues\n- Line 19: it should be \"into the domain of video computer vision\" as all cited papers are video learning papers.\n- Is there a difference between notation of C and D? It looks like both are used interchangably to denote token dimension.\n- Table 2: How it throughput measured? fps?\n\n- How is learnable VTM trained? From Fig 4, it looks like there are two outputs from the network. Do you apply the same loss on both outputs?\n- In Fig 4, what does the 'Merge' operation in auxiliary path do? Does it mean that the main and aux - both paths use the same target set sampled by the partition-match of the main path?"
},
{
"confidence": 3,
"rating": 4,
"review_id": "pyJ4gizUn1",
"review_text": "This paper explores various video token merging strategies in the context of long-form video classification and finally propose a learnable Video Token Merging algorithm that dynamically merges video tokens based on visual salient areas. The contributions are summarized as follow:\n1. Explore various video token merging methods including the naïve VTM, the region concentrated VTM, and the motion-based VTM.\n2. Propose the learnable video token merging algorithm, which estimates the saliency scores of each token and adaptively merge visual tokens based on those scores.\n3. The proposed algorithm achieves the best or competitive results on various datasets.\n\n1. This paper explores various video token merging methods including the naïve VTM, the region concentrated VTM, and the motion-based VTM.\n2. Compare with baseline and rule-based video token merging. The proposed learnable video token merging strategy has large improvement.\n3. The two-paths design to deal with non-differentiable problem in partitioning process is interesting.\n\n1. This paper proposes a leanable video token merging strategy. The similiar high-level idea can be found by CTS[1] in image domain. The novelty is insufficient。\n2. This paper focuses on video token merging. However, I do not observe any specific design tailored for the video domain in terms of the methodology. let alone long video.\n\n\n\n[1] Content-aware Token Sharing for Efficient Semantic Segmentation with Vision Transformers\n\n1. Due to the two-paths design, has the training time doubled?\n2. The paper tries to learn saliency scores using matrix $U_s$. How about using $\\sum{QK^T}$ in Equation 8 as saliency scores for each visual token?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "Z8ndbBvnjK",
"review_text": "The paper approaches the task of long-video understanding from token reduction perspective. Specifically, Transformer-based approaches suffers from memory bottleneck and quadratic computation complexity with increasing number of tokens, which is even more pressing with long-videos as input. The paper builds on a recently developed token merging mechanism, and proposes a learned saliency measure to modulate what tokens gets merged instead of using a random or hand-crafted saliency measure. The central hypothesis of the work is that typically techniques that use similarity as merging criteria may inadvertently lose out on salient tokens. The paper reports experiments on three conventional long-video benchmarks (LVU, Breakfast and COIN), and shows effectiveness of their approach compared to prior related works both in terms of performance and memory requirement. The paper also ablates the effectiveness of their proposed saliency measure (learned VTM) over hand-crafted measures including motion-based (using optical flow), center-biased and random schemes.\n\n- The paper is well-written with most of the information presented for ease of understanding \n- The memory requirement is lower than S4, with competitive performance which highlights the importance of token selection in the case of long-videos\n\n- Comparison to related token saliency approaches\n - The paper proposes a scheme to identify salient tokens by using a learned projection matrix $U_s \\in \\mathcal{R}^{D \\times 1}$ with $\\texttt{tanh}$ activation function\n - However, learnable token saliency methods have also been used in prior works, such as EVEREST [1], which uses a pair-wise learned saliency at feature-level (equation 2) using $\\texttt{Conv3d}$. The resulting approach was shown to be effective in the Masked Autoencoding setup\n - Having a motion-based merging scheme is a good baseline, but some variants of learnable token saliency could also be tried to gain better understanding how token saliency gets influenced by different approaches
\n\n- Role of $L_1, L_2, L_3$\n - The paper proposes to take tokens from $L_i$ consecutive frames for the $i^{th}$ VTM block\n - It seems that choosing the values of $L_i$ is quite crucial given its impact on performance and memory requirement (Table 6) that forms the central claim of the paper\n - However, the paper highlighted the contribution of token saliency more compared to the choice of $L_i$ hyperparameters\n - Did the authors experiment with a rather simplistic setup using a single VTM block and/or with all $L_i$ being 60? It would help the readers to gain better understanding of what works in long-videos \n\n\n- How saliency changes with tokens from different number of frames?\n - It seems that the saliency is being computed at each VTM block. It would be interesting to see how the saliency changes across the three VTM blocks\n - On that note, what VTM block’s saliency is being visualized in Figure 5?\n\n\n\n### Minor\n- Line 145-146: “$i$-th transformer block takes the tokens corresponding to $L_i$ frames without overlapping”\n - Confusing when $i$ is referred to as the frame number and the block number of transformer at the same time\n- Line 145-147: $j$ is not defined\n\n\n### Typos\n- Line 24-25: “so the tokenizing the”\n- Line 36: “selectio”\n- Line 114-115: “applications are mostly remained”\n- Line 117: “depedencies”\n- Line 161: “in the videos, .”\n- Line 165: “regarding less of the”\n- Line 177: “sailent”, “the the”\n- Line 306: “sailencies”\n\n\n### References\n[1] “EVEREST: Efficient Masked Video Autoencoder by Removing Redundant Spatiotemporal Tokens”. Sunil Hwang and Jaehong Yoon and Youngwan Lee and Sung Ju Hwang. ICML 2024.\n\n- Line 141: why L >= 60?\n- Is saliency projection used in all auxiliary VTM blocks?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "bIoJBf543A",
"review_text": "This paper builds on Token Merging (ToMe) to improve its performance. In particular, the authors explore different ways to partition tokens so that the merging operation can lead to better performance while maintaining speed. They explore region-concentrated merging, motion-vector based merging and a learnable selector, and find that the learnable version works best. To make the network trainable, they employ a creative auxiliary path mechanism to make everything differentiable. They find that their learnable VTM obtains good results compared to baselines on long form datasets (LVU, Coin, Breakfast), and that it outperforms the other methods they introduce.\n\nThe problem this paper addresses is an important one. Videos (especially long ones) have many redundant tokens and reducing their number while maintaining performance is a crucial problem to solve in the field.\n\nThe model itself is well designed and uses a creative auxiliary path to handle a non-differentiable partitioning process. Given the premise of the paper, the model is well-designed and seems to address the issue they propose. \n\nI also appreciate the exploration of different methods, and a comparison on which worked better. This kind of analysis is often missing from papers and I am grateful for the authors for including it.\n\nI don’t really agree with the premise of the paper (and am open to a rebuttal to explain if I’m wrong here). Token Merging already explored merging for video in detail. The reason Token Merging is based on similarity is that by combining tokens that are extremely similar, the weighted average of the those tokens should produce an extremely similar result in the attention operation. This was also detailed more in the Token Merging follow-up TomeSD. If you use different criteria such as saliency (which is not really well-defined), this is no longer guaranteed, and from equation (10) it seems like the authors do not use the proportional attention scheme from ToMe (Eq 1 in the original paper). Table 8 doesn’t show the learnable method using this; it seems to just be about the pooling part rather than the attention operation.\n\nI also don’t understand the intuition behind the saliency: shouldn’t we be aiming to combine together tokens that are NOT relevant, so that the transformer can focus more on the relevant tokens, rather than averaging (and thus losing) information from the more salient / important tokens? I’d really appreciate some clarification here. From Figure 3, it doesn’t look like learnable VTM is focusing on visually important tokens: it’s picking ones from the ceiling and wall in addition to the people.\n\nMy main issue is with the evaluation. The evaluation seems not quite fair, especially when measuring memory usage and throughput. Shouldn’t it be compared to baseline merging algorithms, like the naive ToMe? My impression is that the memory usage and throughput from VTM will be exactly the same as ToMe because it uses a similar partitioning scheme and constant factor reduction, which is why it may not be included in the results, but this seems important to include for context. Furthermore, the improvement on metrics is quite small, given that the speed is the same as other merging methods. Is this expected?\n\nAlso, The paper is motivated by “long-term” video, but evaluates on 64 frames, which isn’t really long and in my view, doesn’t merit only evaluating on LVU, Breakfast and COIN. 
Kinetics-400 has 300 frames per video, and is a more standard benchmark for evaluating video backbones - in fact, the original ToMe paper includes experiments on those datasets, which would make for a more fair comparison. Furthermore, nothing about the method itself is specific to these longer videos. I think evaluating on more standard datasets is crucial to measuring the actual strength of the method, especially compared to baselines like Token Merging. In particular, the long-form datasets are very compressible. \n\nThe paper is not well-written and the grammar needs a lot of revision, making it hard to focus on the content of the paper itself. In addition, a lot of space is spent on methods that are not really used in the final results (center, region, motion vector) and on citing equations from preliminary works (token merging, attention). Given that a claimed contribution is an exploration of these different methods, I would also have expected more detailed ablations and experiments to understand exactly why some of the methods perform better than others.\n\nIt’s not really expected that VTM (or any merging method) should score better than full attention, as it has strictly less information. This is backed up in the Token Merging and other follow up papers. Why is it expected (as said on L160) that merging should perform better? It’s supposed to be just faster, with a minimal drop in performance.\n\nThe motion vector method requires extracting pre-computed motion vectors from the encoded video. However, those are computed for 30 FPS, and for the original video size, meaning they don’t actually apply to downsampled (64 frames, 224x224). Was this taken into account? It’s certainly not fast to re-compute these motion vectors if you’re doing random cropping or frame selection.\n\nIs it possible to know the effect on training wall-clock time from this method? This the the metric that practitioners really care about, so including this would potentially strengthen the results of the paper."
}
] |
wdGvRud1LS | Learning Cortico-Muscular Dependence through Orthonormal Decomposition of Density Ratios | The cortico-spinal neural pathway is fundamental for motor control and movement execution, and in humans it is typically studied using concurrent electroencephalography (EEG) and electromyography (EMG) recordings. However, current approaches for capturing high-level and contextual connectivity between these recordings have important limitations. Here, we present a novel application of statistical dependence estimators based on orthonormal decomposition of density ratios to model the relationship between cortical and muscle oscillations. Our method extends from traditional scalar-valued measures by learning eigenvalues, eigenfunctions, and projection spaces of density ratios from realizations of the signal, addressing the interpretability, scalability, and local temporal dependence of cortico-muscular connectivity. We experimentally demonstrate that eigenfunctions learned from cortico-muscular connectivity can accurately classify movements and subjects. Moreover, they reveal channel and temporal dependencies that confirm the activation of specific EEG channels during movement. | https://openreview.net/pdf/3bdba64a512929e72c61ef5333a70f8c11950c8b.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "bCUJMCQqwZ",
"review_text": "The paper presents a novel approach called Functional Maximal Correlation Algorithm with Trace cost (FMCA-T) for estimating cortico-muscular dependence by leveraging orthonormal decomposition of density ratios. This method is designed to model the relationship between EEG (electroencephalography) and EMG (electromyography) signals, addressing the challenges of interpretability, scalability, and local temporal dependence in cortico-muscular connectivity. The key contributions include introducing a matrix trace cost optimization for improved stability and efficiency, demonstrating robustness against nonstationary noise and delays, and effectively capturing movement and subject information from EEG features for enhanced classification accuracy. The proposed method outperforms existing baselines, particularly in cross-subject scenarios, and provides insights into channel-level and temporal dependencies, reinforcing its potential applications in brain-computer interface development and neuromuscular disorder diagnostics.\n\n1. Innovative Method: Introduces the Functional Maximal Correlation Algorithm with Trace cost (FMCA-T), providing a novel approach for estimating cortico-muscular dependence.\n2. Improved Stability and Efficiency: Utilizes matrix trace cost optimization, which is more stable and computationally efficient compared to traditional log-determinant cost methods.\n3. Enhanced Classification Accuracy: Effectively captures movement and subject information from EEG features, significantly improving classification accuracy, especially in cross-subject scenarios\t\n4. Validation on Multiple Datasets: Validated using both simulated and real EEG-EMG datasets, confirming the method’s effectiveness and robustness.\n5. Open Data and Reproducibility: Offers open access to datasets and detailed implementation code, facilitating reproducibility and further research in the field.\n\nThe provided baselines are relatively few; future work could expand on this.\n\n1. While the paper discusses the improved stability and efficiency of the FMCA-T method, can the authors provide more detailed about the computational resources required for training and inference?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "6ZKw0GUOyc",
"review_text": "The paper presents a new method to model the relationship between cortical and muscular oscillations using EEG and EMG recordings. Traditional methods like Cortico-Muscular Coherence (CMC) have limitations, so the authors propose using statistical dependence estimators to learn eigenvalues, eigenfunctions, and projection spaces. This approach improves interpretability, scalability, and local temporal dependence. Experimental results show that the method accurately classifies movements and subjects, highlighting specific EEG channel activations during movements, and demonstrates robustness against noise and delays, suggesting its potential for diagnosing neuromuscular disorders and developing brain-computer interfaces.\n\n1. The paper combines statistical dependence estimators with neural network optimization techniques. This fusion of methodologies enhances the ability to capture high-level and contextual connectivity between cortical and muscular oscillations.\n\n2. The paper provides a detailed description of the proposed methodology, including the mathematical foundations, algorithmic implementation, and practical considerations. The inclusion of eigenvalues, eigenfunctions, and projection spaces adds depth to the analysis.\n\n3. The authors conduct comprehensive experiments to validate their method. The results demonstrate the method's robustness against nonstationary noise and random delays, confirming its reliability and practical applicability.\n\n1. Mathematical and Algorithmic Complexity: The proposed method involves complex mathematical formulations and advanced statistical techniques that may be challenging for a broader audience to grasp. Simplifying some of the mathematical derivations or providing more intuitive explanations and visualizations could make the paper more accessible.\n\n2. Interpretation of Results: While the method highlights specific EEG channel activations during movements, the physiological and neuroscientific significance of these results could be further elaborated. Providing more detailed discussions on how these findings align with or differ from existing neuroscience research would enhance the interpretability and relevance of the results.\n\n3. Scalability: The scalability of the proposed method to larger datasets or longer signal durations is not thoroughly addressed. Discussing the computational complexity and providing benchmarks on how the method performs with varying data sizes would be valuable.\n\n1. The method identifies specific EEG channel activations during movements. Could you provide more detailed explanations or references to how these findings align with existing neuroscientific knowledge? What are the physiological implications of the identified activations, and how do they contribute to our understanding of cortico-muscular connectivity?\n\n2. The author mention potential applications in diagnosing neuromuscular disorders and developing brain-computer interfaces. Can you provide concrete examples or case studies where your method has been or could be applied? What specific benefits or improvements does your method offer over existing approaches in these applications?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "OJ4jYW86Pk",
"review_text": "The authors apply novel but already existing (https://www.sciencedirect.com/science/article/pii/S0047259X2300074X, https://arxiv.org/pdf/2212.04631) machinery based approach on the orthonormal decomposition of density to decipher the relationship between cortical brain activity and the electromyographic signals during basic hand movements. The work is based on the publicly available dataset and the code is available. The unknown decomposition is modeled by a pair of neural networks concurrently processing EEG and EMG data in order to arrive at the internal representation for each of the modality. The internal representations are then aligned to minimize the rank of the joint covariance matrix. To guide the learning process, the authors propose a somewhat novel loss function equal to the negative trace of the canonical correlation matrix calculated using latent representations. The authors test their approach on the downstream tasks of classifying movement types in both within and across subject designs. They also apply the obtained representations to distinguish between participants based on their EEG data. The authors provide some interpretation to the obtained solution in the form of channel and temporal maps indicating the electrodes and time moments that contribute to the decoding most.\n\n1. The authors applied novel but existing methodology of orthonormal density decomposition to the EEG+EMG dataset for the first time \n2. The authors introduced a novel loss function and showed that it provides better performance in the downstream task of EEG-based classification of movement types\n3. The authors used multi subject dataset \n4. The authors attempted to provide interpretation of the obtained decision rule\n5. The authors present detailed results of their experiments in the appendix\n\n1. Several inaccuracies and lack of details in the mathematical expressions:\n1.1 line 100, last expression and additional p(z) is needed in the integral\n1.2 equation 3 - do the eigenvalues need to be normalized? Does the sum exclude the first normalized eigenvalue?\n2. I would argue against the suggested novelty of the proposed loss function as it seems like the loss function that is closely associated with the Canonical correlation analysis (CCA) (equation 4). Generally speaking, the proposed approach boils down to the CCA in the latent variable space with latents computed by means of a CNN.\n3. The authors claim that they “..design a specialized network topology to generate features for individual channels and time intervals, ensuring that the internal layers of this network quantify channel-level and temporal-level features, similar to [22–24]” - however unlike for instance the EEGnet, the authors use non-linearities in the temporal network (prior to the spatial) which in my view prevents the straightforward interpretation of the obtained representations at least using simple correlational measures. \nSee also Q.1 and 2. \n4. The authors did not validate their approach to interpreting the decision rule and obtaining spatial and temporal maps with simulated data. This needs to be done and the simulated data should contain not only the neuronal sources coupled to the simulated EMG but also the sources unrelated to the signal of interest (EMG). The authors then need to demonstrate that their methods infers the proper spatial patterns corresponding to the task-relevant simulated neuronal sources. 
Ideally, the obtained maps should be ranked based on their importance for the overall decoding accuracy. If this is not possible within the review cycle, the authors should significantly reduce the proportion of the manuscript dedicated to the physiological plausibility of their solution and instead describe limitations related to potentially non-physiological origin of the extracted features. \n5. It is disappointing that when interpreting the decision rule the authors did not provide information regarding the EEG frequency domain their network got tuned to during the training.\n\n1. Having significant experience in the domain of recording and analyzing electrophysiological data I founnd the obtained maps very suspicious. While EEG electrode FC1 can indeed be implicated and be coherent with EMG, I would expect other electrodes such as C3, C5 to have some significant contribution to the EEG derived latents that are maximally aligned with EMG. Instead in addition to FC1 we see the involvement of peripheral electrodes and the frontal electrodes. These electrodes often lose proper contact with the skin and become sensitive to the physical movements due to capacitive effects, when slight body displacements during the actual movement causes significant fluctuation in the electrode-skin capacitance and modulates the signals registered by EEG. The analysis of frequency response of the temporal layer (see W.5) may help to resolve this potential issue.\n\n2. In the dataset used by the authors the reference channel was located in the midline between FC1 and FC2 sensors. Such an arrangement often results in low variance of the signals located close to the reference. The spatial patterns that the authors demonstrate in Figures 5 and 11 show peaks around Fc1 and FC2. The ground electrode was located at the edge of the EEG cap between F1 and F2 and the temporal maps show them as the next best electrodes after Fc1. Could it be that within certain normalization steps the authors explicitly or implicitly divided the data by the channel variance or multiplied the data by a poorly conditioned and not properly regularized inverse covariance that the role of these electrodes got artificially inflated?\n\n3. The authors show spatial maps for several other *selective* subjects and clusters to illustrate across subject reproducibility. What about the spatial maps corresponding to the other latent channels\\clusters for SUB3? \n\n4. Why did the authors not follow the EEGNet architecture and decided to use non-linearities between the temporal and spatial processing blocks? Avoiding nonlinearities in the front end would improve interpretability and would help to make the presentation more convincing. \n\n5. How do the authors avoid the trivial training result, i.e. that the two networks will simply learn to generate similar EMG and EEG embeddings regardless of the input data?"
},
{
"confidence": 2,
"rating": 7,
"review_id": "o1dVXdpwaM",
"review_text": "This paper introduces a novel approach to analyzing cortico-muscular connectivity using statistical dependence measures based on density ratio decomposition. The authors apply a method called Functional Maximal Correlation Algorithm with Trace cost (FMCA-T) to paired EEG and EMG recordings. The key idea is to decompose the density ratio between EEG and EMG signals into eigenvalues and eigenfunctions, which can capture important contextual information that affects the EEG-EMG dependency such as type of movement or subject without having them labeled. They also use the learned eigenfunctions as feature projectors and train a classifier on top for movement type classification tasks.\n\nThe authors test their approach on simulated data (SinWav) and a real EEG-EMG dataset with 25 subjects performing 11 different upper limb movements. They compare FMCA-T against several baseline methods for dependence estimation and classification. They find that the learned eigenfunctions capture factors such as movement type and subject identity. Further, FMCA-T outperforms the baselines, for example by 10% for cross-subject classification of arm-reaching, hand-grasping and wrist-twisting.\n\n* very sophisticated method with clear motivation\n* original idea cleanly mathematically derived (as far as I can tell)\n* produces good results\n\n* classification baselines, could be stronger, e.g. by also using [EEG Conformer](https://pubmed.ncbi.nlm.nih.gov/37015413/) and [Deep4](https://onlinelibrary.wiley.com/doi/full/10.1002/hbm.23730)\n* text is very dense at times, definitely found some part hard to read, but not sure how much it can be made easier, possibly you could explain some concepts used in 2.2 in more detail in the supplementary\n\n\"In scenarios where X and Y are statistically independent, all eigenvalues are zero\" > doesn't this make the density ratio 0 then? shouldn't the density ratio be 1 if they are independent?"
}
] |
wcxHbAY8B3 | GaussianMarker: Uncertainty-Aware Copyright Protection of 3D Gaussian Splatting | 3D Gaussian Splatting (3DGS) has become a crucial method for acquiring 3D assets. To protect the copyright of these assets, digital watermarking techniques can be applied to embed ownership information discreetly within 3DGS models. However, existing watermarking methods for meshes, point clouds, and implicit radiance fields cannot be directly applied to 3DGS models, as 3DGS models use explicit 3D Gaussians with distinct structures and do not rely on neural networks. Naively embedding the watermark on a pre-trained 3DGS can cause obvious distortion in rendered images. In our work, we propose an uncertainty-based method that constrains the perturbation of model parameters to achieve invisible watermarking for 3DGS. At the message decoding stage, the copyright messages can be reliably extracted from both 3D Gaussians and 2D rendered images even under various forms of 3D and 2D distortions. We conduct extensive experiments on the Blender, LLFF, and MipNeRF-360 datasets to validate the effectiveness of our proposed method, demonstrating state-of-the-art performance on both message decoding accuracy and view synthesis quality. | https://openreview.net/pdf/430c524014071593c5a9851aef0295f14993f2e5.pdf | [
{
"confidence": 5,
"rating": 4,
"review_id": "rm2mF4Uohm",
"review_text": "This paper proposes GaussianMarker, a novel method for embedding invisible watermarks into 3D Gaussian Splatting (3DGS) models to protect their copyright. The key idea is to use uncertainty estimation to add imperceptible perturbations to 3D Gaussian parameters with high uncertainty. The method enables extraction of copyright messages from both the 3D Gaussian parameters and rendered 2D images, and demonstrates robustness to various 3D and 2D distortions. Experiments on multiple datasets show the effectiveness of the approach in terms of message decoding accuracy and visual quality preservation.\n\n* Timely contribution addressing copyright protection for 3D Gaussian Splatting models, an increasingly important 3D asset format\n* Clever use of uncertainty estimation to guide watermark embedding in a way that preserves visual quality\n* Demonstrates robustness to various 3D and 2D distortions/attacks\n\n* The decoder is trained per scene, rather than being a generalizable decoder. This makes the watermarking process essentially impractical for real-world use. It's not feasible for people to store a separate watermark encoder and decoder for each scene for the vast number of Gaussians distributed across the internet. Reflecting on the logic of image watermarking, a single watermark encoder and decoder can encode and decode information for any cover image, so the sender and receiver only need to jointly possess one watermark decoder. This is a more reasonable setup.\n* Experiments focus mostly on relatively simple scenes - more complex, dynamic scenes could be challenging\n* The robustness to more sophisticated attacks (e.g. adversarial perturbations) is not explored\n* Discussion of potential negative impacts of the technology could be expanded\n\nMy primary concern with this paper stems from a fundamental physical challenge: How can a digital watermark be embedded into the 3D Gaussian Splatting (3DGS) representation of a scene in such a way that it can be reliably decoded from any viewing direction?\n\nThe volume rendering process that converts 3DGS representations into 2D images is designed to produce geometrically consistent views based on the camera pose. However, the requirement to embed and extract a watermark from arbitrary viewpoints seems to conflict with this underlying principle.\n\nOne potential resolution to this contradiction could be as follows: Rather than directly encoding the watermark itself into the 3D representation, the method might embed a geometrically consistent signal that can be detected by a trained network D. This signal could then trigger the generation or retrieval of the actual watermark (be it an image, audio, or text), which has been memorized by the detector D during the training process.\n\nThis hypothesis aligns with the paper's description of F as a detector/classifier rather than a decoder. 
It also explains the need for a separate classification module to guide whether the detector should produce the stored watermark data.\n\nHowever, this interpretation raises several questions:\n\n* How does the method ensure that the embedded signal remains detectable across different viewing angles and rendering conditions?\nWhat is the information capacity of this approach, and how does it compare to traditional digital watermarking techniques?\n* How robust is the embedded signal to various forms of 3D transformations or edits to the 3DGS model?\n* Is there a trade-off between the strength of the embedded signal and the visual quality of the rendered images?"
},
{
"confidence": 5,
"rating": 5,
"review_id": "OPfQQsNvjO",
"review_text": "3D Gaussian Splatting(3DGS) has gradually become the mainstream method for acquiring 3D assets, which has led to a demand for copyright protection of 3DGS. In this paper, a watermarking method based on uncertainty called GaussianMarker is proposed. Firstly, 3DGS is partitioned based on uncertainty, and the watermark is only added to the model parameters with high uncertainty. Subsequently, the corresponding parameters are perturbed using both 2D and 3D watermark encoders, enabling the extraction of watermark information from rendered 2D images as well as directly from 3D model parameters. Experimental results demonstrate the robustness of the proposed GaussianMarker method against 2D and 3D distortions.\n\n1. The paper proposes a method that utilizes uncertainty to partition 3D Gaussian. By embedding watermarks specifically in the parameters with high uncertainty, the method aims to mitigate the impact on the quality of the model.\n\n2. The paper considers the extraction of watermarks in both 2D and 3D scenarios, taking into account the robustness of watermark extraction in these two contexts.\n\n1. The paper mentions that the calculation of uncertainty is related to the model parameters, and in 3D Gaussian, each point has multiple parameters such as $\\mu, R, S, c, and \\alpha$. It would be helpful if the authors could clarify which specific parameters are used in the proposed method. Additionally, the paper provides a formula for calculating model uncertainty, but it is unclear how the uncertainty of each Gaussian is computed and used for partitioning. The authors should provide further explanation or clarification on this matter.\n2. The description of the densify function $g(\\cdot)$ in the paper states that it randomly samples a new position from a distribution. According to my understanding, the original Gaussian $G_i$ should have been replaced. However, Figure 2 shows that the original Gaussian $G_i$ still exists, which is confusing to me.\n3. During the watermark embedding process, it is unclear whether the 2D and 3D watermarks are embedded into the same model parameters. It would be helpful if the authors could clarify which specific model parameters of the 3D Gaussian are used for embedding the watermarks.\n4. In the section on \"Distilling watermarking knowledge,\" the authors mention that \"the pre-trained feature from 2D space can be distilled to the 3D space.\" It is important for the authors to provide an explanation of how this is achieved.\n\n1. In the experimental section, the authors present four baseline methods. How do 3DGS with message and 3DGS with fine-tuning extract messages. \n2. Four types of 3D editing methods are listed in the experiment, which parameters of 3DGS are affected by these distortions?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "Mpry2oBihw",
"review_text": "The paper presents a new method for embedding digital watermarks in 3D Gaussian Splatting (3DGS) models to protect the copyright of 3D assets. Traditional watermarking techniques for mesh, point cloud, and implicit radiance fields are not suitable for 3DGS, as they can cause distortions in rendered images. The authors propose an uncertainty-based approach that constrains perturbations to the model parameters, ensuring that watermarks remain invisible while preserving visual quality. The method allows for reliable extraction of copyright messages from both 3D Gaussians and 2D rendered images, even under various distortions.\n\n1. The proposed method ensures that the embedded watermarks do not cause significant distortions in the rendered 3D scenes or 2D images, maintaining the visual quality of the assets.\n2. The approach is designed to be robust against various forms of 3D and 2D distortions, such as noise, translation, rotation, cropping, JPEG compression, scaling, and blurring. This enhances the reliability of copyright protection.\n3. The method allows for the extraction of copyright messages from both 3D Gaussian parameters and 2D rendered images, providing multiple layers of security and verification.\n4. Extensive experiments demonstrate that the method achieves state-of-the-art performance in both message decoding accuracy and view synthesis quality.\n\nThe malicious scenarios considered are limited to traditional distortions. \\\nMore sophisticated scenarios should also be explored. \\\nFor instance, a malicious actor could fine-tune the downloaded 3DGS or use an auto-encoder to remove embedded information ([1],[2],[3]). \\\nIn such cases, how would the proposed method perform?\n\nAdditionally, a more complex scenario to consider is when a malicious actor renders Bob's 3DGS and uses it as training data to create their own 3DGS. \\\nHow would the proposed method address these advanced threats?\n\n[1] Fernandez et al., The Stable Signature: Rooting Watermarks in Latent Diffusion Models \\\n[2] Kim et al., WOUAF: Weight Modulation for User Attribution and Fingerprinting in Text-to-Image Diffusion Models \\\n[3] Zhao et al., Invisible Image Watermarks Are Provably Removable Using Generative AI\n\nPlease refer weakness."
},
{
"confidence": 4,
"rating": 5,
"review_id": "V0QxqefuEZ",
"review_text": "This paper proposes an uncertainty-based method to achieve watermarking for 3D Gaussian Splatting. Specifically, the Hessian matrix is used to estimate the parameter uncertainty. Then, the 3D Gaussians with high uncertainty are densified. The densified 3D Gaussians are trained to embed watermarking using a pre-trained 2D message decoder. After that, a 3D message decoder is trained using PointNet. Experimental results show that the proposed method achieves the best performance.\n\n1. This paper is well-written and easy to follow. \n\n2. The experimental results show that the proposed method achieves new SOTA results. \n\n3. The proposed method can decode watermarking both in 2D rendered images and 3D assets. \n\n4. An uncertainty-based method is proposed to select trainable 3D Gaussians, which is reasonable.\n\n1. One concern about this paper is its novelty. The major contribution of this paper is the introduction of uncertainty into 3D Gaussians watermarking. As the definition of uncertainty using Fisher Information comes from [42], simply using uncertainty for 3D Gaussians watermarking is quite simple and straightforward. Regarding the message decoders, they are all standard operations. HiDDeN [11] is used for the 2D message decoder, and PointNet [43] is used for the 3D message decoder. Therefore, the major contribution of the proposed method should be further justified.\n\n2. The proposed method utilizes the 3D Gaussians with high uncertainty to embed watermarking. What if an attacker also uses this feature? The attacker could first identify the 3D Gaussians (after training/fine-tuning) with high uncertainty and then only attack these 3D Gaussians using techniques such as Noise, Translation, Rotation, or Cropout. Additionally, the attacker might delete some of the identified 3D Gaussians to compromise the 3DGS assets.\n\n3. The influence of the parameter uncertainty threshold should be included in the experiments to assess the sensitivity of the uncertainty threshold on the proposed method. \n\n4. The results with different bit lengths are missing.\n\nSee Weakness."
}
] |
wcX04Wn34u | LiT: Unifying LiDAR "Languages" with LiDAR Translator | LiDAR data exhibits significant domain gaps due to variations in sensors, vehicles, and driving environments, creating “language barriers” that limit the effective use of data across domains and the scalability of LiDAR perception models. To address these challenges, we introduce the LiDAR Translator (LiT), a framework that directly translates LiDAR data across domains, enabling both cross-domain adaptation and multi-domain joint learning. LiT integrates three key components: a scene modeling module for precise foreground and background reconstruction, a LiDAR modeling module that models LiDAR rays statistically and simulates ray-drop, and a fast, hardware-accelerated ray casting engine. LiT enables state-of-the-art zero-shot and unified domain detection across diverse LiDAR datasets, marking a step toward data-driven domain unification for autonomous driving systems. Source code and demos are available at: https://yxlao.github.io/lit. | https://openreview.net/pdf/b9f11e717fae53a1228a5b9c208bb323f8080693.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "TT7NGOSCFH",
"review_text": "In this paper, the authors propose a method to help alleviate the domain gaps among different datasets with different LiDAR sensors, which can enable zero-shot detection on a new dataset. The proposed method including Scene Modeling for foreground and background reconstruction and LiDAR Modeling with statistical and ray-drop modeling. Another contribution is that the authors also accelerate the ray casting algorithm using GPU. The authors conducted single-domain and Multi-domain unification experiments on Waymo, nuScenes, and KITTI datasets, which achieves SOTA performance compared to previous works. The authors also provide ablation studies on foreground diversity and LiDAR noise injection. In addition, the authors show the run time performance after the GPU acceleration.\n\nOriginality: The foreground and background reconstruction and LiDAR Modeling and the statistical and ray-drop modeling in LiDAR Modeling make the paper differ from previous works.\nQuality: The code is provided. The performance is evaluated on multiple datasets, and achieves SOTA performance compared to previous works, and ablation studies are good. The GPU acceleration is also good.\nClarity: The images in the paper are clear and easy to understand.\nSignificance: The paper demonstrates the potential of zero-shot detection on a new dataset by 3D reconstruction from multiple different dataset and LiDAR settings and LiDAR simulation.\n\nIn the title of the paper, the use of terms such as \"Language,\" \"Translator,\" and \"LiT\" appears to be capitalizing on the popularity of the trending terms \"LLM\", \"ViT\", and \"DiT\", potentially misleading readers.\nSECOND and PV-RCNN are relatively old detection models, it's better to have experiments on more recent models such as CenterPoint, and other SOTA models to further demonstrate the effect of domain unification on SOTA models and even achieve new SOTA results. This would significantly enhance the paper's persuasiveness and impact.\n\nIn the multi-domain unified training experiments, have you considered including comparisons with other non-reconstruction semi-supervised learning methods? (like pseudo-labels and so on)"
},
{
"confidence": 5,
"rating": 7,
"review_id": "aV9bCiR4sF",
"review_text": "To address the significant gap between different LiDAR datasets (related to sensors, environments, etc.), this paper proposes a solution that differs from the existing model-based adaptation approach. By employing a scene-reconstruction-data-simulation approach, it achieves consistent representation of different LiDAR datasets. This data-driven method partially resolves issues such as domain shift in autonomous-driving-related 3D point cloud learning.\n\n- Innovatively analogizing the domain gap between different LiDAR data to that between languages, this paper proposes a data-driven cross-sensor training method from a \"translation\" perspective.\n\n- The proposed method shows good performance across different datasets, especially in terms of the AP3D metric.\n\n- The paper is well-written with clear logic and comprehensive experiments.\n\n- Does \"foreground\" only refer to vehicles? Do pedestrians, bicycles, and similar entities fall into this category?\n\n- Similarly, in background reconstruction, is consideration limited to rigid bodies like the ground? In autonomous driving scenarios, is there no need to consider non-rigid objects such as vegetation?\n\n- In the current version, it seems that scene variations are not significant. Does this mean it's difficult to address zero-shot scenarios? For instance, if the source data are all from residential areas, is it challenging to accurately simulate point clouds from downtown areas?\n\n- How does the modeling accuracy of different foreground/background components affect the results of this paper?\n\n- Since the background is static, can it be replaced by other data sources? For example, historical high-precision drone point clouds or three-dimensional maps from scene reconstruction?"
},
{
"confidence": 5,
"rating": 5,
"review_id": "k6lvGU4CfN",
"review_text": "This paper proposed a unifying LiDAR Translator named LiT to achieve LiDAR domain adaptation. Differing from current model-driven approaches, LiT adopts a novel data-driven approach, embedding disparate LiDAR attributes into a common representation. LiT\nproposes a generalizable scene modeling and LiDAR statistical modeling. Besides, an efficient ray-casting engine is proposed to accelerate the above models. LiT also achieves efficient SoTA performance on several LiDAR datasets.\n\nS1. LiT adopts a novel data-driven approach instead of the classical model-driven approach, embedding disparate LiDAR attributes into a common representation. This research direction provides much value for real-world applications in autonomous driving industries. \n\nS2. An effective ray-casting engine is proposed to accelerate LiT on GPUs.\n\nS3. Experiments on widely used datasets demonstrate the SOTA performance of LiT.\n\nW1. This work looks like a data normalization operation, only modifying different datasets into a unified representation.\n\nW2. The authors argue that model-driven approaches will introduce considerable costs associated with customizing model structure and training data for new, specific domains. However, this work has an extra LiDAR statistical modeling, this operation also causes additional costs.\n\nW3. Table 7 shows that LiT may not avoid the problem of model-driven approaches, that is, requiring different configurations for distinct datasets.\n\nQ1. The authors argue that model-driven approaches need extra training for new domains, while the proposed LiT also needs extra LiDAR statistical modeling. Can the authors provide a detailed comparison to prove that data-driven approaches are significantly better than traditional model-driven ones?\n\nQ2. Since datasets will be unified into a common representation, why LiT needs different training hyperparameters for distinct domain adaptation tasks, as shown in Table 7? It seemed to contradict the original motivation of this paper, i.e., unifying different types of LiDAR sensors."
},
{
"confidence": 4,
"rating": 5,
"review_id": "jUByKun8dg",
"review_text": "The paper presents a novel framework designed to unify LiDAR data into a single target “language” and unified domain detection capabilities across diverse LiDAR datasets, marking a step toward domain unification for LiDAR-based autonomous driving systems. Experiments on dataset KITTI, Waymo, and nuScenes demonstrate the superiority of the proposed method in the task of single-source and multi-sources domain adaptation.\n\n1.\tThe paper is novel in introducing LiDAR Translator (LiT) to joint training across multiple datasets. LiT enables efficient state-of-the-art zero-shot and unified domain detection capabilities across diverse LiDAR datasets. \n2.\tThe paper is well-written and easy to follow, especially the part explaining the background. \n3.\tIt presents good experimental results and intuitive visualizations, convincingly demonstrating its effectiveness.\n\n1.\tThe motivation of this paper is not clear. If it is possible to accurately model the target domain data, why is there a need to translate the source domain data into the target domain data?\n2.\tAs the core component of this work, the translator requires more direct experimental validation, such as measuring the distributional differences between the translated data and the target data, rather than solely relying on verification through downstream domain adaptation tasks.\n3.\tIt lacks of comparative experiments with the latest state-of-the-art methods.\n\n1.\tHow can we ensure the accuracy of modeling the target data? Will the differences between simulated data and real data have negative impacts?\n2.\tModeling the target LiDAR data and target scenes requires a lot of prior information. When this information is unknown, how can we use the method proposed in this paper to solve the domain adaptation problem? From my understanding, the objective of domain adaptation is to address the challenge of having limited data or labels in the target domain."
}
] |
wblxm5zdkE | Real-Time Selection Under General Constraints via Predictive Inference | Real-time decision-making gets more attention in the big data era. Here, we consider the problem of sample selection in the online setting, where one encounters a possibly infinite sequence of individuals collected over time with covariate information available. The goal is to select samples of interest that are characterized by their unobserved responses until the user-specified stopping time. We derive a new decision rule that enables us to find more preferable samples that meet practical requirements by simultaneously controlling two types of general constraints: individual and interactive constraints, which include the widely utilized False Selection Rate (FSR), cost limitations, and diversity of selected samples. The key elements of our approach involve quantifying the uncertainty of response predictions via predictive inference and addressing individual and interactive constraints in a sequential manner. Theoretical and numerical results demonstrate the effectiveness of the proposed method in controlling both individual and interactive constraints. | https://openreview.net/pdf/0de67725a73ec05f810a39fe0a220b3ca19c7aa0.pdf | [
{
"confidence": 1,
"rating": 6,
"review_id": "cUZevzZyh1",
"review_text": "The paper proposes a method for online sample selection. The authors introduce the concepts of _individual_ and _interactive constraints_, and demonstrate theoretically and empirically that their method satisfies both.\n\nThe problem seems important and the formulation and approach novel. The authors provide both theoretical guarantees and empirical evidence, in both synthetic and real-world applications, of the effectiveness of their approach. The mathematical formulation seems sound, and the assumptions and theoretical results are clearly stated.\n\nI am not familiar with the FDR control literature, and had to read parts of the paper (specifically sections 2.2, 2.3 and 4) multiple times to get a gist for the logic of the method and its empirical perfomance. This is reflected in my confidence score. If the paper is accepted, I highly recommend the authors revise the paper to make it easier to follow. A flowchart to illustrate the steps of the algorithm and/or to illustrate the differences between the Oracle and Data-driven selection procedures may be helpful; a toy example could also help. I also suggest including a longer description and/or table of the benchmark methods against which the empirical performance of II-COS was compared.\n\n- Interactive constraint: Why is this defined only with respect to the correctly selected samples? In the case of, e.g., the diversity of selected samples, is the constraint not intended to represent the diversity of all selections?\n- Line 159: Should $R_t = \\left( \\sum_{i \\leq t} \\delta_i \\right) \\lor 1$?"
},
{
"confidence": 4,
"rating": 8,
"review_id": "8rr4Y83nIq",
"review_text": "In this paper, the authors quantify the uncertainty of response predictions using predictive inference; and systematically addressing individual and interactive constraints in a sequential manner. An online selection rule is developed to ensure the above two types of constraints are under control at pre-specified levels simultaneously. Simulated and real-data examples are used to evaluate and illustrate their approach in terms of both online individual and interactive criteria control.\n\nThis is a nicely, clearly written paper that develops an online selection rule that is simple yet effective. The simplicity of the online selection rule will enhance the potential for this rule to be used in real life. The authors’ claims are well supported both via theory, simulations and application to real data. The paper along with the appendix provides detailed information that allow for replicability. Very nice! I particularly appreciate the comparison with the approaches based on conformal p-values.\n\nSee questions below.\n\nSection 4.1 would be good to tell reader how many replications are used."
},
{
"confidence": 4,
"rating": 4,
"review_id": "JNANAXMwOF",
"review_text": "The paper studies online sample selection with individual and interaction \nconstraints simultaneously. Specifically, the goal is to control (variants) of \nthe false selection rate (FDR) and the expected similarity (ES) under the \nempirical bayes framework. Under distributional conditions, the proposed method \ncontrols the target quantities asymptotically. The method is evaluated on synthetic \nand real data.\n\n1. The paper is well-presented and easy to follow.\n2. The problem under consideration is of interest and relevant.\n\nPlease see my comments in the questions section.\n\n1. **The model.** In the motivating examples such as candidate selection, it appears to me \nthat we get to observe the ground truth, i.e., $Y_t$, after time step $t$.\nThis is briefly mentioned in the discussion section, but I think it is reasonable \nto use the observed $Y_t$'s to update the estimation.\n\n2. **The choice of interaction constraints.** It is not well-motivated why \nthe changing ES to (4) is reasonable. As illustrated in the simulation, \nalthough these two quantities seem to coincide as the time step goes \nto infinity, they do differ quite a lot with smaller time steps (this could happen \nwhen $m$ is small).\n\n3. What is the principle for choosing K in general? In particular, for the real data example, \nwhy is K taken to be $1\\times 10^{-3}$ in the fist example and $6\\times 10^{-3}$ in the second?\n\n4. In the numerical experiments, a fairer comparison with offline CP would be with [1], i.e., \nthresholding the p-values with BH-adjusted p-values as opposed to the fixed threshold. \n\n[1] Jin, Ying, and Emmanuel J. Candès. \"Selection by prediction with conformal p-values.\" Journal of Machine Learning Research 24.244 (2023): 1-41."
},
{
"confidence": 4,
"rating": 7,
"review_id": "Tm08iV5Net",
"review_text": "This paper introduces a framework to perform online sample selection such that the unseen outcomes are in specific target range while also optimizing for constraints like diversity that are dependent on the input covariates. The additional constraint involving input covariates can help ensure properties like the diversity of candidates when selecting individuals for interviews while also guaranteeing that most of the interviewed individuals accept the offer. The paper proposes a data-driven procedure to select the subset of candidates in an online fashion by implementing the proposed algorithm. Under reasonably weak assumptions, the paper provides theoretical guarantees on satisfying both the above constraints in online sample selection. The experiments confirm that this framework ensures low false selection rates (i.e. unseen outcomes are in a specific target range) while optimizing for the additional covariate-dependent constraints like diversity on synthetic and real data.\n\nThe paper proposes an intuitive way to incorporate covariate-dependent constraints like the diversity of candidates when performing online sample selection to optimize for metrics like false selection rates. This paper solves an important problem in online sample selection and demonstrates that the proposed method improves the covariate-dependent objective while maintaining comparable performance on false selection rates.\n\nIt would be interesting to understand the gaps between an ideal diversity profile and the profile obtained by the proposed method in Fig 3. Analysing the gap w.r.t changing g(X_i, X_j) function choice could be helpful. Would it be helpful to increase the weight of the g(X_i, X_j) term to reduce this gap and understand its implications on the satisfaction of individual constraints?\n\nIt is evident that the SAST baseline outperforms the proposed method in terms of FSR sometimes, which is understandable given there; 's a tradeoff with the interactive constraints (Table 2b, Fig 1). It would be helpful to learn if we can reduce the gap between SAST and the proposed method by balancing the tradeoff (perhaps using a tunable hyperparameter that balances the two constraints?).\n\nSee weaknesses"
}
] |
wbE0QCBWji | Constructing Semantics-Aware Adversarial Examples with a Probabilistic Perspective | We propose a probabilistic perspective on adversarial examples, allowing us to embed subjective understanding of semantics as a distribution into the process of generating adversarial examples, in a principled manner. Despite significant pixel-level modifications compared to traditional adversarial attacks, our method preserves the overall semantics of the image, making the changes difficult for humans to detect. This extensive pixel-level modification enhances our method's ability to deceive classifiers designed to defend against adversarial attacks. Our empirical findings indicate that the proposed methods achieve higher success rates in circumventing adversarial defense mechanisms, while remaining difficult for human observers to detect. | https://openreview.net/pdf/ece2805ee40a3e53c2a7cddbb3f60d6c1d2e1619.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "sPgB7MCQM1",
"review_text": "This paper tackles the field of adversarial image generation by proposing an unrestricted attack method that can be applied to both targeted and untargeted attacks. The innovative approach considers a probabilistic perspective, treating the victim classifier and geometric constraints as distinct distributions. By drawing adversarial examples from the overlap region, the authors ensure that the semantics of the original image are preserved. The efficacy of this proposed approach is convincingly demonstrated through extensive experiments.\n\n1. I find the probabilistic approach proposed in this paper to be particularly innovative and refreshing. The motivation behind this perspective is clearly articulated, providing a solid foundation for the authors' methodology.\n2. I am impressed by the encouraging experimental results presented in this paper. The inclusion of a human annotation experiment is particularly noteworthy, as it adds an important layer of validation to the authors' claims. Moreover, the study's success in handling both transfer attacks and adversarial defense scenarios further underscores the model's robustness and effectiveness.\n\nWhile the experimental results of the proposed method show promise, I do believe there is room for improvement. Specifically, I think it would be beneficial for the authors to provide more detailed information regarding the human experiment methodology, such as how the five reviewers for the MNIST experiment were selected. Furthermore, I would suggest that the authors consider conducting a follow-up experiment where human annotators are asked to identify perturbed images in the absence of a reference image for the ImageNet experiment, which is a more realistic scenario in an attack setting.\n\nIn addition, I find it intriguing that both NCF and cAdv demonstrated higher success rates in generating adversarial examples compared to the proposed method, as shown in Table 2. This highlights some shortcoming in the proposed approach. While it is expected that NCF would generate images that can be identified as perturbed, I am more surprised that cAdv was able to create perturbed images that are highly similar to the original ones.\n\nLastly, I think it would be beneficial for the authors to explore targeted attacks on ImageNet, given the success of this approach in previous papers such as \"Towards Transferable Targeted Attack\". This could provide valuable insights into the robustness and effectiveness of the proposed method.\n\n1. How were the values of c chosen in Table 2?\n2. What are the author's thoughts on the tradeoff between choosing different values of c?\n3. How were the human annotators selected?\n4. How was the set \\tau chosen?"
},
{
"confidence": 4,
"rating": 7,
"review_id": "QACxlBXW6C",
"review_text": "This paper proposes a new type of adversarial attack, which generates adversarial examples by solving a box-constrained non-convex optimization problem. Different from the traditional norm-bounded attacks, this paper focuses on unrestricted adversarial attacks by replacing the geometrical distance measure with a semantically-aware distance measure. Specifically, The authors propose using a Langevin Monte Carlo (LMC) technique to sample adversarial examples from a probabilistic distribution. To preserve semantics, the authors use a learned energy function to guide the generation of adversarial samples. Following this, rejection sampling and refinement techniques are employed to select and improve the quality of the generated samples. Experiments show that this attack can fool classifiers while preserving the semantic information compared to baseline methods.\n\n1. This paper introduces an interesting perspective on generating adversarial examples, which is significantly different from the traditional norm-bounded adversarial attacks. \n2. This paper is theoretically sound and the proposed solution is very intuitive.\n3. It is suprising that the proposed attack can achieve a 100% success rate on an adversarially trained model. Adversarial training is often regarded as a SOTA defense method. Therefore, in my view, this work can motivate researchers in this area to design better defense methods. \n4. The proposed method can either outperform baseline methods by a notable margin or significantly improve the quality of the generated adversarial examples in terms of preserving semantic meanings.\n\n1. Selecting 20 images from each class in the MNIST test set seems to be too little. I understand that it might be infeasible for human annotators to annotate all adversarial images for the entire MNIST, so I would encourage authors to report the success rate except for human annotations using the entire MNIST. I believe this will make the results more convincing.\n2. This paper is missing ablation studies for rejection sampling and sample refinement techniques. Is it necessary to include these techniques? How would it affect the attack success rate if one of them is removed?\n3. This paper proposes a new attack method but lacks a discussion on how to defend against it. Although it is not compulsory, I am more willing to see how to defend this attack. Can you provide some intuitions on it?\n4. Standard deviations are not reported in this paper. Repeated experiments are encouraged.\n\nPlease refer to the **Weaknesses**."
},
{
"confidence": 2,
"rating": 6,
"review_id": "MifgUS5zcT",
"review_text": "This paper introduces a probabilistic framework for generating adversarial examples, focusing on maintaining the semantic integrity of the original images while implementing substantial pixel-level modifications. Unlike conventional adversarial techniques that rely heavily on minimal geometric perturbations, this approach integrates a semantic understanding into the adversarial example generation process, leveraging energy-based models and diffusion models. The core innovation lies in embedding the semantic interpretation as a probabilistic distribution, which guides the adversarial example generation. This allows for effective deception of classifiers, including those equipped with adversarial defenses, while preserving the semantic content to an extent that remains imperceptible to human observers. Empirical evaluations demonstrate that the proposed method outperforms existing techniques in terms of both effectiveness against defenses and undetectability by humans, establishing a new paradigm for constructing robust and stealthy adversarial attacks.\n\n1. The paper is clear and well-written, effectively highlighting its contributions with accessible explanations of complex ideas.\n\n2. This paper presents a new probabilistic framework for generating adversarial examples that goes beyond traditional norm-bounded methods by integrating semantic distributions. The approach is theoretically robust, with the theoretical analysis providing a solid foundation that supports the model's effectiveness and introduces innovative concepts to the field of adversarial machine learning.\n\n3. The proposed method significantly outperforms baseline methods, particularly in preserving their semantic integrity.\n\n1. The assumption in Equation 4 lacks a detailed derivation, leaving it unclear whether $x_{\\text{ori}}$ and $y_{\\text{tar}}$ need to be independent. Providing a clear derivation and clarifying this assumption would enhance the theoretical rigor of the paper.\n\n2. The training process for the diffusion models is sensitive and requires careful parameter tuning. The paper does not provide enough detail on this sensitivity or potential solutions to mitigate training instability, which impacts the robustness and reproducibility of the method.\n\n3. The paper does not report standard deviations in the performance results. Repeating the experiments is recommended to ensure the reliability and consistency of the findings.\n\nPlease address the aforementioned concerns and questions."
}
] |
waQ5X4qc3W | Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective | Latent-based image generative models, such as Latent Diffusion Models (LDMs) and Mask Image Models (MIMs), have achieved notable success in image generation tasks. These models typically leverage reconstructive autoencoders like VQGAN or VAE to encode pixels into a more compact latent space and learn the data distribution in the latent space instead of directly from pixels. However, this practice raises a pertinent question: Is it truly the optimal choice? In response, we begin with an intriguing observation: despite sharing the same latent space, autoregressive models significantly lag behind LDMs and MIMs in image generation. This finding contrasts sharply with the field of NLP, where the autoregressive model GPT has established a commanding presence. To address this discrepancy, we introduce a unified perspective on the relationship between latent space and generative models, emphasizing the stability of latent space in image generative modeling. Furthermore, we propose a simple but effective discrete image tokenizer to stabilize the latent space for image generative modeling by applying K-Means on the latent features of self-supervised learning models. Experimental results show that image autoregressive modeling with our tokenizer (DiGIT) benefits both image understanding and image generation with the next token prediction principle, which is inherently straightforward for GPT models but challenging for other generative models. Remarkably, for the first time, a GPT-style autoregressive model for images outperforms LDMs, which also exhibits substantial improvement akin to GPT when scaling up model size. Our findings underscore the potential of an optimized latent space and the integration of discrete tokenization in advancing the capabilities of image generative models. The code is available at \url{https://github.com/DAMO-NLP-SG/DiGIT}. | https://openreview.net/pdf/fc5f5e9cc6e5aa9d1caa81081a8fd3e0b1c5f218.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "IHelEQhpV9",
"review_text": "This paper designs a better quantized autoencoder on top of VQGAN. It builds an image autoencoder which is able to both achieve good recognition performance for linear probing, and have a latent space which is suitable for training a generative model. It studies the existing autoencoders from a high-level theoretical perspective and proposes design ideas which are targeted to improve them. The paper claims that the modern autoencoders ignore the fact that they will be utilized for downstream generative modelling tasks and mainly focus on reconstruction. The paper argues that adding recognition losses on top of the encoder would help. To fulfill this desiderata, the model takes DINOv2 features and discretizes them via k-means. Then it trains a translation model into VQ-GAN decoder features. For image generation, it trains a LLM in the discrete token space. For classification, it does linear probing on top of discretized DINOv2 features. As a result, it attains reasonable generative capabilities while being able to keep a latent space suitable for linear probing classification.\n\n- In terms of the scores, the paper achieves very good results in the sense of discrimination/generation tradeoff (judging by figure 1).\n- It's an interesting finding that one can discretize dino-v2 via K-means and train a strong generative model on top of such tokens.\n- The paper studies an important problem of more rigorous understanding of modern autoencoders\n\n- The paper shows an equivalence between a linear AE and PCA, but it's a well known fact: https://arxiv.org/abs/1804.10253. One can also just google \"equivalence between a linear autoencoder and PCA\", and find a ton of resources on that.\n- \"A reconstructive autoencoder does not necessarily establish an advantageous latent space for generative model\". That's a very well-known fact in the community (e.g., see Fig 5 in https://arxiv.org/pdf/2312.02116). The paper should not claim this observation as a novel contribution.\n- The proposed stability metric is interesting, but it's unclear whether it will correlate with downstream generative models performance\n- Proposition 2.4 is extremely vague and seems to be very different from its \"rigorous\" analogue in the supplementary.\n- FID metrics for VQGAN on ImageNet are much higher than in the original paper.\n- It's delusive to compare performance of the developed model vs those trained from scratch, since the developed model starts from strong pre-trained models.\n- For image generation, the paper shows just 16 random samples, which is extremely little to get a good understanding of the model. It's better to show much more (e.g. stylegan2 provides 100k samples for FFHQ: https://github.com/NVlabs/stylegan2).\n- Why DiT-XL/2 is included for comparison but not its guided variants? Why more recent papers are not included for comparison? (e.g., EDMv2).\n- The logical transitions in the paper are unclear, e.g., it's unclear why the proposed training improves D^G, it's unclear, why it follows from the propositions that we should improve the stability of the latent space (where stability is also not defined well), etc.\n\n- The transition \"Drawing from these theoretical insights, we introduce a metric to assess the stability of the latent space induced by different encoder models\" on L196 is extremely unclear. How exactly do theoretical results suggest that one should focus on stability of the latent space? Why would LDA lead to a better generative model? 
Why \"separating\" distributions class-wise would lead to a better generative model? What exactly do you mathematically define \"separation of distributions\" for an encoder?\n- Is linear probing done on top of discretized or non-discretized DINOv2 features?"
},
{
"confidence": 5,
"rating": 3,
"review_id": "w4ygBjMa3l",
"review_text": "Latent-based image generative models, such as LDMs and MIMs, have achieved success, but autoregressive models lag behind in image generation. Our research introduces a unified perspective on latent space stability and proposes a discrete image tokenizer, DiGIT, that significantly improves autoregressive image modeling, outperforming LDMs and benefiting from scaling similar to GPT in NLP.\n\n- The results beat some baseline models, though under a specific (and somewhat confused) experimental setting.\n- The topic of latent space property is worth investigating.\n\nThe paper has several weaknesses:\n\n1. **Factual Errors**:\n\n 1.1. The cited MIM models, such as MaskGIT and MAGE, cannot revise previously predicted tokens. This contradicts the claim in line 53 that \"iterative models like LDMs and MIMs can correct errors.\" I recommend the authors to their papers for more details.\n\n 1.2. In lines 72-73, the authors state that this work provides \"the first evidence that image autoregressive generative models behave analogously to GPT.\" However, the Parti[1] paper has already demonstrated that image autoregressive models have similar scalability to GPT and successfully scaled the model to 20B. The authors have not cited this work.\n\n2. The writing is poor and lacks rigor. For example, the discussion on the so-called \"common misconception\" in line 41 is not well-supported. What exactly is meant by the \"optimal latent space for reconstruction\"? How many studies hold this view? There are no citations provided.\n\n3. The quantitative comparisons are also peculiar. The authors cite many paper results without using CFG, while CFG has become a de-facto practice for augmenting generative models. Why not adopt CFG and perform more apples-to-apples comparisons to other SOTA methods with CFG?\n\n4. Presenting two tables (Table 2 lower and Table 3) for image generation performance is confusing. Why not consolidate the results into a single, clear table?\n\n[1] Yu, Jiahui, et al. \"Scaling autoregressive models for content-rich text-to-image generation.\" arXiv preprint arXiv:2206.10789 2.3 (2022): 5.\n\nSee above."
},
{
"confidence": 3,
"rating": 7,
"review_id": "yeECdhlS7O",
"review_text": "This paper tries to understand why latent autoregressive image models perform worse than latent diffusion models. The key insight is that existing tokenizers are trained primarily with the reconstruction objective, whose latent space is unstable and thus may not be easy to model autoregressively. To solve this issue, the authors propose first learning a stable latent space, which autoregressive models can model easily, and then learning to reconstruct pixels from this latent space. Experimental results show that this modification enables latent autoregressive image models to match latent diffusion models' performance in terms of image understanding and image generation.\n\n1. The paper proposed a new perspective—latent space stability—on understanding latent autoregressive image models, which was neglected in previous works. I think this explanation is intuitive since a fixed-depth autoregressive model may not be able to model very noisy distributions (e.g., the language data have high regularity)\n2. The proposed solution is straightforward -- just let image features with similar semantics share the same token.\n3. The experiments are comprehensive and interesting. Both image understanding and image generation are evaluated; improvements over previous latent autoregressive models are significant. The ablation study also makes sense to me.\n\n1. I think there is a tension between how stable the latent space is and how easily we can reconstruct the latent codes to pixels. The impact of the proposed method on reconstruction is not elaborated in this paper. For example, if we only care about reconstruction, how badly does the proposed method perform? This matters greatly if we are modeling high-resolution images and care about the visual details.\n2. The theoretical analysis and the proposed algorithm seem loosely connected to me -- I don't see the proposed algorithm as a direct result of the theoretical analysis. The stability analysis is more straightforward, though.\n\nHow negatively does the proposed method impact reconstruction?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "MtcyxOSUF7",
"review_text": "The paper propose to disentangle the encoder and decoder learning for image tokenzier which ultimately will be used for providing the latent space of AR generative model. In particular, SSL model such as DinoV2 is used for encoder (plus k-means clustering).\n\n1. The idea of disentangling the encoder and decoder learning for image tokenizier is interesting and novel.\n\n2. Strong empirical results can be obtained from the method. The fact that by changing a tokenizer and training the same AR model, FID can be halved to half is really impressive.\n\n1. The motivation for adopting self-supervised model as encoder/tokenizer is not very clear. Since the method is easy (DinoV2 + kmeans), the motivation of why doing so is the most critical part of the paper. However, I don't think this is presented very clearly and explicitly. Large improvements of the presentation is needed. \n\n2. The term \"stability\" or \"\"stablize\" is a bit confusing. Explicit explanation is needed. When is a latent space not stable? If it means hard to learn an AR model, probably a better term such as learnability is better. \n\n3. While the argument of \"iterative decoding process can stabilize the sampling process by correcting the data falling in the low-density overlap between distributions\" makes sense, it still requires justification and evidence, not just conceptual analysis.\n\n4. If you use SSL model as encoder, you need to train a decoder. Not much explicit detail is presented for this part.\n\n5. The metric is not very clearly defined. What's the name of the metric? What is the definition? How to compute it? All these information should be highlighted.\n\nOverall the presentation and organization is not very clear, some major rewrite is needed.\n\n1. In section 2.2, you mentioned that a drawback of auto-regressive image modeling is that each iteration only generate a patch so error in the previous generated patch will accumulate. How is this related to your method? IIUC, your tokenizer is still patch based, so it does not resolve the issue mentioned here."
}
] |
wZigMVFURk | RoPINN: Region Optimized Physics-Informed Neural Networks | Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs) by enforcing outputs and gradients of deep models to satisfy target equations. Due to the limitation of numerical computation, PINNs are conventionally optimized on finite selected points. However, since PDEs are usually defined on continuous domains, solely optimizing models on scattered points may be insufficient to obtain an accurate solution for the whole domain. To mitigate this inherent deficiency of the default scatter-point optimization, this paper proposes and theoretically studies a new training paradigm as region optimization. Concretely, we propose to extend the optimization process of PINNs from isolated points to their continuous neighborhood regions, which can theoretically decrease the generalization error, especially for hidden high-order constraints of PDEs. A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm, which is implemented by a straightforward but effective Monte Carlo sampling method. By calibrating the sampling process into trust regions, RoPINN finely balances optimization and generalization error. Experimentally, RoPINN consistently boosts the performance of diverse PINNs on a wide range of PDEs without extra backpropagation or gradient calculation. Code is available at this repository: https://github.com/thuml/RoPINN. | https://openreview.net/pdf/88a370c2bd8305f59706ceb1247e978608e578ff.pdf | [
{
"confidence": 3,
"rating": 4,
"review_id": "jxYOsBEN4Q",
"review_text": "The preprint proposes to replace the collocation based PINN loss by a sum of local continuous integrals over regions around the collocation points. These continuous integrals are then again discretized using Monte Carlo integration with a single quadrature pint. The authors furthermore propose to adapt the region size during training using gradient statistics.\n\nThe authors report good empirical performance on a number of benchmark problems.\n\nThe introduction of continuous integrals over regions around the collocation points that subsequentially are discretized by Monte Carlo integration again seems tautological. After all, the loss function in PINNs is already a Monte Carlo discretization of a continuous integral (over the whole computational domain). Furthermore, the analysis that the authors present for the modified loss in equation (5) should not be carried out with the continuous integral over the regions $\\Omega_r$ but with its Monte Carlo approximation. Otherwise, the comparison to the discretized PINN loss is unfair.\n\nI struggle to see why the proposed method should work theoretically. I acknowledge the adaptive nature of the regions for the sampling but struggle to see how this might help to accumulate integration points in regions of, e.g., high residual. A visualization that this, or something along these lines that explains why the proposed method works well, happens would be helpful. Furthermore, I am not convinced that the theoretical analysis presented is meaningful, as it analyzes the integrals over the region as continuous objects. Please comment or clarify misconception."
},
{
"confidence": 4,
"rating": 6,
"review_id": "AQagYK96s9",
"review_text": "This paper extends the optimization process of PINNs from isolated points to their continuous neighborhood regions, which can theoretically decrease the generalization error, especially for hidden high-order constraints of PDEs. A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm, which is implemented by a straightforward but effective Monte Carlo sampling method. By calibrating the sampling process into trust regions, RoPINN finely balances sampling efficiency and generalization error. Experimentally, RoPINN consistently boosts the performance of diverse PINNs on a wide range of PDEs without extra backpropagation or gradient calculation.\n\n1. The idea of extending the optimization process of PINNs from isolated points to their continuous neighborhood regions is novel.\n2. Theoretical results on generalization error, convergence rate and estimation error of sampling are provided. \n3. A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from the region optimization paradigm and associated theoretical results,\n4. RoPINN consistently boosts the performance of diverse PINNs on a wide range of PDEs without extra backpropagation or gradient calculation.\n\n1. It is better to include the main proof idea of theoretical results in the main text.\n2. Although generalization error bound is provided, an intuitive explanation of the reason behind the success of region optimization is desirable. For example, when sampling one point in each region, why is the total loss decreased compared with point optimization?\n\n1. Which results in section 4 are for comparisons with the losses with high-order terms and variational methods?\n2. How tight is the generalization error bound?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "0I4bArmEUb",
"review_text": "The authors developed a region optimized PINN to improve the prediction accuracy compared to the scatter-point based PINN.\n\nThe authors proposed the region optimization paradigm and conducted a theoretical analysis.\n\nThe practical application scope is limited.\n\n(1) It is suggested to add some descriptions of training difficulty factors for the canonical PINN on 1D-Reaction in Section 4.2. \n(2) Can the proposed method find a good number of sampling points well balancing the computational cost and convergence speed in Figure 3.\n(3) The motivation of using Monte Carlo approximation should be elaborated. Why don’t the authors choose some other more advanced methods to provide better accuracy.\n(4) The authors should add more details about the possible practical applications with the canonical loss function of L-Lipschitz-β-smooth."
},
{
"confidence": 4,
"rating": 6,
"review_id": "zQdG8idSKy",
"review_text": "The paper proposes a novel optimization method for training physics-informed neural networks (PINNs): Region optimization, which extends a regular pointwise optimization of PINNs to neighborhood regions, named RoPINNs. The paper provides theoretical analysis explaining the decrease of generalization error with RoPINNs and high-order PDE constraints satisfaction. Then the paper presents a practical algorithm to enable the RoPINNs paradigm by introducing Monte-Carlo approximation of region integral and a region calibration scheme. Lastly, the paper assesses the performance on several well-known benchmark problems and showed the improved performance over the considered baselines.\n\n- The paper is well-motivated and tackles the important problem in training PINNs (leveraging more information than just a point-to-point type mapping).\n\n- The paper presents theoretical analysis on benefits of RoPINNs, decreased generalization errors and satisfaction of higher-order PDE constraints.\n\n- The experimental results show that the proposed algorithm is effective in solving some challenging benchmark problems (known as failure modes) and is capable of producing more accurate solution approximates.\n\n- Although shown to be very effective in several benchmark problems, the paper does not seem to provide general guidelines on how to set some important hyper-parameters such as initial region size and the number of sampling points. (While acknowledging that the authors indicate this as one of the limitations,) it would be great to see some experts’ guidelines.\n\n- If the authors could provide some analysis with regards to computational wall time, that would provide more complete pictures on how the proposed method performs. For example, it would be information to see a figure depicting a scatter plot showing computational wall time versus final rMSE type information, where a point in the plot corresponds to a different hyper-parameter setting (that is, the number of sample points). \n\n- [minor] there is a typo in the second paragraph of Section 4.2: line 289 Figure 2 => Figure 3.\n\n- Eq (5) seems to suggest that region optimization is applied to the boundary condition as well as L = L_bc + L_ic + L_pde. Is this the correct understanding or is it a typo?\n\n- The proposed optimization method seems to benefit significantly in the case of 1D reaction case while the benefit in 1D Wave or 1D convection cases are not as significant as that of 1D reaction. That is, rMSE for example in Table 2 of 1D reaction achieves an order (or orders) of magnitude improvement over the second best performing methods. Do the authors have some explanation on why?"
}
] |
wZgw4CrxwK | Incentivizing Quality Text Generation via Statistical Contracts | While the success of large language models (LLMs) increases demand for machine-generated text, current pay-per-token pricing schemes create a misalignment of incentives known in economics as moral hazard: Text-generating agents have strong incentive to cut costs by preferring a cheaper model over the cutting-edge one, and this can be done “behind the scenes” since the agent performs inference internally. In this work, we approach this issue from an economic perspective, by proposing a pay-for-performance, contract-based framework for incentivizing quality. We study a principal-agent game where the agent generates text using costly inference, and the contract determines the principal’s payment for the text according to an automated quality evaluation. Since standard contract theory is inapplicable when internal inference costs are unknown, we introduce cost-robust contracts. As our main theoretical contribution, we characterize optimal cost-robust contracts through a direct correspondence to optimal composite hypothesis tests from statistics, generalizing a result of Saig et al. (NeurIPS’23). We evaluate our framework empirically by deriving contracts for a range of objectives and LLM evaluation benchmarks, and find that cost-robust contracts sacrifice only a marginal increase in objective value compared to their cost-aware counterparts. | https://openreview.net/pdf/1fdf8e5ba7d29e7866528df491d68f17a90f0e68.pdf | [
{
"confidence": 3,
"rating": 7,
"review_id": "VALHYDRcuw",
"review_text": "The authors formulate a theoretical setup for a LLM text generation service to incentivize the service to output high quality text the consumer. The authors formulate this setup as having the service having a set of models that has quality (as rated by a evaluator on the end of the consumer) that increases with the cost of running the LLM. The goal is to derive a framework for paying the LLM service based on that quality of the text generated that incentivizes the LLM service to always use its best model. The authors go about this by formulating the definition of a contract in this setting, and defining various metric (max payment, avg. payment, etc.) that consumer aims optimize. They show that the set of contracts that will incentivize the LLM service to output text with the best model can be derived from the set of optimal hypothesis tests that distinguish which model is being used from the evaluator outputs. They derive how the optimal contract can be formulated from these hypothesis tests, when only bounds are known on the costs of running each model for the LLM service.\n\nThe theoretical setup and monotone assumption of model performance, cost, etc. is quite reasonable and tackles and interesting and relevant problem with LLM queries. The results are simple and intuitive, and connect nicely with previous work on contract theory and hypothesis testing.\n\nThe main issue is the theoretical setup does require an assumption that the bounds on cost are known, which seems somewhat impractical. It might be useful to explicitly comment on how the different metrics degrade with increased looseness of the cost bounds (linearly, it seems like), since one can always pick extremely conservative cost bounds.\n\nMinor issue:\n\n- In Definition 3, it would be helpful to illustrate why $B_R^*$ and $B^*_\\rho$ are defined as they are, correctly. Further, the definition with $b \\geq c_n$ is a bit strange, since the definition of minmax hypothesis test does not involve worst case over cost vectors, so it doesn't seem correct to use $b$ to derive the minimax contract (and instead it should remain a function of $c_n - c_1$) --- maybe dropping that $b \\geq c_n$ case would be more accurate, since it is used in the definition of cost-robust later.\n\nShould the principal know the outcome distribution for each generator? This seems slightly unrealistic since consumers only ever have black box access to the API, and never know precisely which model they're responses are from. They do know that previous agent actions could only be mixed over the old (worse) models though."
},
{
"confidence": 3,
"rating": 5,
"review_id": "m9WzIwZdKt",
"review_text": "This paper addresses the issue of moral hazard in pay-per-token pricing for large language model (LLM) services. Firms may use cheaper, lower-quality models to cut costs, compromising text quality. Moreover, the firms's costs may be unknown to the clients. To counter this, the authors propose a pay-for-performance framework using cost-robust contracts that incentivize high-quality text generation under the uncertainty about the firms's costs. These contracts are designed based on and have a one-to-one correspondence to optimal composite hypothesis tests. Approximation guarantees are provided with respect to the optimal contracts. Empirical evaluations show that these contracts provide robust incentives with small cost increases.\n\nThe results of characterizing the forms of optimal cost-robust contracts using hypothesis testing, as well as the approximation guarantees, are interesting and have valuable contributions.\n\n1. The model's complexity is unnecessary. The problem could actually be studied in the most basic contract setting. \n2. The authors do not discuss the computational complexity of finding the optimal cost-robust contract. \n3. The authors do not discuss the cases where the action with the highest cost may not be the best action to incentivize.\n\n1. what is the computational complexity of finding the optimal cost-robust contract that incentivizes $c_n$ (equivalently, the complexity of finding the optimal test)?\n2. Can the characterization results and computational efficiencies be directly applied to the cases where actions with lower costs may be the best action to incentivize?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "Y5BndYhabh",
"review_text": "* The paper concerns the problem of incentivizing LLMs to use the most costly model, which is assumed to be the model with the best performance. Without proper incentive, the LLM company has the incentive to charge customers for the highest payment, but deliver the service using a lower-cost model, because the performance of the model is usually not verified. Therefore, the paper proposes to use contract design to solve this problem. In particular, an automatic detector first gives an integer score for the performance of an LLM. Then, the task is to design a payment for the company for each of the integer scores. The goal is to minimize the total payment conditioned on incentivizing the best-performed model. \n* The main contribution is the discussion of the cost-robust contract, meaning how to design the optimal contract while the costs of LLMs are unknown. Empirical evaluations have shown how to use the theory in practical settings given LLM performance data.\n\n* (I’m not an expert in contract design.) I appreciate the theoretical contributions of the paper. To me, section 4 has several interesting insights into connecting cost-robust contracts with hypothesis tests. As claimed, this is the first paper considering cost-robust contract design. However, the real contribution should be better evaluated by experts.\n* In general, incentive issues of LLM uses have been very critical and challenging. I also like the connection between contract design and the production of LLMs.\n\nAlthough I believe contract design can speak with the production of LLMs, I’m not fully convinced that the proposed model is a good idea to solve the considered problem. \n\n* In practice, each company pricing its own AIs, so who should be the principal? In other words, the paper assumes there is a trust-worthy third party who can run the quality-detector and commits to a contract with the LLM companies. I’m not sure this is feasible in practice. I hope the authors can explain more carefully the application scenarios of their theory.\n* Furthermore, there is no evidence in the paper (and, I guess, on the Internet) that can prove LLM companies are cheating about their service quality. I also don’t think this is very likely because LLM companies have other incentives to provide high-quality services, e.g. their reputation. So, how do we know we are not solving a problem that does not exist?\n* Even though we go with the assumption that there is a contract that the company (agent) agrees on, I doubt cost-robustness is the first-order concern. The cost data is usually publicly obtainable from energy reports, as the authors did in their experiments. Even though this data is not public, the energy cost is usually easy to estimate. Therefore, I don’t think incentivizing LLM production is a suitable application for cost-robust contracts.\n\nSee weakness"
}
] |
wZ5kEOCTce | Rethinking Patch Dependence for Masked Autoencoders | In this work, we examine the impact of inter-patch dependencies in the decoder of masked autoencoders (MAE) on representation learning. We decompose the decoding mechanism for masked reconstruction into self-attention between mask tokens and cross-attention between masked and visible tokens. Our findings reveal that MAE reconstructs coherent images from visible patches not through interactions between patches in the decoder but by learning a global representation within the encoder. This discovery leads us to propose a simple visual pretraining framework: cross-attention masked autoencoders (CrossMAE). This framework employs only cross-attention in the decoder to independently read out reconstructions for a small subset of masked patches from encoder outputs, yet it achieves comparable or superior performance to traditional MAE across models ranging from ViT-S to ViT-H. By its design, CrossMAE challenges the necessity of interaction between mask tokens for effective masked pretraining. Code is available [here](https://anonymous.4open.science/r/mae-cross-anon-11EB/README.md). | https://openreview.net/pdf/92dc82d399d0b3f0bf350c86b137bc062eb04012.pdf | [
{
"confidence": 4,
"rating": 4,
"review_id": "Os11Y7Xv0s",
"review_text": "This paper reveals the role of inter-patch dependencies in the decoder of MAE on representation learning. The paper shows that MAE achieves coherent image reconstruction through global representations learned in the encoder rather than interactions between patches in the decoder. Based on this, the authors propose CrossMAE, which only utilizes cross-attention in the decoder.\n\n- The approach of analyzing the reconstruction process through self-attention between mask tokens and cross-attention between mask and visible tokens is intriguing.\n- The writing is clear and easy to follow, with main messages that are solid and insightful.\n\n1. Idea/Novelty\n- The claim that MAE reconstruction is achieved through global representation learning within the encoder rather than interactions between patches needs more support. Recent studies linking MAE to contrastive learning have found that the receptive field of specific mask tokens in the decoder is relatively small. Could the role of mask tokens in the decoder be to capture local area information? This might explain the smaller attention magnitude of masked tokens compared to visible tokens in Figure 1(b). \n- There is a concern that without self-attention (i.e., with the proposed method), the observation that authors made on the vanilla MAE may no longer be valid. Additional explanation on this point is necessary as this observation is the main motivation for suggesting CrossMAE.\n\n2. Additional justification\n- Effectiveness of using a subset of mask tokens as queries: Unlike the traditional architecture, this method uses only a subset of mask tokens as queries. Detailed analysis and interpretation are needed on why this is effective. \n- Performance differences when using the entire set of mask tokens versus a subset (and what percentage of mask tokens is used) should be reported.\n\n3. Experiment\n- For a fair comparison, CrossMAE's performance should be evaluated using the same setting as the original MAE, especially regarding the fine-tuning recipe.\n- The current experimental results do not convincingly demonstrate the effectiveness of the method. For classification tasks, only the linear-probing and fine-tuning results on IN1K are reported. Following the previous works, classification on various downstream datasets should be also considered.\n- For generalizability, evaluation on another task like semantic segmentation (e.g. on ADE20K) would be useful to verify that the suggested method learns the generalizable feature representation.\n\nPlease refer to the weakness section."
},
{
"confidence": 4,
"rating": 5,
"review_id": "c4csm04ShX",
"review_text": "The paper introduces a novel pre-training approach called CrossMAE. Instead of concatenating the masked and visible tokens for the decoder, the authors add cross-attention to decode the masked tokens by using them and the visible patch embeddings as separate inputs to the decoder. Further, the authors introduce a method to only partially reconstruct the masked patches, and leverage inter-bock attention to fuse feature across different layers.\n\n- The paper is well motivated through a practical observation\n- The authors propose a useful technical contribution which seem intuitive given the described observations\n- The paper is well written and technically sound\n- All visualizations provide additional value, I especially like Figure 5. It describes the effect of the contributions well\n- Judging from the experiment section, the presented approach mostly improves over the vanilla MAE and other MAE-like follow-up works\n\n- I feel like the paper is missing a more structure ablation of the individual contributions. I think the paper would benefit from having a simple table where all contributions are added sequentially to better identify the performance effect of the individual contributions as in:\n\tMAE X.X\n\t+ Cross-Attn X.X\n\t+ Partial Reconstruction X.X\n\t+ Inter-Block Attn X.X\n- As can be observed from Table 3 c), the final setting (underlined) of the prediction ratio, 0.75, turns out to be exactly the same as the optimal masking ratio, 0.75. If I understood correctly, this means that in practice, CrossMAE works best when it predicts all tokens that were masked, not just a fraction of them. Only predicting part of the masked tokens was previously listed as a contribution. Therefore, I don’t understand how this additional hyper parameter provides any benefit for better downstream performance. Maybe I’m missing something and this be cleared up by answering the previous point.\n- All models are only trained for 800 epochs. The original MAE reaches peak performance at 1600 epochs. For a thorough comparison, it would be necessary to also train CrossMAE for 1600 epochs and see if the performance gains sustain, or if performance has peaked at 800 epochs.\n- Table 1 is missing the CrossMAE ViT-H with 75% masking ratio\n- Contribution 2 and 3 don’t seem to be as well motivated in the introduction in comparison to Contribution 1\n- Better performance is listed as a contribution. IMO this is not a contribution, rather a result of the technical contributions\n\nI like the idea and motivation of the paper. It starts from an interesting observation of the vanilla MAE, and aims to improve this aspect. Unfortunately, it is not fully clear which of the proposed contributions actually have an impact on performance. Table 3a) shows that adding Cross-Attn improves downstream performance. But since the authors choose the masking ratio to be the same as the prediction ratio, there doesn’t seem to be an improvement resulting from the second contribution. The effect of improved computation complexity only exists if prediction ratio < masking ratio. Lastly, according to Table 3 d), with the right number of fused feature maps, inter-block attention CAN improve the model, but the authors choose 12 as their default number of fused feature maps, which doesn’t improve performance over just adding Cross-Attn.\n\nConcretely, I think the following additions could improve the paper:
\n- Introduce a comprehensive analysis of the individual contributions’ impact on performance, and also computational complexity if you want to highlight that, in a similar manner as proposed above\n- Train both models for 1600 epochs and evaluate if the performance increase can be sustained \n\nI’m willing to increase my score if my concerns are adequately addressed, and/or if the other reviewers list further convincing arguments for accepting the paper."
},
{
"confidence": 4,
"rating": 7,
"review_id": "7mapGA7sUs",
"review_text": "This paper presents CrossMAE, a methodology for improving pre-training efficiency over that of MAE for an encoder. The paper motivates its approach by presenting visual evidence that, in standard MAE pre-training, masked tokens attend to other masked tokens significantly less than to non-masked (aka, visible) tokens. Using this motivation, the paper then presents CrossMAE, which differs from MAE largely in that it replaces the MAE self-attention with cross-attention between the masked tokens and a learnable weighted combination of the encoder feature maps. This aspect decouples queries from keys and values (which is not the case in MAE), which the paper then exploits to allow only some (but not necessarily all) mask tokens to be used during reconstruction to pre-train the model. The paper presents an analysis of which encoder block features are optimal to cross attend with each decoder block, and it presents ablation studies on multiple design decisions. Finally, it presents visual and fine-tuning results showing comparable performance to MAE and similar methods.\n\nThis paper motivates CrossMAE well by showing evidence of a potential inefficiency in MAE (self-attention) and then presenting an approach to remedy it (cross attention). I particularly like how the paper delves even deeper, though: instead of stopping at the level of replacing self-attention with cross-attention, it then points out that this choice allows for a significantly fewer number of masked patches to have to be reconstructed, which reduces flop count significantly. The ablations in Table 3 are fairly thorough and answered some questions I have developed. The performance of CrossMAE appears comparable to other SOTA methods but with significantly more efficient pretraining.\n\n1) In Fig 1b, IIUC, for one particular mask token, the two $\\mu$'s are the respective attention values averaged over all transformer blocks and all masked/non-masked tokens. If this is the case, my concern is that by averaging over all transformer blocks, variations in the attention is being hidden. Naively, I would think that for early blocks, the attention due to masked tokens would be small (as the paper concludes) but becomes larger for the later blocks (since now the masked tokens have actual useful signal in them). Did you consider this?\n\n2) I do not follow why CrossMAE does not need an MLP at the end to convert final decoder tokens back to raw pixels. Line 218 says that the inputs to the first encoder block are included in the feature maps for cross attentions. Does this cause a final MLP to not be used?\n\n3) Less critical:\n 3a) Fig 1b should point the reader to Section A.1. I spent much of my reading confused about what $\\mu$ is.\n 3b) Fig 4a should have a different number of decoder layers than encoder layers. When I saw this figure, I immediately wondered why a decoder block wasn't being paired with feature maps from its \"partner\" encoder. I had to wait until lines 204-207 to get an explanation of why this doesn't work.\n 3c) Line 187 references a \"second question\" in Sec 3.1, which doesn't exist as far as I can tell.\n 3d) Fig 4a shows the \"vanilla\" version of Cross MAE, where the final encoder layer feature maps are attended with all decoder layers. But the paper presents results exclusively (?) on the version that uses a learned combination of the feature maps. Anyway, the figure confused me. Maybe I just didn't understand what the solid arrows vs dotted ones are supposed to represent.\n\nSee \"Weaknesses\"."
}
] |
wWyumwEYV8 | A Sober Look at the Robustness of CLIPs to Spurious Features | Large vision language models, such as CLIP, demonstrate impressive robustness to spurious features than single-modal models trained on ImageNet. However, existing test datasets are typically curated based on ImageNet-trained models, which aim to capture the spurious features inherited in ImageNet. Benchmarking CLIP models based on the ImageNet-oriented spurious features may not be sufficient to reflect the extent to which CLIP models are robust to spurious correlations within CLIP training data, e.g., LAION. To this end, we craft a new challenging dataset named CounterAnimal designed to reveal the reliance of CLIP models on realistic spurious features. Specifically, we split animal photos into groups according to the backgrounds, and then identify a pair of groups for each class where a CLIP model shows high-performance drops across the two groups. Our evaluations show that the spurious features captured by CounterAnimal are generically learned by CLIP models with different backbones and pre-train data, yet have limited influence for ImageNet models. We provide theoretical insights that the CLIP objective cannot offer additional robustness. Furthermore, we also re-evaluate strategies such as scaling up parameters and high-quality pre-trained data. We find that they still help mitigate the spurious features, providing a promising path for future developments. | https://openreview.net/pdf/416b6767b5924fc0e5fe05a6729b748f4fdecdc6.pdf | [
{
"confidence": 5,
"rating": 6,
"review_id": "soygMlaZn2",
"review_text": "The authors aim to investigate spurious correlations learned by CLIP models. For this, they curate a novel dataset where animals are organized into common and uncommon backgrounds, e.g. a polar bear is more likely encountered in snow than on grass. The authors then perform experiments where they benchmark various CLIP and ImageNet models on the curated dataset. They observe that CLIP models suffer from spurious correlations which stem from changing the background.\n\nI think the issue of spurious correlations is important and one needs to understand how and whether VLMs learn spurious features. The paper presents many experiments and shows that scale or backbone capacity do not improve the effective robustness on CounterAnimal which is interesting.\n\nThe paper has many issues, both in terms of writing and the methodology which need to be fixed.\n\n### Major:\n**The authors missed important previous works**: The paper “Recognition in Terra Incognita” is very related to this work and also proposes a “dataset designed to measure recognition generalization to novel environments” based on camera traps. The dataset is sorted according to difficult environments for different animals, which makes it very similar to CounterAnimal. I think the authors need to cite and discuss this paper. Currently, I do not understand the benefit of having a new dataset in addition to the already present one. The waterbirds dataset is also highly similar and should be discussed (https://arxiv.org/pdf/1911.08731). The authors cite that paper, but do not discuss it in the Related Work section, nor put it into context with CounterAnimal. The backgrounds challenge (https://github.com/MadryLab/backgrounds_challenge) is also highly related and should be discussed. In general, the related work section is very weak, given how extensively spurious correlations and worst-group-accuracy have been studied. Another important work to be discussed would be \"Finding and Fixing Spurious Patterns with Explanations\" (https://arxiv.org/abs/2106.02112).\n\n**The naming of the common vs counter groups is misleading**: \nLine 165: “Photos with the highest CLIP accuracy are assigned to the common group, and those with the lowest CLIP accuracy are assigned to the counter group.” I have a major understanding issue here. As far as I understood the paper before this line, the goal was to put images with common backgrounds into the common group and images with uncommon backgrounds into the counter group. This is also depicted in Fig. 1 or Table 1. The caption in Fig.1 says that “Most ice bears appear in a snow background (i.e., common), while it also is reasonable to find some ice bears in a grassy environment (i.e., counter)”. But here, the authors write that accuracy has actually been used to separate images into these groups? But then the frequency of the co-occurrence of certain backgrounds and classes has not been taken into account, or rather, it is a conjecture that those backgrounds where the CLIP model has higher accuracy on are more “common”? \n\n**The terms \"effective robustness\" and \"robustness\" are used interchangeably which is wrong and confusing**:\nI think the paper conflates the terms “robustness” and “effective robustness” which is confusing. When looking at effective robustness plots, such as in Fig. 2, we are interested in the residual difference between the measured value and the value predicted by the linear fit. 
As far as I can see, all plotted markers (CLIP and ImageNet) lie on their respective linear fits, and none of the interventions, such as CLIP-DC or CLIP-DFN, offer any effective robustness benefits. It is though true that the **absolute** robustness numbers are overall higher for the CLIP-DFN models, for larger models or models trained on more data. I am however confused by the authors' discussion of this observation. On the one hand, they write that larger CLIP models are more robust but increasing the dataset size does not yield improvements. First, I am confused whether they mean “effective robustness” or “robustness” here. Second, I do not see the effect the authors are describing: Both more data and larger backbones have higher absolute robustness but the same effective robustness as the other models. The statement “CLIP models trained on high-quality data are more robust” is also confusing, because it is not clear whether “robustness” or “effective robustness” is meant. \n\n**Due to methodology issues, results on CLIP models cannot be compared to results on ImageNet models (or other advanced LVLMs):**\nLine 60: “d) Spurious discovering: preserving classes and associated data based on the decrease in zero-shot performance (i.e., evaluating based on pre-trained CLIP models without fine-tuning) when shifting the backgrounds.” This step is really unclear. Do the authors curate the dataset based on the zero-shot accuracy of a CLIP model? From the introduction and the abstract, it sounds like the authors want to benchmark the robustness of CLIP vs ImageNet models on this custom dataset. But then, it is strange that CLIP models also seem to be used during the curation process. After reading the more detailed description in line 156, I think the statements made in line 85 are misleading. The authors write “ImageNet models are more robust to spurious correlations captured by CounterAnimal” and “Compared with CLIP models (colored in blue), surprisingly, we find that ImageNet models exhibit a stronger robustness to the spurious correlations in CounterAnimal.” Given that CounterAnimal has been curated based on the performance drop of a CLIP model, I find it very unsurprising that CLIP models perform worse on it compared to ImageNet models. I think that if CounterAnimal had been curated based on an ImageNet-trained ResNet50, the trend would have been reversed. I think all statements comparing CLIP and ImageNet-trained models on CounterAnimal need to be relaxed and I think that this comparison is quite meaningless because of the described selection bias. I think that the whole Section 3.3 is misleading for this reason and statements such as the following cannot be made given the methodology issues: “Surprisingly, we find that ImageNet models are more robust to spurious features in the CounterAnimal dataset. This finding may contradict the common belief [Radford et al., 2021, Shi et al., 2023] that the CLIP models tend to be more robust to spurious correlations than single-modal supervised learning.” Similarly, the conjecture paragraph from line 265 onwards is wrong and cannot be made. \n\nFor the same reason, the comparison to advanced LVLMs in line 273 onwards cannot be made.\n\nFigure 1: Are these examples cherry-picked or are they representative of the data points present in CounterAnimal? I am asking this because of the Winoground dataset [A]. This dataset tests the compositionality of VLMs by forcing a model to match two captions to two respective images.
Winoground has later been criticized because the two images in the choice process are not equally hard [B]. For example, the model needs to match “the glass is on the grass” and “the grass is in the glass” to the corresponding images. However, there is much more grass in the image matching to the first caption, and the model likely picks that image for both captions just because there is more grass and it makes the decision in a bag-of-words-manner. To summarize, Winoground did not control for object size, orientation and other confounders. In Fig.1, it appears that the main objects (the polar bears) are equal in size, so size could be excluded as a possible confounder? Did the authors consider this possibility, i.e. that the drop in performance could be explained by other differences in the images from the respective domains?\n[A] https://arxiv.org/abs/2204.03162\n[B] https://arxiv.org/abs/2211.00768\n\n### General:\nLine 25: please cite CLIP\n\nLine 64: “The resulting dataset covers a total of 45 animal classes, ends up with 7,174 common photos and 5,926 counter photos, aligning with the standard size as an evaluation dataset [Recht et al., 2019, Hendrycks et al., 2021].” -> I do not understand this statement. Different test sets have different numbers of test images. ImageNet Val has 50k images for example. In what sense are the presented numbers standard?\n\n\nLine 94: “Overall, larger CLIP backbones (i.e., larger markers) can improve the effective robustness, implying that scaling up backbones may enhance robustness against spurious features.” -> I do not see this in Fig. 2. The larger markers appear to be on the fitted line, same as the smaller markers. Effective robustness measures the difference with respect to the linear fit, and there is none for the larger CLIP backbones. Please clarify this point.\n\n\nLine 146: “feature noise involves severe feature corruptions” -> Please be more specific here. What do you mean with feature noise? Do features refer to animals features such as missing ears or such? Or to the images themselves?\n\nLine 147: “clarity issues arise when animal objects are not in major positions” -> unclear formulation: what is a major position? Do the authors mean that the animals are too small or not in the center of the image?\n\nLine 153: “Note that the class space of backgrounds as above is not entirely orthogonal with each other due to the inherent ambiguity of the real-world situations. Nevertheless, we try our best to discern the assigned background labels within each animal class.” -> This is unclear. How many images would be ambiguous? I could imagine that many images would have two backgrounds, such as e.g. grass and sky or snow and water. For example, the last image in Fig. 1 on the left has both snow and water. It is not clear to me that only picking the snow background and ignoring the water is correct here. Further, at least for CLIP, the caption can contain several background keywords.\n\nFurther, I imagine animals occur in all kinds of environments, but there are only two backgrounds for each animal. 
Were the other images also discarded?\n\nLine 214: “Therefore, we conclude that our CounterAnimal dataset possesses some realistic shifts that are generally contained in large-scale pre-training data, regardless of backbones.” This conclusion cannot be drawn from this experiment since the backbone has not been varied here.\n\n### Section 4:\nThe proposed experiment is very similar to the well-known ShiftMNIST [D] or ColoredMNIST [E] datasets, which test the influence of spurious correlations. The findings here are not novel and should be brought into perspective with previous work. I do not understand how Fig. 11 relates to the text. What is “supervised”, “obj”, “objbkg”?\n[D] https://arxiv.org/pdf/1811.00401\n[E] https://arxiv.org/pdf/1907.02893\n\n### Typos, grammar:\nThe quality of the text is poor on some occasions which makes reading and understanding the paper difficult. The manuscript would benefit from a round of proof-reading. Some statements and formulations should be made more precise.\nLine 32: “The performance boosts over ImageNet models seem to suggest that CLIP resolves distribution shifts and thus spark a rich discussion about its rationale.” Strange formulation. How can “distribution shifts be resolved”? Please rephrase for clarity.\n\nLine 112: “More specifically, [Yang et al., 2023] report that CLIP models may misaligned frequently co-occured objects with the corresponding texts.”\n\nLine 115: “[Tong et al., 2024] find that CLIP misaligned samples will further cause the hallucination of LVLMs.” I do not understand this statement, grammar errors.\n\nLine 132: “Meanwhile, many existing datasets, e.g., DomainBed and Wilds, do not have overlapped label space with ImageNet, making the comparison between ImageNet and CLIP models hard.” There is a version of DomainBed [C] where the dataset has been filtered to only include classes compatible with ImageNet, such that an evaluation of ImageNet models is possible out-of-the-box.\n[C] https://openreview.net/pdf?id=LiC2vmzbpMO\n\nLine 171: “Recalling that, when CLIP models resort to the shortcut of data, the model performance will heavily correlate with the backgrounds presented in the common group yet is compromised when coming to the counter group.” Grammar errors, I do not understand this sentence. What is “the shortcut of data”?\n\nLine 208: “It suggests that the CounterAnimal dataset captures some general spurious shifts that at least commonly present in the pre-train dataset of LAION400M.” grammar\n\nLine 213: “Here, the spurious features degenerate the zero-shot robustness of CLIP models trained on both LAION2B and by OpenAI.” Typo? “degenerate”?\n\nLine 243: “In Figure 7, we consider two pre-train datasets, namely, LAION2B and the close-soured data from OpenAI” typo\n\nLine 297: “Nevertheless, in the following theorem, we justify that CLIP remains learning to use spurious features, aligned with our experimental observations on the CounterAnimal dataset.” grammar\n\nStrange space break between line 310 and 311.\n\n# Summary of the review:\nWe could fix the naming convention from \"common\" and \"counter\" to something like \"hard\" and \"easy\" since accuracy has been used rather than frequency of certain backgrounds to classify backgrounds into certain groups. Based on my arguments below, I believe we cannot compare CLIP models to ImageNet models on the proposed dataset in any sensible way due to the introduced selection bias. 
I believe the very title of the paper is misleading since the posed question cannot be answered based on the methodology issues. But if we remove the claims about comparing ImageNet models and CLIP models, then, the main point of the paper is that there exist backgrounds which are harder for CLIP models, given certain classes, and other backgrounds which are easier. I don't think that this observation is particularly interesting on its own. The authors did not relate the hardness of the backgrounds to their frequency in the pretraining dataset or anything else. The observation that backgrounds matter is also not novel but quite well-known and the authors do not offer a solution. Further, the writing is quite poor and confusing on many occasions; I provided many examples of incorrect and confusing sentences below.\n\nI have written a very detailed review above. I expect clarifications with respect to the raised points, at the very least in the \"Major\" paragraph."
},
{
"confidence": 4,
"rating": 5,
"review_id": "ZlFRMjXMk8",
"review_text": "This paper presents CounterAnimal, an evaluation dataset featuring two subsets: animals with common backgrounds and those with unusual backgrounds. The images were sourced from iNaturalist. Data with high CLIP accuracy are categorized as \"Common\", while those with low CLIP accuracy are labeled as \"Counter\". Results shows that CLIP models experience a greater accuracy drop compared to ImageNet models when tested on this dataset.\n\n- This paper analyzes multiple factors affecting CLIP accuracy, including model size and training data quality.\n- The paper combines both experimental results and theoretical analysis. The analysis in Section 5 is interesting and novel.\n- The paper is well-written and easy to follow.\n\n- The proposed dataset is not sufficiently robust to analyze the influence of spurious bias, as this is not the only difference between the common and counter datasets.\n - To analyze the accuracy drop caused by spurious features such as background, the background should be the only difference between common and counter image pairs. Prior work [4,5] has proposed such datasets focusing on background.\n - In the proposed dataset, other factors may influence the model accuracy gap besides background. For instance, as shown in Figure 1, the more varied gestures of ice bears on the right compared to the left could be a contributing factor to the accuracy drop.\n\n- Current experiments cannot conclusively show that ImageNet models generalize better than CLIP.\n\n - As the common and counter groups are selected according to the CLIP accuracy (see line 165 in the paper), they indicate easy and hard samples for CLIP. Since ImageNet models have different training characteristics, it is natural that hard cases for these models may differ from those for CLIP, resulting in a smaller performance drop for ImageNet models. This result cannot support that ImageNet models are more robust than CLIP models.\n - The accuracy drop from common to counter group can be greatly influenced by the model used to divide the common and counter dataset. Using the combined proposed common and counter dataset, a new Common' and Counter' dataset can be created based on the accuracy of ImageNet models. What is the impact of this dataset division on the accuracy drop for different models?\n\n\n- Prior studies[1,2,3,4,5,6] have proposed datasets specifically to analyze the influence of background, which are not discussed in this work. These datasets can be used for CLIP evaluation as they do not overlap with the CLIP training set. Additionally, creating datasets based on model accuracy in this work is similar to the approach in [6].\n\n[1] Noise or Signal: The Role of Image Backgrounds in Object Recognition.\n\n[2] Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models, NeurIPS 2019.\n\n[3] Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation.\n\n[4] ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing, CVPR 2023.\n\n[5] LANCE: Stress-testing Visual Models by Generating Language-guided Counterfactual Images, NeurIPS 2023.\n\n[6] ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object, CVPR2024.\n\nPlease refer to the weaknesses."
},
{
"confidence": 4,
"rating": 6,
"review_id": "sKA3ZjPX6H",
"review_text": "In this work, the authors create an evaluation dataset comprising two groups, one with animals in usual backgrounds (common group) and another with unusual backgrounds (counter group). They then evaluate a suite of models of different backbones, model sizes, and datasets. They find that CLIP models do poorly than ImageNet-trained models, and generally high quality data or bigger model size improves counter group accuracy.\n\n1. The CounterAnimal dataset is a nice contribution that can be of value to the community.\n2. The authors have evaluated a number of models on the dataset and that too could be of value to the community.\n\nPlease see questions for more information.\n\n1. **Biased dataset:** The dataset is split into to common and counter group using a CLIP model. Therefore by construction, the CLIP models will perform poorly and it is no surprise that the ImageNet-trained models do a bit better. One could construct a split of this dataset where ImageNet models do better than the CLIP models. If the dataset was collected in a model agnostic way then the conclusions could potentially be more interesting. \n\n2. **On the premise:** As such the premise or the primary question seems a bit vacuous. Models arguably learn different features and there will exist some type of evaluation where one does better than the other. But are there useful tasks/evaluations where ImageNet models are preferred over CLIP models? That is an interesting open question. This work doesn't necessarily start from there and create a benchmark that is supposed to represent a task. The authors rather create a biased dataset that by design make CLIP models perform poorly. Therefore the primary premise of the work seems erroneous. There is some value in the other evaluations so maybe the paper could be rewritten by positioning things differently. \n\n3. **Lack of novelty:** Keeping the primary result aside, other conclusions like that better datasets or model sizes improve robustness are not new. Please see [1, 2, 3, 4]. \n\n[1] [Geirhos 2021] https://proceedings.neurips.cc/paper/2021/hash/c8877cff22082a16395a57e97232bb6f-Abstract.html\n\n[2] [Idrissi 2022] https://arxiv.org/abs/2211.01866\n\n[3] [Fang 2022] https://arxiv.org/abs/2205.01397\n\n[4] [Nguyen 2022] https://arxiv.org/abs/2208.05516"
},
{
"confidence": 5,
"rating": 5,
"review_id": "qBnkutpo76",
"review_text": "This work asks one interesting question: \"Do CLIP models always generalize better than ImageNet models?\" Driven by this question, this work proposes a new benchmark dataset named CounterAnimal. This dataset consists of a) the common group: comprising animals in common backgrounds, and b) the counter group: including animals in plausible yet unusual backgrounds. The main idea is that the performance drops from the common to counter groups quantify the reliance on spurious background features for animal predictions. The main observation is that CLIP models exhibit notable performance drops when tested on the counter group. In comparison, ImageNet models can be more robust than CLIP models.\n\n- It is always good to see a new and novel dataset proposed for evaluating CLIP and ImageNet-trained models. The proposed dataset CounterAnimal is complementary to existing datasets that cannot reflect the robustness of CLIP models to spurious correlations. \n\n- The dataset construction is well-presented. The statistics, curation, background labeling, and spurious discovery are well introduced in Section 2\n\n- The analysis around spurious correlation is good. This work tries to give insights from several aspects, such as pre-trained datasets, scaling up, and learning paradigms. The observations are sound to me.\n\n- I found the analysis of why CLIPs rely on spurious features interesting. However, I think the claim is somewhat \"obvious\": there exists a relatively strong correlation between the object captions and the parts of image backgrounds, CLIP will learn to align the backgrounds, i.e., spurious features. If the training dataset contains many examples of spurious correlations, then models will tend to be biased.\n\n- I am curious about why ImageNet models may not be so influenced by the spurious bias in CounterAnimal. Is this because the ImageNet training set does not have too many spurious correlation examples? Or ImageNet has a spurious bias but such bias is different from the one in CounterAnimal? Please provide a discussion or share some insights on this question. \n\n- This paper adopts absolute performance drop in Section 3.3. Such a metric may not be so robust. For example, model A drops from 40 to 39, and model B drops from 90 to 89. They drop the same but the I would say model B is better. Please comment on this, and discuss the metric of absolute performance drop.\n\n- ImageNet models are not so biased toward spurious correlations compared with CLIP models. Why? Is this because the ImageNet training set does not have too many examples that exhibit spurious correlations?\n\n- While I appreciate this work includes results on ColoredCOO and ColoredMINIST, some other spurious correlation benchmarks (e.g., WaterBirds) would be greater if they were also included."
}
] |
wWiAR5mqXq | Reflective Multi-Agent Collaboration based on Large Language Models | Benefiting from the powerful language expression and planning capabilities of Large Language Models (LLMs), LLM-based autonomous agents have achieved promising performance in various downstream tasks. Recently, based on the development of single-agent systems, researchers propose to construct LLM-based multi-agent systems to tackle more complicated tasks. In this paper, we propose a novel framework, named COPPER, to enhance the collaborative capabilities of LLM-based agents with the self-reflection mechanism. To improve the quality of reflections, we propose to fine-tune a shared reflector, which automatically tunes the prompts of actor models using our counterfactual PPO mechanism. On the one hand, we propose counterfactual rewards to assess the contribution of a single agent’s reflection within the system, alleviating the credit assignment problem. On the other hand, we propose to train a shared reflector, which enables the reflector to generate personalized reflections according to agent roles, while reducing the computational resource requirements and improving training stability. We conduct experiments on three datasets to evaluate the performance of our model in multi-hop question answering, mathematics, and chess scenarios. Experimental results show that COPPER possesses stronger reflection capabilities and exhibits excellent generalization performance across different actor models. | https://openreview.net/pdf/3b17b8aba5d866085a47c8258c92406af2fc2e10.pdf | [
{
"confidence": 4,
"rating": 4,
"review_id": "jObQIo3keU",
"review_text": "The paper introduces COPPER, a novel framework designed to enhance collaboration in multi-agent systems using a learnable self-reflection mechanism. COPPER utilizes a shared reflector fine-tuned to adjust actor model prompts via a counterfactual PPO mechanism. This approach includes counterfactual rewards to address the credit assignment problem and enables the reflector to customize reflections based on agent roles, optimizing computational resources and training stability. The framework's efficacy is validated through experiments in multi-hop question answering, mathematics, and chess, demonstrating improved reflection capabilities and generalization across various actor models.\n\nThis paper is clearly written and explores a new setting — multiagent reflection. It also show improved performances on all three tasks. Using counterfactual rewards to perform PPO training sounds straightforward.\n\n- My main concern is this paper involves a combination of various components and I could not clearly infer from the paper which part is most important. This makes the improvement for each part look marginal. Generally, this paper proposes a novel training method to enhance reflection, as well as use reflection-based multi-agent discussion to improve agent reasoning. I believe the method could be directly applicable to single agent scenario as reward for each agent is updated independently. Could you perform ablation in terms of single-agent?\n \n- The test scenario focuses on single-step tasks, can this framework be applied to multi-step agent tasks like AlfWorld?\n\n- How is the performance of COPPER compared to shared parameter & loss training for all LLMs?\n\nSee Weakness."
},
{
"confidence": 3,
"rating": 6,
"review_id": "FVvgwkRb1k",
"review_text": "The paper proposes a multi-agent reflection framework COPPER to solve reasoning tasks on several datasets such as HotPotQA, GSM8K, and Checkmate in One Move. The two main contributions are:\n1. designing counterfactual rewards to alleviate the credit assignment problem;\n2. training a shared reflector to personalize the reflection for each agent.\n\n1. Novelty: The paper introduces counterfactual rewards from RL to LLM agents, to deal with the credit assignment problem in multi-agent cooperation.\n2. Soundness: The authors conducted extensive experiments to thoroughly analyze the proposed mechanism.\n\n1. The motivation of the shared reflector may not align with reality. Embodied scenarios do not allow complete information sharing with a central reflector.\n2. The computation of counterfactual rewards can be very high. Every agent demands two times of simulation to calculate the rewards, and the computational costs could be much higher when the number of agents increases.\n3. The claims of personalized reflection may not be completely conducted. For the Cooperative Debate Paradigm, there are no roles for the debaters.\n\n1. How to determine the number of agents for each task?\n2. Can you present the computational cost? Including training and inference stages.\n3. Can you provide more case studies, especially for the other two datasets?\n4. Does the shared reflector take all the agents' trajectories together to reflect?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "2TzrguAfuL",
"review_text": "This paper proposes COPPER to enhance the collaboration ability of multi-agent systems through a learnable self-reflection mechanism. It involves reflections from different agent-specific profiles. The contribution of each agent-specific reflector is measured based on their marginal reward. This reflector is shared among agents and generates personalized reflections according to agents' roles. Experimental results on several datasets demonstrate its effectiveness.\n\n1. This paper explores the reflection on multi-agent collaboration. Previous work on reflection mainly focuses on a single LLM, ignoring the complex environment and interaction in the multi-agent system.\n2. The introduction of the counterfactual reward in PPO training assigns the reward to rate each agent's reflection, helping the credit assignment problem.\n3. The comprehensive analysis of the counterfactual reward, the shared reflector, and different LLMs for reflectors provide a deep insight into the proposed method.\n\n1. Including the Retroformer under the multi-agent setting as one of the baselines would be better.\n2. When the environment provides a sparse reward such as the credit for the reflection of different agents may become very similar. For example, the removal of all reflections may result in a counterfactual reward of 0 because both trials fail. Then the counterfactual reward may degrade to the episode reward in Retroformer.\n3. With the complex PPO training, COPPER's performance is not very impressive, especially when the trial is small (in GSM8k and Checkmate in One Move of Figure 4 and Figure 8)\n\n* The left part of Figure 2 is a little confusing due to the position of Step 1 to 4. The execution order is unclear. The text in Figure 2 is too small to be seen, especially the right part. \n* Will there be any negative counterfactual reward? For example, the removal of a specific reflection will improve the performance. \n* What is the impact of the agent profile on the reflector? Will a personalized reflector be better than a general reflector?"
}
] |
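Two reviews of the COPPER paper above ask how the counterfactual reward is computed and what it costs, noting that each agent needs extra rollouts and that sparse episode rewards can make the credits of different agents collapse to similar values. A minimal sketch of the general idea (the marginal contribution of one agent's reflection, estimated by re-running the episode with that reflection withheld); the run_episode interface and the baseline options are hypothetical illustrations, not COPPER's actual API:

```python
def counterfactual_reward(run_episode, reflections, agent_id, baseline="remove"):
    """Estimate one agent's credit as the gap between the episode reward with all
    reflections and the reward when that agent's reflection is withheld.

    run_episode: callable mapping {agent_id: reflection} -> scalar episode reward
    reflections: dict of reflections produced by the shared reflector
    """
    full_reward = run_episode(reflections)
    ablated = dict(reflections)
    if baseline == "remove":
        ablated.pop(agent_id, None)      # drop this agent's reflection entirely
    else:
        ablated[agent_id] = ""           # or substitute an empty default reflection
    # Can be zero or even negative, e.g. when rewards are sparse or a reflection hurts.
    return full_reward - run_episode(ablated)
```

Note the cost: every credit estimate needs one extra rollout per agent, so the number of simulations grows linearly with the number of agents, which is exactly the overhead one review asks about.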
wWguwYhpAY | Neural Experts: Mixture of Experts for Implicit Neural Representations | Implicit neural representations (INRs) have proven effective in various tasks including image, shape, audio, and video reconstruction. These INRs typically learn the implicit field from sampled input points. This is often done using a single network for the entire domain, imposing many global constraints on a single function.
In this paper, we propose a mixture of experts (MoE) implicit neural representation approach that enables learning local piece-wise continuous functions that simultaneously learns to subdivide the domain and fit it locally.
We show that incorporating a mixture of experts architecture into existing INR formulations provides a boost in speed, accuracy, and memory requirements. Additionally, we introduce novel conditioning and pretraining methods for the gating network that improves convergence to the desired solution.
We evaluate the effectiveness of our approach on multiple reconstruction tasks, including surface reconstruction, image reconstruction, and audio signal reconstruction and show improved performance compared to non-MoE methods. Code is available at our project page https://sitzikbs.github.io/neural-experts-projectpage/ . | https://openreview.net/pdf/29a5178e806f04207f02516fcb74d8395ed9af42.pdf | [
{
"confidence": 5,
"rating": 5,
"review_id": "ozQOq5dApc",
"review_text": "This paper proposes a mixture of experts (MoE) approach for INRs, which allows the learning of local piece-wise continuous functions by subdividing the domain and fitting locally. The incorporation of a MoE architecture enhances speed, accuracy, and memory efficiency. They also propose a novel manager architecture and initialization that enable domain subdivision without ground truth.\n\n1. The paper is well-written and easy to follow.\n2. The proposed MoE INR has a good performance compared to baselines.\n3. The idea of delivering MoE as a learnable partition region for INR fitting with randomized initialization is novel to me. From the ablation study, the randomized initialization improves the performance a lot.\n\n1. Missing some closely-relative works. I encourage the authors to have a detailed discussion of previous MoE INRs [1,2] and decomposition/partition-based INRs [3,4].\n2. Lacking some key comparison experiments with decomposition/partition-based INRs. The authors only compare their method with the baseline SoftPlus and SIREN (and their wider version). However, some related works [4] have also shown the INR based on pre-determined masks can also outperform the wider version of SIREN. I encourage the authors to experimentally compare your method with [4] to illustrate the necessity of learnable partition regions.\n3. A detailed ablation study on the hyper-parameters of the MoE INRs is missing, such as the layer of encoder, manager, and experts. Given a fixed number of parameters, how to allocate the parameters to the three modules remains unknown.\n\n[1] Zhao, Jianchen, et al. \"MoEC: Mixture of Experts Implicit Neural Compression.\" arXiv preprint arXiv:2312.01361 (2023).\n[2] Wang, Peihao, et al. \"Neural implicit dictionary learning via mixture-of-expert training.\" International Conference on Machine Learning. PMLR, 2022.\n[3] Rebain, Daniel, et al. \"Derf: Decomposed radiance fields.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n[4] Liu, Ke, et al. \"Partition speeds up learning implicit neural representations based on exponential-increase hypothesis.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\n\n1. Could you please discuss how to choose the number of experts giving a fixed number of parameters and iteration time? Is it better to have more experts with each one having fewer parameters or fewer experts with each one having more parameters?\n2. I wonder whether it is possible to apply your method to NeRF since there are no supervised signals for the 3D ground truth.\n3. When comparing the Vanilla MoE INR and your Neural Experts, have you kept their total parameters similar (smaller experts for your Neural Experts due to the extra parameters needed by the encoders)?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "I4bIAuzhXO",
"review_text": "This paper proposes a new architecture for implicit neural representations (INRs) based on the mixture-of-experts (MoE) architecture. This new architecture is differs from traditional MoE architectures in that now all the experts have a shared encoder and expert-specific decoders, while the manager also now has an encoder-decoder architecture, with the manager decoder taking as input both the manager encoder's representation as well as the experts' shared encoder representation. The authors also provide a pre-training method for the manager. The method is evaluated on image reconstruction, audio reconstruction, and 3D shape reconstruction by fitting signed distance functions (SDFs).\n\nThis paper proposes a novel MoE-based architecture for INRs together with a novel pre-training strategy for the MoE manager. \n\nThe empirical results are good and there is an ablation study on the major components. \n\nThe paper is well-written and easy to understand. \n\nMany details relating to reproducibility are provided in the supplemental material.\n\nOne of the major weaknesses of this paper is the experimental evaluation. The method does not compare against other methods that propose a new INR architecture (e.g. Gaussian activation function [1], WIRE [2], FINER [3]) or a standard MLP with positional encoding. The experimental evaluation is not also very robust, as only small datasets are used (Kodak, only 3 audio recordings, only 4 shapes) and more complicated tasks investigated by similar works are not considered (for example, WIRE [2] and FINER [3] both include evaluation on neural radiance fields). \n\nThe proposed MoE architecture also did not show improvements when used with softplus activation. \n\nThis paper also did not include other metrics normally used to evaluate the tasks, such as SSIM and LPIPS for 2D image fitting (e.g. FINER [3]). \n\nMinor point: inclulding the number of parameters in Table 1 may be helpful since comparisons between the number of parameters is discussed in the text.\n\nReferences\n1. Ramasinghe, Sameera, and Simon Lucey. \"Beyond periodicity: Towards a unifying framework for activations in coordinate-mlps.\" European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.\n2. Saragadam, Vishwanath, et al. \"Wire: Wavelet implicit neural representations.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\n3. Liu, Zhen, et al. \"FINER: Flexible spectral-bias tuning in Implicit NEural Representation by Variable-periodic Activation Functions.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\n1. Can this work be combined with traditional ReLU MLPs with positional encoding or MLPs with activation functions other than sine?\n2. How does this work compare to other novel INR architectures (e.g. WIRE, FINER, etc)?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "0NlZwfPKj5",
"review_text": "The paper presents a novel INR framework that leverages the Mixture of Experts (MoE) technique. The proposed strategy consists of an expert and a manager branch. Each branch has an encoder that processes the input coordinate and extracts an embedding. By processing the two encoder embeddings, the manager predicts the probability of which of the N experts should be used for extracting the signal. They show how the proposed INR framework achieves better reconstructions than SIREN on several modalities, such as audio, image, or 3D surfaces. They also propose a manager pre-training strategy, which is necessary to exploit all the experts effectively.\n\n-\tThe original idea might be conducive to new research in this direction.\n-\tThe paper is well-written and easy to follow. \n-\tSupervising experts with semantic losses and obtaining networks specialized in specific semantic areas of the input signal might unlock several applications for INRs and ease their interpretability.\n-\tThe proposed framework achieves good reconstruction performance.\n-\tThe ablations in Table 4 and Table 5 are very insightful.\n\nW1) The major weakness of the paper is the misalignment between the experimental results and the motivations of this research:\n\n a- In the introduction (L26-28), the authors correctly point out that, in traditional INRs, each coordinate needs to be processed by the whole network. Even though they claim that this problem can be solved by MoE INRs, in the proposed architecture, the input coordinate needs to be processed by the full manager and the encoder of the expert branch. The only saved computation is the one from the final expert, a much smaller network than the others. Thus, the saved computation looks minimal to standard SIREN (considering the same total weights). Moreover, no experiments regarding computational efficiency and the advantages of parallelized inference are necessary to motivate this claim. Maybe, instead of talking about the absolute efficiency of the proposed approach, it is better to show the better trade-offs in terms of efficiency and reconstruction quality than SIREN.\n\nb- In the introduction (L28-30), the authors claim that standard INRs extract features vastly different for close spatial coordinates (i.e., locality problem). I am unaware of studies that formally investigate this INR property. Thus, I suggest adding a reference work or validating it with experiments. Moreover, the authors claim MoE INRs can learn local piece-wise functions (L259 in the conclusion section). Thus, they do not suffer from the problem above. Yet, the experiments show something different. For instance, by looking at Figure 3c, different experts predict audio signals for close temporal coordinates. I can notice the same behavior in the last column of Figure 6, in which many distinct experts predict pixels in the upper part of the image.\n\nW2 ) The idea of using MoE resembles the idea of KiloNeRF [1]. In that case, the routing strategy is not learned and depends only on the input 3D coordinate, and each expert focuses on a pre-determined spatial region. I think the authors should add this reference, explaining the pros and cons of the two kinds of approaches.\n\nW3) In recent years, INR frameworks have been proposed to be faster and more efficient than MLP-only techniques such as SIREN. For instance, hybrid INRs such as InstantNGP [2] (hash grid + MLP) can be used as the base framework to speed up computation when learning INRs of images and surfaces or NeRFs. 
The paper should also include more recent competitors than SIREN.\n\n\n[1] Kilonerf: Speeding up neural radiance fields with thousands of tiny mlps. ICCV 2021.\n[2] Instant neural graphics primitives with a multiresolution hash encoding. ACM transactions on graphics (TOG) 2022.\n\nQ1) In Table 1, does the vanilla MoE baseline employ Softplus activations, while the final strategy uses Sine activations? In this case, the comparison is unfair and does not validate the superiority of their approach to the vanilla one.\n\nQ2) Can the authors also include the Chamfer Distance metric in Table 3?\n\n\n\nI like the paper's core idea. However, the author's response to my concerns will greatly influence my final rating."
},
{
"confidence": 3,
"rating": 6,
"review_id": "GIREWRcLxn",
"review_text": "This paper introduces a MoE architecture for INRs, enhancing scalability, local fitting, and parallelization. Traditional INRs use a single network, imposing global constraints, while this method learns several local expert functions, subdividing the domain. Key contributions include a novel manager architecture and initialization method for better convergence and domain subdivision without ground truth. It show improved performance and efficiency over traditional methods, across image, audio, and 3D surface reconstruction,\n\n- It demonstrates a neat architectural design along with a robust ablation study.\n- It consistently shows performance improvement across various tasks, and the performance with respect to the number of parameters is also superior.\n- It is interesting that random initialization, which includes no inductive bias in the manager pretraining process, outperforms initializations like SAM.\n\n- It requires more parameters compared to the vanilla model. While it is fair to compare it with the wider version of the vanilla model, it is unclear if the proposed model still performs well when it has the same number of parameters as the vanilla model (not wider version). Tab.3 alleviates this concern to some extent.\n\n- In addition to comparing with the vanilla model, it would be good to include a discussion on various INR methods that apply locality bias (e.g., spatial functa).\n\n- As shown in the convergence graphs in the appendix, it shows more unstable convergence compared to the baseline.\n\n- Please see the above weaknesses.\n- In Tab.1, what is the PSNR of Neural Experts SIREN with the same number of parameters as SIREN?"
}
] |
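The reviews of the Neural Experts paper above describe the architecture as a shared expert encoder with per-expert decoders, plus a manager whose decoder sees both its own encoding and the shared expert encoding before choosing an expert per coordinate. A minimal PyTorch-style sketch of that routing, with hypothetical layer sizes and plain ReLU activations (the paper uses sine/SIREN layers); this is an illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn

class NeuralExpertsSketch(nn.Module):
    def __init__(self, in_dim=2, hidden=64, out_dim=1, n_experts=4):
        super().__init__()
        self.expert_encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.expert_decoders = nn.ModuleList(
            [nn.Linear(hidden, out_dim) for _ in range(n_experts)]
        )
        self.manager_encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # The manager decoder conditions on its own features and the shared expert features.
        self.manager_decoder = nn.Linear(2 * hidden, n_experts)

    def forward(self, coords):                         # coords: (N, in_dim)
        h = self.expert_encoder(coords)                # shared expert features
        g = self.manager_encoder(coords)
        gate = self.manager_decoder(torch.cat([h, g], dim=-1)).softmax(dim=-1)
        outs = torch.stack([dec(h) for dec in self.expert_decoders], dim=-1)  # (N, out_dim, E)
        # Soft mixture; taking an argmax over the gate instead yields the piecewise partition.
        return (outs * gate.unsqueeze(1)).sum(dim=-1)  # (N, out_dim)

print(NeuralExpertsSketch()(torch.rand(8, 2)).shape)   # torch.Size([8, 1])
```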
wTIzpqX121 | Probabilistic Weather Forecasting with Hierarchical Graph Neural Networks | In recent years, machine learning has established itself as a powerful tool for high-resolution weather forecasting. While most current machine learning models focus on deterministic forecasts, accurately capturing the uncertainty in the chaotic weather system calls for probabilistic modeling. We propose a probabilistic weather forecasting model called Graph-EFM, combining a flexible latent-variable formulation with the successful graph-based forecasting framework. The use of a hierarchical graph construction allows for efficient sampling of spatially coherent forecasts. Requiring only a single forward pass per time step, Graph-EFM allows for fast generation of arbitrarily large ensembles. We experiment with the model on both global and limited area forecasting. Ensemble forecasts from Graph-EFM achieve equivalent or lower errors than comparable deterministic models, with the added benefit of accurately capturing forecast uncertainty. | https://openreview.net/pdf/08914a753f1c9eaf87d2102390cab1b1f7a9663e.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "14MxjFkCoU",
"review_text": "This work introduces a VAE variant of GraphCast for global medium-range weather forecasting and a VAE variant of a UNet (that is formulated as a GNN) for limited area modeling over Scandinavia. For this, they adapt GraphCast to have a similar hierarchical structure to UNets, and then treat the coarsest hierarchical layer (the bottleneck) as a latent variable representing the mean of isotropic Gaussians. The ensemble predictions from the model are similarly fast as a single deterministic prediction, achieved through batching. Their calibration can be good for some variables (e.g. global t2m for 10 day lead time has a spread/skill ratio of 0.99), while poorer for others (e.g. local wvint has a spread/skill ratio of 0.57 for 24h lead time).\n\n1. The proposed VAE extension to GraphCast is significantly faster than diffusion-based approaches (like the GenCast model).\n2. The work does not limit itself to just global weather forecasting, but also presents results for limited area modeling, which is the class of models used by many national weather services.\n3. The paper is reasonably well written, keeping a good amount of detail in the main paper, and presenting many additional details in the appendix.\n\nMajor points:\n1. Questionable baselines: I am unsure if the chosen baselines are very strong, let me name a few reasons for this:\n - Tab 1 presents performance for GraphCast, e.g. RMSE=387 for z500, 5 day leadtime. However, if I check the headline scores in the WeatherBench 2 (https://sites.research.google/weatherbench/) for GraphCast, i see RMSE=274 for z500, 5 day leadtime, which is significantly higher and beats all models presented in this study.\n - Both Tab 1 and Tab 2 do not include scores for the conventional weather models. I would expect Tab 1 to include IFS & IFS-ENS scores and Tab 2 to include MEPS scores.\n - Since this work introduces a probabilistic weather model, i would expect comparison with other recent works on probabilistic weather models, like the ones cited in this paper (e.g. GenCast).\n - Graph-FM has almost 6x the parameters compared to GraphCast* (Tab 5) - which quite possibly could be the major reason for its improved performance, and not the introduced architectural feature of hierarchical layers.\n2. Overlooked connection to UNets: The Graph-FM that was introduced for the LAM setting looks to me as equivalent to a UNet:\n - The input data comes on a regular grid with 236 x 268 pixels. Which is subsequently downsampled using 3x3 windows. Processing at each depth level is done with a locally connected layer (in other words: a local convolutional filter). A semantically simpler description of such a model would be a UNet with 3x3 pooling (learned in this case) and 3x3 conv filters at each stage. Possibly, implementing it as a UNet could also be computationally advantageous, making use of the highly optimized kernels for 2d convolutions and pooling operations, instead of GNN layers that rely on scatter sums.\n - UNets have been previously used for Weather forecasting: e.g. https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2018GL080704 & https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2020MS002203\n3. Proposed VAE implementation physically not meaningful? Your VAE is implemented with a latent variable at the coarsest level. This is supposed to capture epistemic uncertainty related to the forward model (and not due to initial state). 
However, one may argue for atmospheric models most model uncertainty comes from the subgrid-scale parametrizations and not from the coarse-scale representation of atmospheric dynamics. Hence, to me it seems far more intuitive to introduce the stochasticity at the finest level, representing the small scales. I assume you chose the hierarchical description mostly for computational reasons, but given a lack of physical basis, i would at least expect a more thorough investigation of potential errors introduced by this, e.g. is the ensemble variability too smooth?\n4. Missing reference to previously published work? A workshop paper at last years NeurIPS has introduced both the hierarchical GNN and the MEPS dataset https://arxiv.org/abs/2309.17370 , if I am not misstaken. I am not really sure about NeurIPS policy here, but even if this work is a direct extension of the previous work and the previous work is to be considered as non-archival, I still believe you should at least cite the workshop paper.\n\nMinor points:\n1. GraphCast + SWAG: This is a baseline with poor performance, that is somewhat arbitrarily picked from many possible approaches to obtain ensemble predictions from neural networks. I see two options here: Either you keep it, but also introduce many other such baselines, to make clear that you did not cherrypick a particularly weak one. Other approaches that should not be prohibitively expensive to run could e.g. be MC-Dropout or Laplace Approximation. Or, you simply drop it, as is, it does not add much to the paper.\n2. Introduction lacks motivation for LAM: This is an ML conference that you are submitting to. It would probably be good to briefly motivate why doing LAM is even necessary (i.e., why can't we just rely on global models instead)?\n3. Extreme Weather evaluation / case study: One key reason for ensemble prediction is capturing the tails, i.e. the extremes. You state in Appendix A that this is out-of-scope for the work. I would argue you are making your life too easy here. Since the presented models are likely not useful unless they display decent performance also for extreme weather, it would be important to evaluate just that. It may be enough for this paper to e.g. study a single extreme event as a case study.\n\nWhy does the original GraphCast paper not report the visual artifacts that you found for Graph-EFM (ms)? Could it be that your models have simply not been trained sufficiently or that there is a bug in your implementation?\n\nWhy do you use a fixed variance of the latent variable for the global predictions? It would be interesting to see this ablation."
},
{
"confidence": 5,
"rating": 10,
"review_id": "szSQI3M6qy",
"review_text": "This paper introduces a new method for predicting weather using advanced deep learning models. The approach, called Graph-EFM, improves accuracy and better handles uncertainties in weather forecasts. It uses a 1.5 degree version of ERA5 and making weather predictions more reliable and useful for real-world applications.\n\nThe paper's strengths include the innovative use of Graph-EFM for accurate probabilistic weather forecasting, detailed experiments on large datasets, and clear presentation of methods. Graph-EFM significantly enhances uncertainty estimation and forecast reliability adding value to both research and practical weather prediction applications.\n\nWhat happened if 0.25 degree ERA5 is used?\n\nI need to some figures during extreme events, e.g cyclones like Yaku.\n\nCan the model deal with higher resolution data?\n\nOr ERA6 when available?\n\nOr more localised higher resolution data?\n\nDo the results change a lot if it's only trained from 1980 onwards?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "G83KG9HWjI",
"review_text": "The authors propose a graph-based ensemble forecasting model (Graph-EFM) to provide weather prediction with a hirearchical GNN framework. They used a hierarchical mesh graph to handle the challenges of capturing processes unfolding over different spatial scales and modeling the uncertainty in the chaotic system. The Graph-EFM provides a probabilistic weather forecasting with the benefit of capturing forecast uncertainity. The experiment results show the effectiveness and advantages of Graph-EFM compared to other deterministic models.\n\nThe hierarchical mesh graph provides a reasonable idea to handle different spatial scales for weather forecasting, which could inspire other researchers to handle problems in different domains.\nThe spatial dependencies are considered and handled within GNN layers.\nUsing ensumble-based model could capture the uncetainty of weather system.\n\nIn Figure 3, it seems like the selected ensemble members vary a lot, and how close is your forecast to the ground truth. Possibly, explaining a little bit of the underlying meaning of the measures in table 1 & 2 in the paper.\n\n1. The authors could better explain the underlying meaning of meaures, RMSE, CRPS, and SpSkR, for the results of weather forecast. Basically, I would like to ask authors to show how close their Graph-EFM's forecast is to the ground truth weather."
},
{
"confidence": 5,
"rating": 6,
"review_id": "PuFl1Ux901",
"review_text": "The paper proposes Graph-EFM, a method that combines a hierarchical multi-scale graph neural network with a variational objective for probabilistic weather forecasting. The method performs on par with Graphcast on deterministic metrics with the extra benefit of uncertainty estimation.\n\n- The paper is well-written and easy to follow.\n- The paper tackles probabilistic weather forecasting, which is an important problem in the field.\n- The proposed method is intuitive and makes sense. Overall, generative modeling is a potential direction for probabilistic weather forecasting. People have used GANs and diffusion, so a latent variable model is a natural addition to the literature.\n- The performance looks promising, and it is more efficient than existing methods using diffusion.\n\n- The authors should replace Table 1 with a line graph figure instead, as it allows comparison across different variables and lead times.\n- Please see my questions below.\n\n- How important do you think the architecture is to the performance versus the objective function? The proposed architecture has an intuition similar to UNet, i.e., multi-scale features and the lowest layer can be used to parameterize the hidden variable.\n- Diffusion models are considered the best family of models for generative modeling, surpassing GANs and latent variable models for other fields such as computer vision. What is the reason to believe latent variable models are the way to go for probabilistic weather forecasting?\n- Why does the paper compare with Graphcast+SWAG but not the perturbed version of Graphcast and Gencast?\n- How does the performance vary w.r.t. the number of ensemble samples? Given that sampling from a latent variable is fast, have the authors tried using more ensemble members?\n- Is there an explanation why Graphcast is better than Graph-FM, but Graph-EFM is better than Graph-EFM (ms)?\n- Why in LAM, the Graphcast architecture is doing better than the proposed architecture?"
}
] |
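Several reviews of the Graph-EFM paper above discuss calibration through the spread/skill ratio (e.g., 0.99 for global t2m, 0.57 for local wvint) and CRPS. A minimal sketch of one common spread/skill convention (ensemble spread divided by ensemble-mean RMSE); the paper may use a corrected variant, and the toy data below is synthetic:

```python
import numpy as np

def spread_skill_ratio(ensemble, truth):
    """Ensemble spread over ensemble-mean RMSE; values near 1 suggest the spread
    matches the typical forecast error.

    ensemble: (M, N) array of M members at N grid points; truth: (N,) verifying field.
    """
    skill = np.sqrt(np.mean((ensemble.mean(axis=0) - truth) ** 2))     # ensemble-mean RMSE
    spread = np.sqrt(np.mean(ensemble.var(axis=0, ddof=1)))            # mean member spread
    return spread / skill

rng = np.random.default_rng(0)
forecast_mean = rng.normal(size=10_000)
truth = forecast_mean + rng.normal(size=10_000)            # truth drawn from the forecast pdf
members = forecast_mean + rng.normal(size=(16, 10_000))    # members drawn from the same pdf
print(round(spread_skill_ratio(members, truth), 2))        # roughly 1 for this calibrated toy case
```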
wT6GHk5ShC | Enhancing In-Context Learning Performance with just SVD-Based Weight Pruning: A Theoretical Perspective | Pre-trained large language models (LLMs) based on Transformer have demonstrated striking in-context learning (ICL) abilities. With a few demonstration input-label pairs, they can predict the label for an unseen input without any parameter updates. In this paper, we show an exciting phenomenon that SVD-based weight pruning can enhance ICL performance, and more surprising, pruning weights in deep layers often results in more stable performance improvements than in shallow layers. However, the underlying mechanism of those findings still remains an open question. To reveal those findings, we conduct an in-depth theoretical analysis by presenting the implicit gradient descent (GD) trajectories of ICL and giving the mutual information based generalization bounds of ICL via full implicit GD trajectories. This helps us reasonably explain the surprising experimental findings. Besides, based on all our experimental and theoretical insights, we intuitively propose a simple, model-compression and derivative-free algorithm for downstream tasks in enhancing ICL inference. Experiments on benchmark datasets and open source LLMs display the method effectiveness. | https://openreview.net/pdf/2b62bd7805c4971355586b1fc5697d6266237e68.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "KFwFKguftY",
"review_text": "The authors provide a theoretical perspective on the stability of in context learning via implicit gradient descent trajectories. Ultimately, the analysis suggests that high condition numbers of the weight matrices belonging to layers with a high index can be pruned in order to achieve a model which performs better on ICL tasks.\n\n- In context learning is important, and something which has not been studied as deeply as other topics of ML due to the recent rise of transformers and ICL in general.\n- The method intuitively makes sense and is something which can be conditionally tuned after training based on specific tasks if a validation set is available.\n\n- It would be good to define deep and shallow, as these are subjective terms depending on the reference frame.\n- Figure 1 cpation says: \"We operate on the whole of MLP or ATTN.\" What does this mean?\n- If as figure 1 states, you can clip 99.5% of the original weights, what happens if you just drop that layer entirely? Recent work has shown that the deeper layers can be completely dropped without much effect. [1]\n - I cannot see much benefit gained from pruning part of the weights with SVD when it seems that the in nearly all cases, the benefit can be had by dropping the layer entirely.\n \n- Is the mask on L138 supposed to represent a causal mask? If so, I do not think the notation is correct, as the Identity matrix would only have $N$ binary values which is much less than is needed for a causal mask.\n- How can equation 1 and 2 use the same mask?\n- Example 1 appears to be incorrect:\n - There is no parentheses around $W_{V_r}^k + \\delta_V h_i^{k-1}$ in the first line.\n - The triangle inequality seems to say that line 2 $\\geq$ line 1\n - Given the above, I do not see what conclusion can be drawn from this equation.\n - Have I missed something here?\n\n- I don't believe the clipping process was adequately explained. Once the SVD operation is done, do you clip starting from the largest singular value? or starting from the smallest?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "2qNcqtotph",
"review_text": "This paper investigates the effect of singular value decomposition (SVD)-based weight pruning on the in-context learning (ICL) performance of large language models. \n\nThe Authors show that SVD-based pruning can enhance ICL performance, with deeper layers showing more stable improvements. \nThey provide theoretical analysis to explain these findings, presenting implicit gradient descent trajectories for ICL and deriving generalization bounds. \n\nBased on their insights, they propose a simple algorithm for enhancing ICL inference in downstream tasks.\n\n- The Authors provide a theoretical analysis to explain their empirical findings, including the derivation of implicit gradient descent trajectories and generalization bounds for ICL.\n\n- Furthermore, they propose a simple, derivative-free algorithm for enhancing ICL performance in downstream tasks, demonstrating the practical value of their theoretical insights.\n\n- The theoretical analysis primarily focuses on linear attention, which may not fully capture the complexities of standard Softmax attention used in most transformer models\n\n- The proposed algorithm is derivative-free, but the search for optimal clipping rates may still be computationally expensive for very large models or datasets\n\n- There is a substantial lack of comparison with other pruning methods: the study focuses on SVD-based pruning but doesn't compare it with other pruning techniques, which could provide context for the method's effectiveness\n\n- Poor language, frequent typos, and grammatical errors are significant issues in this paper. This does not help readability, and would likely be a barrier to publication in its current form.\n\n- An essential part of the paper, which is the discussion of related works is not part of the main text. Furthermore, this discussion is prone to criticism. For example, quoting the seminal paper by Frankle and Carbin as an example of low-rank properties of neural networks is clearly misleading. I think that this discussion should be an essential part of the main text, and should also be substantially revised in order to avoid conceptual confusions.\n\nI suggest the Authors to address 1) the concerns regarding a better framing of the work in the current literature. In particular, there is a large body of evidence that is growing on the resilience of LLMs to pruning (see for example \"The Unreasonable Ineffectiveness of the Deeper Layers\", Gromov et al, https://arxiv.org/pdf/2403.17887, and references therein), and 2) the quality of writings by proofreading it together with a native speaker."
},
{
"confidence": 3,
"rating": 5,
"review_id": "EkIUZaxEim",
"review_text": "This paper demonstrates that (1) SVD-based weight pruning can sometimes achieve better in-context learning performance, and (2) pruning weights in deeper layers often results in more stable outcomes compared to shallow layers. The authors explain their findings through theoretical analysis and propose an intuitive matrix condition number-based weight pruning algorithm to achieve both stable and improved performance.\n\nThis work conducts an in-depth analysis to explain the \"stability\" of transformer weight pruning across different layers. The framework is interesting and validated through experiments. Moreover, the theoretical analysis can be applied to design new algorithms like algorithm 1 in this paper .\n\nDespite adopting various simplifications (such as using a linear attention transformer without MLP and layer normalization, treating each in-context example as a single vector, implementing attention masks for query tokens, and using meta-gradients for in-context learning) in their theoretical analysis, the results are still limited. They only explain why SVD-based weight pruning can achieve \"stable\" performance, leaving the more intriguing question of why transformers can achieve \"better\" performance with pruning unclear. Additionally, even with detailed hyperparameter tuning, the effectiveness of Algorithm 1 remains uncertain. Further details are provided in the questions section.\n\n[1] How should Theorem 2 be interpreted? It seems only provide a weak upper bound for in-context learning stableness. Can this also be applied to empirical risk, such as $L_{H_S}$ and $L_\\mu$?\n\n
[2] Theorem 2 gives the upper bound for expected generalization error. If we fix N in the constructed transformer and reduce the number of in-context examples to $N'$ in the input sequence, then we can find that while the factor $R^2/N$ remains unchanged in theorem 2 , $\\Delta W_t$ will change from $(\\sum_{i=1}^N)…$ to $(\\sum_{i=1}^{N'})…, where $ ($N' < N$). Based on the analysis across different layers, could this mean that fewer context examples are more robust for SVD weight pruning?\n\n[3] Note that in fig-3, large matrix condition numbers can exist in some modules of shallow layers, such as the attention key (K) in GPT-J-6B . What would be the effect of pruning only a single module in a shallow layer (e.g., the key projection matrix) rather than pruning the entire attention module (including Q, K, and V)?\n\n
[4] In C.5, it's noted that the optimal clipping rate is sometimes very small and varies across datasets. What would happen if we apply the same clipping rate (e.g., 0.95) as used in SST-2 to other datasets?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "rro3C7Vk3F",
"review_text": "This paper discusses the phenomenon: SVD-based weight pruning can increase the in-context learning abilities of transformer based LLMs. In this paper, the authors conduct theorectical analysis by presenting the implicit gradient descent trajectories of ICL and providing the generation bounds visa full implicit gradient descent trajectories. This paper also provide a simple yet effective algorithm to clip the LLM by SVD to enhance ICL inference.\n\nFirst, this paper has a clear writing and is easy to follow. \n\nIt provides a detailed theoretical analysis on why SVD based weight pruning will improve ICL performance by leveraging the implicit gradient descent trajectories. It also provides the generalization bounds of ICL, in Theorem 2, it can be inferred that the noise level and the norm of of gradient contribute to the error bound. It provides the theoretical insight of SVD based method.\n\nThe authors provides a simple algorithm to leverage the discovered phenomenon to improve ICL performance of LLM in a gradient-freee way. The ratio between $\\sigma_{max} $ and $\\sigma_{min}$ is a good choice of heuristic conditional number.\n\n1. More details of algorithms is not shared. e.g. the range / number of clipping rate candidates set. \n2. In experiments result of C.5, the optimal $\\xi$ varies a lot across different tasks and different modules. However, this phenomenon is not touched in the theoretical part.\n\n1. Matrix condition number is an option for the indicator. But could there be more options, such as compute the decreasing rate of eigenvalues? Because when p=2, conditional number only leverages two values among all the eigenvalues.\n2. Could authors provide further more clarification why optimal $\\xi$ varies, and is there a way to explain this phenomenon under current theoretical framework provided in this paper?"
}
] |
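The reviews of the SVD-pruning paper above ask how the clipping works (starting from the largest or the smallest singular value) and discuss the heuristic condition number sigma_max/sigma_min used to decide which modules to prune. A minimal sketch under the assumption that clipping removes the smallest singular values; the paper's exact convention and its clipping-rate search are not reproduced here:

```python
import torch

def condition_number(W, eps=1e-12):
    """Heuristic p=2 condition number: largest over smallest singular value."""
    s = torch.linalg.svdvals(W)
    return (s.max() / (s.min() + eps)).item()

def svd_clip(W, clip_rate):
    """Low-rank reconstruction keeping the leading (1 - clip_rate) fraction of
    singular values and discarding the trailing ones (assumed convention)."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    k = max(1, int(round(S.numel() * (1.0 - clip_rate))))
    return U[:, :k] @ torch.diag(S[:k]) @ Vh[:k, :]

W = torch.randn(512, 512)
print(condition_number(W), svd_clip(W, clip_rate=0.95).shape)   # rank-26 reconstruction here
```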
wT5AgMVkaJ | Aligning Vision Models with Human Aesthetics in Retrieval: Benchmarks and Algorithms | Modern vision models are trained on very large noisy datasets. While these models acquire strong capabilities, they may not follow the user's intent to output the desired results in certain aspects, e.g., visual aesthetic, preferred style, and responsibility. In this paper, we target the realm of visual aesthetics and aim to align vision models with human aesthetic standards in a retrieval system. Advanced retrieval systems usually adopt a cascade of aesthetic models as re-rankers or filters, which are limited to low-level features like saturation and perform poorly when stylistic, cultural or knowledge contexts are involved. We find that utilizing the reasoning ability of large language models (LLMs) to rephrase the search query and extend the aesthetic expectations can make up for this shortcoming. Based on the above findings, we propose a preference-based reinforcement learning method that fine-tunes the vision models to distill the knowledge from both LLMs reasoning and the aesthetic models to better align the vision models with human aesthetics. Meanwhile, with rare benchmarks designed for evaluating retrieval systems, we leverage large multi-modality model (LMM) to evaluate the aesthetic performance with their strong abilities. As aesthetic assessment is one of the most subjective tasks, to validate the robustness of LMM, we further propose a novel dataset named HPIR to benchmark the alignment with human aesthetics. Experiments demonstrate that our method significantly enhances the aesthetic behaviors of the vision models, under several metrics. We believe the proposed algorithm can be a general practice for aligning vision models with human values. | https://openreview.net/pdf/ea5f1333d5f234e8c6e2fe92907a8aba4c99a5cb.pdf | [
{
"confidence": 5,
"rating": 7,
"review_id": "DezCTwzq1B",
"review_text": "This paper studies the problem of aligning vision models with human aesthetic standards in a retrieval system. There are three key parts in the proposed model including LLM rephrasing, re-ranking, and RL fine-tuning. Two novel benchmarks are also introduced to integrate aesthetic quality into evaluation metrics. Experimental results demonstrate the effectiveness of the proposed method and the benchmarks.\n\n1. This paper addresses the aesthetic quality issue in image retrieval systems and introduces a reinforcement learning fine-tuning strategy that enables the retrieval model to directly retrieve images based on both semantics and aesthetic quality, eliminating the need for multi-stage filtering. This approach holds significant value.\n2. The paper introduces two evaluation benchmarks, addressing the limitation of current image retrieval benchmarks that fail to evaluate aesthetic quality.\n3. The experiments are comprehensive, validating the importance of each component in the proposed method.\n\n1. The methodological process described in the article is somewhat cumbersome, with Figure 2 merely outlining key processes and concepts in a rudimentary manner, thereby increasing the difficulty for readers to comprehend.\n2. The authors appear to conflate \"no-reference image quality assessment\" with \"image aesthetic quality assessment.\" While these tasks are indeed closely related, they are distinct. MANIQA, for instance, should not be regarded as an aesthetic quality assessment model, and its paper does not evaluate the model's performance on aesthetic datasets.\n3. There remain some details in the article that are inadequately explained. It is peculiar that in Appendix Table 7, the same stride seemingly yields a different number of images.\n4. The manuscript contains typos. For example, the indicator function symbol in Equation 11 is clearly garbled.\n\n1. In line 62, it is said that the open-source IAA datasets cannot be used for aesthetic retrieval evaluation. Can you give a further explanation?\n2. In the first step of data preprocessing, the authors use the concept of \"topic\". Can you give a further explanation?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "AmWSDQsDC0",
"review_text": "The paper looks into the alignment task for vision and language models within retrieval models where properties such as visual aesthetic comes to play. To achieve this, the paper collects some data to design a metric suitable for taking into account human aesthetic evaluation. And employs an RL-based technique to exploit the human opinion for better aligning the retrieved images with human aesthetic preferences.\n\n* It is well-written paper\n* The concept of aligning vision with aesthetic preferences is interesting and useful in some applications.\n* The experiments are well-designed and quite convincing. \n* It is interesting that LLM rephrasing could improve the quality of results\n\n* The proposed metric could be elaborated better and maybe explained how the study ensured the metric is not designed under influence of the model.\n\n* The work has been focused on visual aesthetics, given the LLM rephrasing results, it may be beneficial to look into the sophistication of language parts and how that could correlate with the aesthetic of the image. Could that motivate higher level of language sophistication is also correlated with higher visual aesthetics?\n* Could you elaborate how the RL-based approach could scale?"
},
{
"confidence": 2,
"rating": 5,
"review_id": "AKXp4WnbsR",
"review_text": "This work aims to align vision models with human aesthetic standards in a retrieval system. To do this, the authors propose a preference-based reinforcement learning method that fine-tunes the vision models to distill the knowledge from both LLMs reasoning and the aesthetic models to better align the vision models with human aesthetics. The authors further propose a novel dataset named HPIR to benchmark the alignment with human aesthetics.\n\n1.\tThe idea of aligning vision models with human aesthetics in retrieval is interesting. This work has potential applications in various real-life applications. \n2.\tThe authors’ motivation of utilizing the reasoning ability of large language models (LLMs) to rephrase the search query and extend the aesthetic expectations is insightful. \n3.\tThe paper is well-written and informative.\n4.\tThe proposed dataset HPIR can be used by fellow researchers in the related fields.\n\nI feel it can be further improved in the following ways. \n\n1.\tFor benchmarking human preferences, it might be better to record down the human variance in their annotations. I understand the authors used multiple annotations to ensure robustness, but since aesthetics is a subjective concept, human variance itself tells something.\n2.\tFollowing point 1, I feel the work can be made more solid if it includes some human evaluation studies on the experimental results. For example, in Fig. 5, it does not seem so obvious to me on the respective enhancement with finetuning. \n\nWithout the above two points, I feel the paper has somewhat overclaimed the \"alighing vision models with human aesthetics\".\n\n1.\tHave the authors considered human variance in aesthetics perception?\n2.\tAre the objective metrics enough for results evaluation? Have the authors considered using human studies to evaluate the results?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "VgBvqYwhOh",
"review_text": "This paper aligns the vision models with human values by leveraging LLM for query rephrasing and introducing preference-based reinforcement learning. The paper also presents a novel dataset named HPIR to benchmark the alignment with human aesthetics.\n\nThis paper introduces a novel approach to align visual models with human aesthetics, combining LLM rewriting to enhance query understanding and using preference-based reinforcement learning to fine-tune the model. The paper is comprehensive in experiments and introduces the HPIR dataset for benchmarking. The paper is well-structured and the methods are clearly explained. Key concepts are well defined and the use of diagrams helps to effectively illustrate the results. And this paper improves the aesthetic quality of results in image retrieval by aligning visual models with human preferences. The proposed method and dataset provide valuable ideas for future research in this area.\n\n[W1] This paper lacks a detailed user study to validate the actual effectiveness of the proposed method. Including a user study with different participants to evaluate the subjective improvement of aesthetic alignment could provide stronger evidence for the actual effectiveness of the method.\n[W2] Placing the related work in Section 6 makes it difficult for readers to have a clear understanding of the problem domain and existing research results before reading the specific methods and experiments, which is not conducive to the coherence of the paper structure.\n\nComputational Cost: Can you elaborate on the computational costs and resource requirements of the reinforcement learning-based fine-tuning process? Are there any optimizations that could reduce the computational burden?\nUser Study: Have you conducted any user studies to validate the improvements in aesthetic alignment from a user perspective? If not, do you plan to include such studies in future work?"
}
] |
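The abstract and reviews of the aesthetics-retrieval paper above refer to preference-based fine-tuning and a pairwise benchmark (HPIR) in which one of two results is preferred by annotators. A minimal sketch of a generic Bradley-Terry-style pairwise preference loss over retrieval scores; this is an illustration, not necessarily the exact objective used by the paper, and the scores below are made up:

```python
import torch
import torch.nn.functional as F

def pairwise_preference_loss(score_preferred, score_rejected):
    """Push the preferred item's retrieval score above the rejected one."""
    return -F.logsigmoid(score_preferred - score_rejected).mean()

# Toy usage: scores could be query-image similarities from a CLIP-like retrieval model.
preferred = torch.tensor([0.62, 0.55, 0.71])
rejected = torch.tensor([0.48, 0.57, 0.60])
print(pairwise_preference_loss(preferred, rejected).item())
```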
wT2TIfHKp8 | Taming the Long Tail in Human Mobility Prediction | With the popularity of location-based services, human mobility prediction plays a key role in enhancing personalized navigation, optimizing recommendation systems, and facilitating urban mobility and planning. This involves predicting a user's next POI (point-of-interest) visit using their past visit history. However, the uneven distribution of visitations over time and space, namely the long-tail problem in spatial distribution, makes it difficult for AI models to predict those POIs that are less visited by humans. In light of this issue, we propose the $\underline{\bf{Lo}}$ng-$\underline{\bf{T}}$ail Adjusted $\underline{\bf{Next}}$ POI Prediction (LoTNext) framework for mobility prediction, combining a Long-Tailed Graph Adjustment module to reduce the impact of the long-tailed nodes in the user-POI interaction graph and a novel Long-Tailed Loss Adjustment module to adjust loss by logit score and sample weight adjustment strategy. Also, we employ the auxiliary prediction task to enhance generalization and accuracy. Our experiments with two real-world trajectory datasets demonstrate that LoTNext significantly surpasses existing state-of-the-art works. | https://openreview.net/pdf/948e70fa886488a4436315922b773598b84b073d.pdf | [
{
"confidence": 5,
"rating": 7,
"review_id": "lklW8T0Fw4",
"review_text": "This paper addresses the challenge of predicting less frequently visited points-of-interest (POIs) in human mobility data, a problem known as the long-tail issue in spatial distribution. The authors introduce a new framework called Long-Tailed Adjusted POI Prediction (LoTNext), which includes two main components: long-tailed graph adjustment module and long-tailed loss adjustment module. Additionally, the framework employs an auxiliary prediction task to enhance the model's generalization and overall accuracy. The effectiveness of LoTNext is demonstrated through experiments on two real-world trajectory datasets, where it significantly outperforms existing state-of-the-art methods in human mobility prediction.\n\n1. The code has been provided, which makes the reproducibility of this paper good.\n2. The paper is generally well-writern and easy to follow.\n3. The proposed method is motivation-grounded.\n\n1. The presentation quality of this paper can be further enhanced.\n2. The authors are encouraged to conduct experiments on more datasets and provide more detailed analysis.\n3. This paper can supplement more theoretical analysis to guarantee the proposed method's effectiveness.\n\n1. I am curious if the structure of preliminary model structure can bring significant influence to the final results.\n2. According to my experiments, the detailed value setting of $\\lambda_1$, $\\lambda_2$, and $\\lambda_3$ in Eq. 16 can lead to the obvious variation of the ultimate model performance. However, the hyperparameter sensitivity experiments on this point is missing. More discussions and empirical results regarding this are welcomed."
},
{
"confidence": 3,
"rating": 6,
"review_id": "kvPadEboia",
"review_text": "The paper presents the Long-Tail Adjusted Next POI Prediction (LoTNext) framework to address the long-tail problem in next POI prediction. This problem refers to the uneven spatial and temporal distribution of POI visits, making it challenging for prediction models to predict less frequently visited POIs. LoTNext combines a Long-Tailed Graph Adjustment module to reduce the noise and impact of long-tailed nodes in the user-POI interaction graph and a Long-Tailed Loss Adjustment module to balance the loss between head and tail POIs. Additionally, an auxiliary prediction task is employed to enhance generalization and accuracy. The proposed method was evaluated on two real-world trajectory datasets, Gowalla and Foursquare, where it significantly outperformed existing methods.\n\n- LoTNext introduces a unique combination of graph adjustment and loss adjustment modules to tackle the long-tail problem, which is a significant contribution to the field of human mobility prediction.\n- The framework is evaluated on two real-world datasets and compared with ten existing methods, demonstrating superior performance across multiple metrics.\n- The paper provides a thorough explanation of the methodology, including the embedding generation, transformer encoder, spatial contextual attention layer, and the overall optimization process, making it reproducible and transparent.\n\n- The proposed model is complex and involves multiple components and adjustments, but it is not clear how computationally expensive it would be to make predictions in services and elsewhere.\n- The model performed well on the dataset used, but it is unclear under what conditions the proposed method will perform well, such as visit intervals and frequency of visits.\n\nHow does the complexity of LoTNext affect its scalability and real-time performance in practical applications? Are there any simplifications or optimizations that can be applied without significantly compromising performance?"
},
{
"confidence": 5,
"rating": 6,
"review_id": "LoALdN9Wdj",
"review_text": "This paper introduces the LoTNext framework, which is designed to improve the prediction of human mobility patterns, specifically addressing the challenge of long-tail distribution in POI visitations. The authors propose a novel approach that includes a Long-Tailed Graph Adjustment module and a Long-Tailed Loss Adjustment module, along with an auxiliary prediction task, to enhance the model's ability to predict less frequently visited POIs. The paper demonstrates the effectiveness of LoTNext through comprehensive experiments on two real-world datasets, showing significant improvements over existing state-of-the-art methods.\n\nI like the research gap proposed by this paper. This is a worthwhile issue to study.\n\n(1) The evaluation could be expanded to include a broader range of metrics to further validate the generalizability of the LoTNext framework.\n(2) It's better to have more explainability related experiments.\n(3) A more detailed literature review is needed (at least in the appendix) so that the novelty of the method could be better evaluated. \n(4) The comparison methods used are somewhat outdated. Why didn't you use the latest methods, such as TPG (https://arxiv.org/abs/2304.04151) or LLM-Move (https://arxiv.org/pdf/2404.01855), for comparison?\n\n(1) Why does cutting off the long tail in the user-POI graph for noise reduction, followed by using loss to reintroduce the long tail, theoretically improve long tail prediction?\n\n(2) In Figure 4, how should we interpret \"four least frequently occurring pois\"? Does it refer to only the four POIs with the lowest frequencies? I guess many POIs only appear once."
},
{
"confidence": 4,
"rating": 5,
"review_id": "UurjdhAYLf",
"review_text": "This study proposes the Long-Tail Adjusted Next Point-of-Interest Prediction (LoTNext) framework. By combining a Long-Tailed Graph Adjustment module and a Long-Tailed Loss Adjustment module, it reduces the impact of long-tailed nodes in the user-POI interaction graph and adjusts loss through logit score and sample weight adjustment strategies. Experimental results show that LoTNext outperforms several existing methods on two real-world datasets.\n\n1. The structure and organization of this paper are well-designed, and the writing is clear and easy to comprehend.\n2. This paper investigates the long-tail problem by proposing a general framework for next POI recommendation, filling the gap in addressing the long-tail issue in POI recommendation. This work is meaningful and valuable.\n3. To enhance the readability of the paper, the authors provide detailed results analysis, parameter settings, and the motivation behind the design of each module in the appendix.\n\n1. In the related work section, the authors review common methods for addressing the long-tail problem in recommendation systems. Since this paper focuses on addressing the long-tail problem, adding several baselines that tackle the long-tail issue in recommendation systems (e,g,, [1]) would better demonstrate the effectiveness of the proposed method.\n2. The novelty of this paper is not very strong. The long tail effect of check-in data, such as the POI frequency distributions, has been studied before. \n3. Additional comparative analyses should be included to illustrate the shortcomings of baselines in handling the long-tail issue. For instance, comparing the proposed model's performance with all baselines (not just Graph-Flashback) on long-tail POIs would better demonstrate its effectiveness in addressing the long-tail problem.\n4. The experimental results are not convincing enough, as the compared methods are not the SOTA method. More recent baselines should be compared (e.g., [2-4]).\n\n[1] Meta graph learning for long-tail recommendation, SIGKDD, 2023.\n[2] EEDN: Enhanced Encoder-Decoder Network with Local and Global Context Learning for POI Recommendation, SIGIR-23\n[3] Adaptive Graph Representation Learning for Next POI Recommendation, SIGIR-23\n[4] Spatio-Temporal Hypergraph Learning for Next POI Recommendation, SIGIR-23\n\n1. How do the authors balance the performance between head POIs and long-tail POIs? In other words, how do they enhance the performance of long-tail POIs without compromising the performance of head POIs? As shown in Figure 3(c), the proportion of predicted long-tail POIs is relatively high. Does this affect the prediction of head POIs?\n2. In the experimental section, it would be beneficial to show the performance of the proposed method and the baselines on both head POIs and long tail POIs to enhance the persuasiveness of the conclusions."
}
] |
wT2KhEb97a | Iterative Methods via Locally Evolving Set Process | Given the damping factor $\alpha$ and precision tolerance $\epsilon$, \citet{andersen2006local} introduced Approximate Personalized PageRank (APPR), the \textit{de facto local method} for approximating the PPR vector, with runtime bounded by $\Theta(1/(\alpha\epsilon))$ independent of the graph size. Recently, Fountoulakis \& Yang asked whether faster local algorithms could be developed using $\tilde{\mathcal{O}}(1/(\sqrt{\alpha}\epsilon))$ operations. By noticing that APPR is a local variant of Gauss-Seidel, this paper explores the question of *whether standard iterative solvers can be effectively localized*. We propose to use the *locally evolving set process*, a novel framework to characterize the algorithm locality, and demonstrate that many standard solvers can be effectively localized. Let $\overline{\operatorname{vol}}{ (\mathcal S_t)}$ and $\overline{\gamma_t}$ be the running average of volume and the residual ratio of active nodes $\textstyle \mathcal{S_t}$ during the process. We show $\overline{\operatorname{vol}}{ (\mathcal S_t)}/\overline{\gamma_t} \leq 1/\epsilon$ and prove APPR admits a new runtime bound $\tilde{\mathcal{O}}(\overline{\operatorname{vol}}(\mathcal S_t)/(\alpha\overline{\gamma_t}))$ mirroring the actual performance. Furthermore, when the geometric mean of residual reduction is $\Theta(\sqrt{\alpha})$, then there exists $c \in (0,2)$ such that the local Chebyshev method has runtime $\tilde{\mathcal{O}}(\overline{\operatorname{vol}}(\mathcal{S_t})/(\sqrt{\alpha}(2-c)))$ without the monotonicity assumption. Numerical results confirm the efficiency of this novel framework and show up to a hundredfold speedup over corresponding standard solvers on real-world graphs. | https://openreview.net/pdf/018805bdb5e7dfb1133288f180c5012bb6b6e388.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "Q5X1JcsEJ3",
"review_text": "This paper considers the study of local algorithms for graph clustering which is an important problem in the field of graph data analysis. In particular this paper is considers the task of computing Personalized Page Rank (PPR) vectors for a given graph. In this problem the algorithm is given a graph in the form of its adjacency and degree matrices, the goal is to approximate the Personalized Page Rank vector for a given starting vertex and dampening factor $\\alpha$ up to precision $\\epsilon$ without accessing the entire graph. The classical algorithm of Andersen, Chung and Lang runs in time $O(1/\\alpha \\epsilon)$, which independent of the graph size. The central question posed by subsequent works is whether the dependence on $\\alpha$ can be improved to $1/\\sqrt{\\alpha}$. The main contribution of the paper is to propose a new algorithmic framework based on the locally evolving set process. Under this framework they are able to implement existing algorithms such as Andersen et al.'s APPR algorithm as well as localized implementation of standard gradient descent. They are also able to develop localized versions of chebyshev and heavy ball methods that do achieve the $1/\\sqrt{\\alpha}$ dependence for some fixed constant value of $\\epsilon$. Finally they show that on several large scale graphs, their new localized chebyshev and heavy ball methods do outperform APPR and related methods empirically.\n\nThe main strengths of this paper are to develop a new algorithmic framework that can not only encompass existing algorithms but lead to the development of better ones that overcome previously known limitations for designing local graph clustering algorithms. They also back their theoretical analysis with the practical implementation of their method which is also shown to be superior to previous algorithms.\n\nOne weakness is that the paper is only able to obtain a quadratic improvement in the dependence on the parameter $\\alpha$, obtained by the local implementation of the Chebyshev and Heavy-ball method, only for a value of $\\epsilon$ and not for all.\n\nOne question is that what is the core reason for not obtaining convergence result for accelerated methods for all $\\epsilon>0$."
},
{
"confidence": 4,
"rating": 9,
"review_id": "dbM68Snnqf",
"review_text": "This paper uses the evolving set procedure to give a local PageRank algorithm whose dependence on \\alpha (the reset probability) is \\sqrt{\\alpha}.\n\nIt proposes accelerated local iterative methods with coefficients given by Chebyshev iteration. The convergence of this algorithm in both graph theoretic and general sparse linear systems settings are analyzed in detail. Discussions of the relations between this method and other local iterative algorithms are also given in detail.\n\nThe method was implemented and tested on a range of graphs, mostly coming from social networks. This includes two large ones with edges in the billions. On moderate ranges of \\alpha (reset probability), the experiments show significant speedups (factor of about 3) and convergences (factor of 10) on most graphs.\n\nLocal algorithms are widely used in graph analytics. The question studied is natural, and has been proposed before.\n\nThe method is theoretically well-founded, and has significant technical depth.\n\nThe experiments are thorough and well documents, and clearly demonstrate the advantages of this method in multiple parameter regimes.\n\nThe gains only kick in at a relatively large number of steps: it's not clear to me that these are the parameter regimes in which local algorithms actually get used.\n\nIdeally for the empirical works I'd also like to see comparisons of downstream tasks and effects on overall accuracies (e.g. F-1 score), but the paper itself has already covered a lot of ground.\n\nIs the dependence on \\epsilon optimal? Aka. have methods with \\sqrt{\\eps} (or even \\log(1 / \\eps)) dependences been ruled out?"
},
{
"confidence": 2,
"rating": 7,
"review_id": "HQkYPlroli",
"review_text": "This paper considers the approximate personalized page rank. Classical results for this problem have a runtime that is linear in $1/\\alpha\\epsilon$ where $\\alpha$ is the damping factor and $\\epsilon$ is the error parameter. The authors show that APPR is simply a local variant of Gauss-Seidel Successive Overrelaxation. Using this connection, the authors derive new run time bounds for APPR and also propose a new algorithm based on Gradient Descent. The execution time for both these are, in the worst-case, identical to the previous bounds. However, they are more sensitive to the state of execution of the algorithms (depend on the active nodes) and seem to mirror the actual performance of these algorithms. Also, under certain assumptions, they improve the worst-case execution time.\n\nThe paper addresses an important problem, provides deeper insights into an existing algorithm, provides a new algorithm and also reanalyzes the algorithm in a more fine-grained way. All of this is done via connection to GSSOR which seems to be new.\n\nI find the result quite interesting. However, I am not very familiar with recent work on personalized page rank. For this reason, I recommend accepting but with a low confidence.\n\nNA\n\nCan you explain how weak/strong are the assumptions that you make to achieve the improvement for the local Chebyshev method? I couldn't quite gauge the usefulness of this result."
}
] |
wSqpNeMVLU | A Theoretical Perspective for Speculative Decoding Algorithm | Transformer-based autoregressive sampling has been the major bottleneck for slowing down large language model inferences. One effective way to accelerate inference is Speculative Decoding, which employs a small model to sample a sequence of draft tokens and a large model to validate. Given its empirical effectiveness, the theoretical understanding of Speculative Decoding is falling behind. This paper tackles this gap by conceptualizing the decoding problem via markov chain abstraction and studying the key properties, output quality and inference acceleration, from a theoretical perspective. Our analysis covers the theoretical limits of speculative decoding, batch algorithms, and output quality-inference acceleration tradeoffs. Our results reveal the fundamental connections between different components of LLMs via total variation distances and show how they jointly affect the efficiency of decoding algorithms. | https://openreview.net/pdf/c4bf0178245b320f82c8dda9da595d767fd77654.pdf | [
{
"confidence": 2,
"rating": 6,
"review_id": "SIeernQtsD",
"review_text": "This paper presents a theoretical study on speculative decoding, an efficient inference method for large autoregressive models. It highlights practical implications, proposing a Pareto-optimal solution for the rejection-distribution bias tradeoff.\n\n- The authors provide a robust theoretical foundation, illustrating the practical implications of speculative decoding, such as the improvement of rejection accuracy, which cannot be achieved by simply changing the acceptance probability.\n - The study explores the trade-offs between inference cost and quality degradation, supported by an optimization model. This analysis is valuable for practical applications.\n\n- The main figure does not clearly communicate the core concept of speculative decoding. It might lead readers to believe that speculative decoding primarily addresses hallucination, which is not its main advantage.\n - The experimental results are not distinctly highlighted, and the authors do not explain how these results support their theoretical analysis. While the theoretical contributions are significant, the paper would benefit from more extensive empirical validation.\n\nNA"
},
{
"confidence": 5,
"rating": 6,
"review_id": "duKcJyPU8i",
"review_text": "The paper presents a theoretical perspective on speculative sampling. Through Theorems 1 and 2, the authors demonstrate that the sampling method employed by speculative sampling is optimal and unbiased. Subsequently, Theorem 3 introduces a multi-candidate approach to enhance the acceptance rate of speculative sampling.\n\nThe writing is very clear, with takeaways provided under each theorem to explain the theory.\n\nTheorems 1 and 2 are crucial for speculative sampling. In paper [23], the authors showed that speculative sampling is unbiased but did not prove its efficiency compared to other rejection sampling methods. The proof provided here is very important.\n\nThe experiments are not sufficient. I would like to see improvements in batch speculative sampling in real-world scenarios.\n\nI am curious if batch speculative sampling can be combined with tree-style methods, e.g., [1] CaPE and [2] Medusa?\n\n[1] Du, C., Jiang, J., Yuanchen, X., Wu, J., Yu, S., Li, Y., ... & You, Y. GliDe with a CaPE: A Low-Hassle Method to Accelerate Speculative Decoding. In Forty-first International Conference on Machine Learning.\n[2] Cai, T., Li, Y., Geng, Z., Peng, H., Lee, J. D., Chen, D., & Dao, T. (2024). Medusa: Simple LLM inference acceleration framework with multiple decoding heads. In Forty-first International Conference on Machine Learning.\n\nsee weakness"
},
{
"confidence": 2,
"rating": 6,
"review_id": "gobKxgntbI",
"review_text": "The author aim to develop theoretical understanding of speculative decoding. The authors assume that given a large and small model participating in speculative decoding, the computation complexity of the small model is negligible. Under this assumption, they characterize the expected rejection rate of speculative decoding. They show that this bound depends on the total variation distance between the generations from small and large model. Next, the authors show that spectral decoding gives optimal rejection bounds in class of all rejection based methods. Motivated by recent works analyzing batch speculative decoding, where the rejection is done only if M tokens are rejected from a given sample. Finally, given an acceptance probability, the authors show an optimal solution solution to the total variation loss between the distributions of large model and the one found by speculative decoding. This objective changes linearly with the rejection probability. This provides insights on selecting the optimal value of rejection threshold as per requirement. The presented theoretical results are backed up with appropriate experiments validating them.\n\n1) The theoretical analysis presented by the authors provide several interesting insights about the inference efficiency observed by spectral decoding.\n\n2) All the results are backed up with simulation experiments, which strengthen the results presented in the paper.\n\n1) It is not completely clear, why making the assumption about negligible compute of the small model is not a strong assumption. Since the small model needs to generate the tokens autoregressively therefore even though its single pass could be small as compared to the larger model but it the context length is high i.e. several autoregressive passes are made, the compute of small model might not be negligible. It would be great if the authors can provide some empirical evidence to justify this assumption.\n\n2) It would have been great if the authors provided evidence using real world models in support of their theory. Although, this is not a major weakness, but authors should consider it in the camera ready version.\n\nI request the authors to kindly address the questions in the weaknesses section."
},
{
"confidence": 4,
"rating": 5,
"review_id": "Pt4vqjpoz9",
"review_text": "This paper provides detailed analysis to speculative decoding and batch speculative decoding. The conclusions of the paper are: (1) speculative decoding is unbiased and it shows the expected rejection rate; (2) speculative decoding has the lowest rejection rate in all the unbiased algorithm that belongs to the familty defined in Algorithm 2; (3) batch speculative decoding has lower rejection rate than speculative decoding; (4) it analyzes the trade-off between efficiency and effectiveness of the family of algorithm defined in Algorithm 2.\n\n1. The paper provides comprehensive theoretical analysis.\n\n2. The findings in Theorem 4 and 5 are interesting.\n\n3. The paper is easy to understand.\n\n1. Although the paper provides lots of theoretical analysis. But I find only Theorem 4 and 5 are somewhat interesting. Theorem 1 is already derived in the original speculative decoding paper. For Theorem 2, although speculative decoding is proven to be optimal in the family of algorithms defined in Algorithm 2, but I don't think there are a lot of existing algorithms can be formulated in Algorithm 2. In fact, is there any algorithm that belongs to Algorithm 2 and is unbiad and it not speculative decoding? For Theorem 3, the finding that batch speculative decoding has lower rejection rate than vanilla speculative decoding is not surprising. \n\n2. Although Theorem 4 and Theorem 5 are interesting, it only solves half of the problem: given b, what should P be. It would be better if the authors could also discuss the design of b.\n\n3. I think the paper can also be improved if the authors could summarize a new speculative algorithm from Theorem 4 and 5 and running experiments to compare with vanilla speculative decoding.\n\nsee weakness above"
}
] |
wSpIdUXZYX | Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs | Existing neural operator architectures face challenges when solving multiphysics problems with coupled partial differential equations (PDEs) due to complex geometries, interactions between physical variables, and the limited amounts of high-resolution training data. To address these issues, we propose *Codomain Attention Neural Operator* (CoDA-NO), which tokenizes functions along the codomain or channel space, enabling self-supervised learning or pretraining of multiple PDE systems. Specifically, we extend positional encoding, self-attention, and normalization layers to function spaces. CoDA-NO can learn representations of different PDE systems with a single model. We evaluate CoDA-NO's potential as a backbone for learning multiphysics PDEs over multiple systems by considering few-shot learning settings. On complex downstream tasks with limited data, such as fluid flow simulations, fluid-structure interactions, and Rayleigh-Bénard convection, we found CoDA-NO to outperform existing methods by over 36%. | https://openreview.net/pdf/8f9b484442e75171a47f932c6ba656a806dac2e1.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "we9wQsIB5Q",
"review_text": "The authors introduced an innovative attention-based neural operator and evaluated it against various baselines. They employed masked pretraining and finetuning techniques, comparing the model's performance to multiple benchmarks. Their study included interesting problems such as fluid-structure interactions. The authors showed that their approach is effective for few-shot learning in their experimental evaluations.\n\n- The paper presents a new neural operator architecture based on attention mechanisms. This architecture demonstrates superior performance compared to tested baseline models on the NS and NS+EW benchmarks, highlighting its potential advancements in solving PDE-related problems.\n- To the best of my knowledge, the authors are the first ones to use masked training for PDE learning effectively\n- The mathematical formulation of the proposed model is well-articulated in the paper. This clarity helps readers understand the underlying principles of the model's operation.\n- The study addresses a compelling and very relevant multiphysics problem involving fluid-structure interactions.\n- The authors demonstrated through empirical evidence that their approach is effective for few-shot finetuning in various scenarios.\n\n- The study in Table 1 demonstrates the model’s performance on two specific PDEs: Navier-Stokes for fluid flow and a coupled Navier-Stokes with elastodynamics equations for fluid-solid interactions. While these cases provide some insight into the model's capabilities, they are not sufficient to generalize the model's applicability to a broader range of multiphysics problems.\n- For the NS dataset with Reynolds number Re=400, the model trained from scratch with only 25 samples matches the performance of the pretrained model. In the case of NS+EW benchmark, when the Reynolds number increases to 4000, even with just 5 samples, both the finetuned and scratch-trained models exhibit similar testing errors. This suggests that pretraining may not provide significant advantages in many cases.\n- The use of the L2 loss metric to evaluate model performance is problematic because it aggregates outputs of different physical meanings, such as pressure p, velocity u, and displacement d, into a single loss value. This can obscure individual variable contributions and lead to misleading conclusions about model accuracy.\n- The absence of prediction visualizations diminishes the interpretability of the L2 loss values. Visualizing predictions could provide more intuitive insights into model performance and clarify discrepancies in the loss metric.\n- The study in the Table 1 does not include a Fourier Neural Operator. Including such a benchmark is crucial to fairly evaluate CoDA-NO’s performance against an FNO model of similar size.\n- The FNO model used for comparison in Table 2 has 1.9 billion parameters, vastly outnumbering the CoDA-NO's 11 million parameters. This overparameterization likely affects the model's performance due to the relatively small training set sizes and makes the comparison with CoDA-NO’s performance misleading. Smaller FNO models could provide a more realistic performance benchmark. The claim that CoDA-NO’s better performance compared to a much larger FNO model demonstrates parameter efficiency is misleading. Parameter efficiency should be evaluated with models of comparable sizes, and overparameterized models may not reflect typical scenarios.\n- CoDA-NO has significantly higher inference times compared to other baseline models. 
\n\nThese points collectively highlight the need for more comprehensive experiments, appropriate metrics, realistic model comparisons, and practical considerations like inference time to fully evaluate the model's capabilities.\n\n- What is the reasoning behind applying the attention in the channel space? Do the models scale better? Do they have higher expressive power?\n- Why is a 1.9B FNO model used for comparison?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "cHyOspUKlZ",
"review_text": "This paper presents a new operator learning method for solving multiphysics PDEs. The attention scheme is designed on channel space to capture multiple physical variables, which is called co-domain. Moreover, positional encoding and normalization layers are considered. Such a strategy enables self-supervised pretraining of PDE systems. The experiments have shown the effectiveness of the proposed method.\n\n- The proposed idea is interesting. It enables the generalization to coupled physical systems, which is of interest to the scientific machine learning (SciML) community. Also, self-supervised pretraining is one emerging tool in SciML and will gain a lot of attention in the future. \n\n- This paper provides the experiments on a Navier-Stokes equation and its coupled version with the elastic wave equation.\n\n- This paper is well-organized and well-written. The details are easy to follow.\n\n- This paper only considers one coupled system, i.e., NS and NS+EW. It may not validate the general applicability of the proposed method. The motivation of using this case should be enhanced. Also, considering some other PDE systems might strengthen the paper, such as the Rayleigh-Benard convection system. It is also a coupled system with NS + temperature. \n\n- The motivation for the combination of positional encoding, self-attention, and normalization layers seems to be better clarified. Although those parts are modular (claimed in Line 68), the connections between each other are also important. \n\n- In Appendix B.1, it would be good to include more details of self-supervised pretraining, such as masked ratio. \nThe evaluation metrics might not be sufficient. This paper only considers L2 errors. There are many papers considering relative l2 error [1]. For the turbulence data, researchers also care about the infinity norm a lot. It would be better to add more evaluation metrics in this paper.\n\n**References:** \n\n[1] Hao, Zhongkai, et al. \"Dpot: Auto-regressive denoising operator transformer for large-scale pde pre-training.\" arXiv preprint arXiv:2403.03542 (2024).\n\n[2] Ren, Pu, et al. \"Superbench: A super-resolution benchmark dataset for scientific machine learning.\" arXiv preprint arXiv:2306.14070 (2023).\n\n- This paper considers the generalization to different Reynolds numbers. Is it possible to generalize to different physical parameters of elastic wave equations, such as the object size or the solid density \\rho^s?\n\n- Lines 100-101, this paper claims that it considers diverse physical systems in terms of input functions, geometries, and Reynolds numbers. I would say it’s just different PDE scenarios within one PDE type (NS and NS+EW). It seems unrelated to the diversity of PDE systems, such as reaction-diffusion, convection systems, etc."
},
{
"confidence": 3,
"rating": 6,
"review_id": "sMZlaUfxbQ",
"review_text": "This paper introduces Codomain Attention Neural Operator, which tokenizes function along the channel dimension. It allows to learn representations of different PDE systems within a single model. The authors shows that finetuning a pretrained CoDA-NO on different physics yields good accuracy.\n\n- I like the problem setting and the idea of the algorithm.\n- The experimental task is quite interesting.\n- The results are convincing, and the fact that the model can generalize to higher Reynold numbers seen during training is promising.\n- I liked the used of GNO for handling non-uniform grids.\n- The code seems solid.\n\n- To me, the main weakness of the paper is that the presentation lacks of clarity. I don't see the point of doing 3 pages of mathematics in function space if, in practice, everything is done in discrete space. I think this blurs the message of the paper and it is difficult for the reader to understand what is the relevant information for understanding the actual CoDA-NO algorithm. In my opinion, these mathematics are not essential to the algorithm and could be put in appendix. I can always express a neural network architecture in function space, but since in practice we are working on discretized space, it is never done in experimental deep learning papers. Moreover, no discussion on how to go from infinite-dimensional space to discretized space is given by the authors.\nThis space could be used to have the actual detailed architecture. I may have missed the point on the usefulness of these sections and am willing to understand the point of view of the authors regarding this. \n- I don't fully understand the CoDA-NO algorithm and I think a Figure showing the whole architecture would have clarified this.\n\n- Why do we need a VSPE per physical variable? Positional encoding are usually used when there is some sort of order between the tokens?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "R1dbnIyhEM",
"review_text": "The authors propose CoDA-NO, a neural operator architecture that captures interactions across different physical variables of coupled PDE systems. The method involves a generalization of the transformer architecture, including self-attention, positional encodings, and normalization, to function spaces. On two novel datasets for fluid-structure interaction and fluid dynamics, the authors show that their method achieves state-of-the-art performance.\n\n- The paper investigates an interesting problem of how to appropriately capture interactions across different physical variables, that allows for generalization to new codomains.\n- As far as I am aware, the generalization of the Transformer architecture to function spaces is novel.\n- The experimental results, especially the generalization capabilities (from fluid dynamics to fluid-solid interactions) are impressive.\n- Ablation studies on the proposed architectural changes are thorough.\n\nOverall, the experiments seem quite compelling. However, it could be illuminating to provide a graphical visualization of the data from Table 1, regarding efficiency of fine-tuning and robustness to out-of-distribution inputs: see questions.\n\n- It seems that the performance of the models across the board continue to improve with increase few-shot fine-tuning samples beyond N=100. What does the scaling look like for the proposed model and where does performance saturate?\n- Similarly, the model is evaluated on the in-distribution Re=400 and the out-of-distribution Re=4000 settings, for which the performance of the model is comparable. What does the scaling look like as the task becomes further out-of-distribution (e.g. decreasing velocity)?"
}
] |
wQpNG9JnPK | Neural Collapse Inspired Feature Alignment for Out-of-Distribution Generalization | The spurious correlation between the background features of the image and its label arises due to that the samples labeled with the same class in the training set often co-occurs with a specific background, which will cause the encoder to extract non-semantic features for classification, resulting in poor out-of-distribution generalization performance. Although many studies have been proposed to address this challenge, the semantic and spurious features are still difficult to accurately decouple from the original image and fail to achieve high performance with deep learning models. This paper proposes a novel perspective inspired by neural collapse to solve the spurious correlation problem through the alternate execution of environment partitioning and learning semantic masks. Specifically, we propose to assign an environment to each sample by learning a local model for each environment and using maximum likelihood probability. At the same time, we require that the learned semantic mask neurally collapses to the same simplex equiangular tight frame (ETF) in each environment after being applied to the original input. We conduct extensive experiments on four datasets, and the results demonstrate that our method significantly improves out-of-distribution performance. | https://openreview.net/pdf/0e5a651b0723810c001303ef89ef83bf1da33e4c.pdf | [
{
"confidence": 4,
"rating": 4,
"review_id": "o9GoweowjQ",
"review_text": "This paper addresses the problem of spurious correlations caused by environments from where data are collected.\nThe proposed method applies a mask to input data to separate spurious and semantic features.\nThe masked input data are fed into a local model specialized to each environment.\nEach local model is trained to induce neural collapse for OOD generalization.\n\n- S1: Making use of neural collapse for OOD generalization is interesting.\n\n- W1: Comparison with not only OOD generalization methods but also spurious correlation (sometimes called bias or shortcut) methods is necessary. Methods that can automatically detect and split spurious and semantic features have been developed [a-e].\n- W2: Types of spurious features that the proposed method can handle need to be clarified. Can the proposed method handle spurious features in superposition, e.g., objects and textures?\n- W3: The rationale behind the proposed method needs to be clarified. For instance, it is unclear why the method adds the noise to the mask when learning it.\n- W4: Deeper analyses in the experiments would make the paper more interesting. For example, \n - Whether the neural collapse is achieved by the proposed method should be confirmed in the experiment. \n - Visualizing learned masks would produce more valuable insights.\n- W5: What is described in the introduction and what is done in the proposed method seems to be different. Although L42 states that `we propose to compute the Frobenius norm (F-norm) of the difference between the feature prototypes and the standard simplex ETF`, the F-norm does not appear in the proposed method.\n- W6: Writing and formatting can be improved. There are many inconsistent spellings. For example, \n - Is \"variable features\" in L153 the same as spurious features? \n - The meaning of \"interaction\" in L189, 192, and so on is unclear. Maybe \"training?\" \n - Such inconsistent spellings occur from Section 4.\n\n[a] Tiwari, Rishabh, and Pradeep Shenoy. \"Overcoming simplicity bias in deep networks using a feature sieve.\" ICML2023. \n[b] Bahng, Hyojin, et al. \"Learning de-biased representations with biased representations.\" ICML2020. \n[c] Yang, Wanqian, et al. \"Chroma-vae: Mitigating shortcut learning with generative classifiers.\" NeurIPS2022. \n[d] Liu, Evan Z., et al. \"Just train twice: Improving group robustness without training group information.\" ICML2021. \n[e] Nam, Junhyun, et al. \"Learning from failure: De-biasing classifier from biased classifier.\" NeurIPS2020.\n\n- Q1: Why do the accuracies of IRM (w/ env) in Tables 1 and 2 differ?\n- Q2: When environment labels are not available, how are spurious and semantic features learned? Is there a possibility that the two types of features are learned conversely?\n- Q3: How do we determine the number of local models when the total number of the environments is unknown?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "3NrhDu3L47",
"review_text": "The paper leverages the neural collapse inspired ETF behavior to simulate different environments in datasets, and uses it for OOD classification.\n\nThe paper uses a phenomenon that's apparent in the standard setting, for a task that varies from the standard setting. It uses intuitive notions to tackle the task of OOD classification. The paper experiments are generally convincing.\n\nThe paper seems generally consistent and well merited. The experiments are a bit lacking, but are convincing.\n\nThe following papers seem missing from the neural collapse literature that may be helpful: \n\n1. https://arxiv.org/abs/2112.15121\n2. https://proceedings.neurips.cc/paper_files/paper/2023/hash/b63ad8c24354b0e5bcb7aea16490beab-Abstract-Conference.html\n3. https://openreview.net/pdf?id=162TqkUNPO"
},
{
"confidence": 5,
"rating": 7,
"review_id": "UN06jVS7OE",
"review_text": "The spurious correlation between image background features and their labels is a significant research problem, and the existing research suffers from the issue of difficult decoupling. In this paper, we propose a new approach to solve the spurious association problem by alternately performing environment segmentation and learning semantic masks from the perspective of neural collapse. Extensive experiments are conducted on four datasets and the results show that the proposed method significantly improves the out-of-distribution performance.\n\nThis paper explores an important and widespread problem in real-world applications with solid and extensive experiments. The writing is clear and the narrative is easy to follow, facilitating an understanding of the spurious correlations problem. The use of neural collapse is particularly innovative.\n\nW1: In lines 48-50, it is mentioned that IRM-based methods learn similar representations from different environments, indicating a lack of proper alignment. Could you provide a corresponding experiment to demonstrate this phenomenon?\n\nW2: In Figure 3, the explanation of the middle module that uses logits to judge the environment is unclear. Could you please clarify the structure of the local models, the number of local models used, and the specific meaning of the logit values?\n\nW3: Could you explain the differences between masks based on pixel-level and feature-level approaches? If using feature-level masks, what is the impact of different network-layer features on model performance?\n\nThis work addresses an important and interesting question by introducing neural collapse from an invariant perspective, which I believe can provide valuable insights to the community. However, my main concern is that the same mask is used to learn both invariant and variable feature information. What are the advantages of the mask learning mechanism proposed in this paper compared to HRM's [1] mask mechanism?\n\n[1] Heterogeneous Risk Minimization\n\nFor more information see Weaknesses."
}
] |
wQiJNyPENt | Batched Energy-Entropy acquisition for Bayesian Optimization | Bayesian optimization (BO) is an attractive machine learning framework for performing sample-efficient global optimization of black-box functions. The optimization process is guided by an acquisition function that selects points to acquire in each round of BO. In batched BO, when multiple points are acquired in parallel, commonly used acquisition functions are often high-dimensional and intractable, leading to the use of sampling-based alternatives. We propose a statistical physics inspired acquisition function that can natively handle batches. Batched Energy-Entropy acquisition for BO (BEEBO) enables tight control of the explore-exploit trade-off of the optimization process and generalizes to heteroskedastic black-box problems. We demonstrate the applicability of BEEBO on a range of problems, showing competitive performance to existing acquisition functions. | https://openreview.net/pdf/eb1379a8a03b7fd20503da9134763dc8400b8d6b.pdf | [
{
"confidence": 3,
"rating": 3,
"review_id": "rPwQjNwafk",
"review_text": "This paper introduces a new acquisition function BEBBO for batched BO. BEBBO tries to build (negative) free-energies-like acquisition function, enabling gradient-based optimization, tight exploration-exploitation control, and risk-averse BO under heteroskedastic noise. It tries to improve existing parallel acquisition functions in the following ways:\n1.\tuses a hyper-parameter T to directly balance exploration and exploitation by separating these two parts clearly;\n2.\tkeeps the behavior predictable by scaling E and I with batch size;\n3.\tenables the optimization of gradient descent by holding the availability of closed-form expressions for GP.\nThis paper demonstrates several experimental comparisons and shows its effectiveness.\n\n1.\tThis paper shows an enlightening acquisition function method inspired by statistical physics. \n\n2.\tThe experimental results show the effectiveness of BEBBO on problems without noise or with heteroskedastic noise.\n\n1.\tThe idea is straightforward: simply combine two common components—entropy reduction and the weighted sum of values. In my opinion, the novelty is not very strong. \n\n2.\tThe article doesn't discuss the situation when BEEBO is used with other surrogate models. It seems that if BEEBO is not used with GP, it loses the advantages of closed-form expressions and gradient descent optimization. \n\n3.\tIn control problems shown in Figure A2, the performances of meanBEBBO and maxBEBBO are not outstanding. Especially, these two variants are surpassed by KB in Robot pushing problem.\n\n4.\tAlthough BEEBO performs well on many synthetic test problems, its versatility and effectiveness require more experimental validation in specific applications.\n\n5.\tIn the experiments in main text, the authors only showed the comparison with q-UCB. It would be better to show comparisons with other batched baselines and provide a thorough analysis. The comparison with q-UCB shows the advantage on the balance between exploration and exploitation. But other advantages emphasized in the paper, such as the benefits of gradient descent optimization and the tight control of the explore-exploit trade-off, are not fully demonstrated. I suggest that the authors can re-organize the paper to move the comparison with other baselines to the main paper. \n\n6.\tThe theoretical analysis is not deep. No regret bound is analyzed.\n\n- How to set the parameter T properly? Do you have some instructions?\n\n- In line 158, why does I(x) also scale linearly with batch size? \n\n- The Equation A5 is not shown.\n\n- What are the advantages of multiplying batch size Q in E(X)?\n\n- How does the behavior of BEBBO change with batch size?"
},
{
"confidence": 3,
"rating": 4,
"review_id": "hExOAwba9l",
"review_text": "Proposing a new acquisition function inspired by statistical physics, which allows explicit control of exploration-exploitation trade-offs in a batch BO setting.\n\nDrawing inspiration from statistical physics is a promising direction, as it naturally aligns with Bayesian approaches.\n\n**Major Points**\n1. **Lack of Unique Selling Point:** \n The method does not appear to solve any unique cases that other methods cannot. While the related work section outlines many similar approaches, this work only compares itself to q-UCB. Without a theoretical study, such as convergence rate analysis, there is insufficient motivation to adopt another heuristic approach like this. To demonstrate efficacy, a comprehensive empirical study with extensive comparisons to existing works is necessary, given the unclear advantages.\n\n2. **Review of Claimed Selling Points:**\n - **Not Based on MC Integration Like BoTorch:** \n While this is true, it is unclear if it is beneficial. MC approaches are approximations but have convergence guarantees (refer to the BoTorch paper's appendix). This work lacks such guarantees.\n - **Tight Control of Exploration-Exploitation Trade-off:** \n The proposed method is not the only solution. UCB theory can bound the feasible region by [max LCB(x), UCB(x)] with 1 - $\\delta$ probability (e.g., see [1]). This region can be controlled by $\\beta$ hyperparameters, corresponding to $\\delta$. Constrained batch BO within this region would yield similar results with theoretical guarantees.\n - **Heteroskedastic BO:** \n There are no comparisons with existing methods. UCB variants can address this problem. Vanilla UCB theory does not differentiate between epistemic and aleatoric uncertainty. Therefore, UCB with heteroscedastic GP can serve the same purpose. For example, training a heteroscedastic GP with the observed dataset and replacing the inner GP on noise variance with normal isotropic likelihood variance (pure epistemic uncertainty) will yield similar results as risk-averse batch BO, without needing the change in modelling the acquisition functions regardless of hetero-/homo-scedastic cases like this work.\n\n3. **Setting k in Practice:** \n Setting $k$ is not an easy task for users. In UCB, $\\beta$ presents a similar challenge, but there are theoretical guidelines and algorithm to compute this (e.g., [2]).\n\n4. **Constrained Uncertainty Sampling:** \n This work can be understood as a variant of constrained uncertainty sampling. As [3] explains, variance reduction can be written similarly to the entropy proposed in this paper (see section 2.2). It also shows that the variance-only approach is inferior to UCB both theoretically and empirically. The batch setting may lead to model misspecification, particularly when hallucination (aka fantasization) is applied. The concerns and approach are notably similar to ([49] in your citation number), making a comparison with their method unavoidable.\n\n5. **Data Augmentation Procedure:** \n The explanation is unclear. Is it fantasization (aka hallucination) or simply observed points? How does this differ from Eq.(3) in [3]?\n\n- [1] Fengxue Zhang, Jialin Song, James C Bowden, Alexan- der Ladd, Yisong Yue, Thomas Desautels, and Yuxin Chen. Learning regions of interest for bayesian optimiza- tion with adaptive level-set estimation. In International Conference on Machine Learning, pages 41579–41595. PMLR, 2023.\n- [2] Kihyuk Hong, Yuhang Li, and Ambuj Tewari. 
An optimization-based algorithm for non-stationary kernel bandits without prior knowledge. In International Conference on Artificial Intelligence and Statistics, pages 3048–3085. PMLR, 2023.\n- [3] Srinivas, N., Krause, A., Kakade, S., & Seeger, M. (2010). Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design. In Proceedings of the 27th International Conference on Machine Learning (pp. 1015-1022).\n\nQuestions are written in the above weakness section."
},
{
"confidence": 4,
"rating": 5,
"review_id": "2lJifqDPkW",
"review_text": "The paper introduces a new approach to batch Bayesian optimization that explicitly trades off between exploration and exploitation via energy and entropy terms. The method is able to efficiently generate large batches for evaluation that outperform comparable methods for Bayesian optimization.\n\nThe method is novel and well-motivated. \n\nI found the analysis in Appendix B to be especially strong, in which the proposed method BEEBO is compared to other methods for batch Bayesian optimization. This analysis shows the originality of the method and gives strong context for it.\n\nThe paper is clearly written and easy to read. The appendix was especially useful and had many valuable parts.\n\nThe analysis of heteroskedastic noise was great to see and shows a useful and understudied setting where the method is especially valuable.\n\nI think the method is interesting and will be of value to the field. However, I do not find that the experimental evaluation of the method provides full support for the claims of the paper.\n\n**Issue 1: Evaluation limited to large-batch setting**\n\nThe major issue is that the paper claims the method is for general batch Bayesian optimization problems, without any qualifiers that I can see. The experiments all use q=100, which is a large batch size. Smaller batch sizes are often of interest too, e.g. batch sizes of 5, 10, and 50 in the GIBBON paper. The setting q=100 used here is also used in the TURBO paper (Eriksson et al. 2019) where it is described as a \"large batch size.\" \n\nGiven the experiment results in the paper, I don't know if this method will perform well for small- or medium-sized batches. Thus, either the experiments need to be expanded to include experiments with batch sizes such as 5 and 10, or the framing of the paper needs to be adjusted to emphasize that the method is specifically for large-batch Bayesian optimization, not general batch BO problems.\n\nThis issue also relates to the choice of baselines. The experiments only explore large-batch settings, where GIBBON (as the paper notes) is known to perform poorly. If the paper wants to claim that it performs better than GIBBON in general, then it needs to make that comparison on batch sizes of 5 and 10. If the paper wants to claim superiority only on large-batch settings, then that's fine to only use q=100, but then it needs to compare to state-of-the-art for large batch. Thompson sampling is a popular method for large-batch settings which is included as a baseline, but to my knowledge the state-of-the-art for large-batch BO is TURBO (Eriksson et al. 2019). In fact the q=100 and the robot pushing and rover trajectory problems are all exactly as in the TURBO paper, so its inclusion as a baseline is pretty obvious and, I think, necessary.\n\n**Issue 2: Lack of statistical significance**\n\nThe results of the experiments do not appear to be statistically significant. The main results given in the main text are tables, and these tables do not have confidence intervals. The only place where uncertainty in the results is show are the figures in the appendix, and there the confidence intervals appear to overlap in most cases. This is due to the use of only 5 replicates. I appreciate that these experiments are costly to run since they are using 1000 iterations, nevertheless the lack of statistical significance in most of the results provides weak support for the claim that BEEBO is actually better, vs. what we're seeing just being noise in the experiments. 
The paper needs some form of statistical analysis to convince the reader that what we're seeing is not just noise in the experiments. The best way to do this would be to include confidence intervals in Tables 2 and 3, and then increase the number of replicates as necessary to achieve statistical significance in the differences in means. I do not feel it appropriate to highlight the method as being \"best value\" when it is possible that the confidence interval for that value contains the values of the other methods.\n\nThe issues raised above can be addressed by running more experiments (batch sizes 5 and 10; TURBO; more replicates to get reasonable CIs for the results tables). But it will probably require more experiments than can be run in the rebuttal period. Do we expect the method to work well for Q=5 and Q=10?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "b2TSOfbPMJ",
"review_text": "This work introduces a batched acquisition function that balances exploration and exploitation by using a weighted sum of mutual information and expected value, with the weights defining the trade-off. The discussion links the proposed algorithm to UCB and asserts that it naturally addresses heteroskedastic noise.\n\n1. The proposed acquisition function and its optimization and approximation methods are straightforward and practical. \n2. The paper provides extensive empirical results to illustrate the proposed algorithm's efficiency.\n\n1. The introduced parameter controlling the trade-off lacks interpretation as in previous methods.\n2. The completeness of the related work discussion is concerning. This is potential because the summarization lacks high-level extraction of the design of the algorithm, and the focus of the paper is, to some extent, scattered。\n\nOne concrete example of the second aforementioned weakness is the criticism of MC approximation in high-dimensional applications. Recent advancements in applying MCMC address this issue in a principled manner and might be of interest. \n\n***Reference:***\n\nYi, Zeji, Yunyue Wei, Chu Xin Cheng, Kaibo He, and Yanan Sui. \"Improving sample efficiency of high dimensional Bayesian optimization with MCMC.\" arXiv preprint arXiv:2401.02650 (2024)."
}
] |
wN5AgP0DJ0 | Space-Time Continuous PDE Forecasting using Equivariant Neural Fields | Recently, Conditional Neural Fields (NeFs) have emerged as a powerful modelling paradigm for PDEs, by learning solutions as flows in the latent space of the Conditional NeF. Although benefiting from favourable properties of NeFs such as grid-agnosticity and space-time-continuous dynamics modelling, this approach limits the ability to impose known constraints of the PDE on the solutions -- such as symmetries or boundary conditions -- in favour of modelling flexibility. Instead, we propose a space-time continuous NeF-based solving framework that - by preserving geometric information in the latent space of the Conditional NeF - preserves known symmetries of the PDE. We show that modelling solutions as flows of pointclouds over the group of interest $G$ improves generalization and data-efficiency. Furthermore, we validate that our framework readily generalizes to unseen spatial and temporal locations, as well as geometric transformations of the initial conditions - where other NeF-based PDE forecasting methods fail -, and improve over baselines in a number of challenging geometries. | https://openreview.net/pdf/d1a0db94f57add0d588388b6446bcdc66c454e64.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "4rwtcsEs03",
"review_text": "The paper presents a novel framework for solving Partial Differential Equations (PDEs) by leveraging the power of Equivariant Neural Fields (ENFs). The authors propose a space-time continuous approach utilizing the symmetry of the PDEs, which is crucial for improving generalization and data-efficiency. The framework is tested on various geometries and PDEs, showing its effectiveness in handling complex dynamics.\n\n1. **Data efficiency**: By designing a system that preserves the symmetries of PDEs, the proposed framework enhances the model's ability to generalize from limited data.\n2. **Novel initialization method**: The use of meta-learning to structure the latent space of the ENF simplifies the learning process and leads to better performance than autodecoding.\n\n1. **Error Accumulation:** The usage of ODESolver might pose a challenge with error accumulation over time, particularly for dynamics occurring beyond the training horizon, which could affect the model's long-term predictive accuracy. So it would be helpful if the model is tested in a longer timespan.\n2. **Lack of Comparative Analysis:** While the paper compares its approach to a baseline method, a more comprehensive comparison with existing state-of-the-art methods in PDE solving would strengthen the paper's claims, such as Geo-FNO[1], GNOT[2], Transolver[3].\n\n[1] Li, Z., Huang, D. Z., Liu, B., & Anandkumar, A. (2023). Fourier neural operator with learned deformations for pdes on general geometries. *Journal of Machine Learning Research*, *24*(388), 1-26.\n\n[2] Hao, Z., Wang, Z., Su, H., Ying, C., Dong, Y., Liu, S., ... & Zhu, J. (2023, July). Gnot: A general neural operator transformer for operator learning. In *International Conference on Machine Learning* (pp. 12556-12569). PMLR.\n\n[3] Wu, H., Luo, H., Wang, H., Wang, J., & Long, M. (2024). Transolver: A fast transformer solver for pdes on general geometries. *arXiv preprint arXiv:2402.02366*.\n\nUsing ODESolver often incurs higher computational costs. Did the authors implement any acceleration methods to speed up the integration process?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "sAUnps5IGx",
"review_text": "This work proposes a space-time continuous method for solving PDEs that respects the inherent symmetries of the PDE via equivariance constraints. Building upon prior work which (a) fits a conditional neural field to output latent vectors and (b) evolves the latent state through time via a Neural ODE, the contribution of this work is to additionally enforce equivariance constraints in the latent space itself. Secondly, the work employs meta-learning to obtain the initial latent representation, which improves the structure of the latent space representation and accelerates the inference time. The authors show improved performance of the method for linear and nonlinear PDEs on complex geometries such as the 2d torus, 3d sphere and 3d ball.\n\n- The proposed method significantly reduces overfitting compared to non-equivariant baselines. \n - The method shows good stability at 3–5x the length of the training regime (even though the error accumulates slowly).\n\n- The computational cost of the method v/s baselines is not shown. Relatedly, do the DINo baselines have similar parameter counts as your method?\n\n- In figure 2, why is the bottom right image not the rotated version of the solution field on the top right?\n - Complex geometries can be a strong use-case for symmetry-preserving PDEs. However, complex geometries can often have non-trivial boundary conditions as well. Is there any way the method can be extended to handle non-equivariant boundary conditions?\n - Due to the global attention mechanism in the ENF there could be scalability concerns. Could you comment on how the method could be applied to larger scale problems in this case?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "Vp195XuHrB",
"review_text": "The paper attempts to learn the dynamics of certain PDEs from time series data using implicit neural representations, while encoding symmetry information of the domain. In fact, constructing a neural model that is aware of Euclidean transformations is the primary focus of this paper. To this end, the authors design two equivariant neural networks. Given an initial condition, a latent state is obtained by meta-learning. This latent state is then integrated in time by a neural ODE (first network) to obtain a final latent state. The second network then takes this final latent state as an input, and maps any given point coordinate (in the domain) to the solution at the final time. Examples are presented on periodic domains in $ \\mathbb{R}^2 $, 2-torus, 2-sphere and the 3D-ball. The paper builds on a 2022 ICLR paper [1] which attempts the same, but without any symmetry assumption.\n\n[1] Yuan Yin, Matthieu Kirchmeyer, Jean-Yves Franceschi, Alain Rakotomamonjy, and Patrick Gallinari. Continuous pde dynamics forecasting with implicit neural representations. 2022.\n\nThe paper is well written.\n\nPDEs are often posed on domains that have symmetric properties. This is in addition to the fact that the operators appearing in the PDE have their own symmetry / directional properties. While learning from data, most of the existing methods attempting to learn PDE dynamics ignore the symmetry information. Therefore, this is a welcome idea.\n\nThe method exhibits impressive extrapolation results.\n\nOnly infinite domain (or periodic boundary conditions) are considered.\n\nIn the examples, the transformation groups are chosen by carefully considering the nature of the domain and the operators appearing in the equations. But in a real application, this information, especially the operator information, is not known a priori.\n\nExtrapolation results are shown where results outside the training time horizon are predicted. The problem with such prediction is that they look good until they do not. And there is no logical or analytical bound on the time horizon where the extrapolation is supposed to work. The time horizon is always chosen so as to exhibit the effectiveness of the method. But no analysis is presented in that regard. Therefore such extrapolation results, even though impressive in some respects, do not add to either the understanding or the applicability of this method to a new application.\n\nMemory and execution (training) times are not compared (only the training times of the proposed method are included). Error comparisons are made with other methods. Sometimes this method outperforms the other methods (e.g., Table 2), but in some cases, it is marginally better than the others (e.g., Table 3, 4). Providing memory and training times would make these comparisons more well rounded.\n\n(Minor) Line 276: should be $ \\nabla \\cdot u = 0 $.\n\nHow will this method work with other boundary conditions?\n\nIn the examples, the invariance groups are chosen according to the equation at hand. But how does one choose the invariance groups when the underlying operators and functions (RHS) are unknown?\n\nThe extrapolation results are compared with ground truth data, and it is seen that the accuracy deteriorates as the inference horizon goes farther from the training horizon. In a new application, how to determine a limit for accurate extrapolation, i.e., the temporal horizon where extrapolation is always successful? Does this limit exist?\n\nThe forward problem triggers an ODE solve. 
What is the typical DOFs associated with this ODE solve?\n\nWhat is the boundary condition applied on the heat equation example?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "cGtuhQ8m20",
"review_text": "The work proposes a novel framework combining equivariant neural fields and neural ODEs, providing a continuous space-time solution for PDEs while respecting associated equivariance constraints. The author uses PDE-specific bi-invariant attributes for equivariant neural fields and a meta-learning approach for learning the initial latent state. The proposed method achieves better performance on the chosen PDE problems.\n\n1. The work addresses an important and complex issue of equivariance in the context of solving partial differential equations (PDEs). The proposed architecture is not only space-time continuous but also respects the equivariance constraint. This characteristic makes it particularly valuable and effective for various types of scientific research and applications.\n\n\n2. The proposed method is well-motivated and clearly explained in the paper.\n\nI have found the empirical study to be the weak point of the work. In order to argue the effectiveness of the proposed solution over existing approaches, the authors need to consider established benchmarks, large-scale datasets, PDEbench, and CFDBench, especially with irregular domains (domains with holes or solid objects as hindrances).\n\nI also find that the choice of baselines is not extensive. For example, SFNO [a] is used for the shallow water equation. Also, baselines like [b,c] are not considered.\n\nAlso, as the proposed solution is claimed to be time continuous, zero-shot super-resolution along the time domain should be demonstrated (analogous to Table 3). \n\n\na. Spherical Fourier Neural Operators: Learning Stable Dynamics on the Sphere\n\nb. GNOT: A General Neural Operator Transformer for Operator Learning\n\nc. Geometry-Informed Neural Operator for Large-Scale 3D PDEs\n\n1. What is the training and inference time of the proposed method compared to existing methods like FNOs and Deeponets?\n\n2. what is the Reynolds number of the Navier-Stokes equation problem?"
},
{
"confidence": 4,
"rating": 7,
"review_id": "6wdFB0qNb4",
"review_text": "The paper introduces a novel framework that leverages Equivariant Neural Fields (ENFs) to solve Partial Differential Equations (PDEs). By preserving geometric information in the latent space, the proposed method respects the known symmetries of the PDE, enhancing generalization and data efficiency. The framework demonstrates improved performance in various challenging geometries, validated through experiments against other neural PDE forecasting methods.\n\nThe paper presents an innovative approach by integrating equivariant neural fields, which respect the symmetries of PDEs, thereby enhancing model performance.\nThe methodology addresses significant limitations of existing NeF-based PDE solvers, particularly in generalization to unseen spatial and temporal locations and geometric transformations.\nExtensive experimental validation across various geometries (e.g., plane, torus, sphere) demonstrates the robustness of the proposed framework over existing methods.\n\nThe framework's performance decreases when extrapolating beyond the training horizon for complex PDEs.\nWhile the approach shows competitive performance, the computational complexity due to the global attention operator in the ENF backbone can be high.\nError accumulation in long-term predictions could be mitigated with increased model capacity, but this comes at the cost of computational resources.\n\nNone"
}
] |
wK0Z49myyi | CRAYM: Neural Field Optimization via Camera RAY Matching | We introduce camera ray matching (CRAYM) into the joint optimization of camera poses and neural fields from multi-view images. The optimized field, referred to as a feature volume, can be “probed” by the camera rays for novel view synthesis (NVS) and 3D geometry reconstruction. One key reason for matching camera rays, instead of pixels as in prior works, is that the camera rays can be parameterized by the feature volume to carry both geometric and photometric information. Multi-view consistencies involving the camera rays and scene rendering can be naturally integrated into the joint optimization and network training, to impose physically meaningful constraints to improve the final quality of both the geometric reconstruction and photorealistic rendering. We formulate our per-ray optimization and matched ray coherence by focusing on camera rays passing through keypoints in the input images to elevate both the efficiency and accuracy of scene correspondences. Accumulated ray features along the feature volume provide a means to discount the coherence constraint amid erroneous ray matching. We demonstrate the effectiveness of CRAYM for both NVS and geometry reconstruction, over dense- or sparse-view settings, with qualitative and quantitative comparisons to state-of-the-art alternatives. | https://openreview.net/pdf/168f165b72a71321d07a27b12f06fd7ac0fa9bd3.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "qoKh6cLZa6",
"review_text": "The manuscript #3263 entitled \"CRAYM: Neural Field Optimization via Camera RAY Matching\" proposes a novel uncalibrated NeRF strategy based on prior keypoints matching across images. Specifically, the authors propose two novelties to improve the quality of the reconstruction and the pose estimation of the cameras: 1) Enriched ray features using surrounding rays sampled around the keypoints and 2) a ray matching index which can be used to re-weight the color regression part, leading to better robustness to occlusions.\nThe proposed technique has been evaluated across various standard datasets and against meaningful NeRF-like algorithms.\n\n- The idea of separating key rays and auxiliary rays is interesting and meaningful.\n- Numerous and conclusive results.\n- Assessment on a large number of datasets.\n- Good ablation study underlying the benefit of each novelty.\n\n- The robustness of the approach against outlier matches is not evaluated. Introducing artificial outliers (wrongly matched keypoints) into the dataset to assess how well the technique can handle mismatches would be of some interest.\n*Question*: Would the matched rays consistency and the epipolar geometry compensate for that? Or would the training diverge?\n\n- As stated in the literature review of this manuscript, other approaches taking advantage of the epipolar geometry and prior matching have already been designed in this context. I have difficulty understanding what is significantly different with this work apart from the sampling of additional surrounding rays and the matching ray index used to weight color prediction using image pairs. These two novelties seem rather incremental, but they nonetheless lead to strongly improved results.\n*Question*: I assume that the other keypoints-based approaches are not \"self-calibrated\". Is the proposed technique the first \"keypoint-based\" calibration-free NeRF? If it is not the case, it would be meaningful to compare against such techniques too.\n\n- Adding surrounding rays around a key ray appears to be quite effective; however, the sampling of auxiliary rays is not well described in the paper.\n*Question*: How are the rays sampled?\n\n- The initialization of the pose lacks details.\n *Question*: What is the effect of the pose initialization on the result?\n\n- The intrinsic parameters of the camera could additionally be optimized.\n*Question*: Just out of curiosity, have you conducted such an experiment?\n\n- In equation (4), it seems that the proposed solution considers only pairs of images. \n*Question*: How are those pairs selected?\n\nThe proposed approach is inspired by existing techniques integrating matched keypoints (like using the epipolar loss) and other techniques, such as NeuS.\n\n- The loss function contains many regularization factors.\n*Question*: Is the final loss hard to balance?\n\n\nOverall, the paper is interesting and proposes a few contributions that seem to lead to strongly improved results. Moreover, the approach has been evaluated on various standard datasets and against representative methods. However, this novel approach remains relatively incremental, and many points remain to be clarified regarding the robustness of the technique. For all the above-mentioned reasons, I would like to issue a rather mixed opinion regarding the acceptance of this work for this conference.\n\nThe questions are integrated in the Weaknesses part."
},
{
"confidence": 5,
"rating": 5,
"review_id": "KAfghjs2r1",
"review_text": "This paper presents a new technique called camera ray matching, which is integrated into the joint optimization of camera poses and a neural field. The method utilizes an uncalibrated set of images as input, incorporating photometric and geometric constraints through key points and key rays matching, with the aim of enhancing the quality of novel view rendering and 3D surface reconstruction. The approach comprises two simple modules and is implemented using grid-based representation (iNGP). Photometric experiments were exclusively compared with MLP-based methods, specifically NeRF-like, on the synthetic dataset of the vanilla NeRF, while geometric experiments demonstrate some positive results. The authors provide additional results in the appendix.\n\nThis work is a positive extension to the field of neural reconstruction (like NeRF and SDF) under the setting of images captured with noisy poses. Authors make efforts to simultaneously solve the problems involving the camera pose, detailed renderings, and accurate surface reconstruction. Experiments show good results.\n\nToo many factors are taken into account in the writing simultaneously, which leads to a lack of clear theme or a clear academic or technical problem to be addressed in this paper. The work appears to build incrementally upon previous research and offers limited novelty. The so-called Epipolar loss and Point-alignment loss are actually based on Bundle Adjustment (BA), using key points matching, which has been previously applied in the optimization of neural reconstruction in works such as SCNeRF, BARF, L2G, and Level2sfm. The proposed two modules do not bring significant innovation. It is also confusing that this work is implemented using a grid-based representation (i.e., iNGP), while the compared methods are implemented using MLP-based representation, which does not allow for a precise and fair comparison. I suggest that the authors refer to ZipNeRF for guidance on how to formulate research problems and conduct appropriate comparisons.\n\nBased on the mentioned shortcomings, I have several questions:\n\n1. What is the main research problem addressed in this paper? It appears that the paper intends to address three problems simultaneously: camera pose, fine-detailed rendering, and accurate surface reconstruction. However, if the goal is to solve the camera pose problem, it would be beneficial to present more results related to the accurate regression of camera poses before discussing high-fidelity results. Could you provide additional results about the accurate regression of camera poses?\n\n2. If the primary focus is on the issue of high-fidelity rendering under noisy camera poses, the experimental results do not offer strong conviction. It would be beneficial to showcase more experimental results on different types of datasets, similar to how BARF, SPARF, and L2G-NeRF were compared on the LLFF dataset, with an emphasis on trajectory of pose optimization and high-fidelity renderings.\n\n3. Initialization of camera poses is a sensitive issue in joint optimization. Have you attempted to change the initial camera poses or initialize poses using COLMAP to test the robustness of pose regression?\n\n4. This work is implemented using grid-based representation, while the compared methods are implemented using MLP-based representation, leading to an imprecise and unfair comparison. I am curious about the total number of iterations each method was trained for. 
BARF and L2G-NeRF trained all models for 200K iterations, and SPARF trained models for 100K iterations. It seems likely that this paper trained for significantly fewer iterations (perhaps 10k iterations) due to the inherent faster convergence of the grid-based representation. It would be fairer to re-implement this paper using MLP-based representation and train for a duration comparable to other methods, then make comparisons. Additional experiments are needed to support these claims.\n\n5. The comparison under sparse-view conditions lacks precision and distinction. SPARF evaluated on the DTU dataset with only 3 views, while you used 48 images in your setting, although you have reported results for sparse input (3 views) on the LEGO data, it may not be sufficiently convincing.\n\n6. Since Neus and PET-NeuS have compared the surface reconstructions on the DTU dataset, I would appreciate the inclusion of visual results. Additional ablation studies on 3D surface reconstruction using your modules on the DTU dataset would enhance the paper.\n\n7. Considering the integration of multi-resolution hash encodings into a neural surface representation, I recommend that the authors compare with Neus2. Neus2 has compared their work with Instant-NGP and Neus, showcasing fast training and detailed surface reconstruction. If the goal is to demonstrate the superiority of 3D surface reconstruction, it is most appropriate to consider Neus2 in the comparison."
},
{
"confidence": 5,
"rating": 8,
"review_id": "j4mwzcBXTx",
"review_text": "This work suggests a novel neural representation and training scheme that jointly solves for the scene representation and the multi-view camera localization. It is done using several new ideas that generalize existing NeRF based methods. \nThe representation itself is a combination of a geometry-network, which predicts a signed-distance-function (SDF) and a feature vector, that are fed into the texture-network that predicts the usual color and density values.\nThe main key novelty, is that the optimization is done over matching rays, obtained from matching keypoints using a pretrained network. The standard photometric loss function is extended to incorporate an epipolar loss (that constrains the camera positions) and a point-alignment loss that ensures the ray intersect at the predicted depth estimates along the rays. Another strong addition, is the use of 'auxiliary' rays around each matched pair of rays, from which features are fused to produce a more robust representation, that can aid the optimization under errors in matching and camera poses.\nExtensive experiments demonstrate the importance of each component and the strong performance of their combination.\n\n* The paper presents an extension of the NeRF framework, based on several novel and interesting additions that are framed in a single pipeline. The experimental results show that these contributions work well together and yield new state-of-the-art results, across the board.\n* One promising idea, in my view, is the joint optimization of both geometry and texture networks, which clearly complement eachother and are helpful in obtaining stronger and more accurate constraints on the scene understanding (as opposed to most NeRF pipelines that focus on image reconstruction and are less accurate for 3D reconstruction). \n* The other strong idea, is the joint optimization of matching rays, once again - imposing consistency contraints (on both camera and surface locations) that were not previously exploited to such an extent in prior work.\n* The paper is well written and the contributions are very clearly highlighted, while the understanding of the conventional parts is left for the reader (which is mostly fine).\n\n* Reproducibility - I believe that many details are missing (including from the appendix) for one to be able to implement the proposed method. For example:\n * What are the settings of the preprocessing SuperPoint and SuperGlue matching? What is the typical match density?\n * How are the auxiliary rays sampled? How many and under which distribution?\n * What is the function g in Eq 2 that fuses the key and auxiliary features?\n * What are the balance weights in the final loss (Eq. 7)?\n * How are poses initialized?\n* Complexity - There is no discussion what so ever about the impact of the suggested changes on memory and runtime complexity, but at traning and in inference.\n* Qualitative results are relatively limited. \n * Synthesized images are all very small, so it is difficult to appreciate the fidelity.\n * No depth images are shown\n * No examples of key and auxiliary point matches are shown (over entire images)\n\n* Is the training done only on matching key points? If so, what happens if correct matching keypoints do not 'cover' the space adequately?\n* How sensitive is the method to the quality and density of the 2D matches? 
It reportedly worked well on sparse image sets, which is somewhat surprising.\n* What are the runtimes (training and inference) compared to some baseline methods?\n* How different is the use of auxiliary rays compared to sampling from conical frustums as in Mip-NeRF?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "UoW5siYRPY",
"review_text": "This paper introduces Camera Ray Matching for optimizing camera poses and neural fields from multi-view images. The optimized feature volume supports novel view synthesis and 3D geometry reconstruction by probing camera rays, which carry both geometric and photometric information. CRAYM claims to improves efficiency and accuracy by focusing on keypoints and integrating multi-view consistencies, enhancing both geometric reconstruction and photorealistic rendering. The method shows result in NVS and geometry reconstruction compared to baseline methods.\n\n- The paper is well-structured and easy to follow.\n\n- Experiments were only conducted on NeRF-synthetic datasets and not on LLFF datasets.\n- Comparison is made with older baseline methods (e.g., SPARF, BARF, L2G) which are more than 2 years old. It’s recommended to include more recent methods such as NoPe-NeRF and BAA-NGP.\n- It is suggested that the authors perform Neural Image Alignment to enhance the evaluation.\n\nFigure 5 and Table 3 appear to have a significant overlap in the information they present."
}
] |
wJaCsnT9UE | Sharpness-diversity tradeoff: improving flat ensembles with SharpBalance | Recent studies on deep ensembles have identified the sharpness of the local minima of individual learners and the diversity of the ensemble members as key factors in improving test-time performance. Building on this, our study investigates the interplay between sharpness and diversity within deep ensembles, illustrating their crucial role in robust generalization to both in-distribution (ID) and out-of-distribution (OOD) data. We discover a trade-off between sharpness and diversity: minimizing the sharpness in the loss landscape tends to diminish the diversity of individual members within the ensemble, adversely affecting the ensemble's improvement. The trade-off is justified through our rigorous theoretical analysis and verified empirically through extensive experiments. To address the issue of reduced diversity, we introduce SharpBalance, a novel training approach that balances sharpness and diversity within ensembles. Theoretically, we show that our training strategy achieves a better sharpness-diversity trade-off. Empirically, we conducted comprehensive evaluations in various data sets (CIFAR-10, CIFAR-100, TinyImageNet) and showed that SharpBalance not only effectively improves the sharpness-diversity trade-off but also significantly improves ensemble performance in ID and OOD scenarios. | https://openreview.net/pdf/94d83140bd5537ee6cc7018fa3fdf7107c58db7b.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "tQ5dnyp6pr",
"review_text": "This paper presents introduces a training approach for ensemble learning called SharpBalance to balance sharpness and diversity within ensembles. This paper shows theoretically that SharpBalance achieves a better sharpness-diversity trade-off.\n\n1. Ensemble learning is an important research direction.\n2. Understanding of sharpness and diversity within deep ensembles is important for the study of generalization to both in-distribution and out-of-distribution data.\n3. The paper is technically sound.\n\n1. Since SharpBalance focuses \"on a diverse subset of the sharpest training data samples\", it may not apply in small datasets where available data is already sparse.\n2. Empirical improvement over existing methods is marginal.\n\n1. Why does SharpBalance seem more effective on corrupted data?\n2. Do models in an ensemble converge on the same local minima?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "pgwQPVcvcI",
"review_text": "This paper investigates the sharpness and diversity within deep ensembles. Specifically, it identifies the trade-off phenomenon between sharpness and diversity with both theoretical and empirical evidence. Additionally, it proposes a method called SharpBalance, which trains individuals using selective 'sharp' subsets. Conducted experiments have demonstrated the effectiveness of the proposed SharpBalance when applied to deep ensembles.\n\nThere are several strengths in this paper:\n\n- The exploration of sharpness and diversity in deep ensembles is both interesting and novel.\n\n- Sufficient theoretical and empirical evidence has been provided for validation.\n\n- The proposed method is simple, effective, and accompanied by code for verification.\n\nHowever, I still have the following concerns:\n\n - The evaluation seems a bit weak. The authors should consider comparing with more ensemble baselines.\n\n - What is the scale of $D_{SAM}^i$ and how does it change during training? Providing some details on this would help in understanding the proposed method.\n\n - Refer to Line 166: How do the authors train individuals with the full datasets? Are these individuals trained with different initializations?\n\n - (Optional) As described, the model's generalization is not merely correlated with sharpness, which aligns with some recent advanced SAM variants. Thus, integrating these advanced variants [1][2] with SharpBal would be more beneficial for studying the trade-off between sharpness and diversity.\n\nReferences:\n\n[1] Random Sharpness-Aware Minimization. In NeurIPS 2022.\n\n[2] Gradient Norm Aware Minimization Seeks First-Order Flatness and Improves Generalization. In CVPR 2023.\n\nPlease refer to the **Weaknesses**."
},
{
"confidence": 3,
"rating": 6,
"review_id": "08VW5TVNbk",
"review_text": "The paper proposes SharpBalance, that is a method aiming to investigate the relationship between sharpness and diversity for deep ensembles.\n\n- SharpBalance looks quite effective for the out-of-distribution setting. The goal of balancing sharpness and diversity within ensembles is an important idea. \n- Great theoretical analysis\n\n- The authors are aware of the paper called “Diversity-Aware Agnostic Ensemble of Sharpness Minimizers” [1], the idea is quite like the proposed paper, they aim to investigate the relations between sharpness and diversity on ensemble learning. I suggest the authors to discuss the main differences between both. \n\n[1] Anh Bui, Vy Vo, Tung Pham, Dinh Phung and Trung Le, Diversity-Aware Agnostic Ensemble of Sharpness Minimizers, arXiv:2403.13204. \n\n- Regarding the baselines the authors only compare SharpBalance with SAM. Nevertheless, newer, and stronger baselines like GSAM [2] and OBF [3] should also be benchmarked since they are the current state-of-the-art. \n\n[2] Zhuang, J., Gong, B., Yuan, L., Cui, Y., Adam, H., Dvornek, N., Tatikonda, S., Duncan, J., and Liu, T. Surrogate gap minimization improves sharpness-aware training. arXiv preprint arXiv:2203.08065, 2022. \n\n[3] Vani, A; Tung, F; Oliveira G; Sharifi H. Forget Sharpness: Perturbed Forgetting of Model Biases Within SAM Dynamics, International Conference on Machine Learning (ICML) 2024. \n\n- Another point to improve are the datasets. I strongly suggest the authors to benchmark with at least a couple large scale datasets. Options are ImageNet-V1 [4] for training and ImageNet-Real [5] and ImageNet-V2 [6] for testing, ImageNet-R [7] for out-of-distribution robustness benchmark and ImageNet-Sketch [8]. \n\n[4] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. IEEE, 2009 \n\n[5] Beyer, L., He ́naff, O. J., Kolesnikov, A., Zhai, X., and Oord, A. v. d. Are we done with imagenet? arXiv preprint arXiv:2006.07159, 2020. \n\n[6] Recht, B., Roelofs, R., Schmidt, L., and Shankar, V. Do imagenet classifiers generalize to imagenet? In Interna- tional conference on \nmachine learning, pp. 5389–5400. PMLR, 2019. \n\n[7] Hendrycks, D., Basart, S., Mu, N., Kadavath, S., Wang, F., Dorundo, E., Desai, R., Zhu, T., Parajuli, S., Guo, M., et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8340–8349, 2021. \n\n[8] Wang, H., Ge, S., Lipton, Z., and Xing, E. P. Learning robust global representations by penalizing local predictive power. In Advances in Neural Information Processing Systems, pp. 10506–10518, 2019.\n\n- Regarding the new discovery contribution. As I previously stated on the weakness section were the authors aware of the paper “Diversity-Aware Agnostic Ensemble of Sharpness Minimizers. Could the authors present the main differences between this method and SharpBalance."
},
{
"confidence": 2,
"rating": 5,
"review_id": "haBGktuqCH",
"review_text": "Ensemble methods and sharpness-aware optimization techniques are well-known strategies for improving generalization. This work identifies a trade-off between sharpness and diversity, observing that reducing sharpness can diminish diversity and harm ensemble performance. Through theoretical and empirical analysis of this sharpness-diversity trade-off, the authors present SharpBalance, an algorithm for training ensembles with sharpness-aware solutions without sacrificing diversity. Evaluation results on CIFAR-10/100, TinyImageNet, and their corrupted variants confirm the effectiveness of SharpBalance.\n\n- Ensemble methods and sharpness-aware optimization techniques are both prominent approaches for improving generalization. The aim of this work, which combines these two approaches, is well-motivated.\n- While the theoretical analysis uses the variance metric to indicate diversity, the experimental results show consistent trends across different diversity metrics. It suggests that the proposed analysis is widely applicable to the general concept of diversity.\n- Extensive empirical results effectively validate the theoretical analysis. The summary plots of the results are generally highly readable.\n\n- The evaluation results are centered exclusively on classification accuracy; since ensembling usually highlights both predictive accuracy and uncertainty, relying solely on accuracy to assess overall performance is insufficient. \n- Specifically, for the corrupted CIFAR benchmark, uncertainty metrics like negative log-likelihood or expected calibration error are more important than test accuracy, but these aspects are not currently considered.\n- It seems that all experiments were conducted exclusively with residual networks. It is essential to verify if the proposed analysis and algorithm are applicable to other architecture families as well.\n\n- It appears that the current emphasis is on logit-ensemble (lines 82-83). Does the same rationale apply when ensembling categorical probabilities (i.e., probability-ensemble)?\n- In the proposed SharpBalance algorithm, it seems that the training data and objective for the i-th member are defined using other members (such as members i+1, i+2, as illustrated in the figure). Does this imply that in practice, each member is trained sequentially?"
}
] |
wJAF8TGVUG | S-MolSearch: 3D Semi-supervised Contrastive Learning for Bioactive Molecule Search | Virtual Screening is an essential technique in the early phases of drug discovery, aimed at identifying promising drug candidates from vast molecular libraries. Recently, ligand-based virtual screening has garnered significant attention due to its efficacy in conducting extensive database screenings without relying on specific protein-binding site information. Obtaining binding affinity data for complexes is highly expensive, resulting in a limited amount of available data that covers a relatively small chemical space. Moreover, these datasets contain a significant amount of inconsistent noise. It is challenging to identify an inductive bias that consistently maintains the integrity of molecular activity during data augmentation. To tackle these challenges, we propose S-MolSearch, the first framework to our knowledge, that leverages molecular 3D information and affinity information in semi-supervised contrastive learning for ligand-based virtual screening. Drawing on the principles of inverse optimal transport, S-MolSearch efficiently processes both labeled and unlabeled data, training molecular structural encoders while generating soft labels for the unlabeled data. This design allows S-MolSearch to adaptively utilize unlabeled data within the learning process. Empirically, S-MolSearch demonstrates superior performance on widely-used benchmarks LIT-PCBA and DUD-E. It surpasses both structure-based and ligand-based virtual screening methods for AUROC, BEDROC and EF. | https://openreview.net/pdf/46d73ed2879dccfa7cc4be660af5ec5d12d274b1.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "QuCyzNay1D",
"review_text": "The paper introduces S-MolSearch, a framework for ligand-based virtual screening in drug discovery that addresses the challenges of limited and noisy binding affinity data. By utilizing molecular 3D information and semi-supervised contrastive learning, S-MolSearch processes both labeled and unlabeled data to train molecular structural encoders and generate soft labels for the unlabeled data, drawing on inverse optimal transport principles. The framework outperforms existing structure-based and ligand-based virtual screening methods, as evidenced by its superior performance on the LIT-PCBA and DUD-E benchmark datasets.\n\n- Well-written\n- Well-organized experimental settings and comparison methods\n\n- There is a lack of discussion on the reasons behind the performance differences and improvements, with only numerical comparisons of the experimental results.\n- There is insufficient experimentation and consideration regarding the time required for virtual screening.\n- There are no results for experimental metrics such as AUROC or BEDROC, which were used in previous studies.\n\n- Both S-MolSearch and existing methods experience a decline in performance as the % of EF increases. Additional discussion on the reasons for this phenomenon is needed.\n- Why do soft labels based on inverse optimal transport seem to have a significant impact on the DUD-E dataset but a lesser impact on the LIT-PCBA dataset?\n- What aspects of the semi-supervised approach in Table 4 do the authors think primarily contributed to the performance improvement compared to fine-tuning?\n- Is it possible to extend this method from a zero-shot setting to a few-shot setting? If so, how do the authors think its performance would compare to existing methods in that case?\n- In virtual screening, not only performance but also processing time is important. How does the screening time compare to that in existing studies?\n- How does the performance compare to existing models when using measurements like AUROC or BEDROC instead of EF #%?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "szukWBUdqq",
"review_text": "The paper introduces \"S-MolSearch,\" a semi-supervised contrastive learning framework designed for ligand-based virtual screening in drug discovery. This framework uniquely leverages labeled binding affinity information to produce soft labels for unlabeled molecules, integrating 3D molecular structures and binding affinity data. The paper also proposes a novel semi-supervised learning paradigm that combines contrastive learning with Inverse Optimal Transport (IOT).\n\n1. The supervision idea is novel and useful, and the target application is very impactful with broad implications.\n2. The paper is well-written and the experiments are comprehensive.\n\n1.\tMemory Consumption Concerns: The model employs a parallel architecture with two f_\\theta encoders and one g_\\phi encoder, based on the Uni-Mol framework. Although utilizing pretrained models has shown significant performance benefits, the paper should address potential memory management strategies, especially for future applications involving molecules with a greater number of atoms.\n2.\tUtilization of 3D Structures: The paper promotes a novel semi-supervised contrastive learning paradigm, yet the core contribution does not seem to revolve around the innovative use of 3D structures, as this capability primarily stems from the Uni-Mol architecture. It would be beneficial if the authors could clarify any specific enhancements made to ensure the effective preservation and utilization of geometric information within the model. Absent such enhancements, clearer distinctions should be made regarding the role of 3D structures to prevent misconceptions about the paper presenting a new geometric deep learning technique.\n3.\tClarity in Section 3.4: The explanation of how $\\Gamma$, which approximates the distribution of $C$ under constraints from $U(p,q)$, relates to the continuous optimal transport problem is not clear. Moreover, the motivation and necessity of soft labels, beyond experimental justifications, needs further elaboration. The section would benefit from additional visual aids or high-level descriptions, akin to the clarifications provided in sections 3.3 and 3.5, to aid in comprehension.\n4.\tComponent Efficacy in Table 3: There appears to be a discrepancy in the impact of model components across different benchmarks—soft labels are pivotal for DUD-E, whereas pretraining is more crucial for LIT-PCBA, with soft labels showing minimal importance. Insights into this inconsistency would be valuable. Furthermore, an evaluation of how the Uni-Mol encoder alone performs on these tasks would provide additional context on the effectiveness of the proposed enhancements\n\nMinor points and typos:\nL153-154 is not clear.\nL162: It would be beneficial to include illustrations of $M_{sup}$ in the figures for clarity.\nFormula 2 and L 168: It is better to give intuitive explanations of $1_N$.\nL184: Inconsistent notation. $g(\\psi)$ or $g_\\psi$?\nL281: Misplaced comma.\n\nThe same as weakness."
},
{
"confidence": 4,
"rating": 6,
"review_id": "bXoOsM5K8N",
"review_text": "This paper proposed a Ligand-based Virtual Screening method S-MolSearch. which can leverages molecular 3D information and affinity information in semi-supervised contrastive learning.\n\n1. The method is able to leverage both labeled and unlabeled data simultaneously and achieves excellent performance on DUDE and Lit-PCBA benchmarks.\n2. The approach of using the principles of inverse optimal transport for semi-supervised learning is quite innovative and worth adopting.\n3. The ablation experiments are sufficient, and the experimental section is quite robust.\n\n1. In the method section, it is unclear to me whether during inference only encoder$g_{\\psi}$ is used, or both $\\psi$ and $f_{\\theta}$ are used simultaneously?\n2. If the application scenario involves a newly provided protein without reference molecules, how should ligand-based virtual screening methods handle this situation?\n\nRefer to weakness."
},
{
"confidence": 3,
"rating": 6,
"review_id": "vzJXp6c6hd",
"review_text": "The paper introduces a new method for ligand-based virtual screening based on contrastive learning and inverse optimal transport. Two molecule encoders are trained. The first encoder is trained using a contrastive loss function on the ChEMBL data by pairing compounds that are active toward the same protein, and compounds active toward different targets are treated as negative pairs. Next, the second encoder is trained by using the pseudo-labels produced by the first model. The proposed model is tested on two benchmark datasets, DUD-E and LIT-PCBA. Additionally, an ablation study is conducted, and the impact of the labeled data scale is visualized.\n\nOriginality:\n- The approach seems to be novel. I have not found any similar papers that use optimal transport for the ligand-based virtual screening task.\n\nQuality:\n- The theory described in the paper is formally proven in the Appendix.\n- The proposed method obtains excellent results in both tested benchmarks.\n- The quality of the learned representation is demonstrated in Figure 2.\n\nClarity:\n- The paper is written clearly and is easy to follow.\n- Figure 1 shows the idea of the model very clearly.\n\nSignificance:\n- The presented method is an interesting and effective way to utilize all the available public data to build a strong model for ligand-based virtual screening.\n\nQuality:\n- It would be interesting to see some qualitative examples of molecules that were found to be similar to the active compounds in the virtual screening process. Do the trained similarities correlate with the Tanimoto similarity?\n\nClarity:\n- Does the “sup” subscript in Section 3.4 correspond to the “label” subscript in Proposition 1? What is the difference between these two sets?\n\nMinor comments:\n- A typo in line 151, “we employs InfoNCE.”\n- In line 183, something is missing before “1”.\n\n1. How do you solve the cases in contrastive learning where one molecule binds to multiple targets? In this example, you need to be careful not to create a negative pair, where one molecule is the one binding to both targets and the other molecule binds to only one of them.\n2. How do you avoid treating two molecules as a negative pair if both can bind to the same target? For example, they are inhibitors of two similar proteins, which increases the chance of them binding to both at the same time."
}
] |
wIE991zhXH | Bandits with Preference Feedback: A Stackelberg Game Perspective | Bandits with preference feedback present a powerful tool for optimizing unknown target functions when only pairwise comparisons are allowed instead of direct value queries. This model allows for incorporating human feedback into online inference and optimization and has been employed in systems for tuning large language models. The problem is fairly understood in toy settings with linear target functions or over finite small domains that limits practical interest. Taking the next step, we consider infinite domains and kernelized rewards. In this setting, selecting a pair of actions is quite challenging and requires balancing exploration and exploitation at two levels: within the pair, and along the iterations of the algorithm. We propose MaxMinLCB, which emulates this trade-off as a zero-sum Stackelberg game and chooses action pairs that are informative and have favorable reward values. MaxMinLCB consistently outperforms algorithms in the literature and satisfies an anytime-valid rate-optimal regret guarantee. This is owed to our novel preference-based confidence sequences for kernelized logistic estimators, which are of independent interest. | https://openreview.net/pdf/db5295cfa310af4c44f8d5f32aaa6fcf1c92114d.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "wUpA1tiYmC",
"review_text": "This paper considers bandits with preference feedback. It first constructs a novel confidence set that covers the ground truth with high probability. Then from a Stackelberg game perspective, it proposes an efficient algorithm that enjoys tighter regret bound than SOTA.\n\n1. The technique used to construct the confidence set is interesting. The resulting confidence set is tighter.\n2. The Stackelberg game perspective is interesting, and allows the author(s) to design the algorithm with better exploration-exploitation trade-off as demonstrated in the experiment.\n\nThe major concern is the practical applicability of the algorithm. Seems that the proposed algorithm can hardly scale up to a higher dimension (e.g., dimension equals to 7). In the proposed algorithm, a complicated sequential optimization problem needs to be solved. Notably, the experiment only considers two-dimensional problem, in sharp contrast to recent works (e.g., 12-dimensional problem considered in Xu et al. [2024]).\n\n1. In the abstract, the review claims that the regret bound is 'rate-optimal'. Is there a matching lower bound?\n2. Can the results be generalized to other link functions?\n3. In line 168, should it be $\\sup_{|a|\\leq B}$ instead of $\\sup_{a\\leq B}$?\n4. Could the authors explain the linear growth of multiSBM in Fig. 1 (b)?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "82qszbNQ4J",
"review_text": "This paper considers novel game-theoretic acquisition function for pairwise action selection with preference feedback. It is tailored to the setting with infinite domains and nonlinear kernelized rewards. The preference-based confidence sequences for kernelized utility functions are shown to be tight and anytime valid. The proposed algorithm MAXMINLCB is shown to satisfy a sublinear regret. Various simulations were conducted to showcase the advantage of the proposed method.\n\n1. Although preference-based bandit optimization with linear utility functions is fairly well understood, such approaches cannot capture real-world problems with complex nonlinear utility functions. This paper aims to close this gap. The considered problem is timing and interesting. \n\n2. The technical contribution is non-trivial. Although there have been attempts to prove convergence of kernelized algorithms for preference-based bandits, such works employ a regression likelihood model which requires them to assume that both the utility and the probability of preference lie in an RKHS. Moreover, a sample-efficient algorithm is lacking for such approaches. In contrast, this work uses a kernelized logistic negative log-likelihood loss to infer the utility function, and provide confidence sets for its minimizer.\n\n3. Some theoretical result, like Kernelized Logistic Confidence Sequences in Theorem 2, is also of independent interest. \n\n4. In spite of a theoretical paper, it is well written and is easy to follow.\n\n1. In practice, how to determine the hyper-parameters like $\\gamma_t$, $L$, and $B$ in (5)? Is there any data-driven way to select them?\n\n2. In the main Theorem 6, the regret bound is $\\gamma_T^{D}\\sqrt{T}$. The term $\\gamma_T^{D}$ is the T-step information gain of kernel, which is also a function of $T$. The authors claim that this rate improves that of Xu et al. (2024) by a factor of $T^{1/4}$. However, the cumulative regret bound in Theorem 5.2 of Xu et al. (2024) is of a similar order. Xu et al. (2024) also provided explicit regret upper bounds for various common kernels in Theorem 5.5. Hence, it is also interesting to provide an explicit form of $\\gamma_T^{D}$ for some common kernels, and to compare these regret upper bounds in a fair way. \n\n3. In Figure 1, the authors presented the result for the Ackley function, which shows a clear advantage of the proposed method. However, in more extensive simulations (e.g., Matyas function in Figure 6, ) in the appendix, the proposed method is outperformed by the competitors. It is helpful to provide some discussion on these results, and offer some insights on when the proposed method would work well. This is helpful for practitioners.\n\nsee Weakness"
},
{
"confidence": 3,
"rating": 6,
"review_id": "MqGtV6ARVj",
"review_text": "The paper examines the problem of bandit optimization with preference feedback in large domains and nonlinear (kernelized) rewards. It introduces MAXMINLCB, which adopts a game-theoretic approach to action selection under comparative feedback. Additionally, it proposes kernelized preference-based confidence sets, which can be utilized in related problems.\n\n(1) Rather than jointly selecting the arms in dueling bandits, the proposed method jointly optimizes both actions by choosing them as the equilibrium of a two-player zero-sum Stackelberg game. This approach enables a more efficient exploration/exploitation trade-off.\n\n(2) The regret guarantee presented in this paper is tighter by a factor of $T^{1/4}$ compared to Xu et al. (2024).\n\n(1) Although the paper uses a kernelized logistic model to approximate the rewards, this approach may remain too simplistic for capturing the complexity of rewards in real-world applications.\n\n(2) The paper lacks a comparison in the experiments with the related work by Xu et al. (2024).\n\n(3) In real applications, it is more common to rank between two state-action pairs. However, the paper does not consider contextual information and solely focuses on the multi-armed setting, which is less interesting and useful.\n\n(1) Can you implement the algorithms compared in Section 6.2 using the original confidence sets from the references?\n\n(2) Can this work be extended to contextual bandits with preference feedback?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "Hn9bPeXEBJ",
"review_text": "This paper considers bandit optimization with preference feedback over continuous action spaces and kernelized reward function. The goal in this problem is to minimize the dueling regret against an optimal action over a finite time-horizon. Previous works on this problem are either restricted to finite action spaces or linear reward functions. The proposed algorithm casts the problem as kernalized logistic regression and designs confidence sets for the relative preference between two actions. It then proposes an action selection strategy based on a game theoretic Leader-Follower formulation that utilizes these confidence intervals. The paper provides a regret bound as well as empirical evaluation for the proposed algorithm.\n\nThe main contributions of the paper are two-fold:\n\n1. Expanding the existing literature on dueling bandits by studying kernelized reward functions under infinite and continuous action sets. This requires new techniques to bound the confidence intervals. \n\n2. Proposing a principled game-theoretic approach to action selection in dueling bandits that can be of further interest.\n\nIn my opinion these are two important contributions to the literature. Since these ideas are likely to be relevant to other learning problems with preference feedback such as RLHF, I think that the results in this paper have a good scope. The paper is well-written and the contributions are clear.\n\nExperimental evaluation can include other algorithms that are known to perform better than RUCB such as RMED (Komiyama et al., 2015) and Double Thompson Sampling (Wu and Liu, 2016).\n\nIt would be interesting to see if the current approach can be extended to other link functions beyond the sigmoid such as probit."
}
] |
wHFaAH3E8z | FasMe: Fast and Sample-efficient Meta Estimator for Precision Matrix Learning in Small Sample Settings | Precision matrix estimation is a ubiquitous task featuring numerous applications such as rare disease diagnosis and neural connectivity exploration. However, this task becomes challenging in small sample settings, where the number of samples is significantly less than the number of dimensions, leading to unreliable estimates. Previous approaches either fail to perform well in small sample settings or suffer from inefficient estimation processes, even when incorporating meta-learning techniques. To this end, we propose a novel approach FasMe for Fast and Sample-efficient Meta Precision Matrix Learning, which first extracts meta-knowledge through a multi-task learning diagram. Then, meta-knowledge constraints are applied using a maximum determinant matrix completion algorithm for the novel task. As a result, we reduce the sample size requirements to $O(\log p/K)$ per meta-training task and $O(\log\vert \mathcal{G}\vert)$ for the meta-testing task. Moreover, the hereby proposed model only needs $O(p \log\epsilon^{-1})$ time and $O(p)$ memory for converging to an $\epsilon$-accurate solution. On multiple synthetic and biomedical datasets, FasMe is at least ten times faster than the four baselines while promoting prediction accuracy in small sample settings. | https://openreview.net/pdf/7fb400b0af5f12f29a6d981142457903acaa8378.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "dvYX13qjX2",
"review_text": "This paper proposes a meta-learning method for estimating the precision matrix on a new task with small data.\nThe proposed method uses common edges estimated from multiple auxiliary datasets as meta-knowledge. Then, it estimates the precision matrix on the new task, assuming its true edges contain all the estimated common edges (meta-knowledge). Some theoretical guarantees are also provided.\nExperiments with synthetic and real-world datasets show the effectiveness and efficiency of the proposed method.\n\n- The paper is generally well-written and easy to follow.\n- Concrete algorithm and its theoretical guarantees are presented (but, I didn't read their proof).\n- Strong performance in terms of both accuracy and efficiency in the experiments.\n\n- As the authors stated in the Limitation section, the assumption of Eq. 8 might not be held in general machine learning tasks, although it fits well in biological domains.\n\nPlease see the Weaknesses."
},
{
"confidence": 3,
"rating": 7,
"review_id": "6MDj0we5Xu",
"review_text": "This paper introduces FasMe, a meta-learning approach for efficient precision matrix estimation in small sample settings. By leveraging meta-knowledge and maximum determinant matrix completion, FasMe reduces sample size requirements and improves computational efficiency. Experimental results show FasMe to be significantly faster and more accurate than existing methods, particularly in low-data environments.\n\n1) Paper investigates a key issue in precision matrix estimation and proposes a reasonable method to address the problem.\n\n2) Paper provides thorough theoretical and experimental analyses to justify the method’s ability to reduce the sample requirement and enhance learning efficiency.\n\n3) Paper has good representation.\n\nI have few doubts about the method and experiments presented in the article.\n\nIs the work primarily applicable to biological scenarios, or can it be widely used in other contexts as well?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "rQ0ddaNf4B",
"review_text": "The authors propose a method to estimate sparse precision matrices from few samples. Theoretical properties of the proposed method are studied, and experiments on synthetic and brain fMRI data are presented.\n\nStrengths:\n* The paper is overall well written, and fairly easy to follow and comprehend.\n* Theoretical guarantees for sub-Gaussian distributed random variables are presented, and they seem to be novel contribution. \n* The experiments on the synthetic dataset clearly demonstrate improvement over the relevant competing methods.\n\nWeaknesses:\n* Currently quantitative results are presented for synthetic data. It would be nice to see more quantitative evaluations on benchmark datasets.\n\n* How are the meta learning tasks linked to learning the new precision matrix in the theoretical part? It would be nice if the authors can elaborate on the assumptions for the learnability of the new matrix.\n2. Why is N=0 in Eq. 13?"
}
] |
wH36UKML4x | Trained Models Tell Us How to Make Them Robust to Spurious Correlation without Group Annotation | Classifiers trained with Empirical Risk Minimization (ERM) tend to rely on attributes that have high spurious correlation with the target. This can degrade the performance on underrepresented (or 'minority') groups that lack these attributes, posing significant challenges for both out-of-distribution generalization and fairness objectives. Many studies aim to improve robustness to spurious correlation, yet nearly all require group annotation for training and/or model selection. This constrains their applicability in situations where the nature of the spurious correlation is not known, or when group labels for certain spurious attributes are either insufficient or completely absent. To meet the demand for effectively enhancing the model robustness under minimal assumptions about group annotation, we propose Environment-based Validation and Loss-based Sampling (EVaLS). It uses the losses from a trained model to construct a balanced dataset of high-loss and low-loss samples in which the training data group imbalance is mitigated. This results in a significant robustness to group shifts when equipped with a simple mechanism of last layer retraining. Furthermore, by utilizing environment inference methods for creating diverse environments with correlation shifts, EVaLS can potentially eliminate the need for group annotation in the validation data. In such a context, the worst environment accuracy acts as a reliable surrogate throughout the retraining process for tuning hyperparameters and finding a model that performs well across diverse group shifts. EVaLS effectively achieves group robustness, showing that group annotation is not necessary even for validation. It is a fast, straightforward, and effective approach that reaches near-optimal worst group accuracy without needing group annotations, marking a new chapter in the robustness of trained models against spurious correlation. | https://openreview.net/pdf/128fdcdc1fc3e96ce8bd902eb45b7eeb87c64f61.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "CyizSs3Kl0",
"review_text": "This paper addresses the problem of subpopulation generalization, also known as spurious correlations. Building on the Last Layer Retraining (DFR) method, it removes the constraints on a small subset of annotations. The paper introduces the Environment-based Validation and Loss-based Sampling (EVaLS) method. Unlike DFR, EVaLS divides the validation set $D^{val}$ into two parts: (1) $D^{LL}$, where losses from an ERM-trained model are used as a proxy for identifying minority groups for retraining, and (2) $D^{MS}$, where environment inference methods are used for partitioning environments. The paper presents theoretical insights and empirical results demonstrating the effectiveness of EVaLS.\n\n* The paper is well-structured and presented in a clear and organized manner, making it easy to comprehend and follow along.\n* The proposed method is simple but effective and explores a relatively challenging area in existing literature (*i.e.* subgroup generalization without group annotations). \n* The authors provide some theoretical analysis to support their claims.\n\n* The novelty and contribution of the proposed method may be limited for the following reasons: 1) The paper combines multiple previously proposed methods (*i.e.* DFR [1], EIIL [2]) all at once, which inherently guarantees a nontrivial performance; (2) The primary technical contribution, at least from my perspective, is the loss-based sampling, which has been already explored extensively in the noisy label literature and has been used as tools for pseudo-labeling. \n* The paper fails to discuss recently proposed methods that also require no group annotations, such as SELF [3], BAM [4], and BPA [5]. In particular, SELF is also a direct follow-up of DFR. The authors are encouraged to discuss the limitations and strengths of loss-based schemes against the class-based schemes advocated by SELF and BAM.\n* More analyses can be included to provide further understanding of the selected loss-based samples. For example, given a threshold, how much percent of the high-loss and low-loss samples are indeed the minority and majority samples and how does this percentage change with the threshold?\n\n[1] Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations, ICLR 2023\n\n[2] Environment inference for invariant learning. ICML 2021\n\n[3] Towards Last-layer Retraining for Group Robustness with Fewer Annotations. NeurIPS 2023 \n\n[4] Bias Amplification Enhances Minority Performance. TMLR 2024\n\n[5] Unsupervised learning of debiased representations with pseudo-attributes. CVPR 2022\n\n* Is EVaLS sensitive to hyperparameters? \n* [minor] There seem to be typos in your corollary C.1.\n* Can the authors make clarifications on how the conditions in proposition 3.1 are met or relaxed in practice? How does the distribution of actual experimental benchmark datasets compare to your assumptions?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "wJD1mvJ2KW",
"review_text": "To address the issue of spurious correlations when group labels are unavailable, this paper proposes a new method called EVaLS. It first creates a balanced training dataset using loss-based sampling. Then, it evaluates the accuracy of the balanced training set based on the inferred environments from the validation set, and selects models accordingly.\n\n1. The paper is well-written, and includes a rich set of experiments with necessary theoretical explanations.\n\n2. It is essential to discuss the multiple (unknown) spurious features case which has been overlooked in previous studies.\n\n1. Why is the approach of using high-loss points (considered as the minority group) more effective than directly using misclassified points (considered as the minority group) in methods like JTT? Intuitively, compared to misclassified points, high-loss points are more \"implicit\" and no obvious thresholds, which could potentially result in high-loss points actually belonging to the majority group, thus exacerbating the imbalance in resampling.\n\n2. If the author can show the balance level of samples obtained through loss-based sampling compared to directly using labels (misclassified points), it could further illustrate the advantages of loss-based sampling.\n\n3. In Section \"Mitigating Multiple Shortcut Attributes\", if color is treated as a known spurious attribute and shape as an unknown spurious attribute, how would the performance of EVaLS be affected? Based on my understanding, there is a possibility that simplicity bias could cause the model to prioritize learning the simpler feature, color, and struggle to learn the more complex shape attribute. Therefore, considering color as known and shape as unknown can better show the performance of EVaLS in handling complex spurious features.\n\nSee weaknesses."
},
{
"confidence": 4,
"rating": 6,
"review_id": "65p9B2C3Bu",
"review_text": "The paper studies how to improve the model’s robustness to multiple spurious correlations when the group labels (indicator for spurious correlation) are unknown in general. The proposed approach, EVaLS, leverages the loss from a base ERM model to sample a balanced subset to prevent learning from spurious correlations. In addition, a new synthetic dataset (Dominoes-CMF) for multiple spurious attributes is crafted. Empirically, the proposed approach sometimes has advantages over the rest of the baselines when using the same amount of additional information (group label).\n\n1. The main paper is generally well-written. \n2. The theoretical analysis in Section 3.3 (with derivations and proofs in Appendix) shows that for one-dimensional Gaussian distributions, choosing the tails on the two sides of the distributions creates balanced groups, even though the original data distribution is skewed. \n3. Environment inference technique is demonstrated to be useful for separating the dataset into groups with different distributions of the subpopulations and then for model selection. \n4. The proposed technique only requires last-layer retraining on part of the validation set, which is generally more efficient.\n\n1. Figure 2 attempts to illustrate more minority samples have high loss while the majority samples have low loss. However, in each of the plots, only the % of one of the minority or majority groups is shown. The illustration can be improved by showing the % of both majority and minority groups in the same plot, and showing the actual distribution of the loss for the groups. \n2. Though the idea is straightforward, it is unclear how the loss-based instance sampling is actually implemented. It is helpful to provide an algorithm or pseudocode to improve the presentation. \n3. The theoretical analysis is generally sound but limited to a case without discussing the use of the loss (which may not be Gaussian) and the spurious correlations (which involve at least two dimensions of core and spurious features [1]). \n4. The experimental results are less polished and sometimes the advantages are not so clear over other baselines. Some results are missing for datasets such as UrbanCars and MultiNLI. Only a few baselines are compared for the new dataset in Table 2. There is also no convincing and fine-grained analysis (e.g., ablation study) to understand how the proposed approach ensures data balancing and improves group robustness. \n5. The paper initially focuses on improving group robustness when multiple spurious correlations are present, but the experimental results are lacking for these more challenging datasets. \n\n[1] Sagawa, Shiori, Aditi Raghunathan, Pang Wei Koh, and Percy Liang. \"An investigation of why overparameterization exacerbates spurious correlations.\" In *International Conference on Machine Learning*, pp. 8346-8356. PMLR, 2020.\n\n1. Since it’s an essential prerequisite for loss-based sampling, how well are the majority/minority groups separated in the logit space?\n2. How is the number of balancing samples $k$ chosen? How balanced are the samples when they are selected from the optimal $k$? \n3. I am curious about why the result for Civilcomment dataset is so much higher than the other baselines. Is the evaluation consistent with the baseline methods?\n4. Minor typo: line 233 should be without loss of generality (w.l.o.g.). \n5. Other minor issues with the experiments: The best-performing results in each category of approaches should be bold. 
The column header “best” should be “average” in Tables 4 and 5."
}
] |
wGjSbaMsop | Algorithmic Collective Action in Recommender Systems: Promoting Songs by Reordering Playlists | We investigate algorithmic collective action in transformer-based recommender systems. Our use case is a collective of fans aiming to promote the visibility of an underrepresented artist by strategically placing one of their songs in the existing playlists they control. We introduce two easily implementable strategies to select the position at which to insert the song and boost recommendations at test time. The strategies exploit statistical properties of the learner to leverage discontinuities in the recommendations, and the long-tail nature of song distributions. We evaluate the efficacy of our strategies using a publicly available recommender system model released by a major music streaming platform. Our findings reveal that even small collectives (controlling less than 0.01\% of the training data) can achieve up to $40\times$ more test time recommendations than songs with similar training set occurrences, on average. Focusing on the externalities of the strategy, we find that the recommendations of other songs are largely preserved, and the newly gained recommendations are distributed across various artists. Together, our findings demonstrate how carefully designed collective action strategies can be effective while not necessarily being adversarial. | https://openreview.net/pdf/5ff95834271c7f7d9900560d509c88ba7ff05215.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "hoEx7Ym36l",
"review_text": "This paper explores the possibility of boosting the recommendation of a song in an automatic playlist continuation system by using a collective strategy of adding the song into the training playlists of the APC at a specific position. The paper shows that adopting a strategy that targets low frequency context makes it possible to very significantly boost the exposition of the song in the output of the APC.\n\n- Very interesting idea which is novel in the domain of music recommendation\n- Quite surprising experimental result that is worth sharing (possibility of boosting exposition of a song with the DirLof strategy).\n\n- Limited scope:\n - The idea is tested in the limited context of automatic playlist continuation and is likely not adaptable to broader applications.\n - Only one APC model was tested, so the results may be specific to this model. It would have been a good idea to test the impact of the collective action on other models (such as the baseline proposed in the reference paper for the APC model). Also, it’s unclear whether the effect would be robust to hyperparameter changes in the APC model.\n - the method is tested in a static context (not in the real world), so the actual impact of the method on usage (for instance, whether a user exposed through the APC to a song boosted by the collective strategy would listen to it or not) is not tested (this would require access to the production of a streaming music service, though)\n- Very limited insights on the design of the efficient strategy (DirLof) and why it works. The presentation of the strategy is actually quite unclear, while it’s central to the paper.\n- There may be other simpler baselines that would be worth testing. The paper shows that inserting the song at the end of the playlists is less performance than inserting the song randomly, which suggests that inserting it earlier in the playlist sequence may help. So a baseline that would always insert the song at the beginning of the sequence could have decent results (the first place is likely to be avoided as it would mean that the song would never appear as the target in the training of the APC, but low positions that appear regularly as targets in the training could be considered).\n\n- Could the authors rephrase / clarify the definition of the DirLof strategy and rationale behind it? something which is very surprising is that “the collective selects the anchor songs s0 with the smallest overall song frequency”. But it’s likely that smallest song frequency is for song appearing only once among playlists and these song are then mostly noise in playlists and it’s hard to get why those anchors would help improving recommandation of the inserted song. Also, statistics on songs with low frequency are likely much more difficult to obtain (the statistics will be much less reliable than for popular songs).\n- It’s very unclear what the distributions over playlists are (P, P0, P*). It seems it’s almost used as a synonym of dataset, and the expectations are actually just empirical expectations. I’d like the author to clarify this point and to explain why it’s necessary to introduce such notations.\n- In table 2, it’s unclear how statistics are estimated when there is no training data to compute them from (while there is still a significant amplification)."
},
{
"confidence": 4,
"rating": 4,
"review_id": "p8tUyZzJDW",
"review_text": "They proposed a strategy for the streaming platform users to collectively act in order to promote targeted songs. The promotion efficacy is measured by the targeted songs' recommendation frequency boost at testing time. This strategy is approved to be effective through simulation experiment. Another finding is that this strategy has minimum impact to the performance of the recommendation system as a whole, i.e., by preserving user experience to other non-targeted songs.\n\n* This is a novel idea, presented in an interesting domain with a good amount of related work.\n* The writing of the paper is clear. The motivation is sound. \n* The experiment performed by the author successfully verified the efficacy of the collective strategy.\n\n* Limited scope. See limitations below.\n* Lack of technical novelty and contribution. The evaluation result would be a good report but I would recommend the author to seek publication in a different conference.\n\nThe context selection methods discussed in section 3.2 seems pretty arbitrary to me. What reason or motivation does it make you choose the `InClust` or `DirLoF` approach? How would you approve these are guaranteed to work?\n\nWhat if I say a good approach could be to find the most similar song (embedding similar higher than x) that is at least y popular and place our targeted song z ( z could be -1, 1, 2, 3) positions after it?"
},
{
"confidence": 5,
"rating": 8,
"review_id": "1bMh28ld9O",
"review_text": "The paper shows that strategic collective action by a small fraction of the population can lead to significant amplification of a particular song in a recommender system. The authors propose two strategies for the collective (for a transformer-based song recommender) that achieve this amplification.\n\n- The setup is original and focuses on strategic collective action by a fraction of users in a recommender system and how this can increase a target song’s reach.\n- The performance loss for the platform is negligible, the collective’s strategies are not adversarial (for e.g. fake profiles, artificial perturbations) and based on 1-edit distance to the original playlist. The paper shows that recommendations are largely preserved.\n\n- The experiments follow the MF-Transformer in [7], to make the paper self-contained it would be beneficial to have a description of $\\phi(.)$ and of $g(.)$ and the loss function in Section 2.1 or the Appendix. \n- I found the strategies in Sec 3.2 hard to parse, perhaps a figure showing the original playlist and the possible changes a user in the collective can do under the two strategies would be helpful. \n- Minor: I think the notation h(.) is overloaded for the song/playlist embedding in 2.1 and for the strategy mapping which inserts s* into a playlist. \tAlso, fig 6 could use different colors for the different strategies.\n\n- Can you clarify the intuition and communication required by the collective for the two playlist modification strategies in 3.2? \nInClust seems to place the target song before the most popular songs in the collective’s playlist whereas DirLoF places the target song after a low popularity song? \nBoth strategies provide amplification in the experiments. How much communication among the collective do both require, is one modification easier practically?\n- Section 1.1 mentions “Our strategies exploit the properties of the attention function in the transformer model, without requiring knowledge of the model parameters”, can you elaborate?"
},
{
"confidence": 4,
"rating": 8,
"review_id": "fOYe6BczlY",
"review_text": "This research work proposes a novel solution to promote songs on music streaming platforms strategically. \n\nUnder the following assumptions:\n\n1. Fans can collaborate to promote a specific song by collectively reordering playlists.\n2. The visibility of a song in a playlist affects its recommendation frequency.\n3. Users are influenced by the position of songs in playlists when making listening choices.\n4. The impact of collective action on song visibility is measurable and significant.\n\nThe authors suggest that fans strategically reorder playlists to promote a targeted song, thereby increasing its visibility in the recommender system. By leveraging algorithmic collective action, even small groups of fans can substantially impact the recommendation frequency of the promoted song. This strategy aims to enhance the visibility (capability of being discovered) of songs and artists, which will benefit both fans and musicians in the music streaming industry. \n\n The evaluation focuses on quantifying the amplification of recommendations achieved by strategically placing songs in playlists, using metrics such as recommendation probability and change in the number of recommendations for a song. The evaluation also includes the impact on the recommendations of other songs and the overall performance of the recommender system. The analysis of results reveals that the collective action strategies can lead to a substantial increase in the recommendation frequency of the targeted song, with up to 25x higher recommendation probability compared to average songs.\n\nThe main contributions are:\n\n1. The paper introduces **two innovative collective** action strategies where participants strategically insert a target song into their playlists to promote an emerging artist. These strategies aim to increase the recommendations of the targeted song at test time, thereby boosting the artist's visibility on the platform.\n\n2. The research demonstrates that even **small collectives**, controlling less than 0.01% of the training data, **can achieve significant amplification of recommendations** by strategically placing songs in playlists. This finding highlights the effectiveness of algorithmic collective action in promoting songs without major disruptions to the user experience.\n\n3. Preservation of Other Recommendations, as the study reveals that while promoting a specific song through collective action, the recommendations of other songs are largely preserved. This indicates that the proposed strategies can enhance the visibility of targeted songs without significantly compromising the overall recommendation system's performance.\n\n- Its innovation is a significant strength as it provides a new approach to increasing the visibility of emerging artists in music streaming platforms.\n- Empirical Validation and Real-World Application: the research is empirically validated using an open-source APC model deployed on Deezer, a platform with millions of users. \n- important result: the study demonstrates that even small collectives can achieve substantial amplification of recommendations by strategically placing songs in playlists\n- The findings show the potential for diverse artist promotion, which can make fairer use of the platforms but also fights against the long-tail problem in recommender systems. It can also help the serendipity effect.\n\n- The paper assumes that users are influenced by the position of songs in playlists when making listening choices. 
This assumption **may oversimplify user behavior and overlook other factors** that influence song recommendations and user engagement, potentially leading to biased results.\n\nEven if the problem is not exactly the same, could you relate your work with the one described in this cite: Walid Bendada, Guillaume Salha-Galvan, Thomas Bouabça, and Tristan Cazenave. 2023. A Scalable Framework for Automatic Playlist Continuation on Music Streaming Services. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '23). Association for Computing Machinery, New York, NY, USA, 464–474. https://doi.org/10.1145/3539618.3591628 ?"
}
] |
wGP1tBCP1E | Diffusion Models are Certifiably Robust Classifiers | Generative learning, recognized for its effective modeling of data distributions, offers inherent advantages in handling out-of-distribution instances, especially for enhancing robustness to adversarial attacks. Among these, diffusion classifiers, utilizing powerful diffusion models, have demonstrated superior empirical robustness. However, a comprehensive theoretical understanding of their robustness is still lacking, raising concerns about their vulnerability to stronger future attacks. In this study, we prove that diffusion classifiers possess $O(1)$ Lipschitzness, and establish their certified robustness, demonstrating their inherent resilience. To achieve non-constant Lipschitzness, thereby obtaining much tighter certified robustness, we generalize diffusion classifiers to classify Gaussian-corrupted data. This involves deriving the evidence lower bounds (ELBOs) for these distributions, approximating the likelihood using the ELBO, and calculating classification probabilities via Bayes' theorem. Experimental results show the superior certified robustness of these Noised Diffusion Classifiers (NDCs). Notably, we achieve over 80\% and 70\% certified robustness on CIFAR-10 under adversarial perturbations with \(\ell_2\) norms less than 0.25 and 0.5, respectively, using a single off-the-shelf diffusion model without any additional data. | https://openreview.net/pdf/62eb54961c7645deb0bc1355cc5f8a275a3c9c1e.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "TDoWrV0b9L",
"review_text": "This paper derives an upper bound of the Lipschitz constant for diffusion classifiers. Then, it proposes Exact Posterior Noised Diffusion Classifier (EPNDC) and Approximated Posterior Noised Diffusion Classifier (APNDC) by deriving ELBO upper bounds on $\\log p (x_\\tau)$ and thereby enabling classifying noisy images. The APNDC achieves state-of-the-art certified robustness.\n\nThe theory is cool and the math is intriguing. I like this direction because it leverages the shared Gaussian structure in diffusion models and randomized smoothing while circumventing the challenges of attacking diffusion models. The empirical evaluation results (especially Figure 2a) are also impressive.\n\nIn my opinion, some of the contents are not explained very clearly. Please see below and the contents of the \"Questions\" section.\n\n- Table 4 is nice. However, I wish it was in the main text instead of the appendix, because the current main text misses the discussion on how to calculate the certified robustness for the proposed models.\n- Figure 2 doesn't present the certified radius with a conventional diffusion classifier, as derived in Eq. (11). Since (11) is an important contribution of this work, I believe it should be included.\n- I would like to see an ablation study on $\\sigma_\\tau$, but could not find this result.\n\nOverall, this is still a nice paper.\n\n- Line 145: Why are the mentioned quantities bounded in the range $[0, 1]^D$? Do you clip $h_\\theta (x_t, \\sigma_t)$ to $[0, 1]^D$? How about $\\|\\| h_\\theta (x_t, \\sigma_t) \\|\\|_2^2$?\n- Eq (12): The notation $q$ was introduced in Eq (1) to represent probabilities in the forward diffusion process. So, how to compute $q (x_t | x_{t+1})$? It would also be nice to add subscripts to each of the nested expectations, so that it's clear what variables the expectations are taken over. \n- Also Eq (12): does the method become computationally cheaper when $\\tau$ increases? Is it correct that different values of $\\tau$ can reuse some computation, so if you set $\\tau$ to 0, you simultaneously get the ELBO bound for $\\tau = 0, 1, \\ldots, T$?\n- Remark 3.4: when would it make sense to use $\\tau = 0$? When $\\tau$ is $0$, is $\\sigma_\\tau$ also 0? Does this mean randomized smoothing is not used?\n- Line 224: the APNDC reconstruction loss $\\|\\| h_\\theta (x_\\tau, \\sigma_\\tau) \\|\\|_2^2$ sees great resemblance to the training objective of consistency models. Does this intuitively imply that consistency models are more suitable or less suitable for APNDC? I see a short discussion about consistency models in the appendix, but it does not fully address my curiosity.\n- Line 247 mentions that $\\sigma_\\tau \\in \\\\{ 0.25, 0.5, 1.0 \\\\}$. What are the corresponding $\\tau$ values? Which setting is used for which results? Are the different radii in Table 1, Table 2, and Figure 2 evaluated with the same $\\sigma_\\tau$ or different ones?"
},
{
"confidence": 5,
"rating": 8,
"review_id": "2RYawwmGjU",
"review_text": "The authors investigate the certified robustness of diffusion classifiers. For this purpose, they first show that these classifiers have O(1) Lipschitzness and subsequently achieve tighter robustness bounds through Bayes' theorem and the ELBO.\n\nS1: Using diffusion models to generate large amounts of synthetic data is one of the most promising approaches to improve empirical and certified robustness in recent years. The authors utilize diffusion models directly to achieve high certified robustness. \n\nS2: While prior work has investigated the robustness of diffusion classifiers, they do not provide certified guarantees. This gap is addressed in this work.\n\nS3: The work provides both relevant empirical and theoretical contributions\n\nW1: References could be ordered by appearance (minor)\n\nW2: The nature of diffusion classifiers induces a considerable computational overhead compared to standard classifiers. However, the authors try to address this issue through their sift-and-refine algorithm. Still a comparison between different methods w.r.t. inference time would have been informative. (could also include standard classifiers). Note that I would not consider large computational cost as a negative point concerning paper acceptance I just believe that a comparison would be helpful for the reader. Still the appendix provides some information w.r.t. time complexity so I view this as a minor issue. \n\nW3: Appendix D is very short and could be incorporated into the paper (at least in the camera-ready version)\n\nQ1: Could the authors provide a computational cost comparison between different methods?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "KRdPAgqBv1",
"review_text": "This work proves that diffusion classifiers possess inherent robustness to adversarial attacks by demonstrating their O(1) Lipschitzness and establishing their certified resilience. By generalizing these classifiers to handle Gaussian-corrupted data and using evidence lower bounds for likelihood approximation, the research demonstrates the superior certified robustness of Noised Diffusion Classifiers (NDCs).\n\nThe paper showcases the robustness of the proposed Noised Diffusion Classifiers (NDCs), achieving high certified robustness on the CIFAR-10 and ImageNet 64x64 datasets. The study also provides a proof of O(1) Lipschitzness for diffusion classifiers.\n\n1. The proposed method combines two existing techniques, diffusion classifiers and randomized smoothing, which is not sufficiently novel. The paper needs to better highlight what sets this approach apart from existing methods and how it fundamentally advances the field. Although the authors attempt to establish a theoretical framework, the derivation of the Lipschitz constant and its implications are not sufficiently detailed, leaving unanswered questions about the robustness guarantees.\n\n2. The experimental evaluation relies heavily on the small CIFAR-10 and ImageNet 64x64 datasets. Expanding the experiments to include larger datasets, such as ImageNet-1K, would provide a more comprehensive assessment.\n\n3. The paper discusses techniques to reduce time complexity but does not convincingly demonstrate the practicality of the proposed methods with experimental results, such as throughput or inference latency. A more detailed analysis and comparisons of computational efficiency, especially in relation to existing methods, are needed.\n\nPlease address the weaknesses mentioned above."
},
{
"confidence": 4,
"rating": 7,
"review_id": "YqreEvSEyi",
"review_text": "This paper presents a theoretical analysis of the enhanced robustness in diffusion-based classifiers and introduces a generalized Noised Diffusion Classifier, EPNDC. The authors utilize the Evidence Lower Bound (ELBO) of each conditional log-likelihood $\\log p(x_\\tau | y) $and Bayes' theorem as the logits for each class. They identified that EPNDC is time-consuming due to the iterative computation of two conditional ELBOs. To address this, they leverage the ELBO of an ensemble of EPNDC to approximate the expected likelihood as logits without additional computational cost. Additionally, they developed variance reduction and sift-and-refine techniques to reduce time complexity. Experimental results demonstrate that APNDC achieves significantly better robustness without requiring extra training data, fewer diffusion steps, and a reduced number of samples needed to estimate the Lipschitz bound.\n\n1. The entire paper is logically structured with a clear progression, enabling readers to understand it well. From Algorithm 1 to Algorithm 2 to Algorithm 5, the authors continuously explore problems, improve algorithms, and provide thorough analysis and theoretical proofs.\n\n2. The experiments are comprehensive, and compared to the benchmark, EPNDC shows significant improvements in certified accuracy. This demonstrates EPNDC's high scalability in handling large datasets with numerous categories.\n\n1. Some causal relationships are unclear or lack citations, requiring further explanation from the authors. For instance, in Line 156: What does the \"nabla operator\" refer to? It is neither explained nor cited. In Lines 161-164: \"However, similar to the weak law of randomized smoothing, such certified robustness has limitations because it assumes the maximum Lipschitz condition is satisfied throughout the entire perturbation path. As a result, the robust radius is less than half of the reciprocal of the maximum Lipschitz constant.\" The causal relationship here is unclear and needs further clarification.\n\n2. Although the diffusion classifier is highly scalable and robust, its clean accuracy on ImageNet is still far behind the current state-of-the-art (90%+). More details can be found at [https://paperswithcode.com/sota/image-classification-on-imagenet](https://paperswithcode.com/sota/image-classification-on-imagenet).\n\n1. Refer to weaknesses. \n\n2. Eq (15) uses the diffusion model $h_\\theta$ one more time than Eq (12). Why does APNDC not increase the computational overhead compared to EPNDC (Line201)? Please explain further."
}
] |
wFzIMbTsY7 | Decision Mamba: Reinforcement Learning via Hybrid Selective Sequence Modeling | Recent works have shown the remarkable superiority of transformer models in reinforcement learning (RL), where the decision-making problem is formulated as sequential generation. Transformer-based agents could emerge with self-improvement in online environments by providing task contexts, such as multiple trajectories, called in-context RL. However, due to the quadratic computation complexity of attention in transformers, current in-context RL methods suffer from huge computational costs as the task horizon increases. In contrast, the Mamba model is renowned for its efficient ability to process long-term dependencies, which provides an opportunity for in-context RL to solve tasks that require long-term memory. To this end, we first implement Decision Mamba (DM) by replacing the backbone of Decision Transformer (DT). Then, we propose a Decision Mamba-Hybrid (DM-H) with the merits of transformers and Mamba in high-quality prediction and long-term memory. Specifically, DM-H first generates high-value sub-goals from long-term memory through the Mamba model. Then, we use sub-goals to prompt the transformer, establishing high-quality predictions. Experimental results demonstrate that DM-H achieves state-of-the-art in long and short-term tasks, such as D4RL, Grid World, and Tmaze benchmarks. Regarding efficiency, the online testing of DM-H in the long-term task is 28$\times$ times faster than the transformer-based baselines. | https://openreview.net/pdf/f79cbc369ef6968176c7cc958c79839cb99e59b0.pdf | [
{
"confidence": 5,
"rating": 10,
"review_id": "fa59IP0Nn9",
"review_text": "This paper investigates an emerging foundation model, Mamba, in Reinforcement Learning (RL) scenarios and compares it with Transformer in terms of effectiveness and efficiency. The authors find that in-context RL methods with Mamba as the backbone are generally more efficient than Transformer, but there is no significant improvement in effectiveness. Then, this paper proposes a Hybrid Mamba (HM) with the merits of transformers and Mamba in high-quality prediction and long-term memory. Finally, this paper conducts experiments on three benchmarks to exhibit its improved effectiveness and efficiency.\n\n1.\tThe paper is commendably well-written and coherent, effectively explaining complex ideas in an accessible manner. The authors explored the potential of the widely discussed model Mamba in the context of RL and compared it with Transformer in terms of effectiveness and efficiency.\n2.\tThe authors proposed a novel hybrid model that inherits the merits of both Transformer and Mamba in a goal-conditional manner. The main advantage of using the hybrid structure is that when the time horizon is very long, as in the D4RL tasks, several episodes/trials are required for good in-context learning, as in the larger Dark Room and Tmaze environments.\n3.\tHM improves training and inference speed by reducing the horizon of the transformer model. This can be particularly important in applications such as robotics, which require high-frequency control.\n4.\tThe experimental evaluation, meticulously designed to include several baselines and diverse tasks, demonstrates the algorithm's strengths.\n\n1.\tThe baseline AD (Mamba) in Figure 2 and the baseline DM in Figure 3, which appear to be AD (Transformer) and DT variants, are crucial for the readers' understanding of how Mamba replaces the Transformer architecture. However, the lack of explanation of these two baselines in the experimental setup section might confuse readers.\n2.\tSome experimental settings are not explained clearly. In Section 5.3, the authors do not explain what GPU device they used. Although the device is introduced in Appendix A, it is recommended that it be explained clearly in the main text.\n\n1.\tIn Table 1 and Table 2, the author used AD (Mamba) as the primary baseline. However, in Figure 3, the author used the DM baseline instead. What is the main difference between AD (Mamba) and DM?\n2.\tThe experiments demonstrated that the online testing of HM in the long-term task is 28 times faster than the transformer-based method. However, can this hybrid structure also inherit Mamba's high efficiency in terms of training cost?"
},
{
"confidence": 4,
"rating": 4,
"review_id": "0U27B892i7",
"review_text": "This paper presents Hybrid Mamba (HM), a method that combines the Mamba model and Transformer to enhance reinforcement learning (RL) performance. HM leverages Mamba to generate high-value sub-goals, which then condition the transformer, leading to significant improvements in online testing efficiency and task-solving capabilities.\n\n1. The paper is well-written and clear to read.\n2. HM significantly accelerates testing speed, achieving up to 28 times faster results than baseline methods.\n3. HM demonstrates superior performance across various benchmarks, such as D4RL, Grid World, and Tmaze.\n\n1. This paper claims to present a in-context RL approach. The motivation of this paper is concerned with the problems encountered with the no-gradient updates in-context approach (line 28), where the policy network does not require parameter updates. However, this paper uses a global update approach, which is closer to gradient-based and conditional-based offline RL. It seems to contradict the original intention of this paper. \n\n2. HM benefits from using a powerful subgoal encoder (Mamba in this case) and conditioning the policy network with subgoals. The performance improvement is expected and unsurprising due to the advantages inherent in conditional-based RL algorithms. Hence, it is necessary to further explain the unique contributions of combining Mamba and causal transformer in this paper.\n\n3. If the sub-goal encoder are replaced with other advanced and efficient sequence encoders (e.g., flash-attention1/2 [1,2], x-lstm [3]), would it also yield better or more efficient performance? \n\n4. The experiments demonstrating HW's efficacy in capturing long-term dependencies are unconvincing. Achieving good results in tasks with an arbitrarily horizon (e.g., Tmaze) does not necessarily prove effective long-term memory embedding. It is crucial to test the stability and performance of HM with varying horizon lengths or other length-based settings. For example, Mamba’s original paper [4] demonstrated the ability to capture long-term dependencies through the scaling laws.\n\n5. Could the authors clarify in which specific aspects HM's training time is faster than DT's? Since HM appears to be a combination of Mamba and DT.\n\n6. There are parts of the paper that are not clearly explained. For instance, in lines 228-233, it is mentioned that the transformer predicts a c-step action sequence (named $a_1$ here) through the sub-goal $z_t$ and another c-step action sequence (named $a_2$) through valuable sub-goals from offline data. How are $a_1$ and $a_2$ subsequently updated or processed? \n\n7. (minor) The paper contains some typos and inconsistencies in tense usage. For example, in the related work section, the section on Mamba uses the present tense, while the section on in-context RL uses the past tense. These should be corrected for consistency. In addition, what's the meaning of the different gaussian distribution figures in Figure 1?\n\n*Reference:*\n\n[1] Dao T, Fu D, Ermon S, et al. Flashattention: Fast and memory-efficient exact attention with io-awareness. NeurIPS 2022.\n\n[2] Dao T. Flashattention-2: Faster attention with better parallelism and work partitioning. ICLR 2024.\n\n[3] Beck M, Pöppel K, Spanring M, et al. xLSTM: Extended Long Short-Term Memory. arXiv 2024.\n\n[4] Gu A, Dao T. Mamba: Linear-time sequence modeling with selective state spaces. arXiv 2023.\n\nPlease see weakness. 
If I have misunderstood some parts of the paper, I welcome corrections and further discussion."
},
{
"confidence": 5,
"rating": 7,
"review_id": "RXOiXhJg9a",
"review_text": "This paper investigates to utilize the Mamba [1] architecture for In-Context RL task. Addressing this task with Transformer architecture is effective while it is very inefficient due to the quadratic computation overhead of Transformer. The Mamba can reduce this overhead dramatically while sustain the performance somewhat. The application of State-Space Models (SSMs) to In-Context RL task is studied in [2], but different from [2], they combinationally utilize Mamba and Transformer as high-level memory and low-level (short-term) memory. Additionally, as Mamba predicts the sub-goal for the Transformer short-term memory, they improved the performance. Through this modeling, they can achieve better performance than previous works while improving the efficiency.\n\n[1] Gu, Albert, and Tri Dao. \"Mamba: Linear-time sequence modeling with selective state spaces.\" arXiv preprint arXiv:2312.00752 (2023).\n\n[2] Lu, Chris, et al. \"Structured state space models for in-context reinforcement learning.\" Advances in Neural Information Processing Systems 36 (2024).\n\n- Appropriate modeling is applied in this study. While the effectiveness of hybrid modeling of SSMs and local Attention has been previously explored in [1], the authors effectively implement this concept for the In-Context RL task with new functionalities, such as predicting high-value sub-goals.\n- The introduction and methodology sections are well written. The motivation is clearly articulated, and the logical flow of their method proposal is coherent. The empirical analysis comparing Mamba and Transformer in RL tasks convincingly demonstrates the need for more advanced modeling.\n- The paper provides extensive empirical analyses. It shares experimental results on multiple benchmarks, including ablation studies and performance changes with varying hyperparameter values.\n\n[1] De, Soham, et al. \"Griffin: Mixing gated linear recurrences with local attention for efficient language models.\" arXiv preprint arXiv:2402.19427 (2024).\n\n- The high-level encoding is done by encoding the intervalled trajectories (e.g., every $c$ -th trajectory), which might miss important information in the middle of the interval.\n- The section on Hybrid Mamba with Valuable Sub-goals is initially confusing, especially regarding the relationship between Mamba’s sub-goal prediction and the collected valuable sub-goals. Discussing this relationship at the beginning of the Valuable Sub-goal section could help readers understand the content more easily.\n- One of the experimental results differs from my expectations, but the paper does not provide an analysis for this. I will address this in the Questions section.\n\n- I am curious why AD (transformer) shows worse performance than HM. I thought AD (transformer) performance would be the upper bound of HM while HM is more efficient. However, AD (transformer) performance is generally worse than HM in your tests, especially for Grid World in Figure 2. Why is AD (transformer) performance poor in Grid World? Did you use a smaller context size for the Grid World test? If not (using the same context size), what could be the reasons for the significant performance gap?"
},
{
"confidence": 4,
"rating": 3,
"review_id": "g4MnMSbKBh",
"review_text": "The paper proposes Hybrid Mamba (HM) for in-context RL. Existing in-context RL methods are predominantly based on the Transformer architecture. Transformers come with quadratic complexity of self-attention and are computationally costly. Consequently, the authors propose a hybrid architecture that uses Mamba to compute sub-goals from long-context, which are fed into a low-level Transformer policy. The authors conduct experiments on grid-worlds and D4RL to evaluate their method.\n\n**Relevance**\n\nThe paper aims at deploying the Mamba architecture for in-context RL, which is very relevant given the quadratic complexity of the Transformer architecture.\nThis results in clear benefits in terms of time complexity.\n\n**Experimental results**\n\nEmpirical results on simple gridworld environments and D4RL seem convincing and their method exhibits significant gains compared to Transformers.\n\n**Presentation**\n\nThe methodology raises some questions and should be improved, in particular:\n - What is the reasoning behind sampling the sub-goal from a multi-variate Gaussian?\n - How does this compare to using a fixed representation? (e.g., similar to CLS token)\n - Why is the done-flag in Hybrid Mamba necessary? Do other methods (e.g., AD [1]) use this as well?\n - What does “Extra high-value states” mean?\n - What is the intuition behind removing actions from the Mamba context?\n - What effect would dropping actions have in other methods?\n\nFurthermore, the construction of “valuable sub-goals” is unclear.\nOne way to improve clarity would be to shorten the section on preliminaries and instead add more details to the Method section.\nFigure 2 and Table 2 are missing the performance curves/scores for HM without valuable subgoals.\nFinally, Figure 1 can be improved to enhance clarity.\n\n**Significance of results**\n\nThe authors evaluate primarily on simple grid-world environments and rather simple robotics tasks. However, it is unclear how well HM generalizes to more complex tasks as used in other works [2].\n\n**Evaluation**\n\nThe authors change their evaluation methodology from improvement curves on gridworlds (Figure 2) to average performance scores on D4RL (Table2).\nOn D4RL, HM seems to clearly outperform other methods.\nHowever, the authors do not show in-context improvemenst which raises the question whether HM actually learns to improve in-context. Can the authors clarify, why no in-context improvement curves are shown for D4RL?\n\n**Ablation studies**\n\nSome ablation studies are missing and would add more depth to understanding the proposed method, in particular:\n- What is the impact on performance of including the done-flag in Mamba?\n- What effect does it have on other methods?\n- What is the impact on performance of removing the action condition in HM?\n- What effect does the same intervention have on other methods?\n\n\n [1] Laskin et al., In-context Reinforcement Learning with Algorithm Distillation, ICLR 2023\n [2] Raparthy et al., Generalization to New Sequential Decision Making Tasks with\nIn-Context Learning, ICML 2024\n\n- Did the authors consider techniques such as key-value caching for improving inference speed of Transformers for results reported in Table 4?\n- Why is Mamba worse in effectiveness (Table 1)? What is a particular (theoretical) reason for this? Why does Mamba shorten the training time?\n- How does performance generally change, when making the models bigger? Do bigger models help on these tasks? 
How large are the considered models?\n- How well does the construction of valuable sub-goals generalize to other environments (e.g., with sparse rewards)?\n- How do in-context improvement curves look like on D4RL?"
}
] |
wDirCeTIoz | Communication Efficient Distributed Training with Distributed Lion | The Lion optimizer has been a promising competitor with the AdamW for training large AI models, with advantages in memory, computation, and sample efficiency. In this paper, we introduce Distributed Lion, an innovative adaptation of Lion for distributed training environments. Leveraging the sign operator in Lion, our Distributed Lion only requires communicating binary or lower-precision vectors from workers to the center server, significantly reducing the communication cost. Our theoretical analysis confirms Distributed Lion's convergence properties. Empirical results demonstrate its robustness across a range of tasks, worker counts, and batch sizes, on both vision and language problems. Notably, Distributed Lion attains comparable performance to standard Lion or AdamW optimizers applied on aggregated gradients, but with significantly reduced communication bandwidth. This feature is particularly advantageous for training large models. In addition, we also demonstrate that Distributed Lion presents a more favorable performance-bandwidth balance compared to existing efficient distributed methods such as deep gradient compression and ternary gradients. | https://openreview.net/pdf/12306ef9133c7e08f53a436d98a8c343b914f091.pdf | [
{
"confidence": 2,
"rating": 6,
"review_id": "WcWMS0HWuI",
"review_text": "The paper introduces Distributed Lion, a variant of the Lion optimizer, tailored for distributed training environments. Lion, known for its memory and computational efficiency, is adapted to reduce communication costs between workers and a central server. This is achieved by communicating binary or low-precision vectors rather than high-precision floating-point vectors. The paper presents theoretical convergence properties and empirical results that demonstrate Distributed Lion’s robustness and efficiency across various tasks, worker counts, and batch sizes. It shows comparable performance to standard Lion or AdamW optimizers but with significantly reduced communication bandwidth.\n\n+ Innovation in Communication Efficiency: The use of binary or low-precision vectors for communication significantly reduces bandwidth requirements, which is a critical factor in distributed training.\n+ Theoretical Validation: The paper provides a solid theoretical foundation confirming the convergence properties of Distributed Lion.\n+ Empirical Evidence: Extensive experiments demonstrate the robustness and efficiency of Distributed Lion across a variety of tasks, making a strong case for its practical applicability.\n\n- Incompatible with Allreduce: after converting the gradients to binary or low-precision, Allreduce cannot be used for gradient synchronization. One of my concerns is about its communication efficiency in real-world distributed systems, especially training with a high number of workers.\n- Computation Overhead: While the communication cost is reduced, the overhead of converting updates to binary or low-precision vectors and back might offset some of the gains in certain scenarios. It helps if the end-to-end training throughput comparison is reported.\n\nWhat is the fundamental difference between distributed Lion and SIGNUM-like algorithms?"
},
{
"confidence": 5,
"rating": 5,
"review_id": "u8zvfjhfKH",
"review_text": "Large-scale AI model training has increasingly higher requirements on time, cost and environmental impact, so it is crucial to develop efficient optimizers. As an emerging optimizer, Lion optimizer has advantages in memory, computation and sample efficiency compared with AdamW. Distributed Lion: The paper proposes Distributed Lion, which is an innovative adaptation of Lion optimizer in distributed training environment. Using symbolic operations in Lion, Distributed Lion only requires binary or low-precision vectors to be communicated between working nodes and central servers, significantly reducing communication costs.\n\n1. Distributed Lion significantly reduces communication overhead by communicating only binary or low-precision vectors between workers, which is particularly beneficial for large-scale distributed training.\n2. The paper provides theoretical analysis to prove the convergence of Distributed Lion.\n3. Experimental results show that Distributed Lion can achieve comparable performance to the standard Lion or AdamW optimizer while reducing communication bandwidth.\n\n1. The actual updating on local worker parameters is gradients, while the communicated message is signs. While the theoretical analysis shows this updating can guarantee the convergence, the actual updating style looks like the quantization. The important baselines like QSGD and SignSGD are missed. \n2. The performance of Distributed Lion can be sensitive to hyperparameter choices, especially those related to communication and aggregation strategies.\n3. The code is not provided. Thus the reproducibility of the experiments is weakened.\n4. The experiment performance on the CIFAR-10 is very low. Considering that the well-known validation performance of CIFAR-10 can be achieved as 94%, the proposed results are around 90%. Why the performance decreases?\n5. The important baseline SGD with momentum is not provided.\n6. The convergence curves on training with ImageNet and OpenWebText are not provided. This makes it hard to identify the convergence speedup between different optimizers.\n7. The wall-clock time is not provided. The quantization operation and the majority vote require extra time, it will be better to show this optimizer can reduce the real-world throughputs.\n\n1. How do you ensure that the hyper-parameters of different optimizers are set as a suitable combination for them? Suitable hyper-parameter settings can ensure the fair comparison.\n\nIf the above weaknesses and questions are addressed, I'm happy to raise the score."
},
{
"confidence": 4,
"rating": 6,
"review_id": "i3NB1843Su",
"review_text": "This paper extends the Lion optimizer to data parallel distributed training. Unlike optimizers like SGD and Adam, the binary update in Lion can be exploited to minimize the communication. They investigate two cost effective methods for the communication of binary updates; averaging and majority vote. Experimental results show that both methods yield competitive results to global Lion and AdamW.\n\nThe convergence analysis provided in Section 3 gives some reassurance to this non-conventional optimization method. Results are promising and experimental conditions seem adequate.\n\nThe proposed method is a trivial extension of Lion to data parallel distributed training, so the only interesting contribution seems to be the convergence analysis. \n\nThe main contribution of this work is supposed to be the reduction of communication overhead, but there are no results showing the actual breakdown of the training time. Therefore, it is not possible to determine whether the reduction of communication volume is actually contributing to the reduction of the overall training time. Since the results seem to vary quite a bit for different models and datasets, such information is useful for determining whether the experiments are conducted for configurations that actually show a significant impact on the training time. There remains a possibility that the current method does not work as well for extremely large models trained with ZeRO 3 data parallelism, which is where the communication overhead really becomes a problem.\n\nHow different are the global binary updates between averaging and majority vote?\nAre the results similar because they are similar or despite their large difference?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "kMoURNX8HA",
"review_text": "This paper proposes Distributed Lion, a new variant of Lion optimizer for distributed training. The proposed algorithm only requires to communicate binary or lower-precision vectors between workers to the center server, significantly reducing the communication cost. The theoretical analysis proves the convergence of the proposed algorithms. The empirical results show that the proposed algorithms have comparable model performance on CV/NLP applications but with significantly less communication overhead compared to the baselines.\n\n1. This paper proposes Distributed Lion, a new variant of Lion optimizer for distributed training.\n\n2. The proposed algorithm only requires to communicate binary or lower-precision vectors between workers to the center server, significantly reducing the communication cost.\n\n3. The theoretical analysis proves the convergence of the proposed algorithms. The empirical results show that the proposed algorithms have comparable model performance on CV/NLP applications but with significantly less communication overhead compared to the baselines.\n\n1. According to Assumption 3.1, the convergence requires i.i.d. local datasets, while real-world distributed training typically uses non-i.i.d. local data.\n\n2. In the empirical results, there seems to be no wall-clock time for training is reported. Note that the overall goal of communication reduction is to reduce the training time. Thus, it is important to report loss/acc vs. wall-clock time in the experiments.\n\n1. Is it possible to provide a convergence analysis based on non-i.i.d. data?\n\n2. For the experiments, are the local dataset on each worker i.i.d. or non-i.i.d.?\n\n3. Since the proposed algorithm can compress the communication to an extreme extend, I wonder whether it could also be applied to the federated learning scenario, where the local datasets are not only non-i.i.d., but also highly heterogeneous.\n\n4. Is there any empirical results reporting wall-clock time of training?"
}
] |
wDDvJzvvBR | Learning Spatially-Aware Language and Audio Embeddings | Humans can picture a sound scene given an imprecise natural language description. For example, it is easy to imagine an acoustic environment given a phrase like "the lion roar came from right behind me!". For a machine to have the same degree of comprehension, the machine must know what a lion is (semantic attribute), what the concept of "behind" is (spatial attribute) and how these pieces of linguistic information align with the semantic and spatial attributes of the sound (what a roar sounds like when it's coming from behind). State-of-the-art audio foundation models, such as CLAP, which learn to map between audio scenes and natural textual descriptions, are trained on non-spatial audio and text pairs, and hence lack spatial awareness. In contrast, sound event localization and detection models are limited to recognizing sounds from a fixed number of classes, and they localize the source to an absolute position (e.g., 0.2m) rather than a position described using natural language (e.g., "next to me"). To address these gaps, we present ELSA (Embeddings for Language and Spatial Audio), a spatially-aware audio and text embedding model trained using multimodal contrastive learning. ELSA supports non-spatial audio, spatial audio, and open vocabulary text captions describing both the spatial and semantic components of sound. To train ELSA: (a) we spatially augment the audio and captions of three open-source audio datasets totaling 4,738 hours and 890,038 samples of audio comprised from 8,972 simulated spatial configurations, and (b) we design an encoder to capture the semantics of non-spatial audio, and the semantics and spatial attributes of spatial audio using contrastive learning. ELSA is a single model that is competitive with state-of-the-art for both semantic retrieval and 3D source localization. In particular, ELSA achieves +2.8\% mean audio-to-text and text-to-audio R@1 above the LAION-CLAP baseline, and outperforms by -11.6° mean-absolute-error in 3D source localization over the SeldNET baseline on the TUT Sound Events 2018 benchmark. Moreover, we show that the representation-space of ELSA is structured, enabling swapping of direction of audio via vector arithmetic of two directional text embeddings. | https://openreview.net/pdf/a2ecfb85ce32cb9d1d5454e92a10f65a79ed4f7d.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "Z4TqWr7oBA",
"review_text": "This paper presents an approach to learning spatially aware language representations. The authors propose a contrastive representation model that integrates spatial context into language representations, aiming to enhance the performance of tasks that require spatial reasoning. The model combines visual and textual data to create embeddings that are sensitive to spatial relationships. The contributions include an architecture of the proposed model, extensive experimental results demonstrating improved performance on spatial reasoning benchmarks, and an analysis of the model's ability to generalize across different spatial contexts.\n\n1. The paper makes a strong contribution to spatial reasoning. The integration of spatial context into text-to-audio generation models is an important and underexplored area, and this work offers a novel and effective solution.\n2. The experimental setup is rigorous, with well-designed experiments that effectively validate the model's performance.\n3. The paper is well-written, with clear explanations of the methodology and results. \n4. The findings have significant implications for improving spatial language understanding in various applications.\n\n1. The reliance on synthetic datasets may limit the generalizability of the findings. The authors could explore the way to train the model on in-the-wild data.\n2. The current interpretation experiments (Sec. 5.4) only study a four-class classification (\"left,\" \"right,\" \"up,\" \"down\"), which is insufficient for real-life scenarios. For instance, spatial audio applications often require more nuanced classifications, such as distance perception (e.g., strong/weak reverb in indoor/outdoor settings), which are critical for capturing and representing spatial information. The authors should consider extending the experiement to handle a wider range of spatial attributes to enhance its applicability in diverse settings. For example, the authors should consider using prompts like \"xxx is making a sound in the distance\" and \"xxx is making a sound nearby\" to figure out if the results are different.\n3. The paper could benefit from a more detailed error analysis, identifying common failure cases and understanding why the model fails in certain scenarios. This analysis would provide insights for further improvement and refinement of the model.\n4. While the model performs well on tasks like retrieval and source localization, its ability to generalize to spatial text-to-audio generation remains to be seen.\n\nSee weakness."
},
{
"confidence": 4,
"rating": 7,
"review_id": "elmNx1php2",
"review_text": "This paper describes a method for learning to represent spatial audio (and text). The proposed model is trained on synthetically spatialized audio data with corresponding text prompts. The authors evaluate the system on audio captioning/retrieval and localization tasks, showing that the proposed model effectively represents both semantics and locality of audio.\n\nThe paper is well written, clearly organized, and easy to follow. The proposed method makes intuitive sense and appears to be effective. The empirical evaluation is reasonably thorough and the choice of tasks and baseline models seem appropriate. Spatial audio is a growing area (in the context of machine learning / event detection / representation learning), and I think the work described here does fill an unaddressed area of the literature in an interesting way. Overall I think they did a great job here.\n\nI don't have much to fault here, but there are a few points that I think could be expanded to improve clarity and help readers understand the contribution here. I'll elaborate on these points in the questions section, but the high-level gloss is:\n\n- While the spatial representation part of the work (ie FOA-derived input) is explained well, there is almost no explanation of how the spatialization was implemented.\n- There is little (if any) qualitative analysis of the results, only aggregate scores reported in tables.\n\n- How was the spatialization implemented? I expect this was done via standard methods (ISM implemented by pyroomacoustics? something else?), but there is no mention of this in either the main text or the appendix. Additionally, I think some of the details from the appendix (Table A.2) should be mentioned directly in the main text, such as the range of room sizes, angle distributions, etc.; these details are important, and do not take much space. (If you do need to sacrifice anything, I don't think the definition of log-mel transformation is as critical to include since it is standard.)\n- Since TUT2018 is a finite vocabulary dataset, it would be informative (and entirely possible) to see a per-class and per-environment breakdown of the evaluations reported in table 1. This would be informative because it's not necessarily a given that your spatialization is equally effective across categories (or rooms / room sizes). If the model does turn out to perform consistently across categories - great! If not, it may suggest a weakness in either the spatial rendering or prompt generation. (If you do compute these results, it may or may not make sense to store in the appendix, depending on how interesting the results are.)"
},
{
"confidence": 3,
"rating": 6,
"review_id": "X44w6mPuTB",
"review_text": "The paper presents ELSA (Embeddings for Language and Spatial Audio), a novel model designed to learn spatially-aware audio and text embeddings using multimodal contrastive learning. The primary aim is to address the limitations of existing audio foundation models, which lack spatial awareness, and sound event localization and detection models, which are constrained to a fixed number of classes and absolute positional descriptions. The authors spatially augment several classical open-source audio datasets in order to train ELSA. Results show that ELSA is able to capture spatial attributes and semantic meaning of the audio.\n\n- The focus of this paper is on learning spatial audio embeddings associated with natural language description, which is a very interesting and rewarding problem for which there is a lack of models.\n- These authors synthesize large amounts of non-spatial audio data under various spatial configurations, which is a valuable contribution to the field of spatial audio understanding.\n\n- For this paper, my biggest concern is the generalizability of the model to real scenarios. \n While the synthetic dataset is extensive, there is a risk that the model might not generalize well to real-world scenarios due to potential biases in simulated environments. To show the performance of model generalization to real scenarios, the experiments only on a small real-world dataset appear too thin. Would it be possible to test ELSA in other real scenarios, for example, in some of the tasks in the latest DCase competition, e.g. Sound Event Localization?\n- For paper writing, too much important information is put in appendices, such as the structure figure of the whole model. Perhaps the layout of the writing could be adjusted to make it easier to read.\n- The citation format of the article is highly problematic and needs to be standardized.\n\nThe experiments in Table 2 confuse me a lot. In Sec. 5.2, the authors mentioned that \"The ELSA text embeddings for such captions are extracted from the pre-trained encoder and compared in a zero-shot fashion with ELSA audio embeddings for samples from the test set using cosine similarity. We classify the match as correct if the spatial attribute in the closest audio sample matches the spatial attribute of the query caption\". \n However, the number of classes of a spatial attribute is very limited (For instance, there are only two classes \"far\" and \"near\" for the \"distance\" attribute), which means there are only two captions that will be used for the \"distance\" attribute? Wouldn't there be very few captions being used for testing totally? \n Hopefully, the authors can explain the experimental configuration a bit more.\n- To train ELSA on single-channel audio, the authors repeat the single channel 4 times to fake a 4-channel FOA audio and compute Intensity Vectors. However, the way IV is calculated possibly doesn't make sense for this kind of faked 4-channel audio. Why is it designed this way? Why not try to design a separate feature extractor for single-channel audio?\n- It is natural to understand computing bearing information from spatial audio, which is essentially a bit similar to calculating the \"Time Difference of Arrival\" based on different channels. But how to understand that the model can get distance information from spatial audio? In other words, where does the information about distance come from?"
},
{
"confidence": 5,
"rating": 7,
"review_id": "Y83hvd8139",
"review_text": "The paper presents ELSA (EMbeddings for Language and Spatial Audio), a spatially aware-audio and text embedding model. The training data is created by synthesizing spatial audio in ambisonic format and augmenting text captions with spatial information. A small real world data is also collected for evaluations. The model training itself largely follows standard CLIP/CLAP training by using contrastive losses. Additional losses for direction and distances are added for the spatial part. Evaluations are done on both semantic retrieval tasks and spatial tasks.\n\n– The paper addresses a key part of multimodal contrastive embeddings. Sounds contain a significant amount of spatial information and humans naturally rely on directional information from sounds. Considering this it is expected that embeddings with spatial information are created. The paper is a good step in the right direction. \n\n– For the most part, the paper is well done. Spatial audio can present several challenges with respect to data (more so in multimodal settings, training approach). Considering the challenges around learning from spatial audio, the paper presents a good approach for learning spatially-aware language embeddings. The experiments are also reasonably good. \n\n– The paper is also well written and mostly clear.\n\n-------\nScore increased after rebuttal.\n\nThere are a few weaknesses which are worth addressing in the paper. \n\n– For table 2, I would be curious to see what CLAP on its own can achieve. It would be good to contrast this zero-shot classification on the spatial task. \n\n– How were the non-spatial audio-text pairs used in training (as shown in Table 3, last row) ?\n\n– Using non-spatial audio-text seems crucial for good semantic retrieval. This is evidenced by A.6 as well where the models training on just spatial audio-text pairs do not do well on semantic retrieval task. This is a bit surprising. The CLIP loss is still present in training, the semantics are also intact in spatial audio-text pairs. Why should there be a performance drop in that case ? it would be good to provide a good discussion and justification\n\n– In Table A.7, the performance of the model trained on spatial Clotho and Audiocaps is better on RWD data than even on Clotho and Audiocaps itself. That is a bit surprising. We would expect that the model would be better in it’s own domain. The difference also is pretty big. \n\n– The discussion in Section 5.4 is a bit adhoc. I would suggest not referring to anecdotal observations. The experiments could be better designed. \n\n– Several of the classification experiments end up using 3-4 layers MLP. I think a more shallower model (maybe even just linear classifier) would provide a better confirmation of what information the embeddings store. Otherwise such deeper networks are able to push the numbers on their and it’s not clear how good the embeddings are. \n\n– Some form of clustering and distance visualization would be good. It has been incorporated in some form in Table 2, but it would be good to explicitly show how the distances between embedding represent the spatial information. \n\n– All the spatial mapping in terms of the language is very discrete (A.2). The range for distance, direction etc. can appear a bit arbitrary and forced. While this is perhaps a good first attempt, a more continuous form of “spatial-language” is desirable. 
Alternatively, a perception-driven approach could be taken, where the boundaries are decided by what people generally perceive as left or right w.r.t. the sound direction.\n\nPlease address the weaknesses above"
}
] |
wBzvYh3PRA | FactorSim: Generative Simulation via Factorized Representation | Generating simulations to train intelligent agents in game-playing and robotics from natural language input, user input, or task documentation remains an open-ended challenge. Existing approaches focus on parts of this challenge, such as generating reward functions or task hyperparameters. Unlike previous work, we introduce FACTORSIM that generates full simulations in code from language input that can be used to train agents. Exploiting the structural modularity specific to coded simulations, we propose to use a factored partially observable Markov decision process representation that allows us to reduce context dependence during each step of the generation. For evaluation, we introduce a generative simulation benchmark that assesses the generated simulation code’s accuracy and effectiveness in facilitating zero-shot transfers in reinforcement learning settings. We show that FACTORSIM outperforms existing methods in generating simulations regarding prompt alignment (i.e., accuracy), zero-shot transfer abilities, and human evaluation. We also demonstrate its effectiveness in generating robotic tasks. | https://openreview.net/pdf/c3c4eed43ecec8fe574f69437c9137f8c41b7797.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "ldQp8QFXvj",
"review_text": "This work presents FACTORSIM, a framework that converts any language specification into a complete simulation for training RL agents. FACTORSIM decomposes the input prompt into steps and uses a factored Partially Observable Markov Decision Process (POMDP) to minimize the context needed for each generation step. It also introduces a method to benchmark generative simulation and demonstrate the capability for FACTORSIM to be used in robotics setting.\n\n1. Develop a robust pipeline for constructing game environments from language descriptions, which could significantly enhance the scalability of training generalist agents.\n2. Formalize the framework as a Partially Observable Markov Decision Process (POMDP), reducing the need for full context during generation and improving outcomes.\n3. Demonstrate the potential of this method to generalize to embodied scenarios.\n\n1. The evidence for generalizing to embodied scenario is limited.\n2. The successful rate in Table 1 and Figure 3 is low. Could there be some potential way to improve it?\n\nI have one primary concern: how could this be applied to the field of robotics and embodied AI?\n\n**From this concern, several questions arise:**\n\n- How is the background (or scenario) generated within this pipeline? In a 2D game setting, detailed descriptions generate the scenarios, but this can become extremely tedious as scenes become more complex, such as in an embodied environment. While language-guided scene generation could be a solution, how will it fit into the POMDP framework?\n- The framework addresses three main components: controller, model, and view (rendering). In robotics, these aspects are typically handled by a physics simulation. How will this framework further contribute to the field of robotics? Currently, the paper shows potential for task generation in tabletop scenarios only.\n- I am still unclear on how the Robotics Task Generation part is achieved by this pipeline.\n\n**Some suggestions:**\n\n- While it might be challenging to address this concern with experiments during the rebuttal period, more discussion on approaches and challenges would be beneficial.\n- Any additional experiments that could demonstrate the pipeline's usage in robotics or embodied AI could help."
},
{
"confidence": 4,
"rating": 6,
"review_id": "I2vEqc8TkT",
"review_text": "The paper proposes a LLM prompting method to generate full game / robot simulations in code based on text descriptions. Given a long text description, the method first utilizes an LLM to decompose it into multiple sentences, and then use them to iteratively generate and update simulation code. For each iteration, the code is generated and updated separately as three modules, i.e., controller, model and view. The update happens in a factorized way - the authors use the LLM to identify relevant state and context to avoid feeding the full generated code into LLM.\n\nIn experiments, the method is evaluated on game and robot simulation code generation benchmarks. The method shows superior results against other LLM prompting baselines in generating valid simulation code that aligns with text description.\n\n- The proposed method exploits the structure of simulation to modularize and factorize code generation. This strategy significantly improves LLM's capability to generate full simulation code.\n- The method is comprehensively evaluated on game and robot benchmarks.\n- The paper is well written and easy to follow.\n\nThe major contribution of the paper seems to be a prompting technique crafted for the specific task of simulation code generation. While such a technique does improve performance on the task, my concern is it is neither fundamental nor sufficiently novel. The proposed prompting technique highlights two key designs:\n- modularize simulation code generation manually, which aligns with the common practice to manually decompose a complex task into sub-tasks for LLMs to handle more effectively.\n- extract relevant portion of code for LLM to consume and update, which is also an implementation-wise design that many works have already incorporated.\n\nWhile the paper writes factorized POMDP formulations, they don't seem to make a difference on how the prompting method is implemented. So I'm concerned that the contribution of this paper is more as a practical application rather than a general method or framework.\n\nI'm curious what the failure modes of FactorSim is like."
},
{
"confidence": 3,
"rating": 4,
"review_id": "b9ZZyIxBtg",
"review_text": "The paper proposed a factorized approach to generate simulated games via LLM code synthesis. The code idea is that one doesn't need to generate the entire code at once, but rather generate different part of a POMDP game, such as controller, model, and view. The generated simulation allows RL policies to train on top. The authors introduced a benchmark to evaluate the proposed framework and show good results in terms of prompt alignment, transfer ability and human evaluation.\n\nThe paper investigates an important problem, simulation generation. The evaluation over the mentioned environments is solid, spanning from automated tests to human evaluations.\n\n1. The paper is poorly written. I have hands-on experience with almost all important concepts mentioned in the paper, yet still have a hard time understanding the paper, and have to read again and again including some code. Rather than talking about abstract terms like POMDP / factorization first, I think the authors can start easy with intuitions and explanations. The figure can also be improved. The main method figure shall spend more time showing what's special about \"Factored POMDP\" compared to prior methods. The benchmark claim should have its own section. The motivation is not clearly narrated either. The world model section in related work doesn't seem to fit there.\n\n2. On of the main contributions the authors listed in the introduction section is a benchmark. However, I think this benchmark seems to lack the technical depth I was expecting as a standalone contribution. I feel it's just a set of small metrics rather than benchmark.\n\n3. The paper just lacks the level of technical contribution that meets my criteria for a Neurips paper. While there are many other prompting papers like CoT, ToT, the problem the paper is trying to solve is also very specific. \n\n4. While I have experience with both LLM and robotics, I believe the authors should not put Robotics as primary area, but NLP or code synthesis community.\n\nIn figures like figure 6, is the human pass rate based on the previous stage e.g. only executable code. \n\nIt seems that on open source model like llama 3, gensim with CoT is very close to factor sim. Can you explain the insights?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "MCEFt85gR1",
"review_text": "The paper introduces an LLM-based method for generting code for simulations. After generating the simulation of famous games based on their manual and description, the authors show that policies trained in these environments transfer well to the real games.\n\n- **S.1 Great results.** I think that the results from Fig.3 are very impressive. Zero-shot transfer is very hard, and doing so much more reliably than vanilla GPT-4 is impressive.\n- **S.2 Overall good idea.** The idea to deconstruct the game development task into M-V-C makes a lot of sense to me. I just thought that's already kind of captured in the POMDP formulation. \n- **S.3 Good presentation.** The overall presentation and writing are good, although there is much left to be desired in terms of implementation details.\n\n- **W.1 Implementation details and relationship to formulas.** I'm happy that there was code provided with the submissio and I hope that it will be released publicly because based purely on the main body of the paper, the method is not reproducible. Including the prompts in the appendix helps but I wish they were commented a bit more on why certain phrases and sections are there. And I'm wondering if the theory that's presented in the paper holds water wrt the actual instructions in the appendix. Because as far as I understand, there aren't any restrictions on what code the LLM can generate for each component, right? Also, the paper mentions graphs every so often and I don't know how they fit into this. I also think the context selection is crucial to your method and from the main body of the paper, it's completely unclear how that's implemented.\n- **W.2 Missing examples.** Along a similar idea, I'd have loved some examples throughout the paper to illustrate what some of these instructions actually mean.\n- **W.3 Unclear robotic experiments.** The robotic experiments seem to be more of an afterthought in the paper, and it's unclear what existing assets there are, what control code is assumed given, what camera parameters are assumed, etc. \n- **W.4 Unclear input mapping.** The appendix mentions that the controller is given or that button presses always mean the same thing. I completely don't understand what's meant by that.\n\nOverall, I think the paper shows a great idea and is probably beneficial to the community, but some work should go into tweaking the main body of the paper and making the method more clear and reproducible.\n\nI don't have any major questions wrt the work. There were a couple of points throughout the paper in the methods section, where I asked myself why this is relevant, but then this was cleared up a paragraph later."
}
] |
wBtmN8SZ2B | Learning Structured Representations with Hyperbolic Embeddings | Most real-world datasets consist of a natural hierarchy between classes or an inherent label structure that is either already available or can be constructed cheaply. However, most existing representation learning methods ignore this hierarchy, treating labels as permutation invariant. Recent work [Zeng et al., 2022] proposes using this structured information explicitly, but the use of Euclidean distance may distort the underlying semantic context [Chen et al., 2013]. In this work, motivated by the advantage of hyperbolic spaces in modeling hierarchical relationships, we propose a novel approach HypStructure: a Hyperbolic Structured regularization approach to accurately embed the label hierarchy into the learned representations. HypStructure is a simple-yet-effective regularizer that consists of a hyperbolic tree-based representation loss along with a centering loss, and can be combined with any standard task loss to learn hierarchy-informed features. Extensive experiments on several large-scale vision benchmarks demonstrate the efficacy of HypStructure in reducing distortion and boosting generalization performance especially under low dimensional scenarios. For a better understanding of structured representation, we perform eigenvalue analysis that links the representation geometry to improved Out-of-Distribution (OOD) detection performance seen empirically. | https://openreview.net/pdf/51b651ea18ebf5913ecbb3b9f73255a5ce047e16.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "U5bXQIsVBo",
"review_text": "The paper introduces a novel regularization method, HypStructure, which utilizes hyperbolic geometry to improve the embedding of hierarchical relationships within feature representations. This approach enhances the learning of structured representations, reducing distortion and boosting generalization in low-dimensional scenarios. It also demonstrates superior performance in Out-of-Distribution (OOD) detection across various datasets through extensive empirical evaluation. Additionally, the paper includes an eigenvalue analysis that provides deeper insights into the structured representations, correlating positively with improved OOD detection performance. This advancement extends structured representation learning to hyperbolic spaces, achieving more discriminative and interpretable features that effectively capture the inherent hierarchies in complex datasets.\n\n1. The paper is the first to formally characterize properties of hierarchy-informed features via an eigenvalue analysis, and also relate it to the OOD detection task.\n2. The paper is easy to read and follow, making complex concepts accessible. The use of clear definitions, structured methodology sections, and detailed discussions helps in understanding both the theoretical underpinnings and practical implications of HypStructure. Visual aids and empirical results are presented in a manner that clearly supports the claims made.\n3. The significance of this work lies in its potential impact on a range of applications that require understanding and leveraging hierarchical relationships in data, such as image recognition and OOD detection.\n\n1. The main concern of this paper is the novelty. I believe the method proposed by the author in this work has been explored in many previous works. For instance, in \"Hyperbolic Image Embeddings,\" \"Hyperbolic Contrastive Learning for Visual Representations beyond Objects\", etc. Although the paper characterizes properties of hierarchy-informed features via an eigenvalue analysis, the contribution is not significant enough to be accepted. \n2. The writing is also not good enough for me. For instance, two examples starting from line 107 are not necessarily included in the formal paper; it is better to be placed in the supplementarity materials. There are also some repetitive expressions, for instance, in line 351 and line 353 (While the current work).\n3. In summary, I believe the technical contribution of this paper is not significant enough to be accepted.\n\nSee weaknesses."
},
{
"confidence": 4,
"rating": 5,
"review_id": "AW5TjjNDqW",
"review_text": "The paper presents a novel approach, HypStructure, for learning structured representations. Comparing with the existing method, the proposed method adds an regularizer calculated from hyperbolic geometry. This approach aims to reduce distortion and improve generalization performance, particularly in low-dimensional scenarios. Extensive experiments are conducted on both the classification task and the out-of-distribution detection task.\n\nThe paper is well organized. The paper extends and studies the existing L2-CPCC to the hyperbolic space, which effectively reduces distortion and enhances the representation of hierarchical relationships in the data. The paper also conducted comprehensive experiments as well as detailed theoretical analysis of the eigenspectrum of structured representations.\n\nIf my understanding of the proposed loss term is correct, $L_flat$ is not calculated in the hyperbolic geometry. Have you tried the $L_flat$ with hyperbolic network or hyperbolic geometry. I hope the authors could provide more explanation of the combined loss in different geometries. \n\nIn Section 2.1, it mentions that $D_i$ is the subset of data points with a specific class label, and $d(D_i, D_j)$ is the distance between the feature vectors of $D_i$ and $D_j$. However, it is not mentioned how the vectors for $D_i$ and $D_j$ are calculated. Is it simply the average of all the feature vectors in the subset? \n\nFor Example 1 in Section 2.2, tree $T$ and nodes $G, C, D, E$ are referenced in a way that implies there should be a figure accompanied by the example. While Figure 1 is referenced shortly before this, it is meant to be accompanied by Example 2. Is there a figure that has been omitted here?\n\nAlso, I would recommend proofreading the paper to correct all grammatical errors. For example, in the paragraph of Section 2.1, the first sentence “Using a composite objective as defined in Equation (2), we can enforce the distance relationship between a pair of representations in the feature space, to behave similarly as the tree metric between the same vertices” should be corrected to “Using a composite objective as defined in Equation (2), we can enforce the distance relationship between a pair of representations in the feature space to behave similarly to the tree metric between the same vertices.” This version removes the unnecessary comma and corrects “behave similarly as…” to “behave similarly to…”.\n\nsee my comments in the Weaknesses section."
},
{
"confidence": 3,
"rating": 6,
"review_id": "HT67UEyRMa",
"review_text": "This work introduces a regularization scheme based on Cophenetic Correlation Coefficient to more appropriately embed semantic label hierarchical structure in the representation. The method exploits the hierarchical benefits of hyperbolic space reformulating the CPCC regularization term to operate on the Poincare ball. The proposed method sees improvement in empirical performance demonstrating the effectiveness of the approach to learn a more separable embedding space for classification.\n\n⁃\tThe authors present the work clearly with effective use of visual and writing structure. All figures/diagrams are useful in supporting the narrative and findings.\n\n⁃\tThe method is simple, highly generalizable, and leads to improved performance on benchmark tasks. It can therefore, be seen as an advantageous tool in hyperbolic learning that could possibly lead to impact and adaptation by practitioners in the field.\n\n⁃\tThe theoretical and analysis are generally good, with eigenspectrum analysis supporting your claims of hierarchical structure for the most part. This is a useful analysis that provides confidence in the findings supported by appropriate proofs.\n\n⁃\tExtensive details to support replication are provided.\n\n⁃\tFrom the visualizations presented of the embedding space, notably the UMAP, your embeddings seem to have collapsed to the boundary in many places limiting the inherent hierarchy of the embeddings, this results in a limited hierarchy being represented. This in turn, leads me to question the extent of hierarchies learnt, when discussing the core intention of the work, and the claims made. One would expect that greater performance could be achieved if this had been addressed. I am aware that boundary collapse is still an unsolved problem, but careful tuning can limit its effects.\n\n⁃\tThe approach is simple but arguably not significantly novel given it is a hyperbolic reformulation of CPCC with minimal changes. With that being said, these simple methods do work somewhat well in practice and are useful to progressing the field. \n\n⁃\tThe use of a Euclidean linear evaluation is a confusing direction. You are aiming to learn a hyperbolic embedding space that preserves hierarchy, yet for downstream tasks you employ a Euclidean classifier, why? You will lose the desirable properties you are aiming to capture.\n\n⁃\tFurther experimentation on different hyperbolic models and downstream tasks would have helped demonstrate the generalization of the regularization to all of hyperbolic learning. Although, this cannot be expected in the rebuttal, it would have helped support the findings to present the work a more generalized approach.\n\nSee weaknesses."
},
{
"confidence": 4,
"rating": 5,
"review_id": "G4Uq1hU6fl",
"review_text": "The paper introduces HypStructure, a novel approach for learning structured representations using hyperbolic embeddings, which are well-suited for modeling hierarchical relationships due to their tree-like structure. The method incorporates a hyperbolic tree-based representation loss and a centering loss to embed label hierarchies into feature spaces with minimal distortion.\n\nExperiments demonstrate HypStructure's effectiveness in improving classification accuracy, especially in low-dimensional scenarios, and enhancing Out-of-Distribution (OOD) detection performance without compromising in-distribution accuracy.\n\n1. Although it is already theoretically proved in the related work of [70, 72] that it is not possible to embed a tree in Euclidean space without loss, it is still informative to see that In section 2.2, example 1 and example 2 give good counter-examples to show this property. \n\n2. The paper is well-written and easy to follow, and the proposed model is simple yet effective. \n\n3. The paper provides a formal eigenvalue analysis that links the geometry of the learned representations to their performance\n\n1. Sections 2.1, 2.2, and 3.1 are all from existing literature, which limited the contribution of the paper. Although the operations described in Section 3.1 are common hyperbolic operations, this section still lacks proper reference to the related papers.\n\n2. In HypCPCC, the authors proposed two alternatives of the loss,\n * Map Euclidean vectors to Poincaré space then average.\n * Averaging the Euclidean vectors then map to Poincaré space.\nIn the 1st alternative, The use of Klein weighted average incurs extra computation, Is it worth doing so?, In the 2nd alternative is exactly the same as [r2], which also calculates the prototypes for each class in hyperbolic space and then map to Poincaré space, [r2] also deploys supervised constructive learning, but the reference is missing and comparison is not stated.\n\n3. The statement in Theorem 5.1 is incorrect, an entry of $K$, denoted as $r^h$ should be a vector, but the theorem stated that $\\lambda_0 = 1 - r^1$, which does not make sense.\n\n3. Incorrect (but fixable) definition in line 708, the proof used $\\| u \\| = \\| v \\| = 1-\\epsilon $ but in the proof the authors used the fact that $\\| u \\| = \\| v \\| = 1-\\epsilon^2 $\n\n4. Incorrect proof in Corollary A.1, the last row of the proof does not hold, Poincaré distance cannot be the same as Euclidean distance, \"growing in the same trend\" does not mean \"proportional to\".\n\n[r2] Long, Teng, et al. \"Searching for actions on the hyperbole.\" CVPR2020\n\n1. What is the rationale behind choosing the Klein weighted average for the first alternative in HypCPCC, considering the extra computation it incurs?\n2. Can the authors provide a comparison to [r2], which is (a special case of) their second alternative of HypCPCC?"
}
] |
wAqdvcK1Fv | Energy-Based Modelling for Discrete and Mixed Data via Heat Equations on Structured Spaces | Energy-based models (EBMs) offer a flexible framework for probabilistic modelling across various data domains. However, training EBMs on data in discrete or mixed state spaces poses significant challenges due to the lack of robust and fast sampling methods. In this work, we propose to train discrete EBMs with Energy Discrepancy, a loss function which only requires the evaluation of the energy function at data points and their perturbed counterparts, thus eliminating the need for Markov chain Monte Carlo. We introduce perturbations of the data distribution by simulating a diffusion process on the discrete state space endowed with a graph structure. This allows us to inform the choice of perturbation from the structure of the modelled discrete variable, while the continuous time parameter enables fine-grained control of the perturbation. Empirically, we demonstrate the efficacy of the proposed approaches in a wide range of applications, including the estimation of discrete densities with non-binary vocabulary and binary image modelling. We also introduce the first application of EBMs to tabular data sets with applications in synthetic data generation and calibrated classification. | https://openreview.net/pdf/b940a6b863bcbcdb355543eae11b90bc3e6fd5e2.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "6SMjnAs4CZ",
"review_text": "This article, \"Energy-based modelling for discrete and mixed data via heat equations on structured spaces\", proposes to perform the training on EBM, using the Energy Discrepancy (ED) loss, in the case where having multi-modal dataset mixing eventually continuous inputs but also discrete (categorical) ones. The work describes into details how to parametrize in different setting the inclusion of discrete variables, and they apply it to various datasets.\nThe main contributions are the design of the continuous time Markov chain transition probability that lies at the heart of the ED approach and the application to tabular dataset for which generative approach is usually hard.\n\nThe authors show how their method can be efficiently used on Tabular dataset. In particular, they apply to several dataset and show that in average the EBM trained with Energy-Discrepancy using a discrete implementation of the Markov chain transition probability outperform the concurrent approach, ref[Xu et al 2019]. \n\nThe authors also show experimental results on image modelling.\n\nThe authors extend the formalism of Energy Discrepancy to the case of including discrete state in addition to continuous features. Whether or not this justifies an entire publication can be debated, Although it should be emphasized that the datasets under consideration are quite original.\n\nIt might be because I'm not an expert on ED, but while traditional EBM relies on MCMC to compute the gradient, ED does not. However, it is not clear to me if sampling the EBM trained in such way need MCMC to be generative ? If so, the article should provide more details on the implementation. They should also check that the trained model is at equilibrium (the generated samples corresponds to the equilibrium properties of the model).\n\nMore importantly, the comparison for the tabular dataset is only done at the level of the AUC curves. Can at least the authors compare the frequency and correlations amongst the generated samples and the true ones ?\n\nThe authors said that their work is one of the first dealing with tabular data, I at least find this one dealing with EBM and tabular dataset: https://arxiv.org/abs/1912.09382, the authors might check if it is relevant w.r.t. their work. Also, this article https://arxiv.org/abs/2405.15376 deals with generative and categorical variables for EBM."
},
{
"confidence": 4,
"rating": 6,
"review_id": "1B7xDcHrjH",
"review_text": "This paper extends the Energy Discrepancies framework introduced by Schroder et al. to the setting of discrete data. In order to do this, the authors first describe ways to perturb discrete data by modeling the perturbation process as a CTMC. They describe suitable perturbation choices for different types of discrete data (e.g. binary, categorical, ordinal, cyclical) and describe different considerations for the time parameter in the CTMC. They then propose an approach that performs a Monte-Carlo estimate of the contrastive potential needed for the Energy Discrepencies loss and compare their method to existing methods for training discrete EBMs.\n\nOriginality: Energy discrepancy is a relatively new approach. While the original paper proposed some extensions to discrete data, this paper goes into extending energy discrepancy to discrete data in much more depth and includes new mathematical and experimental analyses. \n\nClarity: Overall, the paper is well written. \n\nQuality: I believe the paper is technically sound. \n\nSignificance: The authors show on toy examples that their method outcompetes contrastive divergence. The authors appear to generally outperform two methods proposed in 2019 along with contrastive divergence. While I have some minor concerns about these baselines, outperforming these baselines is at least demonstrating some empirical benefit of this approach.\n\nClarity: The clarity can be improved a bit (see my questions below). \n\nSignificance: Despite demonstrating that the method can work empirically, I have some concerns with the overall significance. It seems that while the method works well on toy examples, the results are less impressive on real-world image modeling tasks. I am unfamiliar with the field of tabular data modeling and therefore, cannot properly assess the significance of the results. Beyond contrastive divergence the main baselines is a method from 2019 with 2,000 citations. Are there better baselines to compare against among these 2,000 citations?\n\nIn section 3 on lines 93-95 the authors describe two key criteria for defining useful perturbation processes. The first criterion is described as “the negative samples obtained through $q$ are informative for training the EBM when only finite amounts of data are available.” I struggled to understand precisely what this statement meant. What are examples of processes that are more and less informative? \n\nI am confused about the connections to the heat equation, which is likely due to my own lack of understanding but may also indicate that the clarity could be better. My understanding is that we need to define a process that perturbs our data, $p(y | x)$. Such processes have been described in the ML literature, which the authors cite and can be solved through the matrix exponential. While normally this matrix exponentially may be hard to solve, since the noise process is applied independently across the dimensions, it should scale with O(S^3). It was unclear why small timesteps were introduced and why Euler integration was needed. Was the point that for some problems S^3 is too big and so for these problems we will restrict ourselves to small timesteps in order to avoid computing the matrix exponential? Overall, I was left confused about why we need to talk about heat equations at all and why we don’t just describe this as a CTMC with initial conditions? Is there something that the heat equation view is really buying us? 
\n\nLines 176-181 describe the subsection “Localization to random grid”. Related to my comment above about the lack of clarity regarding when the negative samples are “informative for training” the authors say that adding uniform noise to categorical variables “introduce too much noise to inform the EBM about the correlations in the data set.” Can the authors make this statement more precise in the text? I think I can intuitively see that if make random uniform perturbations at each dimension then you are sampling from a uniform distribution and this will be uninformative in some sense. However, I think this notion needs to be explicitly connected to the optimization objective / loss functional in order to make this clear. Furthermore, it is not clear why this isn’t solved by taking smaller timesteps so that only a few dimensions will on average be changed. Can the authors please clarify this?\n\nOverall, I am confused about the choice of time parameter and I think this needs to be better written in the manuscript. Section 3 establishes that as $t$ goes to infinity, “the energy discrepancy loss converges to maximum likelihood estimation.” The authors describe why maximum likelihood may have statistical advantages for finite data sets but then immediately move on in Section 3.1 to small timescales. This seemed like an odd transition and immediately made me ask “why not just use large times since this is maximum likelihood?” I suspect the answer lays in Section 4.2 where it becomes apparent that the contrastive potential must be estimated with Monte-Carlo sampling. It seems that larger timescales induce higher entropy distributions that would require more MC samples to approximate the expectation on line 192? \n\nFor the related works, I felt the “Contrastive loss functions” paragraph needs more discussion. Energy discrepancies seems very closely related, if not exactly, to contrastive loss functions for training energy-based models. Can the authors please provide a more thorough comparison of these different methods?\n\nSimilarly, I did not see a discussion on pseudolikehood loss functions. For small timesteps, the loss function seems very closely related to pseudolikelihood estimation and it seems that when MC sampling must be used in this method to approximate an otherwise intractable integral, that the MC sampling can be seen as an MC approximation to pseudolikelihood? \n\nThe authors make a point of saying that ED is much more efficient that CD and point to timing experiments in the appendix. However, it appears that authors are only reporting timing for M=4 samples when in the paper M=32 is used. If I extrapolate and multiply the ED time by 8 (since 32 = 8*4) then ED is more expensive then all of these methods. Can the authors please clarify this? I suggest changing Table 6 to M=32 if that is what is used in the paper. \n\nThe biggest experimental win seems to come from the Tabular Dataset. I am not very familiar with this area so I have a limited ability to evaluate the significance here. While the results seem reasonable I have two questions: 1) since the baseline methods were published in 2019 are there more sensible baselines to compare with? I again emphasize that I am not requiring that the authors’ method is SOTA – it is okay if other methods beat their method. 2) Since these methods have mixed continuous and discrete data can the authors do a separate benchmark that only models the discrete columns? 
I think it would be helpful to tease apart whether the best strength of this method is in modeling mixed continuous-discrete data or also purely discrete data. \n\nI was confused by the statement that method is sensitive to the assumption that the data distribution is positive on the whole space. Why is this more of an issue for ED than other EBM training techniques? Intuitively, it seems that you can always avoid the assumption that the data distribution is positive by just assuming that the energy in these regions is so high that the probability that you sample these regions is vanishingly low. Either way, can the authors point me to where this assumption of a low-dimensional manifold is investigated in the paper / SI?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "OHChQ9CxXT",
"review_text": "The paper proposes a suite of methods for training energy-based models for discrete and mixed data using the Energy Discrepancy loss, a recently proposed method for training EBMs. Compared to contrastive divergence, it does not require MCMC sampling from the model distribution during training, improving training efficiency. This is done by simulating a diffusion process on the discrete states and effectively using those noisier samples as the negative samples. The paper introduces a connection between the new method and maximum likelihood estimation, showing that energy discrepancy as applied to discrete state spaces can converge to the negative log-likelihood. In experiments, the new method behaves favourably compared to contrastive divergence-based methods on synthetic data sets, on average better than baselines on real-world tabular data sets, and comparably to many competing methods generation on discrete image modelling. An application of the trained EBM on classification and improving uncertainty quantification compared to a direct classifier is also shown.\n\n- The paper proposes a relevant extension to recently published work. Especially Theorem 1 does not seem obvious, and the paper may open up the use of the Energy Discrepancy loss to a much wider variety of use-cases. \n- The method is also quite simple, and seems simple to implement. \n- The paper connections to recent work on discrete diffusion models, and proposes a variety of methods to estimate the energy discrepancy loss. \n- The results are good compared to standard contrastive divergence based methods\n- The paper is well written, and I found it easy enough to understand even without prior knowledge on the Energy Discrepancy method.\n\n- As noted in the limitations, the application to data such as images seems to be challenging as the noisy negative samples may not give very useful training signal in this case. \n- Although the energy discrepancy method has already been proposed and published in previous work, I found the justification for the method slightly confusing while reading this paper. What is Theorem 1 exactly saying? (see questions) The loss also is, in practice, approximated with something slightly different than the proposed loss, which seems conceptually a bit confusing. However, this is not a major concern given that the base method has been proposed and published in previous work.\n\n- How should I interpret the left side of the equation in Theorem 1, and the fact that the right side approaches zero with large enough t? How does this link ED to maximum likelihood, exactly? \n- What is Avg. Rank in Table 1?\n\nOverall, the paper seems like a solid contribution in advancing the training of this branch of energy-based generative models. However, I was not aware of ED before reading this paper, and am not very up-to-date on the most recent work on energy-based models. As such, I give tentatively a weak accept."
},
{
"confidence": 3,
"rating": 6,
"review_id": "7SLStw2dxK",
"review_text": ": The paper introduces a novel method for training energy-based models (EBMs) on discrete and mixed data using heat equations on structured spaces. This method employs the Energy Discrepancy (ED) loss function, which eliminates the need for Markov chain Monte Carlo (MCMC) by using graph-structured perturbations. The proposed approach is evaluated on several applications, including discrete density estimation, synthetic data generation, and calibrated classification, demonstrating significant improvements in training efficiency and model performance.\n\nThe paper successfully extends the Energy Discrepancy method to discrete and mixed data with solid theoretical analysis, addressing the challenge of robust and fast sampling in these space. The designed experiments demonstrate the method's ability to accurately capture complex data structures and generate high-quality synthetic data, highlighting its practical applicability.\n\n1.\tDespite the method's solid contributions and experimental design, the motivations behind each step and their presentations are not very clear, making it hard to follow. For instance, in Section 3.1, the paper discusses different structured and unstructured categorical values, introducing the four types {cyc, ord, unif, abs}. However, it is not clear why these specific structures are chosen. Are they meant to cover all categorical values comprehensively, or are they the most common in tabular data? Providing a clearer rationale would help readers understand the choices made.\n\n2.\tThe scalability of the proposed method in such scenarios is a significant concern. An analysis or discussion on how the method handles large categorical values would be beneficial. This could include potential modifications or considerations to ensure that the method remains efficient and practical when applied to datasets with large categorical variables. What’s more, I strongly recommend moving these algorithms from the appendix into the main body of the paper. This would make the paper easier to follow and more accessible to readers who need to understand the detailed workings of the method.\n\nSee weakness"
}
] |
w6vbfSC1y0 | Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection | Out-of-distribution (OOD) detection is crucial for deploying reliable machine learning models in open-world applications. Recent advances in CLIP-based OOD detection have shown promising results via regularizing prompt tuning with OOD features extracted from ID data. However, the irrelevant context mined from ID data can be spurious due to the inaccurate foreground-background decomposition, thus limiting the OOD detection performance. In this work, we propose a novel framework, namely, \textit{Self-Calibrated Tuning (SCT)}, to mitigate this problem for effective OOD detection with only the given few-shot ID data. Specifically, SCT introduces modulating factors respectively on the two components of the original learning objective. It adaptively directs the optimization process between the two tasks during training on data with different prediction uncertainty to calibrate the influence of OOD regularization, which is compatible with many prompt tuning based OOD detection methods. Extensive experiments and analyses have been conducted to characterize and demonstrate the effectiveness of the proposed SCT. The code is publicly available at: https://github.com/tmlr-group/SCT. | https://openreview.net/pdf/ebce8bd933169b8ae49746ddfdd82a8b0895df0d.pdf | [
{
"confidence": 5,
"rating": 3,
"review_id": "vsures9stI",
"review_text": "This paper first reveals the relationship between the quality of out-of-distribution (OOD) features and the prediction uncertainty of in-distribution (ID) data. Then, the paper introduces modulating factors to weight the ID loss and OOD loss, with the weights being related to the ID data prediction confidence. The experiments are carried out on standard datasets.\n\n1. The analysis of the relationship between OOD feature quality and ID prediction confidence is well-reasoned; lower ID confidence indeed affects the accuracy of foreground-background separation.\n2. Weighting the loss components is a straightforward approach, making it easier to understand.\n3. The writing is clear and easy to comprehend.\n\n1. Overall, the technical contribution of this paper is relatively incremental, primarily focusing on how to weight the two loss components.\n2. The effectiveness of the proposed method is quite limited. For example, as shown in Table 2, the improvement in averaged AUROC under the 16-shot scenario is minimal, only around 0.3%, and there is even a slight decrease in results on ID-like data.\n3. There is a lack of comparison with existing state-of-the-art (SOTA) methods. For instance, the results reported in this paper are not as good as those of NegLabel[1] (AUROC 94.21 > 93.37), which is a zero-shot method that does not require training and training samples. The results for ID-like method reported in this paper are also lower; the official paper reports 94.36 AUROC under 4-shot, while this paper reports 92.14 AUROC under 16-shot.\n4. More exploration is needed regarding the settings of function $\\phi$ and $\\psi$ in Equation 4.\n5. The statement in lines 158-159 is somewhat unclear. Should it be that inaccurate OOD features hinder the effective learning of better OOD detection?\n\n[1] Jiang, Xue, et al. \"Negative label guided ood detection with pretrained vision-language models.\" ICLR (2024).\n\nSee weaknesses"
},
{
"confidence": 4,
"rating": 5,
"review_id": "CM6tJmD7CM",
"review_text": "This paper presents a novel few-shot approach to regularizing prompt tuning-based OOD detection methods called Self-Calibrated Tuning (SCT). SCT is specifically built to address the problems of incorrect OOD features being used in prompt tuning-based OOD detection methods. More specifically, by weighting regions of the image based on model confidence, SCT can better alleviate these issues in prompt tuning-based OOD detection methods. The resulting SCT method shows strong empirical improvements across a wide range of OOD detection methods.\n\nStrengths:\n- The paper is well written and the authors provide a clear and concise motivation justifying the use of SCT.\n- The author provides a timely analysis of the problem of incorrect OOD features extracted from ID data.\n- SCT shows strong empirical performance across a wide range of traditional OOD detection methods and prompt tuning-based OOD detection methods.\n- Additionally, given the nature of prompt tuning-based OOD detection methods, SCT can act in the more relevant few-shot setting.\n\nWeakness:\n- A primary concern of the reviewer is the lack of evaluations against the more traditional CIFAR set of benchmarks for OOD detection.\n- Additionally, the empirical performance gain of SCT (table 2) in combination with other prompt-tuning-based methods, seems minimal.\n\nThe reviewer would like to see some additional evaluations of SCT in the traditional CIFAR setting of OOD detection. The reviewer would also like to point out some small inconsistencies in the bolding for Table 2 (IDLike+SCT)."
},
{
"confidence": 4,
"rating": 4,
"review_id": "bYMyB3lkDg",
"review_text": "Based on the observation that CLIP undercalibration will affect the existing prompt-tuning-based method's OOD regularization, i.e. samples with uncertain True-Class Probability (referred to as ID uncertainty in this paper) may provide false OOD features and harm to negative training used in the existing methods; therefore, the author proposes a simple training strategy Self-Calibrated Tuning (SCT) weighted with the ID uncertainty, which can help improve FPR95 through experimental verification.\n\n- The author also observed and attempted to study the important CLIP calibration problem.\n\n- The paper is relatively easy to understand overall.\n\nMy main concerns about this work are that the work is relatively incremental and empirical; there is insufficient discussion on the pros and cons of field-related methods (including other paradigms); the experiments are not sufficient, rigorous, and analyzed; and the method's effect on improving common benchmarks is rather one-sided, etc. The details are as follows:\n\n[Method]\n\n- (Inaccurate motivation verifications) If I understand correctly, the misclassified ratio on the horizontal axis in the right panel of Fig. 2 refers to $p(pred\\neq gt), pred=\\arg\\max_yp(y|x)$ while your ID uncertainty refers to sth like True-Class Probability, marginal softmax $p(y=gt|x)$. These are not two identical things, though there is a certain correlation: only within some ranges, TCP could indicate accuracy [1]. Therefore, I think the author may have a biased understanding here, and the experimental results cannot fully reflect the motivation of the work: when \"uncertain\" ID samples are used as OOD regularization, some ID data are misdetected as OOD (FPR). If I have misunderstood, could the author clarify? Or maybe additional correct experiments are needed?\n\n- (No calibration verifications) The claim and results in the work show that SCT helps with CLIP calibration, but there is no visualization of the calibration after training to illustrate the point (e.g. could add the before and after calibration comparison like in Fig. 2).\n\n- (Lack of discussion of weighted training; modulating function rationale) The idea of weighted loss is very direct and easy to think of. Previous work should also be mentioned and discussed. For example, [2] is based on the feature representation paradigm and uses activation scale as the ID-ness (similar to ID uncertainty in the context) indicator for weighted training to improve OOD detection. In comparison, I do not quite understand why $\\phi$ must be monotonically decreasing w.r.t. $p$ (e.g. $1-p$) and not monotonically increasing (e.g. $p$), because the weighting method in [2] is monotonically increasing, and the result is also improved. Could the authors elaborate on this?\n\n- (How about post-hoc CLIP calibration?) Usually, calibration is divided into two types: training and post-hoc [3] (calibration related works are lacked in the paper). The former is used in this paper. The latter may be explored in OOD feature extraction methods, e.g. changing the rank operation (Eq. (6) & Fig. 3(d)). 
The author may lack discussion in this aspect.\n\n[Experiments]\n\n- (No much AUROC improvement) I understand that the method in this work is mainly to improve FPR95, but unilaterally only improving FPR95 does not seem to be comprehensive enough, because AUROC is also an equally important indicator and methods need to be proposed to improve it.\n\n- (Lack of CIFAR results) Although the comparison method LoCoOp has not been experimented on CIFAR, CIFAR is indeed another important benchmarks in the field of OOD detection, and I think it is necessary to supplement it.\n\n- (Discussions with simpler yet more effective pre-trained features + post-hoc?) I just would like to know what the author thinks about the (potential) advantages of the prompt-tuning-based method studied in the paper compared to the post-hoc method. After all, post-hoc does not require additional training and uses the basic ResNet backbone; the FPR95 and AUROC on the main task of Tab. 1 on ImageNet-1k have reached **20.05** and **95.71** respectively, which are much better than the results reported in the paper (26.47, 93.37).\n\n- Could the authors clarify on what validation set are the hyperparameters tuned?\n\n- (Interpretations of the ablations.) Figure 3(b) shows that the results of selecting other regularization functions are very different, and the paper (L294-298) does not provide any analysis. I am curious about how the author could try to interprete these ablation study experiment results. Similarly, the quality of OOD features extracted by different extraction methods also varies greatly, which seems very empirical (Fig. 3(d)).\n\n- Table 1 is suggested to include results of combining more newer post-hoc methods (e.g. ASH (Djurisic et al., 2022), Scale [2]) and fine-tuned methods, which will give readers a more comprehensive sense.\n\n[Presentation]\n\n- The paragraph introducing the OOD features (L189) should be moved forward, or at least before refer to Fig. 1, which will give readers a clearer background.\n\n- Why is the left panel in Fig. 2 not arranged in ascending order of softmax output? The arrangement of 0.02, 0.89, 0.04, and 0.67 affects reading. What is it trying to say? It would be better to display the classes and images together for clarity.\n\nReferences:\n\n[1] Corbière, Charles, et al. \"Addressing failure prediction by learning model confidence.\" NeurIPS, 2019.\n\n[2] Xu, Kai, et al. \"Scaling for Training-Time and Post-hoc Out-of-distribution Detection Enhancement.\" ICLR, 2024.\n\n[3] Guo, Chuan, et al. \"On calibration of modern neural networks.\" ICML, 2017.\n\nPlease see the weaknesses."
},
{
"confidence": 5,
"rating": 7,
"review_id": "mEPca8mQxy",
"review_text": "In response to challenges in OOD detection using CLIP-based methods, this paper introduces Self-Calibrated Tuning (SCT), a novel framework that addresses issues with unreliable OOD features extracted from ID data. SCT dynamically adjusts the influence of OOD regularization during model training based on the prediction uncertainty of ID samples. By introducing modulating factors into the learning objective, SCT directs the model's attention more effectively towards classification tasks, especially when training with low-confidence data. This adaptive approach improves the calibration of OOD features extracted from high-confidence ID data, enhancing the overall OOD detection performance of prompt tuning methods. Empirical evaluations on ImageNet-1k demonstrate SCT's effectiveness.\n\n1. This paper is well-motivated and well-written. In particular, authors propose to adaptively adjust the importance of OOD features and introduce SCT, which are motivated by the following finding: performance of prompt tuning based methods is significantly affected by the uncertainty of the given ID data. \n2. Authors have a comprehensive review of the whole research literature.\n3. Authors conduct a large amount of experiments and the experimental results demonstrate the effectiveness of SCT on both official benchmarks and hard OOD detection tasks. \n4. In summary, I think SCT could become a great contribution towards OOD detection community.\n\nNone in particular\n\n1. My concern is mainly about the computational cost and training cost of SCT, since it involves operations on dense/local features. \n2. My second concern is about the rationality of using pre-trained models (CLIPs .etc) to perform OOD detection tasks, because the concepts in both ID and OOD datasets are probably seen during pre-training stage. I want to know the authors' opinions towards the benchmarking and research paradigm."
},
{
"confidence": 2,
"rating": 4,
"review_id": "RWUCm7uvv2",
"review_text": "This paper focuses open-set detection method based on CLIP. The authors propose an additional weighting mechanism based on the LoCoOp method to alleviate the problem that the outlier related regions extracted by the LoCoOp method are not trustworthy in some cases.\n\nOutlier detection with VLM is an interesting research direction.\n\nThe contribution over LoCoOp is incremental. The only difference is an extra reweighting term based on the current prediction score. And the reweighting mechanism is purely based on heuristics - for example, $1-p(y|x)$ for $L_{ce}$ implicitly enforce hard sample mining.\n\nMinor: \nThe intuition in Figure 1/4 is not clear to me. The shown examples validate that LoCoOp can detect and mask-out the inlier-related regions well. Also, the GT label should be annotated.\n\nPlease clarify the novelty and new insights."
}
] |
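The reviews above describe SCT as adding confidence-dependent modulating factors to a prompt-tuning objective: the ID classification term and an OOD-region regularizer are reweighted by the model's predicted probability of the true class. The sketch below is a minimal PyTorch illustration of that idea reconstructed only from the reviewers' descriptions; the exact modulating functions, the form of the regularizer, and the weighting scheme are assumptions and may differ from the paper's actual objective.

```python
import torch
import torch.nn.functional as F

def confidence_modulated_loss(logits_id, region_logits, labels, lam=0.25):
    """Toy confidence-modulated objective (not the paper's exact loss).

    logits_id:     [B, C]    class logits for ID images.
    region_logits: [B, R, C] logits for R local regions per image, used as
                             candidate OOD features for the regularizer.
    labels:        [B]       ground-truth ID labels.
    """
    probs = logits_id.softmax(dim=-1)
    p_true = probs.gather(1, labels[:, None]).squeeze(1)   # true-class probability p(y|x)

    ce = F.cross_entropy(logits_id, labels, reduction="none")

    # Entropy-maximization style regularizer on region predictions:
    # adding the negative entropy to the loss pushes region predictions
    # away from confident ID-class assignments.
    region_probs = region_logits.softmax(dim=-1)
    neg_entropy = (region_probs * region_probs.clamp_min(1e-8).log()).sum(-1).mean(-1)

    # Hypothetical modulating factors: emphasize hard ID samples for the
    # classification term, and trust the OOD regularizer more on samples
    # the model is already confident about.
    w_ce = 1.0 - p_true
    w_reg = p_true.detach()

    return (w_ce * ce + lam * w_reg * neg_entropy).mean()
```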
w6q46IslSR | Training Dynamics of Transformers to Recognize Word Co-occurrence via Gradient Flow Analysis | Understanding the training dynamics of transformers is important to explain the impressive capabilities behind large language models.
In this work, we study the dynamics of training a shallow transformer on a task of recognizing co-occurrence of two designated words. In the literature of studying training dynamics of transformers, several simplifications are commonly adopted such as weight reparameterization, attention linearization, special initialization, and lazy regime. In contrast, we analyze the gradient flow dynamics of simultaneously training three attention matrices and a linear MLP layer from random initialization, and provide a framework of analyzing such dynamics via a coupled dynamical system. We establish near minimum loss and characterize the attention model after training. We discover that gradient flow serves as an inherent mechanism that naturally divide the training process into two phases. In Phase 1, the linear MLP quickly aligns with the two target signals for correct classification, whereas the softmax attention remains almost unchanged. In Phase 2, the attention matrices and the MLP evolve jointly to enlarge the classification margin and reduce the loss to a near minimum value. Technically, we prove a novel property of the gradient flow, termed \textit{automatic balancing of gradients}, which enables the loss values of different samples to decrease almost at the same rate and further facilitates the proof of near minimum training loss. We also conduct experiments to verify our theoretical results. | https://openreview.net/pdf/d62bb7ca05c4ddb2e68d06f57b06fae99492728a.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "lnuP3zy3mU",
"review_text": "This paper investigates the training dynamics of a single-layer transformer followed by a single MLP layer on a synthetic binary classification task, where the objective is to identify the co-occurrence of two specific tokens in the input sequence. They analyze the gradient flow dynamics for the case that all the attention parameters (key, query, value) and the linear layer all trainable and show that the model can achieve low loss despite the non-convexity of the objective. They identify two phases in the training, 1) the MLP aligns with the two target tokens at the start of the training and the model learns to classify all the samples correctly, 2) the attention and MLP parameters update to increase the classification margin and drive the loss to zero. They also run a small scale numerical experiment in their synthetic setup tp confirm their analysis.\n\nThe paper makes no restricting assumptions on the weights of the transformer model and performs the analysis on the joint optimization of all the parameters.\n\nAlthough the paper and its proof are notation-heavy, the authors have broken down the complexity of the proof and notation in the main body to clarify the steps needed to prove the results.\n\nThere are some restrictive assumptions on the synthetic data model: The vocabulary set $d$ is considered to be larger than the number of training tokens, which is not the case in realistic setups. Thus, some tokens are not visited at training time. Also, they assume, apart from the two target tokens, the remaining tokens appear at maximum once in the training set. \n\nThe proof outline in the main body helps in understanding the high-level steps involved. However, it could still benefit from additional clarifications on some intermediate steps. For instance, in phases 1 and 2, it's mentioned how the alignment of the MLP with the target tokens $G^{(t)}(\\mu_{1,2})$ behaves during training. However, it's not clear how this connects to the evolution of the attention scores in phase 2 as stated in Lemma 4.7.\n\n1. As far as I understand, in your synthetic task, in the first phase of training, effectively only the MLP weights are learning the task. That is, the model can achieve 100% accuracy only by aligning the MLP weights with the relevant tokens $\\mu_1,\\mu_2$. So, the attention layer is not needed for identifying the co-occurrence of the tokens in this setup?\n\n2. Can you also report validation and accuracy plots in your synthetic experiments? Does the validation loss decay at the same rate as the training loss as stated in Thm 3.3?\n\n3. Regarding the proof sketch:\n\n a) The alignment of parameters with the target tokens is discussed in the main body. Can you also clarify how the gradients related to irrelevant tokens evolve? In particular, regarding the tokens that do not appear in the training set (since $nL\\leq d$), does the model learn not to attend to them at test time?\n\n b) I find it confusing that the softmax output remains close to $1/L$ long in the training (line 320) and assigns uniform attention to all tokens in the sequence. Does this statement hold for all training samples? If yes, then how does the model learn to attend to the target tokens?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "ZKujkeWAoO",
"review_text": "This paper studies the training dynamics of a single hidden layer transformer network (self-attention + linear MLP) trained on a binary word cooccurrence task. Specifically, given a data matrix $X \\in R^{d \\times L}$ representing L \"words\" (each column of X is a word vector of dimension d), the model must output +1 if words 1 and 2 both occur in X, and -1 otherwise. The paper shows that a transformer layer is able to learn this task, and that the training occurs in two stages: First, the linear MLP layer learns to classify data points correctly by positively aligning with the embeddings for words 1 and 2 (but without making large changes to attention matrices). Second, it drives the loss down further by using the attention matrices to positively correlate q,k,v for words 1 and 2, and anti-correlate the q,k,v for a common word (denoted word \"3\" in the paper) relative to words 1,2. After these phases, both the training and generalization losses go to zero (as long as embedding dimension is large enough).\n\nOverall, I found the results interesting and insightful, though not very surprising, and the practical implications of these results were not very clear to me. Thus, I currently recommend weak accept. Importantly, my primary research area is not learning theory, so my knowledge of the related work is relatively limited, and thus my review confidence is relatively low.\n\n- It is interesting to see that the training dynamics for this word cooccurrence task can be analyzed rigorously, with relatively few assumptions.\n- The theoretical results are validated with a nice synthetic experiments, that demonstrates that the two phases predicted by the theory do occur in practice.\n\n- This word cooccurrence task is very simple, and thus it is not surprising that a single transformer layer can easily learn it.\n- Only full gradients are considered, whereas transformers are typically trained with mini-batch Adam(W).\n\n- What other tasks (beyond word cooccurrence) do you think could be analyzed with this methodology?\n- What are the implications of this result to more complex/realistic tasks, like next token prediction?\n- If mini-batch Adam is used during training, do the two phases still occur?\n- Can you add more details about the experimental setup to the main paper?\n- Can you add more discussion about the automatic balancing of gradients, and its significance, in a more central part of the text (e.g., section 3, not 4)?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "6zSftEZmEB",
"review_text": "This article delves into the gradient flow dynamics for detecting word co-occurrence, demonstrating that the gradient flow approach can achieve minimal loss. The training process commences with random initialization and can be delineated into two distinct phases.\n\n- This article noticed an interesting phase transition during training in this special setting and demonstrates it with solid calculation and experiments.\n- A new property of gradient flow is noticed and contributes to prove near minimum training loss together with the analysis of softmax.\n\nThe setting of empirical experiments is also simple and ideal and readers may have no idea if this is a general phenomenon during training for detecting word co-occurrence.\n\n- In line 151 and 152, it is confusing why concentration theorems lead to the specific probability in (i) of Assumption 2.3.\n- Lack of explanation for $\\langle w_{j_1}^{(t)},w_{j_2}^{(t)} \\rangle$ in line 194.\n- It is not obvious why \"the samples with only one target signal may be classified incorrectly as co-occurence\" in line 282.\n- The notation in line 169 is somewhat misleading."
}
] |
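The reviews above agree on the basic setup: the input is a matrix of L word vectors, the label is +1 exactly when two designated words co-occur, and the model is one softmax-attention layer followed by a linear layer, with all matrices trained jointly from random initialization. The toy PyTorch reconstruction below is only meant to make that setup concrete; the one-hot word embeddings, the sampling of background words, and the mean-pooling of token outputs are assumptions not taken from the paper.

```python
import torch

def make_cooccurrence_batch(n, d, L, p_pos=0.5, seed=0):
    """Toy word co-occurrence data: each sample is L one-hot word vectors,
    labelled +1 iff both designated words (indices 0 and 1) appear."""
    g = torch.Generator().manual_seed(seed)
    y = (torch.rand(n, generator=g) < p_pos).float() * 2 - 1
    X = torch.zeros(n, L, d)
    for i in range(n):
        words = torch.randint(2, d, (L,), generator=g)      # background words only
        if y[i] > 0:                                         # plant both target words
            words[0], words[1] = 0, 1
        elif torch.rand((), generator=g) < 0.5:              # negatives may hold one target
            words[0] = 0
        X[i] = torch.eye(d)[words]
    return X, y

class OneLayerAttention(torch.nn.Module):
    """Single softmax-attention layer plus a linear head, all matrices trainable."""
    def __init__(self, d):
        super().__init__()
        self.WQ = torch.nn.Linear(d, d, bias=False)
        self.WK = torch.nn.Linear(d, d, bias=False)
        self.WV = torch.nn.Linear(d, d, bias=False)
        self.w = torch.nn.Linear(d, 1, bias=False)

    def forward(self, X):                                    # X: [n, L, d]
        scores = self.WQ(X) @ self.WK(X).transpose(1, 2)     # [n, L, L]
        H = scores.softmax(dim=-1) @ self.WV(X)              # [n, L, d]
        return self.w(H).mean(dim=(1, 2))                    # pooled scalar output
```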
w67vRHZF13 | Unified Generative and Discriminative Training for Multi-modal Large Language Models | In recent times, Vision-Language Models (VLMs) have been trained under two predominant paradigms. Generative training has enabled Multimodal Large Language Models (MLLMs) to tackle various complex tasks, yet issues such as hallucinations and weak object discrimination persist. Discriminative training, exemplified by models like CLIP, excels in zero-shot image-text classification and retrieval, yet struggles with complex scenarios requiring fine-grained semantic differentiation. This paper addresses these challenges by proposing a unified approach that integrates the strengths of both paradigms. Considering interleaved image-text sequences as the general format of input samples, we introduce a structure-induced training strategy that imposes semantic relationships between input samples and the MLLM’s hidden state. This approach enhances the MLLM’s ability to capture global semantics and distinguish fine-grained semantics. By leveraging dynamic sequence alignment within the Dynamic Time Warping framework and integrating a novel kernel for fine-grained semantic differentiation, our method effectively balances generative and discriminative tasks. Extensive experiments demonstrate the effectiveness of our approach, achieving state-of-the-art results in multiple generative tasks, especially those requiring cognitive and discrimination abilities. Additionally, our method surpasses discriminative benchmarks in interleaved and fine-grained retrieval tasks. By employing a retrieval-augmented generation strategy, our approach further enhances performance in some generative tasks within one model, offering a promising direction for future research in vision-language modeling. | https://openreview.net/pdf/92d9a4d22bb9998d8f043e2b98b85d4d012ff3c7.pdf | [
{
"confidence": 2,
"rating": 6,
"review_id": "zn1ZqJQEkp",
"review_text": "This paper proposes a novel learning paradigm to learn MLLMs based on interleaved image-text corpora. \nIt introduces a structure-induced training strategy that imposes semantic relationships between input samples and\n the MLLM’s hidden state.\nThis work apply the dynamic time warping framework to calculate the semantic similarity between different image-text sequences.\nThen, a discriminative loss is applied to sequence similarity matrices calculated based on raw inputs and MLLM hidden states. \nThe framework can also leverage the capabilities of multiple vision and language encoders to more accurately calculate the similarity matrices.\nExperiment results show that the new learning paradigm demonstrates good performance on basic multimodal comprehension benchmarks, \ncomplicated multimodal comprehension benchmark DEMON, cross-model information retrieval, and retrieval-augmented generation.\n\n1. This paper is well-written and easy to follow. \n2. This paper proposes a novel learning paradigm based on interleaved image-text corpora.\n\n1. This paper did not discussed the impact of including interleaved image-text pairs in MLLM learning. For example, how will it affect the performance on basic visual-language benchmarks (Table 1) and image-text retrieval. Will there be any negative effects?\n\n1. Can sugar better leverage the multi-modal in-context examples or better understand interleaved image-text content, is there any evaluation for that?\n2. What is exactly the amount of interleaved image-text sequences (from MMC4) and image-text pairs (from other datasets) used to train Sugar. \n3. What is the context window size of Sugar?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "mDWkdiz0qr",
"review_text": "The paper addresses the limitations of Vision-Language Models (VLMs) by proposing a unified approach that combines generative and discriminative training paradigms. This new method leverages interleaved image-text sequences and introduces a structure-induced training strategy. It aims to enhance the MLLM's ability to capture global semantics and fine-grained details, effectively balancing generative and discriminative tasks. The approach uses dynamic sequence alignment within the Dynamic Time Warping framework and integrates a novel kernel for fine-grained semantic differentiation. Extensive experiments demonstrate that this method achieves state-of-the-art results in various generative and discriminative tasks.\n\n- The paper introduces a novel method that successfully integrates generative and discriminative training paradigms, addressing the weaknesses inherent in each when used independently.\n- The authors clearly articulate the challenges faced by existing VLMs and provide a well-defined solution.\n\n- While the paper shows impressive results, there is limited discussion on the potential limitations and areas where the model might underperform.\n- The paper primarily focuses on specific benchmarks. It would be beneficial to discuss how well the proposed method generalizes to other types of vision-language tasks not covered in the experiments.\n\n- Can you provide more detailed ablation studies to understand the contribution of each component of the proposed method, such as the dynamic sequence alignment and the GAK?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "zUbbsHOgJl",
"review_text": "This paper proposed a method for unifying generative training and discriminative training of multi-modal LLMs. Generative training mainly uses auto-regressive formulation while discriminative training mainly performs contrastive representation matching. The goal of this paper is trying to use discriminative training to improve multi-modal LLMs.\n\nThe paper unifies generative training and discriminative training by introducing a Dynamic Sequence Alignment module which aligns similar text and image data on the hidden states of a multi-modal LLM. In addition, Detailed Semantics Modeling is proposed to effectively distinguish detailed semantics.\n\nThe paper conducts evaluation on a wide range of benchmarks.\n\nThe motivation is clear and the paper is easy to follow.\n\nThe concept of unifying generative training and discriminative training is interesting.\n\nIt's unclear what is dynamic time warping framework.\n\nThe performance improvement of the proposed method sugar is not significant. Compared with some baselines, such as VILA and LLaVA-1.5, Sugar performs worse than them on many tasks, as shown in Table 1. This raises concerns about the effectiveness of the proposed method. \n\nThis could be meaningless to align a visual token and a text token in an MLLM model since the LLM is trained with next-token prediction instead of contrastive learning like CLIP. The current token is conditioned on previous tokens. I can't think of a reasonable explanation for this mechanism. It **could be** meaningful to align visual tokens. In addition, the experiment results also suggest that this method is not effective as expected. \n\nWhat is the evaluation protocol? Does Sugar train on each benchmark first then evaluate or directly zero-shot evaluation? In the former case, will Sugar lose generative ability after training with discriminative task data?\n\nWhat is the training recipe of the proposed method?"
}
] |
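One of the reviews above asks what the dynamic time warping (DTW) framework actually is. As background, the snippet below shows a generic DTW-style alignment score between two sequences given their pairwise similarity matrix (for example, cosine similarities between hidden states of two interleaved image-text sequences). It is a textbook DTW recursion for illustration only; the paper's alignment procedure and its kernel may differ in formulation.

```python
import numpy as np

def dtw_alignment_score(sim):
    """Generic DTW alignment between two sequences, given a pairwise
    similarity matrix `sim` of shape (n, m). Larger return values mean
    the two sequences align better."""
    n, m = sim.shape
    cost = 1.0 - sim                      # turn similarities into costs
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return -D[n, m]

# Example with unit-norm token embeddings a (n x d) and b (m x d):
#   score = dtw_alignment_score(a @ b.T)
```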
w50ICQC6QJ | Discovery of the Hidden World with Large Language Models | Revealing the underlying causal mechanisms in the real world is the key to the development of science. Despite the progress in the past decades, traditional causal discovery approaches (CDs) mainly rely on high-quality measured variables, usually given by human experts, to find causal relations. The lack of well-defined high-level variables in many real-world applications has already been a longstanding roadblock to a broader application of CDs. To this end, this paper presents Causal representatiOn AssistanT (COAT) that introduces large language models (LLMs) to bridge the gap. LLMs are trained on massive observations of the world and have demonstrated great capability in extracting key information from unstructured data. Therefore, it is natural to employ LLMs to assist with proposing useful high-level factors and crafting their measurements. Meanwhile, COAT also adopts CDs to find causal relations among the identified variables as well as to provide feedback to LLMs to iteratively refine the proposed factors. We show that LLMs and CDs are mutually beneficial and the constructed feedback provably also helps with the factor proposal. We construct and curate several synthetic and real-world benchmarks including analysis of human reviews and diagnosis of neuropathic and brain tumors, to comprehensively evaluate COAT. Extensive empirical results confirm the effectiveness and reliability of COAT with significant improvements. | https://openreview.net/pdf/d7bdc070a6044df2e284ee1476561ea96fa74dae.pdf | [
{
"confidence": 2,
"rating": 6,
"review_id": "QtxXeDpucn",
"review_text": "This paper presents Causal representatiOn AssistanT (COAT), which introduces large language models (LLMs) to bridge this gap. LLMs are trained on massive observations of the world and have shown great capability in extracting key information from unstructured data. Thus, employing LLMs to propose useful high-level factors and craft their measurements is natural. COAT also uses CDs to find causal relations among the identified variables and provides feedback to LLMs to iteratively refine the proposed factors. This mutual benefit enhances both LLMs and CDs.\n\n1. Interesting topic. Employing LLMs to propose useful high-level representations for causal discovery.\n2. Develop two benchmarks for the unstructured causal discovery. AppleGastronome and Neuropathic.\n3. Derives the first metrics that measure the causal representation learning capabilities of various LLMs.\n\n1. ‘We will release an anonymous link during the discussion period.’ I will consider raising my score if the code is reasonable.\n2. The contribution of LLM in COAT is a little small. I assume LLM is used as a representation tool to learn the conceptual level attributes, including iterative refining. The causal structural learning can still be considered as the downstream task.\n3. COAT will inherit the shortcomings of downstream causal structure learning algorithms.\n\n1. How can we ensure that LLM does not introduce erroneous prior knowledge for reasoning?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "V5lJaRrPaW",
"review_text": "The paper tackles the problem of discovering relevant features for recovering the underlying causal graph in the absence and/or in lieu of a human domain expert. The proposed method, COAT, first queries an LLM through a prompt elucidating the task (for eg., discovering relevant features that affect a product review using a few text reviews), then the proposed variables are fed into another LLM that assigns a value to each of these variables, thus outputting structured/tabular data that can be used for causal discovery. Finally, the tabular data is used in conjunction with a traditional causal discovery algorithm (FCI and LiNGAM in this case) to retrieve a causal graph with respect to a target variable (for eg., review score) using the proposed variables. The process repeats until the proposed variables form a markov blanket for the target variable w.r.t the raw unstructured input data (for eg., the text reviews), progressively expanding the markov blanket in each iteration. Additionally, the LLM can receive feedback at the end of each iteration in the form of samples that the proposed variables cannot sufficiently explain. In particular, the authors propose clustering the samples w.r.t the latent variables induced by the LLM and picking the samples in the cluster with the largest conditional entropy.\n\nInitial theoretical analysis of the proposed method implies that the proposed method is able to identify the markov blanket for a target variable using the proposed variables given that enough iterations of COAT are performed.\n\nThe authors evaluate COAT empirically over two synthetic datasets and three real-world datasets. They compare COAT against two simple baselines 1) factors being directly proposed by the LLM based on the prompt without further iterations 2) factors being proposed by LLM when queried using both the prompt and some samples of raw observations. The second baseline is essentially COAT without the LLM receiving any feedback after each iteration. The experiments are conducted using 10 different LLMs and primarily one causal discovery algorithm (FCI), with additional experiments on one dataset using LiNGAM. Additionally, the paper proposes two novel metrics for quantitatively assessing the performance of LLMs for feature proposals to be used for causal discovery.\n\n\nUpdate: I moved my rating up in the hope that teh authors will add the experiments as they promised to the final version. We have no way of enforcing it but hopefully, the authors will follow up on their promise.\n\nThe paper addresses the important problem of causal discovery and employs an effective two pronged approach involving LLMs and traditional causal discovery algorithms. This approach leverages the strengths of both the LLMs and the causal discovery algorithms i.e, ability to respond to complex prompts and unstructured data with high-level and possibly noisy information, and robust causal discovery with strong theoretical guarantees although requiring strong assumptions on the faithfulness of the data and causal mechanisms, respectively. Overall, I believe this is a promising direction wherein the two components complement each other effectively.\n\nThe empirical evaluation is sufficient in terms of the large number of LLMs considered and the moderate amount of datasets evaluated. 
The results, based on the chosen metrics, sufficiently demonstrate the effectiveness of the proposed method over the simple baselines.\n\nFinally, the paper is well-written and clearly explains the steps involved in each iteration. The further explanations provided in the appendix also aid in this.\n\nThe theoretical aspects of the proposed algorithm are exaggerated in the introduction. Given the strong assumptions of “sufficiently powerful” LLMs, “sufficiently diverse” examples and further assumptions pertaining to the chosen causal discovery method, the propositions, while appreciated, are rather straightforward. In particular, it would be far more interesting to theoretically analyse the impact of modules involving the LLMs themselves, such as the chosen prompt template, quality of factor annotations and responsiveness of LLMs to feedback regarding causal discovery, even though some of these are evaluated empirically. Also, an analysis on the rate of convergence of COAT would be beneficial.\n\nSecondly, while the modularity of the proposed approach facilitates utilising a cross product of LLMs, causal discovery methods and feedback mechanisms, it also necessitates extensive ablation studies. In particular, the paper would be strengthened by a thorough ablation of the initial prompts and feedback. In particular, a discussion and ablation on the chosen prompt template and its effect on the proposed factors, or lack thereof, is needed. A robust template would allow more seamless adoption of the proposed method. Finally, the chosen baselines are far too simple to make any strong claims on the effectiveness of COAT. Comparing against some of the methods covered in the related work section would help bolster this claim.\n\nThe paper addresses an important and timely problem and proposes a simple and intuitive solution, leveraging the strengths of LLMs and traditional causal discovery methods. While the experiments demonstrate the effectiveness of the proposed method over two simple baselines, stronger baselines and more ablations on prompts and factor annotations would strengthen this claim. Theoretical analysis is limited to the well-studied causal discovery aspect of the pipeline while making strong assumptions on the powerfulness of the LLMs, diversity and faithfulness of the raw observational data, and the number of iterations being sufficiently large, seems rather unsurprising."
},
{
"confidence": 4,
"rating": 8,
"review_id": "xr73jSl76W",
"review_text": "This work proposes COAT (Causal representation AssistanT), a novel framework to leverage LLMs to assist with causal discovery from unstructured data. COAT aims to combine the advantages of LLMs and causal discovery algorithms. To do so, COAT employs LLMs to identify high-level variables and parse unstructured data into structured data. On the other hand, causal discovery algorithms read the parsed data to identify causal relations. To improve the reliability of the results, COAT also constructs feedback from the causal discovery results to iteratively improve the high-level variable identification. The authors conduct extensive case studies ranging from synthetic data to realistic data, and find COAT effectively helps with discovering meaningful causal structures that well explain the target variable.\n\n1. This work identifies a crucial and timely problem for how to advance the causal tasks including causal learning and reasoning with foundation models likes LLMs;\n2. COAT is novel, interesting and well-motivated. The authors also provide theoretical discussion to justify its soundness;\n3. COAT is model-agnostic and robust to the choice of LLMs, and input data modalities;\n4. The authors construct several benchmarks, present comprehensive case studies, and conduct extensive experiments to verify their claims. The improvements over direct prompting LLMs are significant.\n\n1. The authors should provide more comparisons with advanced prompting techniques such as CoT.\n2. More discussions should be provided on the hyperparameters used in COAT, such as the group size in feedback.\n3. Model names are inconsistent. The name in Fig 4(c) is not the same with other names.\n4. GPT-4 reasoning in Fig 7(c) is unclear in meaning.\n\nPlease refer to \"Weaknesses\"."
},
{
"confidence": 4,
"rating": 5,
"review_id": "25NpGC5ozw",
"review_text": "This paper combines the power of LLMs with that of causal discovery by proposing a Causal representatiOn AssistanT (COAT) approach. Specifically, it considers datasets with textual descriptions, and tries to identify the Markov blanket with respect to a target variable (such as customer ratings and medical diagnosis). The key contribution is discovery of the causal factors through a pipeline that uses both LLMs and a causal discovery algorithm.\n\nI find this an interesting and practical paper that combines the advantages of LLMs – such as the vast amount of knowledge that they encode – and that of causal discovery approaches. The ideas around combination are generally simple but novel, and I believe the approach could potentially be valuable in a suite of applications, although the extent of the value is unclear from the paper.\n\nA major limitation of the work is the empirical evaluation, even though it comes across on the surface as being extensive. I sympathize with the authors about benchmarks for causal discovery, but it seems they have used GPT-4 to generate the textual description of the data, and then used LLMs in their COAT procedure. This is clearly a synthetic dataset that can be problematic. Even the “realistic” benchmarks do not come across as sufficiently realistic, based on my understanding and the lack of details in the main paper.\n\nI don’t understand why key aspects of the evaluation were moved to the appendix. I find it impossible to fully evaluate the work based solely on the contents of the main paper. I understand the need to make space and to move things to the appendix, but it’s never suitable to move key aspects such as the description of the benchmarks and the key results that show value of the work. This has impacted my assessment of this work and I have had to decrease my score because of the authors’ choices around appendix content.\n\nA related weakness is the lack of any attempt at describing limitations, of which there are clearly many.\n\nCould the authors share more about the scope of the work? Are there some other restrictions on the problem setting, besides needing a discrete label y? My assessment is that there is a gap between the scope mentioned in the problem setting and what is described in the experiments, which seems more restricted. Perhaps the authors can clarify.\n\nIdentifiability is mentioned loosely on page 2, with some technical references, but seems to have been used in an imprecise way here. The connection here seems tenuous at best.\n\nThere is a comment on pg. 3: “Note that the target variable Y serves as a guider, and no specific relation between x and Y is assumed.” What is a guider? And what do you mean no relation between x and Y is assumed? I thought the entire point is to do causal discovery: x is a function of z, and Y is a function of z. This line seems incorrect.\n\nSection 3.2 would be much easier to follow with some illustrative examples. The content in Fig. 2 is too abstract to be really useful. I think the authors missed a trick here.\n\nThe meaning of C and p are unclear to me from what is described on pg. 6. How does one assess the significance of Proposition 3.3?\n\nThe details about benchmarks are incredibly important, and it should be easy for anyone to understand at least a high-level sense of a benchmark – basic things like the number of data points, for instance. Please fix Section 4 accordingly.\n\nWhat is OT in Section 4.2? Is it the same as OT in the next section? Define MB, NMB, OT somewhere. 
I don’t see them mentioned anywhere clearly, although I understand from context that MB means Markov blanket.\n\nAre Table 3 and Fig. 5 in the Appendix? If so, then mention that.\n\nI’m not convinced that the “realistic” benchmarks are realistic. It’s too bad I can’t gauge this from the main text.\n\nPlease add a detailed limitations section. Mention all the limitations around evaluation in particular, as well as the significant risks of relying on LLMs for causal discovery.\n\nMinor comments: line 181: “an potential” should be “a potential”; line 208: “Ability” should be “ability”; lines 211 and 212: seems there is a grammatical error here; line 216: what are “shared notations”?"
}
] |
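The reviews above describe COAT as an iterative pipeline: an LLM proposes candidate high-level factors, a second LLM pass turns the raw text into a table of factor values, a classical causal discovery algorithm (e.g. FCI or LiNGAM) is run on that table, and feedback (e.g. the cluster of samples with the largest conditional entropy of the target) is returned to the LLM for the next round. The skeleton below only mirrors that description; every callable it takes is a placeholder the user would have to supply, not an API from the paper's code.

```python
def coat_loop(raw_samples, target, propose_factors, annotate, discover, make_feedback,
              max_iters=5):
    """Schematic COAT-style iteration reconstructed from the reviews.

    propose_factors(samples, target, feedback) -> new candidate factors (LLM call)
    annotate(samples, factors)                 -> tabular factor values (LLM call)
    discover(table, target)                    -> causal graph (e.g. FCI / LiNGAM)
    make_feedback(table, target)               -> poorly explained samples, or None
    """
    factors, feedback, graph = [], None, None
    for _ in range(max_iters):
        factors += propose_factors(raw_samples, target, feedback)
        table = annotate(raw_samples, factors)
        graph = discover(table, target)
        feedback = make_feedback(table, target)
        if feedback is None:        # current factors already explain the target well
            break
    return factors, graph
```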
w4AnTVxAO9 | Can Language Models Learn to Skip Steps? | Trained on vast corpora of human language, language models demonstrate emergent human-like reasoning abilities. Yet they are still far from true intelligence, which opens up intriguing opportunities to explore the parallels of humans and model behaviors. In this work, we study the ability to skip steps in reasoning—a hallmark of human expertise developed through practice. Unlike humans, who may skip steps to enhance efficiency or to reduce cognitive load, models do not inherently possess such motivations to minimize reasoning steps. To address this, we introduce a controlled framework that stimulates step-skipping behavior by iteratively refining models to generate shorter and accurate reasoning paths. Empirical results indicate that models can develop the step skipping ability under our guidance. Moreover, after fine-tuning on expanded datasets that include both complete and skipped reasoning sequences, the models can not only resolve tasks with increased efficiency without sacrificing accuracy, but also exhibit comparable and even enhanced generalization capabilities in out-of-domain scenarios. Our work presents the first exploration into human-like step-skipping ability and provides fresh perspectives on how such cognitive abilities can benefit AI models. | https://openreview.net/pdf/4a83957d2b46e1316f6bcdc680cbad91fa6b7a65.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "wO9SteHJN4",
"review_text": "The paper explores the ability of language models to skip steps in their reasoning processes. The authors introduce a controlled framework to stimulate step-skipping behavior by iteratively refining models to generate shorter and accurate reasoning paths. The study demonstrates that models can develop this ability under guidance, leading to increased efficiency without sacrificing accuracy. The paper presents empirical results showing enhanced generalization capabilities in out-of-domain scenarios after fine-tuning on expanded datasets that include both complete and skipped reasoning sequences.\n\n- The empirical results are robust in three domains, showing benefits in efficiency of the proposed method.\n- The paper is clearly written and well-organized, making it easy to follow the authors' methodology and findings.\n\n- Only one backbone model is considered. Experiments across model families and model sizes should be considered to show the generalization ability of the proposed methods.\n- The OOD test is actually the harder in-domain test. I am curious about the across domain effect of the proposed method For example, how's the effect of training on \"Analog of Algebra\" and test on \"Multi-digit Addition\", given that the skip ability should be a general ability across different domains?\n- In methodology: \"We begin with a training dataset D0, which contains detailed full-step reasoning answers to the questions.\" -> How the full-step reasoning data created?\n\nsee weakness"
},
{
"confidence": 4,
"rating": 6,
"review_id": "6SVQFuC9zW",
"review_text": "This paper proposes an iterative training method that helps sequence models learn to skip steps. The method starts from a training set with full-length solutions or mixed with some skipped-length solutions. At each stage a model learns these solutions with the instruction “Solve it in n steps” and is prompted to generate shorter answers. Correct shorter answers are added to the training set. The effect of this approach is tested using LlaMa-7B on three tasks, including algebraic evaluations, multi-digit addition, and a direction inference task.\n\nThe proposed skip reasoning pipeline is interesting and was evaluated against a diverse set of tasks with different levels of OOD generalization tests.\n\nThe authors conducted detailed analyses to understand the effect of the training pipeline, e.g., Figure 5 with the multi-digit addition is very informative.\n\nThe overall presentation is very clear and easy to follow.\n\nMy main concern is the generalizability of this method. As shown in the paper, the model largely benefited from the warm start setup that includes some skipped problems in the first training set, and has trouble generalizing to problems requiring more steps. One interesting generalization test would be to train on a mixture of all three tasks, but withhold adding skipped step instances for one task, and see if the model can generalize skipping steps on the withheld task.\n\nThe shorter answers generated during the iterative process also don't seem quite “generated by the model itself”, as filtering out correct answers would require oracle knowledge. Is it assumed that correct answers are any exact subset of the full-length solution?\n\nThe accuracy metric measures final answer accuracy, are intermediate steps correct?\n\nI'm unsure if it makes sense to read too much into the average step metric when accuracy is low.\n\nIn figure 4, what does the accuracy look like for only problems where the model skipped steps?\n\nWhat do you think make multi-digit addition and directional inference more difficult than the algebra task? Especially that accuracy and average step for OOD problems are still pretty bad for multi-digit addition even with the warm start and a few iterations in.\n\nWhat is the change in the ratio between D_init and D_skipped over the iteration process?\n\nWhat’s the range of i (in n-i) of the added skip-step instances in D_skipped under warm start?\n\nLine 221 has incorrect figure reference."
},
{
"confidence": 4,
"rating": 4,
"review_id": "KDZXiDtT1B",
"review_text": "This paper proposes to teach LLMs to deliberately skip steps when doing complex tasks involving multi-step reasoning. The authors use self-generated inference paths with fewer steps to fine-tune the models, which is similar to self-distillation. The authors conduct experiments on a few controlled tasks show that the proposed approach can effectively reduces the reasoning steps while maintain performance.\n\n1. The idea of teaching LLMs to skip steps following the human reasoning process is intuitive and makes sense.\n2. The proposed method is overall technically sound and well described.\n3. The paper is in general well-written and easy to follow.\n4. Experimental results confirm the effectiveness of the proposed approach, at least on these \"artificial\" tasks.\n\n1. The experiments are not solid because the tasks considered in the experiments are very artificial and not representative for real-world reasoning tasks. The paper could be made much stronger by conducting tasks/datasets such as GSM8K/MATH or coding tasks, instead of simple reasoning tasks. Without the empirical study on realistic tasks, it is hard to confirm the contribution and usefulness of the proposed metric.\n\nN/A"
},
{
"confidence": 4,
"rating": 7,
"review_id": "oziVAmlX9d",
"review_text": "The paper proposes a method for training an LLM to solve reasoning problems using fewer verbalized reasoning steps than it is naturally encouraged to by a fixed training dataset. The resulting model is shown to maintian or improve performance on in-distribution data and OOD data testing extrapolation w.r.t. length or compositionality, while using fewer reasoning steps at inference time. Analysis shows that performance gains are concentrated around problems requiring an intermediate number of reasoning steps, rather than very few reasoning steps. Experiments are conducted with Llama-2-7b on three synthetic datasets. The method itself works by using warm-start data with mixed length reasoning demonstrations, followed by bootstrapped training data created by controlling model generations with control codes (instructions) combined with filtering model generations for correctness to create new gold data.\n\n- Very important: The idea of shortening reasoning steps, particularly to mimic human reasoning that is variable in its verbalized length, is a very interesting and practical direction.\n- Very important: Results are positive and promising for model generalization at an increased level of efficiency. Particularly interesting are results suggesting that model performance can increase on difficult OOD data by virtue of skipping some reasoning steps. Initially, CoT was found to improve OOD generalization, but it seems that this iteration on CoT could improve OOD performance even more in some situations.\n- Important: The paper is overall straightforward to read and understand, with only a few exceptions.\n- Of some importance: The connection to easy-to-hard generalization was interesting to me. That this method could improve OOD performance, specifically length/compositional generalization, was very interesting.\n\n- Important: What are the instructions at inference time? Do you require a ground-truth number of reasoning steps to run the model at inference time? If so, this important detail is missing from the paper and could make the method difficult to use in practice for problems if it is not know how difficult they are in advance. Would the method be robust to misspecified instructions at inference time?\n- Important: I find it a little confusing to reconcile the results of Sec. 5.1 with Sec. 5.2. Sec. 5.1 makes it look like using fewer steps greatly hurts model performance, while Sec. 5.2 makes it seem like using fewer steps does not hurt performance (specifically the Warm start rows, relative to Cold start baselines).\n- Of some importance: The data is a little artifical. There are existing reasoning and compositional reasoning benchmarks that could be appropriate for this work (though they could require stronger models), including SCAN (https://arxiv.org/abs/1711.00350), GSM8k, StrategyQA, and MATH datasets. However, this is not a major weakness as using clean, controlled datasets is advantageous for studying these kinds of phenomena and they enable automatic construction of warm start data.\n\n- Why keep the cold start data in the training data if the bootstrapped data is good or better? Do you have ablations that suggest what mixture of the data is best?\n- Suggested experiment: if you could have two models that are similar except for one being better at long-context reasoning, it would be interesting to see how your method affects each model. 
The reason for this is that compressing reasoning length could be beneficial by virtue of reducing the context length, rather than some other inherent benefit like allowing the model to spend more computation on harder steps. Such an experiment would help disambiguate if the improvement comes from shortening context length or from using fewer steps.\n- Note L.68-69 is heavily disputed by follow work on ToM, e.g. https://arxiv.org/pdf/2310.19619\n- Just so you’re aware, some highly related work has appeared contemporaneously: (1) https://arxiv.org/pdf/2405.14838, (2) https://arxiv.org/pdf/2407.06023\n- L.34: use an em-dash rather than single dash here\n- L.221: Fig7(a) should read Fig4(a)"
}
] |
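The reviews above converge on the same picture of the training loop: fine-tune on a mixture of full-length and skipped solutions, prompt the model with a "solve it in k steps" style instruction to produce shorter derivations, keep only the shorter answers that are verified correct, and fold them back into the training set. The sketch below captures that loop at a schematic level; the three callables (fine-tuning, controlled generation, and answer checking) and the one-step-at-a-time shortening schedule are placeholders and simplifications, not the paper's implementation.

```python
def step_skipping_rounds(dataset, train_fn, generate_fn, is_correct, rounds=3):
    """Iterative step-skipping loop as described in the reviews.

    dataset: list of (question, reasoning, n_steps) triples.
    generate_fn(model, question, k) should condition generation on a
    "solve it in k steps" style instruction.
    """
    model = None
    for _ in range(rounds):
        model = train_fn(dataset)                                  # fine-tune on current mixture
        shorter = []
        for question, _, n_steps in dataset:
            if n_steps <= 1:
                continue
            candidate = generate_fn(model, question, n_steps - 1)  # ask for one fewer step
            if is_correct(question, candidate):                    # keep only verified answers
                shorter.append((question, candidate, n_steps - 1))
        dataset = dataset + shorter                                # expanded mixed-length data
    return model, dataset
```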
w3JCTBRduf | Optimization Can Learn Johnson Lindenstrauss Embeddings | Embeddings play a pivotal role across various disciplines, offering compact representations of complex data structures. Randomized methods like Johnson-Lindenstrauss (JL) provide state-of-the-art and essentially unimprovable theoretical guarantees for achieving such representations. These guarantees are worst-case and in particular, neither the analysis, ${\textit{nor the algorithm}}$, takes into account any potential structural information of the data. The natural question is: must we randomize? Could we instead use an optimization-based approach, working directly with the data? A first answer is no: as we show, the distance-preserving objective of JL has a non-convex landscape over the space of projection matrices, with many bad stationary points. But this is not the final answer.
We present a novel method motivated by diffusion models, that circumvents this fundamental challenge: rather than performing optimization directly over the space of projection matrices, we use optimization over the larger space of $\textit{random solution samplers}$, gradually reducing the variance of the sampler. We show that by moving through this larger space, our objective converges to a deterministic (zero variance) solution, avoiding bad stationary points.
This method can also be seen as an optimization-based derandomization approach, and is an idea and method that we believe can be applied to many other problems. | https://openreview.net/pdf/e71a9aee432154e3433f241defc8602cd50276dd.pdf | [
{
"confidence": 3,
"rating": 8,
"review_id": "5E87bXBiG0",
"review_text": "This work shows that a deterministic optimization procedure can find a matrix $A$ that satisfies the Johnson Lindenstrauss guarantee. That is, a matrix $A$ maps a set of $n$ vectors to a lower dimensional space while preserving all pairwise distances up to some chosen multiplicative distortion. Typically, $A$ is constructed by sampling it from a random matrix distribution with i.i.d. entries. The authors prove that attempting to directly optimize the entries of $A$ through an optimization procedure by minimizing the maximum distortion is prone to being stuck at local minima. However, the authors show that by optimizing the mean of each entry and the entry-wise variance of the distribution $A$ is sampled from, one can maintain a fixed probability of $A$ being a JL-embedding while at the same time guaranteeing the entry-wise variance ‘sigma’ goes to zero. They then show that, when ‘sigma’ is sufficiently small, one may use the optimized expectation of $A$ as the embedding matrix while only slight increasing the maximum distortion, thereby deterministically finding the desired JL embedding matrix $A$. They show that $\\rho$-SOSPs (second order stationary points) have sufficiently low variance when $\\rho$ is small, and finally show that a method for finding $\\rho$-SOSPs suffices to solve the designed optimization problem.\n\nOverall, the paper is clearly written and well-motivated. The intuition of the approach and analysis is easy to follow.\n\nThe key idea of optimizing the parameters of a random matrix distribution to preserve the JL-embedding property while reducing the entry-wise variance seems like an innovative approach. The authors point out the original space of matrices is contained in this larger probabilistic space, since a deterministic matrix $A$ is equivalent to having mean $A$ and zero variance. Hence, this can be seen as a probabilistic relaxation of the original matrix optimization problem. I have not seen this type of relaxation used in the field of matrix sketching or more generally randomized numerical linear algebra before, and I believe it may be useful for other problems in the area. I am not very familiar with diffusion models, so I cannot speak on the novelty of the approach regarding that area.\n\nThe empirical results are also strong in the sense that they show this procedure for constructing a JL embedding tends to achieve a much lower distortion factor than randomized constructions for a fixed dimension.\n\nThe iterative method to find the matrix $A$ takes $\\operatorname{poly}(n, k, d)$ steps, i.e., the complexity is proven to be polynomial but not explicitly determined. Since the paper is primarily theoretical with only limited experiments, it is unclear how efficient this method is in practice.\n\nWhile the results seem very interesting theoretically, the paper could be strengthened by pointing out some practical applications where this improved deterministic JL embedding would be useful. In the applications I am familiar with, oblivious JL embeddings are needed due to the large number of points in the high-dimensional space (e.g., preserving k-means loss). The authors point to embeddings in deep learning as motivation. It is unclear to me as to how the authors expect progress in understanding deterministic JL embeddings to relate to these embeddings in deep learning. 
Additional clarification of this point would be helpful.\n\nIn the conclusion, you mention the potential for this approach in applications beyond the Johnson Lindenstrauss setting. In your approach for the JL setting, you upper bound the failure probability of the distortion guarantee via the union bound in eqn. (3). This formulation of the objective function seems difficult to translate to other sketching guarantees on $A$ (e.g., projection cost preservation, L2-subspace embedding, affine embedding). Is there any intuitive reason why it may be possible to formulate a relaxed differentiable objective function when the embedding guarantee must hold over an uncountable number of points?\n\nHow does learning a JL embedding relate to learning embeddings for application areas discussed in the introduction? In particular, how do you see the results of this paper affecting that line of work? As mentioned above, I think it would be helpful to expand on the link between your result and the motivation of deep learning embeddings given in the intro."
},
{
"confidence": 3,
"rating": 5,
"review_id": "hSRlBtrfZY",
"review_text": "The paper proposes to calculate the embedding matrices used in the statement of the Johnson-Lindenstrauss lemma using optimization instead of randomization. The proposed algorithm is a Hessian descent. Authors prove that the algorithm finds the matrix of minimum distortion. Numerical results display the findings\n\nJL is a well celebrated result used for proving existence of optimal (low distortion) embeddings. It is stated in the formal result of JL that such embeddings can be found in polynomial time. But we often rely on randomization to exhibit them. It is useful to have an algorithm to calculate the embeddings. The paper tackles a well motivated problem and their presentation is clear and clean.\n\nThe paper lacks complexity analysis of the algorithm. The algorithm proposed requires a full eigenvalue decomposition at every step. It is prohibitive to use this method in any practical scenario. A discussion on the complexity and how to scale the method up (using randomized methods??) would be nice.\n\nThe paper's main claim is that mean distortion embeddings are computationally well studied: spectral methods (SVD / eigenvalue) methods calculate those. Authors claim that the min (instead of mean) distortion method is what they want to find. Can authors explain why the relaxation of f* to f using the probability bound in Eq (2) and invoking union bound to obtain (3) does not reduce the max to a sum. The entire promise of the method is to work with the max directly to minimize distortion. It appears that relaxation of max to a sum using the union bound drops the nice property which was the primary motivation of the work. Can you explain what I am missing of misinterpreting?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "8JIVTvYSea",
"review_text": "The paper considers using the optimization method to \"learn\" the Johnson Lindenstrauss transform. The paper first shows that the naive objective may not be good enough -- there are stationary points that are sub-optimal. Instead, they consider the way that optimize the random Gaussian space rather than the concrete matrix. Then the authors give an optimization method and show that using this way, every stationary point is a good point and claim that this gives way to deterministic learns the JL matrix. Finally, the paper gives an experiment that shows the advantages of the proposed method.\n\nThe theoretical analysis of this paper is very interesting. To my knowledge, there are very few results about analyzing the landscape of the learned sketching matrix and this paper gives a strong analysis. The experiments also show the advantages of the proposed method. The presentation of the paper is also good.\n\nI am still confused about some parts of the paper. I can raise the score if the authors can adjust this. (see the below question)\n\n1. I am still a little confused about the main conclusion of the results of this paper. That is -- it gives a deterministic construction of the JL lemma, or it gives a better optimization way and it works well empirically? (as the author mentioned, the bound of the JL lemma can not be improved)\n \n2. The equation (4) is about probability, and B.1 says they use Monte Carlo sampling, however, would it means that the proposed method still contains some randomness part? \n\n3. In the experiment section, the paper compares the proposed method with the JL lemma. It will make this stronger if the comparison with equation (1) is also given."
},
{
"confidence": 3,
"rating": 7,
"review_id": "vf2tLWSKRq",
"review_text": "This paper investigates the problem of using optimization-based approaches to learn Johnson Lindenstrauss(JL) embedding. The authors proposed a new framework to achieve the JL guarantee via optimization, instead of the traditional randomized methods. Similar with diffusion models, the authors proposed a novel method that uses optimization in an extended space of random Gaussian solution samplers, which circumvents direct optimization in non-convex landscape. Their approach uses second-order descent, gradually reduces the variance without increasing the expected distortion of the sampler, then can identify a specific projection matrix with the Gaussian distribution. Overall, theoretical guarantees and empirical results demonstrate that this method efficiently achieves a deterministic solution that satisfies the JL guarantee.\n\nThe paper is well-written. The state of the art is well discussed by an extensive literature review. The proposed method combining optimized-based approaches and Johnson Lindenstrauss embeddings is an innovative contribution to the field. \n\nThe paper is technically sound, provides rigorous theoretical analysis and proofs.\n\nIt would be helpful to understand the main results if section 4 could be more organized, such as using subsections.\n\n1. Could you explain the statement in lines 215-216? Are the values of 1/(3n) and 1/3 derived based on the chosen value of $\\epsilon$?\n\n2. Line 336, there is a typo “appplicability”. \n\n3. Regarding the notation $\\rho$-second-order stationary points($\\rho$-SOSP), the paper uses $\\rho$-second-order stationary points in some sections and uses $\\rho$-SOSP in others."
}
] |
w2L3Ll1jbV | Adversarially Robust Multi-task Representation Learning | We study adversarially robust transfer learning, wherein, given labeled data on multiple (source) tasks, the goal is to train a model with small robust error on a previously unseen (target) task.
In particular, we consider a multi-task representation learning (MTRL) setting, i.e., we assume that the source and target tasks admit a simple (linear) predictor on top of a shared representation (e.g., the final hidden layer of a
deep neural network).
In this general setting, we provide rates on~the excess adversarial (transfer) risk for Lipschitz losses and smooth nonnegative losses.
These rates show that learning a representation using adversarial training on diverse tasks helps protect against inference-time attacks in data-scarce environments.
Additionally, we provide novel rates for the single-task setting. | https://openreview.net/pdf/7fadc60234b4e01a1ccac1ccc252854801c52c8d.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "aLNDiU7aCB",
"review_text": "In this study, the authors explore adversarial multi-task representation learning, where a predictor and feature extractor are trained on multiple source tasks with adversary and then another predictor following the feature extractor is trained on a target task with adversary. They provide bounds on the excess risk under mild assumptions, showing that these bounds decrease with larger training sample sizes for individual source tasks $n$, the total sample size of all source tasks $nt$, and the sample size of the target task $m$. These results suggest that large sample sizes and diverse source tasks contribute to robust learning in adversarial transfer learning. Additionally, the input and feature dimensions increase these bounds. The excess risk decreases more rapidly when using smooth and non-negative losses compared to Lipschitz losses from a sample size perspective. Furthermore, based on the multi-task results, the authors consider excess risk in a single-task setting.\n\nThe authors first derive Theorem 1 for Lipschitz loss and Theorem 4 for non-negative losses based on [38]. Rather than directly addressing the adversarial loss class, they consider the inflation of the sample space $S$ by an adversarial attack $A$, examining its coverage by balls and standard volume arguments. These results bound the Rademacher complexities of function classes in source and target tasks with adversary (Theorems 2 and 5).\n\nAdditionally, a new reduction method from a multi-task setting to a single-task setting (Theorem 3) may aid future work in both adversarial and non-adversarial settings.\n\nThe problem settings and assumptions regarding data distribution (Lines 140--145), function properties (Assumptions 1--4), and the fat-shattering dimension and size of the inflated dataset (Theorems 2 and 5) are mild. The derived results, such as the order in terms of sample size and input or representation dimensions, seem appropriate. The bounds are interpretable and offer important insights for adversarial transfer learning: diverse source tasks and sample sizes facilitate robust transfer learning.\n\nMany prior studies emphasize the importance of sample complexity in adversarial training. However, obtaining sufficient training samples for a target task is not always feasible. This study theoretically provides valuable guidance for such situations from the perspective of transfer learning.\n\nMoreover, the derived upper bounds have the same order (growth rate concerning dimensions and sample sizes) as prior work on the non-adversarial setting [38]. This indicates that even in adversarial training, it is sufficient to prepare training samples similarly to standard training, ignoring constant and logarithmic terms, which is a positive outcome for the community.\n\nOne might (easily) predict this result from [38]. Under Assumption 4, the sample complexity of the perturbed dataset can be regarded as the finitely scaled sample complexity of the original dataset (as the authors exploited this concept). From the perspective of covering number and Dudley's integral, this leads only to logarithmic differences in orders. It might not be very difficult to conclude that the same order controls the bounds of the excess risk even in adversarial transfer learning as in standard transfer learning. 
Nonetheless, I acknowledge the authors' effort in providing a formal proof, even if the results are predictable.\n\nThe looseness of the bound is also a weakness, though it is a natural property of Rademacher complexity-based bounds. For example, the bound in Theorem 1 includes two worst-case Rademacher complexities $\\hat{R}(\\ldots, n)$ and $\\hat{R}(\\ldots, m)$, and $\\sup_h R(\\ldots)$ (the worst-case in terms of the hypothesis class of representation). This looseness may be due to the mild assumptions. Tighter bounds for more restrictive cases might enhance the interpretability of the derived bounds.\n\nThe authors assume each source task has a common sample size $n$. If each source task has a different sample size, which affects the first term of the bounds: the maximum or the average sample size?\n\nMinor comments:\n- In Lines 46 and 47, there is unnecessary space.\n- In the equation under Line 47, $\\nu$ and $\\epsilon$ are still not defined.\n- Eq. (2) (and (6) in the Appendix) misses $(x_1), \\ldots, (x_t)$. Additionally, $g \\in G$ should be $q \\in Q$.\n- Eq. (3) and Line 323 might not need $\\sup$."
},
{
"confidence": 2,
"rating": 6,
"review_id": "6G942amfPM",
"review_text": "This paper conducts theoretical studies on adversarially robust transfer learning, which is to learn a model with small robust error on a downstream (target) task from a model pretrained on multiple other (source) tasks. Considering the specific multi-task representation learning (MTRL) setting, this paper provides rates on the excess adversarial (transfer) risk for Lipschitz losses and smooth non-negative losses, showing that a representation derived from adversarial pretraining can assist in defending against adversaries on downstream tasks.\n\n1. This paper theoretically shows bounds on the excess transfer risk for the adversarial loss class for both Lipschitz losses and smooth nonnegative losses, demonstrating the benefits of adversarial pretraining on source tasks for downstream tasks in transfer learning.\n\n1. The proposed theoretical results are interesting, but empirical experiments are missing to support the presented theories, such as the benefits of adversarial pertaining to downstream tasks and that it takes fewer samples to learn a good predictor for downstream(target) tasks with adversarially robust representations learned from related source tasks,\n2. As the paper introduces some additional empirical assumptions, such as assumption 4 which requires adversarial attack functions to be bounded within the known input domain, some practical examples or empirical experiments will be helpful to justify it.\n\n1. What attacks are applicable to this work? $\\|\\cdot\\|_2$ attack, $\\|\\cdot\\|_1$ attack, or $\\|\\cdot\\| {\\\\infty}$ attack?\n1. What is $g, \\mathcal G$ in equation 2? (line 173)"
},
{
"confidence": 3,
"rating": 6,
"review_id": "XiDqqErHpq",
"review_text": "The paper studies adversarially robust transfer learning, wherein, given labeled data on multiple (source) tasks, the goal is to train a model with small robust error on a previously unseen (target) task. The paper considers a multi-task representation learning (MTRL) setting, i.e., assuming that the source and target tasks admit a simple (linear) predictor on top of a shared representation (e.g., the final hidden layer of a deep neural network). The paper provides rates on the excess adversarial (transfer) risk for Lipschitz losses and smooth nonnegative losses. These rates show that learning a representation using adversarial training on diverse tasks helps protect against inference-time attacks in data-scarce environments.\n\nThe paper has good originality, quality, clarity, and of important significance.\n\nNo experiments are provided.\n\n1.What's the experimential results of the proposed theory?\n2.In line 155, as for the proposed Two-stage adversarial MTRL, I have a question wonder whether it's better to optimize a two-stage optimization than one-stage optimization?\n3.Are Lipschitz losses and smooth nonnegative losses necessary for adversarial transfer?\n4.Are different datasets effect the results of adversarial transfer?\n5.Are the claim that representation derived from adversarial training assist in defending against adversaries on downstream tasks in different adversarial attacks?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "CBjB3UXoOW",
"review_text": "This work studies the adversarially robust multi-task representation learning. They introduce the definition of robust $(\\nu, \\epsilon, \\mathcal{A})$-task diversity and the algorithm of two-stage adversarial MTRL. Using these, they show novel results on excess transfer risk for adversarial loss under mild conditions. The authors then present the proof sketches and compare the results with previous work in detail.\n\n1. It is a valuable work to study adversarially robust multi-task representation learning. The notations, definitions, and assumptions are all clearly written, which makes it easy to understand.\n2. The algorithm 1 is reasonable in practice for me. the authors discuss the novel assumption to show that it is also reasonable. Most of the assumptions in this paper seem to be mild. \n3. The authors carefully discuss the differences of results shown in this work and related works. They also compare the techniques used in this work and previous works. It is clear to understand the contribution of this work.\n\n1. The proofs shown in section F.1 are not clear. The authors do not show the formal proofs of these theoretical results.\n2. The authors introduce the definition of the vector-valued fat-shattering dimension, which is a generalization of the fat-shattering dimension, while it does not seem to appear in the theoretical results, which makes it confusing.\n\n1. Although the authors include a section to discuss the difference in results and techniques used in this work and prior works. It is still not clear whether there is a major difference between the **proof techniques** of your work and that of [26] since the results shown in these two works are similar. If so, what are the main differences and difficulties? \n\n2. In the definition of $\\mathcal{A}$, it looks like any function $A \\in \\mathcal{A}$ maps all inputs $\\mathcal{x}$ to $\\mathbb{x} + \\delta$ with the same $\\delta$. Does it correct? If so, it is a weaker version of the regular adversarial attack."
}
] |
w28i9oe9Xr | High Rank Path Development: an approach to learning the filtration of stochastic processes | Since the weak convergence for stochastic processes does not account for the growth of information over time which is represented by the underlying filtration, a slightly erroneous stochastic model in weak topology may cause huge loss in multi-periods decision making problems. To address such discontinuities, Aldous introduced the extended weak convergence, which can fully characterise all essential properties, including the filtration, of stochastic processes; however, it was considered to be hard to find efficient numerical implementations. In this paper, we introduce a novel metric called High Rank PCF Distance (HRPCFD) for extended weak convergence based on the high rank path development method from rough path theory, which also defines the characteristic function for measure-valued processes. We then show that such HRPCFD admits many favourable analytic properties which allows us to design an efficient algorithm for training HRPCFD from data and construct the HRPCF-GAN by using HRPCFD as the discriminator for conditional time series generation. Our numerical experiments on both hypothesis testing and generative modelling validate the out-performance of our approach compared with several state-of-the-art methods, highlighting its potential in broad applications of synthetic time series generation and in addressing classic financial and economic challenges, such as optimal stopping or utility maximisation problems. Code is available at https://github.com/DeepIntoStreams/High-Rank-PCF-GAN.git. | https://openreview.net/pdf/eb7aecacaf5d45f5ac40f8a3fe78d6f3122cb6e7.pdf | [
{
"confidence": 1,
"rating": 6,
"review_id": "UdDPBUIv3V",
"review_text": "The paper addresses the issue of weak convergence of stochastic processes, whereby evolving information is generally unaccounted for. This can lead to discontinuities when applying these processes to multi-period decision-making problems. Prior work has proposed the concept of extended weak convergence, as introduced by Aldous (1981), but practical numerical implementations have been challenging. To address this, the authors introduce a novel metric called High Rank PCF Distance (HRPCFD) which is shown to overcome computational issues encountered in previous attempts. The paper then demonstrates the utility of HRPCFD via experiments on hypothesis testing and generative modelling of time series data.\n\nUnfortunately this paper lies well outside my area of expertise and I am unable to review it effectively. The mathematical framework around extended weak convergence is not an area I’m familiar with, and I consequently found it challenging to grasp the nuances of the problem statement, the significance of the proposed HRPCFD metric, and the potential implications for applications in finance and economics.\n\nSo as not to negatively impact the paper’s chances of acceptance, I have defaulted to a mid-range score in my review, which reflects my assessment that the paper could nevertheless still be made more accessible for readers who are less familiar with the domain.\n\nSee above.\n\nSee above."
},
{
"confidence": 4,
"rating": 9,
"review_id": "y6X3P20qa5",
"review_text": "Time series is ubiquitous in machine learning. They are modeled as stochastic processes and therefore notions of distance between stochastic processes and more generally convergence of stochastic processes are fundamental ideas. Weak convergence of probability measures occupies a central position in this area, but for many settings, like optimal stopping, or the one studied in this paper, it is not sufficient. Extended weak convergence, defined via weak convergence of prediction processes, is the right notion. This paper introduces a metric on the space of filtered processes that metrizes the topology of extended weak convergence, proposes statistical procedures to compute it, and then tests these ideas on GANs for time series.\n\nThis is a paper on a very nice topic and I learnt a lot while reading it. Some of the ideas are very abstract and the paper does a good job of organizing the topics and defining everything precisely so that a meticulous reader is not left confused. It is welcoming to see a rigorous paper in ML conferences. The ideas introduced in the paper are novel and the proofs of all the claim are carefully done, although I can't claim to have read every section in appendix in detail.\n\nAlthough as mentioned above the paper defines everything clearly, the exposition on PCF and HRPCF could be improved. It took me quite some time after re-reading the paper multiple times and some other referenced paper to develop an intuition for these concepts, even though I have some background on probabilistic notions like weak convergence and extended weak convergence. I understand this is difficult to do well in a conference paper with page limits, but I think having a more detailed appendix on PCF and HRPCF would help.\n\nI should mention here that I didn't find the experiments super convincing, but I am viewing this paper as a theoretical contribution, and thus any experiments it has as an added bonus and not a weakness.\n\nNone"
},
{
"confidence": 4,
"rating": 7,
"review_id": "2p2hj9POQQ",
"review_text": "The paper constructs a computationally-implementable metric which metrizes an \"extended\" weak convergence for stochastic processes, which more plausibly accounts for the convergence of the process with respect to their filtrations.\nThe result can apparently more effectively account for similarities between controlled processes, at least in the class of linearly-interpolated stochastic paths considered here.\n\nThe method seems to construct high-rank analogues of classic tools, such as the characteristic function, such that the prediction processes arising from projection onto the filtration at each moment can be quantified and divergence between them metrized, without density evaluations, using empirical measures.\n\nThe classical SDE reasoning seems sound. I confess that I am less familiar with the rough path theory component, but there are no obvious red flags in the material.\n\nThe paper is long;\nThe results seem to be an improvement both theoretically and empirically over the main antecedents\n\n* [18] Hang Lou, Siran Li, and Hao Ni. PCF-GAN: generating sequential data via the characteristic function of measures on the path space. Advances in Neural Information Processing Systems, 36, 2023. \n* [19] Cristopher Salvi, Thomas Cass, James Foster, Terry Lyons, and Weixin Yang. The signature 392 kernel is the solution of a goursat pde. SIAM Journal on Mathematics of Data Science, 3(3):873–899, January 2021.\n* [20] Cristopher Salvi, Maud Lemercier, Chong Liu, Blanka Hovarth, Theodoros Damoulas, and Terry Lyons. Higher order kernel mean embeddings to capture filtrations of stochastic processes. Advances in Neural Information Processing Systems, 34:16635–16647, 2021\n\nBut it is not clear whether the increment is \"important\" in practice; Is the increased performance \"worth\" the implementation effort and/or computational cost? The answer is probably problem-dependent.\n\nl101: The prediction process seems to be introduced on a fixed finite set of times — $I = {0, \\dots, T }$ and $X = (X_t)_{t\\in I} $ — and yet we are concerned with continuously-indexed processes, so I would more naturally assume I is the interval $I=[0,T]$, and in fact we discuss linear interpolation in l109. Is this a notational confusion? What is the index $t$ of the filtration $\\mathcal{F}_t$?\n\nTitle: I'm not sure about the grammar of the paper's title. \"High Rank Path Development: an approach of learning the filtration of stochastic processes\" -> \"High Rank Path Development: an approach _to_ learning the filtration of stochastic processes\"?"
},
{
"confidence": 4,
"rating": 4,
"review_id": "S54pQLBAQT",
"review_text": "This paper proposes High Rank Path method, motivated by the extended convergence notion and the rough path theory, to generate (conditioned) time-series data. A new metric HRPCFD is introduced, and experiments are conducted for Brownian motion, GANs with applications in finance.\n\nThe paper is rigorously written, which introduces a new metric HRPCFD on the path-valued processes based on various ideas from probability theory -- extended convergence, rough path, signature... I checked most proofs, and they are correct.\n\nWeakness and comments:\n\n(1) The paper may be too heavy for the Neurips audience (though I enjoyed reading it). It seems to be more suitable for a rigorous mathematical or statistical journal (e.g., Annals of Statistics). \n\n(2) Many proofs of the results (e.g., Thm 3.3) are purely measure-theoretical, and I think the authors may shrink some proofs to keep the idea concise. \n\n(3) The authors may want to explain why the proposed HRPCFD outperforms others (e.g., signature...) Is there any possible theoretical guarantee?\n\n(4) The authors may have a discussion on the computational efforts of the proposed method (e.g., computational complexity and running time). The path-space optimization (or signature-type methods) often suffer from computational efficiency.\n\nSee weakness."
}
] |
vymkuBMLlh | Conditional Generative Models are Sufficient to Sample from Any Causal Effect Estimand | Causal inference from observational data plays critical role in many applications in trustworthy machine learning.
While sound and complete algorithms exist to compute causal effects, many of them assume access to conditional likelihoods,
which is difficult to estimate for high-dimensional (particularly image) data. Researchers have alleviated this issue by simulating causal relations with neural models. However, when we have high-dimensional variables in the causal graph along with some unobserved confounders, no existing work can effectively sample from the un/conditional interventional distributions. In this work, we show how to sample from any identifiable interventional distribution given an arbitrary causal graph through a sequence of push-forward computations of conditional generative models, such as diffusion models. Our proposed algorithm follows the recursive steps of the existing likelihood-based identification algorithms to train a set of feed-forward models, and connect them in a specific way to sample from the desired distribution. We conduct experiments on a Colored MNIST dataset having both the treatment ($X$) and the target variables ($Y$) as images and sample from $P(y|do(x))$. Our algorithm also enables us to conduct a causal analysis to evaluate spurious correlations among input features of generative models pre-trained on the CelebA dataset. Finally, we generate high-dimensional interventional samples from the MIMIC-CXR dataset involving text and image variables. | https://openreview.net/pdf/d7a91169682e7030f7d0115904a50cf697b82461.pdf | [
{
"confidence": 2,
"rating": 5,
"review_id": "lAhdmlRK5x",
"review_text": "The paper leverages state-of-the-art conditional generative models and algorithms from causal do calculus to perform \"approximately correct\" high-dimensional interventional sampling. Their contribution is ID-GEN, a recursive algorithm that uses diffusion models (among other generative models) to sample from any identifiable causal effect estimand in high-dimensional settings. The efficacy of the method is demonstrated on three diverse datasets: Colored MNIST, CelebA, and MIMIC-CXR.\n\n- The paper introduces ID-GEN, a novel algorithm that integrates diffusion models with causal inference for high-dimensional interventional sampling.\n- ID-GEN creatively exploits the causal structure via the known recursive algorithm for sampling in complex causal scenarios, particularly with latent confounders.\n- The algorithm is applied in three applications across diverse datasets (Colored MNIST, CelebA, MIMIC-CXR).\n\n- The paper is objectively hard to read. Many important graphical elements that should be paired with the text in the main paper are delayed to supplements. This is notably problematic for Example 4.1, where one would expect the full example to be self-contained in the main text. \n- The contributions are stated in the introduction, but it still seems hard to understand if the proposed method is \"just\" an implementation of the ID algorithm, replacing probability tables by samples from diffusion models. I appreciate that this is hard already, but it has much lower novelty then proposing a new recursive algorithm. This should be well explained in the manuscript. \n- The paper does not discuss the implications of the proposed algorithm. Is there any way to extend this to symbolic calculus, or to probabilistic programming? What are the obstacles for moving towards automatic causal inference with images (which would be a super exciting prospect).\n\n- Can you clarify whether the proposed ID-GEN algorithm is primarily an implementation of the existing ID algorithm with diffusion models substituting for probability tables, or does it introduce fundamentally new recursive methodologies? How does this distinction affect the perceived novelty of your contribution?\n- What are the potential extensions of your algorithm to areas like symbolic calculus or probabilistic programming?\n- Are there any specific obstacles that need to be overcome to advance towards automatic causal inference using high-dimensional data such as images? How feasible is this prospect in the near future?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "5E1qQuqv9G",
"review_text": "This paper proposes an algorithm for sampling from intervention distributions under known causal models with hidden confounders, using conditional generative models.\nThis allows nonparametric distributional estimation of high-dimensional outcomes such as X-ray image generation, unlike existing methods.\nThe proposed method combines the existing ID algorithm with a generative model and inherits the ID algorithm's theoretical guarantees. That is, non-identifiable quantities are indicated to be non-identifiable, while all identifiable quantities can be estimated.\n\n* They shed light on the new problem setting of causal simulation for outcomes with high-dimensional and multimodal distributions, such as image generation. This could open up new applications if well justified. Such a problem setting requires a very different approach to point estimation of expectations for low-dimensional, unimodal distributions.\n* The theoretical background is solid and well-explained. The method is proved to be able to estimate all identifiable quantities and otherwise outputs \"unidentifiable.\"\n\n[W1] The motivation for high-dimensional distribution estimation is weak. For example, it does not seem very meaningful to me to generate synthetic X-ray images.\n\n[W2] In particular, is it important in cases where there are bidirectional edges due to the presence of hidden confounding factors, but where the causal orientations are all identified among variables? A clear comparison with similar methods would be beneficial for readers, e.g., comparison in assumptions and targets (e.g., parametric/nonparametric, latent confounder, distributional estimation, etc.).\n\n[W3] The base procedure seems to come from existing methods, such as the ID algorithm, and they just combine it with a generative model.\n\n[Q1] Related to W2, is the proposed method a non-trivial novelty in situations where there are no unobserved confounding variables, i.e. no bidirectional edges?\n\n[Q2] Related to W3, are there any non-trivial points in theoretical guarantees when the generative model is combined with the ID algorithm?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "NBUnsljdIS",
"review_text": "This paper studies the problem of sampling from an interventional distribution of high-dimensional data, where the causal relationships are described by a acyclic directed mixed graph (ADMG). Motivated by the ID algorithm that provides a recursive way of identifying any causal effect from a conditional probability table, the authors propose ID-GEN that follows the recursive steps of ID but instead trains deep generative models to fit the conditional distributions. The final sampling model is then obtained by connecting all the trained networks together in some suitable way. The authors prove that ID-GEN is sound and complete, and run extensive experiments on both synthetic and real dataset to demonstrate the effectiveness of their approach.\n\n1. This paper studies the problem of sampling from an interventional distribution, an arguably important problem with broad applications. Current approaches, as the authors point out, are either restricted to simple causal graphs or face computational challenges.\n\n2. Most parts of the paper are clearly written, and sufficient explanations are provided for the key steps in the ID-GEN algorithm. Also, simple examples are provided that help with the understanding of the paper. The paper is also well-organized and the authors put most complicated details into the Appendix.\n\n3. The authors conduct extensive experiments to demonstrate the superior performance of their model, by comparing with other sampling models proposed by previous works.\n\nI don't think this paper has obvious weaknesses. One thing that the authors may wish to improve is that the notations are a litle bit complex; and it would be better to more often remind the authors of their meanings.\n\n1. In some identification formula e.g. Eq.(1) in the paper, the probability on the denominator might be small. To what extent would this affect the stablity of the proposed algorithm.\n\n2. Can your algorithm be straightforwardly adapted to the case of imperfect interventions i.e. some conditional probabilities in the structural causal model are modified, but no causal edges are removed?"
},
{
"confidence": 2,
"rating": 5,
"review_id": "8o7Su9m1hS",
"review_text": "This paper provides an algorithm for sampling from a causal interventional distribution using conditional generative models, building on Shpitser and Pearl's ID algorithm. They discuss how their algorithm, ID-GEN, can sample from any identifiable distribution given a causal graph, and handles the presence of unobserved confounders (when identifiable). Empirically, they demonstrate their method can work for measurement, evaluation and interpretability purposes in the challenging setting where both the target and treatment are high dimensional e.g. images.\n\n- interesting work, the case of high-dimensional variables in a causal graph is a super important and under-discussed one\n- thorough theoretical treatment of the extension of the ID algorithm\n- experiments show a nice range of usages of the suggested ID-GEN approach\n\n- I frankly got a bit lost in a few key parts of Section 3. I got the main ideas (I think) but missed a bunch of nuance. Some spots were: Example 4.1 (I don’t understand why ID fails in this case but ID-GEN succeeds - specifically the importance of merging is a bit lost on me), Step 4 of ID-GEN (again, merging), and Step 7 of ID-GEN (I think the logic around how training data is sampled, used and modified wrt the graph needs to be explained more clearly)\n- Step 1 of ID-GEN confuses me - I don’t see why we can’t just learn a model of P(y) directly in this case? Also the 2nd equality in 203 doesn’t make sense to me - how is the sum over values of v equal to P(y)?\n- in each experimental section I find I have at least a medium-sized point of confusion around the setup or evaluation - more care should be taken to explain empirical setup + results overall\n- In 5.1, the authors state that U_color makes W1 and Y correlated by color - however, X contains color information and is a direct ancestor of Y, so this unobserved confounding seems trivial\n- in 5.1, it seems like a better metric than a classifier (which may be unreliable and as you note isn’t useful for all possible images) would be something based specifically on the RGB values of the pixels themselves\n- in 5.2, I don’t quite see why Young & Male have unobserved confounding - are they not fully determined by the shared + observed parent I_1?\n- in 5.3, I don’t understand why the report is a causal ancestor in the graph - isn’t it generated upon viewing the X-ray?\n- in 5.3, I think the setup with the labeller can be made clearer - how good is this labeller? How is it structured? Additionally, is the bottom row intended to be a success or failure examples? (label says it should be right lung base but all inferences name the left lung)\n\n\nSmaller points:\n- L115: are unobserved confounders only allowed to affect 2 variables in this framework? Is that more limiting than general SCMs?\n\n- would be great to see clarification throughout Section 3, particularly in the highlighted areas and around merging and training data sampling\n- experimental setups all need clarifications\n- generally assuming I understood what's happening better I'd be happy to increase my score"
}
] |
vx4NgdyyVG | Revive Re-weighting in Imbalanced Learning by Density Ratio Estimation | In deep learning, model performance often deteriorates when trained on highly imbalanced datasets, especially when evaluation metrics require robust generalization across underrepresented classes. To address the challenges posed by imbalanced data distributions, this study introduces a novel method utilizing density ratio estimation for dynamic class weight adjustment, termed as Re-weighting with Density Ratio (RDR). Our method adaptively adjusts the importance of each class during training, mitigates overfitting on dominant classes and enhances model adaptability across diverse datasets. Extensive experiments conducted on various large scale benchmark datasets validate the effectiveness of our method. Results demonstrate substantial improvements in generalization capabilities, particularly under severely imbalanced conditions. | https://openreview.net/pdf/44e3160934c3d9637c541a15c8827e17ec5bba0e.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "c2IewU8rY6",
"review_text": "This paper presents a dynamic re-weighting method for imbalanced learning. The author defines the ratio of the balanced data set distribution to the training set distribution, and tries to estimate it with an iterative update method. The effectiveness of this method is proved by experiments.\n\n1.\tThis paper points out a problem with distribution differences, which leads to the potential missing feature patterns in general re-weighting methods.\n2.\tThis paper proposes a new method, which approximates the ratio of the balanced data set distribution to the training set distribution using methods of density ratio estimation. As far as I know, a dynamic re-weighting strategy is novel in this field.\n3.\tThe experimental introduction of this paper is clear, and extensive experiments have been carried out, which validates the effectiveness of the proposed method.\n\n1.\tThe formula derivation in Sec. 3.3 can be more detailed. It is suggested to explain how formula (7) is obtained in the appendix.\n2.\tThe introduction may have overlooked some key articles. For example, the article mentions Wang et al. 's article at the end of Sec.3.3, but does not discuss this paper in the introduction section.\n3.\tDoes the new method enjoy the same theoretical boundaries as the general reweighting method? It is recommended to provide more analysis.\n4.\tBesides, there are some typos in the details:\n - In the experimental section, 'class[390,385] 'may be a typo.\n - In table 3, the interpretation of Tr_{Few} is supposed to be there.\n\nPlease refer to Weaknesses.\n\nBesides, I am also curious about some experimental details. Do the authors use other techniques such as RandAug or mixup?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "I52GUULegQ",
"review_text": "The paper introduces a novel approach called Re-weighting with Density Ratio (RDR) to address the challenges posed by imbalanced data distributions in machine learning. The RDR approach aims to mitigate overfitting on majority classes and enhance adaptability across diverse datasets by continuously updating the weights in response to observed shifts in class density. Extensive experiments on various large-scale, long-tailed datasets demonstrate that the RDR method significantly improves the model's generalization capabilities, particularly under severely imbalanced conditions. The analysis of the weight changes during training reveals that the method increasingly focuses on minority classes as training progresses, initially learning common features across all categories and then targeting learning towards minority samples to enhance generalizability. The paper also provides an ablation study to further validate the effectiveness of the proposed approach.\n\n1. The paper introduces a novel approach called Re-weighting with Density Ratio (RDR) to address the challenges posed by imbalanced data distributions in machine learning. \n\n2. Extensive experiments on various large-scale, long-tailed datasets demonstrate that the RDR method significantly improves the model's generalization capabilities, particularly under severely imbalanced conditions. \n\n3. The analysis of the weight changes during training reveals that the method increasingly focuses on minority classes as training progresses, initially learning common features across all categories and then targeting learning towards minority samples to enhance generalizability. \n\n4. The paper provides an ablation study to further validate the effectiveness of the proposed approach. \n\n5. The results show that RDR generally outperforms other methods, including Inverse Frequency (1/n) and SAM variants, in both the Many and Few classes, indicating that RDR can efficiently address the overfitting issues for Few classes.\n\n- The paper does not provide a detailed theoretical analysis or justification for the proposed Re-weighting with Density Ratio (RDR) method, beyond the intuition that it can mitigate overfitting on majority classes and enhance adaptability across diverse datasets.\n\n- I am interested in how RDR might perform in the presence of extreme imbalance, noisy data, or other challenging scenarios. The current experiment is well-established but dataset itself is relatively simple.\n\n- The paper discusses reweighting/non-reweighting for classification problems. I suggest the authors also briefly discuss reweighting methods in imbalanced regression problems, e.g., VIR [1] for reweighting problems and ConR [2] for non-reweighting problems.\n\n[1] Variational Imbalanced Regression: Fair Uncertainty Quantification via Probabilistic Smoothing, NeurIPS 2023\n\n[2] ConR: Contrastive Regularizer for Deep Imbalanced Regression, ICLR 2024\n\n**Summary** I think the theoretical analysis or at least insights is needed for acceptance, so my suggest score is 5, as the experiment part is excellent in this paper.\n\nsee above"
},
{
"confidence": 3,
"rating": 6,
"review_id": "rfGmNQzVmg",
"review_text": "The paper presents a weighting strategy in order to handle class imbalance. Contrary to existing method, they propose to adapt the weight throughout the training procedure. \n\nTheir method estimates the discrepancy between the sample distribution and the balanced sample distribution for parameterization w and updates the estimate through the training.\n\nThe authors use two resnet architectures to evaluate their contribution on multiple datasets. They also compare to other baselines and show significant gain.\n\n* The paper develops a novel approach for handling class imbalance.\n* The methodology is derived theoretically from the problem formulation\n* The authors propose an analysis of the complexity of the method and empirically evaluate the training time.\n* The methodology is evaluated on multiple datasets and compared to multiple baselines.\n\n* The paper is sometimes difficult to read:\n * Row 125, the authors refer to the distribution of training set, which get parameterized by w. Thus, my understanding is that the authors refer to the distribution of the training set \"captured by the model\". \n * row 134, P_bal = pi P.. P_bal is the distribution of y in the balanced case ? But should therefore be 1/number classes... and P, should just be the class proportion and we should have P = pi P_bal ?\n * LDAM and LA terms are not defined at first. First definition of LA is at row 208\n * row 212 \"trategies\" => strategies\n\n* Could you clarify the term, P, etc. You often refer to it as \"real world data distribution\", but the distribution of x does not depends on any other classes (imbalanced or not) ?"
}
] |
vwgWbCxeAQ | Rethinking Misalignment in Vision-Language Model Adaptation from a Causal Perspective | Foundational Vision-Language models such as CLIP have exhibited impressive generalization in downstream tasks. However, CLIP suffers from a two-level misalignment issue, i.e., task misalignment and data misalignment, when adapting to specific tasks. Soft prompt tuning has mitigated the task misalignment, yet the data misalignment remains a challenge. To analyze the impacts of the data misalignment, we revisit the pre-training and adaptation processes of CLIP and develop a structural causal model. We discover that while we expect to capture task-relevant information for downstream tasks accurately, the task-irrelevant knowledge impacts the prediction results and hampers the modeling of the true relationships between the images and the predicted classes. As task-irrelevant knowledge is unobservable, we leverage the front-door adjustment and propose Causality-Guided Semantic Decoupling and Classification (CDC) to mitigate the interference of task-irrelevant knowledge. Specifically, we decouple semantics contained in the data of downstream tasks and perform classification based on each semantic. Furthermore, we employ the Dempster-Shafer evidence theory to evaluate the uncertainty of each prediction generated by diverse semantics. Experiments conducted in multiple different settings have consistently demonstrated the effectiveness of CDC. | https://openreview.net/pdf/b0f849743c20730b56ef48ad02e259f767f16cf5.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "zKZlg9Dccr",
"review_text": "This paper investigates the two different misalignment issues between CLIP and downstream tasks, i.e., task misalignment and data misalignment. The author designed several experiments that demonstrated that over-fitting occurs when tuning with the learnable prompt. They propose the Causality-Guided Semantic Decoupling and Classification(CDC) method to mitigate the impact of task-irrelevant generative factors on downstream tasks. The extended experiments demonstrate that the proposed CDC method is effective.\n\nThis paper investigates the difficulty of adapting CLIP to downstream tasks via two-level misalignment, which is a bright idea to help the community understand the working mechanism. Then the author provides a comprehensive experiment that reveals how the overfitting occurs and impacts the recognition of new classes. The author uses the perspective of causal inference to alleviate the data misalignment and proposes CDC with front-door adjustment for implementation, which predicts with explicit evidence.\n\n1. There is no obvious evidence to show that the CDC will improve the prediction in certain cases, which category previous wrong and correct with CDC. It would be better if several compared failure cases to demonstrate that the CDC can solve the question at which level and which case still needs more advanced methods.\n 2. It is interesting how the misalignment between CLIP and downstream pattern will change as the model’s capabilities increase, such as ViG-Large, and whether the more powerful model can solve the misalignment problem. Hence, it will be better if some comparisons of models can be added.\n3. As we all know, MLLMs already sweep through the multimodal community, it will be better if expand some discussion about the misalignment in this paradigm and current method whether easily general to that.\n4. How many parameters are tuned during downstream adapting? Compared to the pre-trained model, what is the ratio of tuned parameters?\n\nsee weaknesses"
},
{
"confidence": 3,
"rating": 5,
"review_id": "HEljMeZXAK",
"review_text": "This paper addresses the two-level misalignment (task and data) issue in adapting CLIP to specific tasks. The authors develop a structural causal model to analyze CLIP's pre-training and adaptation processes, revealing how task-irrelevant knowledge interferes with predictions. To mitigate this, they propose Causality-Guided Semantic Decoupling and Classification (CDC), which implements front-door adjustment. CDC includes Visual-Language Dual Semantic Decoupling (VSD) to represent different semantics through multiple prompt templates, and Decoupled Semantic Trusted Classification (DSTC) to perform classification based on each decoupled semantic while estimating uncertainties. Experiments demonstrate CDC's effectiveness in enhancing CLIP's performance across various settings and tasks, addressing the challenge of data misalignment in vision-language model adaptation.\n\n* CDC is well motivated from a causal perspective and has significant technical novelty.\n* Clear writing and well organized.\n* Experiment results show the effectiveness of CDC.\n\n* Figure 1(a) appears to illustrate task misalignment. Consider enhancing the caption of Figure 1 with more detailed explanations to clarify this concept.\n* Regarding data misalignment, it would be beneficial to provide a more precise definition. Does it specifically refer to discrepancies in classes between training and testing processes? It's important to clarify that data misalignment encompasses both label misalignment and distribution misalignment. A brief explanation of each type would improve understanding.\n* In Figure 3, the term \"fuse\" is used. It would be helpful to clarify the meaning and context of this term within the figure.\n* How about accuracy if we directly use zero-shot test for CDC?\n\nSee Weaknesses"
},
{
"confidence": 4,
"rating": 7,
"review_id": "hFqtNMATGP",
"review_text": "This paper investigates the task and data misalignment issues in pre-trained vision-language models such as CLIP. It discovers that the task-irrelevant information significantly affects the prediction of CLIP and soft prompt tuning cannot mitigate the data misalignment issue. The authors propose a novel Causality-Guided Semantic Decoupling and Classification method to mitigate the interference of task-irrelevant information. The experimental results show that the proposed method effectively mitigates the data misalignment and improves the generalization of CLIP.\n\n1. The paper is well-organized. The introduction of the method and the figures are clear and easy to understand. The description of the experiment setting is detailed, which makes the paper reproducible.\n2. The proposed methods to mitigate the task and data misalignment of CLIP are highly-motivated and intuitive.\n3. The authors design and conduct exhaustive experiments to demonstrate the effectiveness of the propose method. The proposed methods provide significant improvements on the generalization of CLIP.\n\n1. In the experiments section, the method is currently adapted solely to the CLIP model. This limitation may not fully demonstrate the model's universality. The authors can adapt the method to various vision-language models with different architectures to showcase broader applicability.\n2. The experiments are exclusively conducted on image classification tasks. The authors can explore adapting vision-language models (VLMs) to a wider range of tasks, such as object detection, image captioning, or visual question answering, to further validate the model's versatility and performance across diverse applications.\n\nIs it feasible to adapt the proposed method to various tasks such as object detection, image captioning, or visual question answering."
}
] |
vvpewjtnvm | Low Precision Local Training is Enough for Federated Learning | Federated Learning (FL) is a prevalent machine learning paradigm designed to address challenges posed by heterogeneous client data while preserving data privacy.
Unlike distributed training, it typically orchestrates resource-constrained edge devices to communicate via a low-bandwidth communication network with a central server. This urges the development of more computation and communication efficient training algorithms. In this paper, we propose an efficient FL paradigm, where the local models in the clients are trained with low-precision operations and communicated with the server in low precision format, while only the model aggregation in the server is performed with high-precision computation. We surprisingly find that high precision models can be recovered from the low precision local models with proper aggregation in the server.
In this way, both the workload in the client-side and the communication cost can be significantly reduced. We theoretically show that our proposed paradigm can converge to the optimal solution as the training goes on, which demonstrates that low precision local training is enough for FL. Our paradigm can be integrated with existing FL algorithms flexibly. Experiments across extensive benchmarks are conducted to showcase the effectiveness of our proposed method. Notably, the models trained by our method with the precision as low as 8 bits are comparable to those from the full precision training. As a by-product, we show that low precision local training can relieve the over-fitting issue in local training, which under heterogeneous client data can cause the client models drift further away from each other and lead to the failure in model aggregation. Code is released at https://github.com/digbangbang/LPT-FL. | https://openreview.net/pdf/aeb8909737e7debabd78bc8d5e4a4a9228bf658a.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "561ttBjztF",
"review_text": "This paper proposes an efficient federated learning (FL) paradigm, where the local models in the clients are trained with low-precision operations and communicated with the server in low precision format, while only the model aggregation in the server is performed with high-precision computation. The performance is comparable to full-precision training, and sometimes even better since the over-fitting issue in local training is relieved.\n\nS1. The idea of applying SWALP's low-precision training within each cycle for local training in FL is meaningful and effective.\nS2. There are theoretical analysis on convergence.\nS3. Experiments are comprehensive and demonstrate the effectiveness of the proposed method.\n\nW1. The biggest concern is regarding the novelty w.r.t. SWALP [40]. The entire Sec 3.2 and Sec 4.1 are almost the same as in [40]. The only difference seems to be Sec 4.2 but still very similar to the idea of SWA, just the setting of aggregation changes from by cycles to by clients, and a moving average of parameters is used.\n\nW2. Writing needs improvement. For example, there is a typo \"sever\" in Line 108.\nEq (5) is different from its counterpart in [40] where the power was F-1 but now W-F-1. please explain the reason of difference.\nLine 135, missing a space before \"i.e.\"\nEq(7) uses E which is not clear until continuing reading to Line 161 and Algorithm 2 Lines 3-4.\nAlgorithm 2 Line 11, t' is only briefly mentioned in Line 161 without even referring to the used lines.\n\nW3. It would be good to estimate the time reduction with the professional hardware (real acceleration).\n\nJustify method novelty against [40] (W1) and answer related questions in W2."
},
{
"confidence": 4,
"rating": 5,
"review_id": "VGilmW8JOz",
"review_text": "The paper proposes a federated learning approach that performs local training on low precision through quantization combined with a high-precision averaging and a moving average at the server. The paper guarantees convergence and empirically compares several levels of low-precision local training to full-precision training on 4 baseline FL methods. It remains unclear, though, what the contribution of the method is: the main focus seems to be on performance in terms of test accuracy, but the experiments do not show a significant improvement over existing methods. The method supposedly improves communication and computation efficiency but is not empirically compared to state-of-the-art methods, such as [1,2,3].\n\nIn their rebuttal, the authors provided novel results that address my concerns about missing baselines. While I remain concerned about the limited novelty and the presentation, I believe that the authors will be able to address these issues to some extent in the next version of the manuscript. Therefore, I have decided to increase my score.\n\n\\\n[1] Liu, Shiyu, et al. \"Quantized SGD in Federated Learning: Communication, Optimization and Generalization.\" International Conference on Neural Information Processing. Singapore: Springer Nature Singapore, 2023.\n\n[2] Kamp, Michael, et al. \"Efficient decentralized deep learning by dynamic model averaging.\" Machine Learning and Knowledge Discovery in Databases: ECML PKDD, 2018.\n\n[3] Reisizadeh, Amirhossein, et al. \"Fedpaq: A communication-efficient federated learning method with periodic averaging and quantization.\" International conference on artificial intelligence and statistics. PMLR, 2020.\n\n- convergence guarantees\n- clear explanation of the method\n\n- the presentation should be improved (e.g., several typos, such ass \"accept\" instead of \"aspect\", the term \"a avg\" for \"with moving average\" is unintuitive)\n- the ablation study does not clearly show that low precision local training improves performance, since it is combined with a moving average that has a strong positive impact on performance.\n- lack of comparison to baselines\n- unclear use-case for the method\n- the paper does not discuss existing federated learning with quantization literature in sufficient detail.\n- for the non-iid experiments it would be interesting how quantization interacts with local BN layers in FedBN [4]\n\n\\\n[4] Li, Xiaoxiao, et al. \"FedBN: Federated Learning on Non-IID Features via Local Batch Normalization.\" International Conference on Learning Representations, 2021.\n\n- It is unclear from the results how much of the benefit stems from quantization and how much from the moving average. Is it correct that the moving average has a very large positive effect regardless of quantization?"
},
{
"confidence": 4,
"rating": 7,
"review_id": "drFSfGb6ae",
"review_text": "The paper studies an FL system with data heterogeneity, a topic has been extensively studied in the past few years. The idea is to perform local training with lower precision through applying block floating point quantization. The idea per se is not new, but proving that the convergence can be achieved using low precision local training is an interesting contribution.\n\nThe paper is well written and is easy to follow. The idea is also interesting but it is not necessarily new. The most important contribution is the theoretical proof for convergence. The evaluation results in Section 6 validate the theoretical results.\n\nThe paper mainly focuses on data heterogeneity in FL systems. What about resource heterogeneity? It would be important to have a discussion (or better some experimental results) on how the proposed solution perform in such setting. Should we use the same quantization level for all clients, or can we adjust the precision according to the resource availability? Also, the current state of the art of quantization in conjunction with Federated Learning is also missing, e.g.:\n\n- FedQNN[1] uses quantized training in FL.\n\n- CoCoFL[2] uses a combination of quantization and freezing for heterogeneous resources in FL.\n\nHow does these SOTA techniques perform compared with the proposed solutions?\n\n[1] Y. Ji and L. Chen, \"FedQNN: A Computation–Communication-Efficient Federated Learning Framework for IoT With Low-Bitwidth Neural Network Quantization,\" in IEEE Internet of Things Journal.\n \n[2] Kilian Pfeiffer, et al. \"CoCoFL: Communication-and Computation-Aware Federated Learning via Partial NN Freezing and Quantization.\" Transactions on Machine Learning Research., 2023\n\nPlease also check my questions in the weakness section."
},
{
"confidence": 4,
"rating": 6,
"review_id": "itKqnXEfUf",
"review_text": "The paper proposes an efficient Federated Learning (FL) paradigm where local models are trained using low-precision operations and communicated with the central server in low precision format. The aggregation on the server, however, is performed with high-precision computation to ensure accuracy. The authors demonstrate that high-precision models can be recovered from low-precision local models with proper aggregation on the server side. This approach significantly reduces the computational load on client devices and the communication cost. The method is theoretically proven to converge to an optimal solution, even with non-IID data distributions, and extensive experiments show that models trained with low precision (as low as 8 bits) are comparable in performance to those trained with full precision.\n\n1. The proposed method reduces the computational and communication overhead for client devices, which is crucial for resource-constrained environments for large models. The paper also provides theoretical guarantees for convergence to the optimal solution, even with non-IID data distributions.\n2. The method is effective on the datasets and models in the experiments where low precision training has little to no impact on utility.\n\n1. The integration of low precision training and high precision aggregation may add complexity to the implementation.The performance improvements are partly dependent on the hardware capabilities, such as the availability of processors supporting low precision operations.\n2. The experiments are limited. Only image datasets are considered. Evaluation on other types of data, like text or tabular, can strengthen the results. \n3. No integration with differential privacy or other privacy protection mechanisms. Federated learning itself is not private and it would be interesting to see what privacy mechanisms are suitable for low precision model updates.\n\n1. How can we determine the optimal precision level for local training without extensive hyperparameter tuning which is expensive in FL?\n2. Have you explored mixed precision strategy? E.g. using different quantization schema for gradients, activations etc."
}
] |
vunJCq9PwU | GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models | Current studies on adversarial robustness mainly focus on aggregating \textit{local} robustness results from a set of data samples to evaluate and rank different models. However, the local statistics may not well represent the true \textit{global} robustness of the underlying unknown data distribution. To address this challenge, this paper makes the first attempt to present a new framework, called \textit{GREAT Score}, for global robustness evaluation of adversarial perturbation using generative models. Formally, GREAT Score carries the physical meaning of a global statistic capturing a mean certified attack-proof perturbation level over all samples drawn from a generative model. For finite-sample evaluation, we also derive a probabilistic guarantee on the sample complexity and the difference between the sample mean and the true mean. GREAT Score has several advantages: (1) Robustness evaluations using GREAT Score are efficient and scalable to large models, by sparing the need of running adversarial attacks. In particular, we show high correlation and significantly reduced computation cost of GREAT Score when compared to the attack-based model ranking on RobustBench \cite{croce2021robustbench}. (2) The use of generative models facilitates the approximation of the unknown data distribution. In our ablation study with different generative adversarial networks (GANs), we observe consistency between global robustness evaluation and the quality of GANs. (3) GREAT Score can be used for remote auditing of privacy-sensitive black-box models, as demonstrated by our robustness evaluation on several online facial recognition services. | https://openreview.net/pdf/9b455fd219afd050f8e2c5821f56718aa6c2426f.pdf | [
{
"confidence": 5,
"rating": 7,
"review_id": "EZ5SXm1SaK",
"review_text": "This paper presents an innovative adversarial robustness measure, which leverages a generative model to produce data samples, record the marginal confidence score as a local statistic and average them over the data distribution. The proposed measure is designed to be efficient, scalable, and potentially applicable to unknown data distributions. Empirical validation is conducted using local models and commercial inference APIs, demonstrating the utility of the robustness evaluation.\n\nThe concept introduced in this study is commendable for its originality, and the metric indeed offers valuable insights into model robustness. Nonetheless, it requires substantial revisions to enhance its clarity, presentation, and justification of claims before it can be accepted.\n\n1. The metric introduced is a pioneering approach for assessing model robustness, characterized by its attack-independence, scalability, and potential applicability to unknown data distributions.\n2. A theoretical analysis is provided, establishing that the metric serves as a lower bound for the probabilistic minimum adversarial perturbation.\n3. The practicality of the proposed measure is supported by experimental validation on commercial black-box APIs.\n4. There is a demonstrated strong correlation between the proposed metric and robust accuracy, suggesting the metric's effectiveness.\n\n1. Presentation issues that may lead to confusion include: \n (1) The second paragraph of introduction lacks a precise definition of adversarial robustness evaluation, which could be problematic for less experienced readers. \n (2) Putting the testing algorithm in the appendix hurts the coherence of the paper. It would be better to include it in the main text. \n (3) Figure 2 requires additional clarification to elucidate how robust accuracy (RA) and the proposed metric are integrated into the same plot. Current discussion is insufficient. \n\n2. The generative model's training requires at least partial knowledge of the data distribution. So the claim that the proposed metric can scale to unknown data distribution needs justification.\n\n3. The metric's performance is contingent on the generative model's capacity to produce benign samples, yet no guarantee is provided that the generative model's ability to do so.\n\n4. The endeavor to train a generative model to produce benign samples should be considered in making it a cost-effective and scalable solution. Maybe this metric can consider online learning to update the generative model. Hope a discussion can be provided on this.\n\n5. The claim regarding the limitation to white-box settings (Page 2, Line 56) is inaccurate, as adversarial accuracy can also be assessed in black-box scenarios, evidenced by the effectiveness of the Square Attack method.\n\n1. The paper omits discussion of several significant works that evaluate model robustness from different perspectives. The authors should consider addressing the following studies in the related works section: \n [1] \"How many perturbations break this model? evaluating robustness beyond adversarial accuracy.\" Olivier, Raphael, and Bhiksha Raj. International Conference on Machine Learning. PMLR, 2023. \n [2] \"Probabilistically robust learning: Balancing average and worst-case performance.\" Robey, Alexander, et al. International Conference on Machine Learning. PMLR, 2022. \n [3] \"Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume.\" Guo, Ping, et al. 
arXiv preprint arXiv:2403.05100 (2024).\n\n2. While the paper presents test results on the original test samples on CIFAR-10 in Table 9, it does not provide results for the ImageNet dataset. The authors should explain this choice and consider including ImageNet results for a more comprehensive comparison."
},
{
"confidence": 3,
"rating": 6,
"review_id": "JtYfeZJDfF",
"review_text": "The authors propose the GREAT score that uses conditional generative models to mimic the data generating distribution. Thereafter, the classification margin on a set of generated images can be used to obtain a global robustness score. For this, the authors make the connection between local and global robustness explicit and show that the classification margin yields a lower bound on the robustness w.r.t. L2 perturbations (convolution with Gaussian). The authors empirically validate their GREAT score on various benchmarks and highlight the strong correlation to other common robustness evaluations while reducing the computational cost for the robustness evaluation significantly.\n\n1. The authors propose an efficient procedure to rank the robustness of models using generative models\n1. GREAT can be used with \"off-the-shelf\" generative models and does not require specialized training etc.\n1. GREAT does not require access to the gradient\n1. The paper is well-written and easy to follow\n\n1. There are several assumptions on the generative model that are not sufficiently/prominently enough covered. The assumptions are: (a) the model generates an instance actually belonging to the conditioned class, (b) the true class is unambiguous (e.g., conversely, there might be cases where the \"Bayes optimal\" model cannot decide between two or more classes). (c) the generative model is a good approximation of the true data-generating distribution. The authors should highlight such limitations more and their implications for the guarantees/method.\n1. Since the authors emphasize the guarantee on the average robustness of a model, the authors could elaborate more on the practical importance of such a guarantee\n1. The derived Lipschitz constant might be a loose estimate since the Lipschitz constant also includes the generative model and not only the neural network. This is not accurately reflected in, e.g., Eq 10. Here it seems the model was convolved with the Gaussian ($g' * N(0,1))$), but it should actually be $((g' \\circ G) * N(0,1))$.\n\nMinor:\n- The font size in figures and tables is very small\n\n1. Figure 2 shows a large gap between empirical attacks (upper bound) and the GREAT score (lower bound). To what extent do the authors expect this gap to be due to the looseness of the respective upper and lower bounds?\n1. How is the class label (input of conditional generative model) distributed in the experiments for calculating the GREAT score?\n1. Would it also be possible to derive guarantees for a subset of the data distribution's support? For example, obtaining class-specific average robustness guarantees?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "VjV69PUpBz",
"review_text": "The paper introduces a novel framework called GREAT Score (Global Robustness Evaluation of Adversarial Perturbation using Generative Models), aimed at evaluating the global robustness of machine learning models against adversarial perturbations. Unlike traditional methods that aggregate local robustness results from a finite set of data samples, GREAT Score leverages generative models to approximate the underlying data distribution, providing a more comprehensive global robustness assessment.\n\n- The GREAT Score framework introduces a novel method for global robustness evaluation using generative models, which is a fresh and innovative approach in the field.\n- The paper provides solid theoretical foundations with formal definitions, probabilistic guarantees, and detailed proofs, enhancing the credibility of the proposed method.\n- The GREAT Score framework offers significant computational savings over traditional methods and can audit privacy-sensitive black-box models without accessing real data, highlighting its practical importance and broad applicability.\n\n1. The GREAT Score in the paper primarily focuses on adversarial perturbations under the L2 norm. While this is a common setting in adversarial attack research, it lacks ablation studies for other norms, such as the L∞ norm\n2. The GREAT Score framework relies on generative models (such as GANs or diffusion models) to approximate the true data distribution. If the quality of the generative model is not high, the generated samples may not accurately represent the true data distribution, thus affecting the accuracy of robustness evaluation. Besides, the evaluation results of the GREAT Score also depend on the generated sample set. If the sample set is biased or fails to comprehensively cover the diversity of the data distribution, the evaluation results may be inaccurate or unrepresentative.\n3. The evaluation of online facial recognition APIs using GREAT Score is innovative, but the paper could provide more detailed analysis and discussion on the specific challenges and insights derived from this application. For instance, exploring the variability in robustness scores among different groups (e.g., age, eyeglasses) in greater depth and providing potential reasons for these variations would add depth to the analysis.\n4. The calibration process described in Section 3.5 appears somewhat ad-hoc, relying on grid search for optimizing temperature parameters. This could be perceived as lacking robustness and generalizability. A more systematic approach to calibration, possibly incorporating advanced optimization techniques or sensitivity analysis, would strengthen the framework. Discussing the stability and consistency of the calibration process across different models and datasets would also be beneficial.\n5. Despite claiming computational efficiency, the paper does not provide a detailed analysis of the scalability of the GREAT Score framework, especially in the context of extremely large datasets and models. A thorough examination of how the computation time scales with increasing data size and model complexity would add significant value. This could include empirical results demonstrating the method's performance on larger datasets or theoretical analysis of its computational complexity.\n\n1. Have you considered extending the GREAT Score framework to other norm-based perturbations like L1 or L∞ norms? If so, what are the potential challenges or theoretical adjustments needed?\n2. 
How does the quality of the generative model affect the GREAT Score evaluation? Have you conducted any experiments using generative models of varying quality to analyze this impact?\n3. Can you provide more detailed information on the scalability of the GREAT Score framework with respect to extremely large datasets and models? How does the computation time scale with increasing data size and model complexity?\n4. How stable and consistent is the calibration process described in Section 3.5 across different models and datasets? Have you explored any advanced optimization techniques for calibration?"
},
{
"confidence": 4,
"rating": 3,
"review_id": "kUFCu2On6U",
"review_text": "The paper addresses the important and under-explored problem of \"global robustness evaluation\" for neural networks. It proposes GREAT Score, a novel framework for assessing global robustness using generative models (GMs). Besides, through Monte Carlo sampling from GMs and using Hoeffding's concentration bound, the algorithm can reach an epsilon probabilistic guarantee on the sample mean's closeness to the true mean. The paper then applies their proposed algorithm on various classifiers using GMs to measure global robustness scores.\n\n1) The paper attempts to tackle a significant gap in global robustness assessment, offering a reasonable and innovative contribution to the field.\n2) The paper is well-organized, clearly written, and easy to follow.\n3) The experimental results show high consistency between GREAT Score and attack-based model rankings on RobustBench, demonstrating its potential as an efficient alternative to existing robustness benchmarks.\n\n1) The reliance on GANs as a proxy for the true data distribution raises concerns about the method's accuracy. To the best of my knowledge, current GANs do not generate better coverage than the test set. GANs are a bad estimation of the underlying data distribution with known issues such as bias and model collapsing. Considering model collapse, the fixed test set is likely to have even better distribution coverage than the samples generated from GAN. It would be much more reliable and convincible by involving the recent class generative models.\n2) I also encourage the authors to include experiments with other local robustness estimators, further strengthening the submission.\nHow does the choice of generative model and local robustness estimator affect the reliability of the global measure computed by the paper?\n3) Theoretically, while the authors provide a probabilistic guarantee on the obtained estimates derived from GMs and true estimate, there's a lack of theoretical bound on gap between the true estimate and models' global robustness arising from the distance of the generative distribution and underlying data distribution. Otherwise, the significance and utility of the GREAT score is unclear for me, and this omission makes it unclear how the accuracy of the empirical distribution affects the overall error, beyond just sample complexity.\n\nPlease respond my questions in the weakness part."
}
] |
vtRotUd539 | Average gradient outer product as a mechanism for deep neural collapse | Deep Neural Collapse (DNC) refers to the surprisingly rigid structure of the data representations in the final layers of Deep Neural Networks (DNNs). Though the phenomenon has been measured in a variety of settings, its emergence is typically explained via data-agnostic approaches, such as the unconstrained features model. In this work, we introduce a data-dependent setting where DNC forms due to feature learning through the average gradient outer product (AGOP). The AGOP is defined with respect to a learned predictor and is equal to the uncentered covariance matrix of its input-output gradients averaged over the training dataset. Deep Recursive Feature Machines are a method that constructs a neural network by iteratively mapping the data with the AGOP and applying an untrained random feature map. We demonstrate theoretically and empirically that DNC occurs in Deep Recursive Feature Machines as a consequence of the projection with the AGOP matrix computed at each layer. We then provide evidence that this mechanism holds for neural networks more generally. We show that the right singular vectors and values of the weights can be responsible for the majority of within-class variability collapse for DNNs trained in the feature learning regime. As observed in recent work, this singular structure is highly correlated with that of the AGOP. | https://openreview.net/pdf/e6aa9a4011a802d00ba4c1202d7c30befc5c3233.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "gRCr5hwstP",
"review_text": "Given the complexity of the process of neural network training, any understanding of robust phenomena that can be identified in the training process has potential value that can guide the design of models and algorithms. Neural Collapse (and its deep counterpart) is one such phenomenon that has been identified and reproduced across multiple model classes and datasets. This work shows that Neural Collapse also occurs for a recursive kernel-based model known as Deep RMF, when trained using an algorithm that is based on projection onto a matrix constructed from an outer products of gradients computed locally at each layer.\nAdditionally, the authors present experimental results that document neural collapse in these models when trained on standard datasets. They also show that in standard neural networks, the projection of features onto the gradient outer product leads to neural collapse, rather than the effect of the nonlinearity.\n\nThe paper is clearly written, and presents both theoretical results and some empirical results that complement them, since they apply to datasets that violate the assumptions under which the results hold. They prove that deep Neural Collapse can indeed occur in models beyond standard neural networks trained with gradient descent.\nThe experimental results (specifically in Appendix D) demonstrate that the projection onto the gradient outer product matrix (AGOP) leads to neural collapse in standard models, motivating the further study of this object.\n\nGiven that the main results apply both to a non-standard kernel method and a non-standard training algorithm, it is unclear what the implications of the results are for more well-known models and algorithms. If the authors believe that these results have implications of this form, they should be presented more clearly. Algorithms that are not based on backpropagation are interesting both as possible means of explaining learning in biological systems where backpropagation is unrealistic, and in distributed settings where backpropagation may incur a prohibitive communication overhead. However, the motivation of the algorithm used appears to be that it is a simple model that demonstrates certain phenomena that arise in the training of deep networks. \n\nThe authors assume that the gram matrix of the data is full-rank. This requires assuming that the number of datapoints is smaller than or equal to the input dimension (which subsumes assumption 4.1). Standard datasets violate this assumption.\n\nAre the models in appendix D trained with SGD? If so, they indicate the importance of the AGOP projection in causing neural collapse in standard models and I believe this result should be highlighted. That being said, this result may be of independent interest regardless of the analysis of Deep RMF."
},
{
"confidence": 3,
"rating": 6,
"review_id": "tFBJ65mNcI",
"review_text": "This paper studies deep neural collapse (DNC) in deep neural networks (DNN) through the prism of the neural feature ansatz (NFA) and deep recursive feature machines (RFM). It is comprised of several results:\n- empirical evidence that DNC occurs in deep RFMs,\n- a theoretical analysis of DNC in a high-dimensional RFM setting,\n- a theoretical analysis of DNC in a kernel learning setting,\n- empirical evidence that the mechanisms which lead to DNC in RFMs and traditional DNNs are the same.\n\nThis paper shows that deep neural collapse occurs in a similar way in deep networks and deep recursive feature machines. It thus provides a simplified setting in which to investigate deep neural collapse, which is an important research direction to further our understanding of deep learning. Specifically, it shows that neural collapse can be obtained just by iterating linear regression problems, without backpropagating through a deep network.\n\nMy main issue with the paper is its writing, which makes it quite difficult to read.\n- The notations could be improved in several places throughout the paper (see minor points below).\n- I could not follow most of section 4.2, despite being rather familiar with kernel methods and their behavior in high dimensions. \nOn a high level, I don't understand how a linear kernel could be the best setting for neural collapse. The text contradicts itself, as it simultaneously state that \"if [$\\lambda_k = 0$] [...], collapse will occur in just one layer] , but also that \"this theory offers an explanation for why non-linear activation is needed\". A linear layer can collapse within-class variability but also typically collapses class means together, and thus cannot induce neural collapse (see paragraph below). \nOn a technical level, $k_\\Phi$ and $\\lambda_\\Phi$ are referred to before being defined, and I do not understand the roles played by $k$/$\\lambda_k$ vs $k_\\Phi$/$\\lambda_\\Phi$. Assumption 4.2 is also referred to before being stated.\n- Section 4.3 is also slightly difficult to read. \nI took me several tries to guess that $k_M(x,x') = \\tilde k_M(x,x')\\mathrm{Id}$, which should appear in the paper. The terms \"input-level\" and \"output-dimension level\" kernels should be introduced for non-specialists in multi-task kernel learning.\nI also do not understand the point of introducing $M$ if it is dropped afterwards. Theorem 4.4 could simply be stated as \"the optimal feature map for ridge regression is the one which already predicts the label: $\\Phi(x) = y$\". This result is not very surprising, and is not very integrated in the paper. I suppose that it is some kernel-level equivalent of the unstructured feature model, and suggests that weight decay might be instrumental in bringing about neural collapse? The normalization of $k$ should be restated in the definition of Problem 3 (otherwise the optimal loss is obtained when $k \\to 0$).\n- The message of section 5 could be presented more clearly. What I understood was that it argues that RFMs and DNNs achieve neural collapse through the same means. I suggest making this point before introducing RFMs (in particular, stating the NFA correlations). I also did not understand why this mechanism is referred to as \"denoising\". \n\nMy second issue is that I was not convinced by the claim that it is the right singular vectors and singular values which lead to neural collapse. 
By the same logic as lines 309-315, the right singular vectors do not change the DNC1 metric (with a \"full\" SVD where $U$ and $V$ are invertible). Similarly, if I were to divide operations in the network as $V^T\\sigma$ and $US$ as opposed to $\\sigma U$ and $SV^T$, I should see that it is now $US$ which is responsible for neural collapse (again with a full SVD). This conclusion also depends on the chosen metric for evaluating collapse. Why do the authors consider the ratios of traces of between- and within-class covariances, rather than the trace of their ratio (the Fisher linear discriminant)? It seems that it would reverse one of the conclusions of the analysis, since the trace of the Fisher discriminant ratio $\\mathrm{tr}(\\Sigma_W^{-1} \\Sigma_B)$ is invariant to invertible linear transformations, and decreases under non-invertible linear transformations, so can only be improved through the non-linearity. If the conclusion of which part of the network is responsible for DNC depends strongly on the chosen metric, can we really ascribe meaning to the question? It seems to me that it is really the sequence of weights and non-linearity which _together_ induce DNC, and trying to separate their effects is not really possible.\n\nFinally, Proposition A.1 was first published by Cho and Saul in _Kernel Methods for Deep Learning_, NIPS 2009. Besides, the expression of the kernel in eq. (5) can be simplified with algebra and trigonometry (compare with their eq. (6)).\n\nMinor notes and suggestions:\n- I suggest using a so-called \"diverging\" colormap (such as \"bwr\") in Figure 1 to clearly separate positive from negative correlations, and use the same range for both datasets.\n- I suggest replacing \"Gram matrix\" with \"(uncentered) covariance\" to refer to $W^TW$, as weight matrices $W$ are generally decomposed in rows which correspond to individual neurons.\n- The notation $||X||$ to refer to the vector in $\\mathbb R^N$ of column norms of a matrix $X \\in \\mathbb R^{d\\times N}$ is never introduced (and clashes with the usual convention that this is a matrix norm).\n- Why is the last layer denoted $W_{L+1}$ instead of $m_{L+1}$?\n- The choice of layer-indexing is confusing and seems inconsistent throughout the paper. Contrarily to what is stated in section 3.1, isn't $X_l$ the features after $l-1$ network layers? I suggest to denote the input as $X_0$ instead of $X_1$ to simplify the notations. Also, it seems that $M_l^{1/2} X_l$ should be referred to as $\\tilde X_{l+1}$ rather than $\\tilde X_l$ given the chosen conventions.\n- Typo: missing a norm in the definition of $\\bar H_l$ line 128.\n-In section 4.2, I suggest defining activations before the kernels, e.g., $\\tilde X_{l+1} = \\kappa^{-1/2} M_l^{1/2} X_l$ and $X_{l+1} = \\Phi_{\\rm lin}(\\tilde X_{l+1})$. 
I also suggest choosing a different notation for $k_{\\rm lin}$ and $\\Phi_{\\rm lin}$ which are confusing as they imply linearity, and to avoid the awkward \"non-linear feature map $\\Phi_{\\rm lin}$\".\n- Typo line 260: the output space of $k_M$ should be $\\mathcal R^{C\\times C}$.\n- I suppose that $\\lambda = \\mu$ in section 4.3.\n- In the caption of Figure 2, I suppose that \"fully-connected\" should be removed in the case of ResNet.\n\n- Are the feature maps $\\Phi_l$ and kernels $k_l$ unrestricted in Algorithm 1, or do they have to match in the sense that $k_l(x,x') = \\langle \\Phi_l(x), \\Phi_l(x')\\rangle$?\n- What is the motivation behind considering _normalized_ features _before_ the non-linearity in section 4.1? Could the authors clarify the role of these non-standard conventions?\n- Why are the setting different between sections 4.1 and 4.2? (relative to normalizations). Is neural collapse still empirically observed with the choices of section 4.2? It raises the suspicion that it could not the case, in which case Theorem 4.3 would not really explain the results of section 4.1 (e.g., because the asymptotic regime is not reached in practice)."
},
{
"confidence": 2,
"rating": 6,
"review_id": "CeyUZ3Di9o",
"review_text": "The submission introduces a mechanism for Deep Neural Collapse (DNC) using the average gradient outer product (AGOP). The authors also propose the Deep Recursive Feature Machines (Deep RFM) model, which employs AGOP in its architecture to empirically and theoretically demonstrate DNC. The main contribution is that AGOP-based explanation is a data-based approach while prior work focused on data-agnostic explanations.\n\n* Using a data-based approach based on AGOP to explain DNC is novel to the best of my knowledge\n* The paper offers both theoretical analysis and empirical evidence supporting the role of AGOP in inducing DNC\n* The experiments are performed on different architectures and datasets\n\n* I found the paper challenging to read\n* I am unsure about the practical implications of this work\n\n* Can the authors clarify if other metrics, such as the Neural Tangent Kernel (not the limit), would effectively predict this behavior? Or AGOP is unique in this aspect?\n* What are the practical implications of this work?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "laxGrox7Fm",
"review_text": "The authors study two effects associated with neural collapse: the within class variability going to zero and the orthogonality/tight-frame of the class means. They study the deep recursive feature machine model, and show that neural collapse forms in that setting as well, due to the projection of the data onto the average-gradient outer product (AGOP). They show both empirical and theoretical results on this phenomenon, leveraging high-dimensional gaussian equivalence of nonlinear random feature models. Further, they show that the right singular vectors of the weight matrices are responsible for most the within-class variability collapse, projecting onto the subspace spanned by the gradients.\n\nThe writing of the paper is for the most part quite readable. The literature review is thorough and puts the results of this paper in a good context. The empirics are extensive and compelling. Moreover, the theoretical ideas leveraging the equivalence of nonlinear random features to a linear kernel with additional identity term make for a compelling argument about the mechanism for neural collapse in RFMs. Given the good mix of theory and experiment, I recommend this paper for acceptance.\n\nSection 4 is doing many things at the same time. It may be better to split it into an empirical evidence section, and then do a section on the theoretical results. In particular, it would be good to give an idea of where the theoretical ideas are going at the start of 4.2 before setting out to prove the deep neural collapse results. This would substantially improve the readability of this section. \n\nThis goes double for section 4.3. The opening paragraph of that section is unreadable:\n\n*\"Next, we show that the formation of the neural collapse is not only implicitly given by the specific\noptimization procedure of the Deep RFM, but is also implicitly regularized for in the parametrized\nkernel ridge regression, a model class that includes RFM (i.e., a single layer of Deep RFM)\"*\n\nI don't really understand what this is saying, or even what you're trying to accomplish in the entire subsection. I tried many times to read it. The whole subsection should be rewritten. There are many sentences there that make no sense to me. Here is another one:\n\n*\"Since we do not explicitly regularize M, we will drop the dependence on it, treat k as\n a free optimization variable and compute the optimal value of the following relaxed problem:\"*\n\nThis is certainly not something you can do generally. For example if I had a matrix parameterized as $A = M_1 M_2$ and optimized just the $M_i$ with no explicit regularization, there are many cases where this isn't the same as optimizing $A$. Maybe you mean to say something else but once again I can't understand what you're trying to say. The prior subsection was compelling enough that I am discounting this rather poor form of writing. Please rewrite this section. \n\nMore generally, there are many sentences throughout the paper that are not well-worded and seem to run on. Improving the writing would benefit the quality, precision, and reach of this otherwise strong paper. If in your rebuttal you can provide examples of improved presentation, I may raise my score higher.\n\nOne can show that for a $\\ell_2$ regularized deep network that the weights pick up rank one spikes proportional to $W^{\\ell}_{ij} \\propto \\frac{\\partial f}{\\partial x^\\ell_i} \\phi(x^{\\ell}_j)$ where $\\phi$ is a nonlinearity and $x^\\ell$ is the preactivation.. 
This usually means that the *left* singular values of the weight matrices should pick up terms aligned with $\\nabla f$. See for example the update equation for $W$ in 2.2.4 of LeCun:\n\nhttp://yann.lecun.com/exdb/publis/pdf/lecun-88.pdf\n\nIs there any easy way to square this with the results on RFMs, that the *right* singular values align with $\\nabla f$ terms?\n\nIt would be good to be explicit and put a subscript below the $\\nabla$s on the $\\nabla f_\\ell(x^\\ell_{c i})$ in algorithm 1 to be clear what you're differentiating with respect to. \n\nI don't understand why 4.3 is called non-asymptotic analysis. I don't think you're proving non-asymptotic bounds compared to 4.2. If anything 4.3 seems completely unrelated. Can you please give it a title where someone can understand what you're trying to do? Once again the rest of the paper is quite readable but this subsection is a mess."
}
] |
vt2qkE1Oax | Learning Segmentation from Point Trajectories | We consider the problem of segmenting objects in videos based on their motion and no other forms of supervision. Prior work has often approached this problem by using the principle of common fate, namely the fact that the motion of points that belong to the same object is strongly correlated. However, most authors have only considered instantaneous motion from optical flow. In this work, we present a way to train a segmentation network using long-term point trajectories as a supervisory signal to complement optical flow. The key difficulty is that long-term motion, unlike instantaneous motion, is difficult to model -- any parametric approximation is unlikely to capture complex motion patterns over long periods of time. We instead draw inspiration from subspace clustering approaches, proposing a loss function that seeks to group the trajectories into low-rank matrices where the motion of object points can be approximately explained as a linear combination of other point tracks. Our method outperforms the prior art on motion-based segmentation, which shows the utility of long-term motion and the effectiveness of our formulation. | https://openreview.net/pdf/59c057325ef06b75796a650f052e761c2c377f1e.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "pmBrXW3gAL",
"review_text": "The authors propose a loss function that seeks to group the trajectories into low-rank matrices where the motion of object points can be approximately explained as a linear combination of other point tracks. Experiments on the synthetic MOVi-F variant of\nthe Kubric dataset and the real datasets DAVIS 2016, SegTrackv2 and FBMS show that the proposed method outperforms single-sequence methods, single-stage end-to-end methods and multi-stage methods.\n\n1) The authors address key issues in the field and the contribution is original even if somewhat incremental.\n2) The proposed method is detailed and reproducible.\n3) Experiments are relatively well conducted on synthetic and real datasets showing the superiority of the proposed method.\n\nAbout the presentation, please clearly state a name/acronym to the proposed method and replace \"ours\" by it in the comparison tables.\n\nNo questions"
},
{
"confidence": 3,
"rating": 6,
"review_id": "4CoctVqn8z",
"review_text": "This paper introduces a method for training a segmentation network using long-term point trajectories as a supervisory signal to enhance optical flow. It proposes a novel loss function aimed at grouping these trajectories into low-rank matrices, allowing the motion of object points to be approximately represented as a linear combination of other point tracks. The proposed approach surpasses previous methods in motion-based segmentation, demonstrating the value of long-term motion and the effectiveness of the new formulation.\n\n1. The introduction describes the problem in more detail when introducing the issue.\n2. The structure of the article is good.\n3. The experimental results of the method proposed in this paper show a significant improvement.\n\n1. The main contribution of this paper is the proposal of two losses, but the loss seems to be effective in the experiments of other segmentation methods.\n2. The contribution of the paper in Subspace Clustering is not described clearly.\n3. The resolution of Fig 3 is relatively low.\n4. There is a lack of comparison in terms of inference speed.\n\n1. Does the method proposed in this paper work on other segmentation networks as well, and would additional experiments on other segmentation networks help demonstrate the generality of the proposed loss function?\n2. How is the training time and inference speed of the method proposed in the paper? It would be better to include some quantitative comparison experiments.\n3. What is the specific contribution of this paper to Subspace Clustering? \n4. The Flow Estimator and Point Tracker are both frozen during training in this work. Is it possible to also update them during training to leverage the information in the dataset?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "PmHiQGPkQy",
"review_text": "This paper proposes a novel loss function that allows training image object segmentation\nmodels based on object motion in videos. Motivated by recent work on self-supervised\nlearning of segmentation using optical flow, the authors propose to use longer point\ntrajectories as additional self-supervision signal. Related to subspace clustering, the\nproposed loss function encourages to predict segments whose trajectories can be well\nexplained by few basis trajectories. The predicted segments are merged into a binary\nsegmentation mask and evaluated on standard, real-world segmentation benchmarks.\nPrevious methods based only on optical flow are consistently outperformed, demonstrating\nthe effectiveness of the proposed method.\n\n- Unsupervised learning of segmentation is an important problem. Several recent methods\n approached this task using optical flow as a self-supervision signal, extending this\n line of research to trajectories is a well-motivated idea.\n- The mathematical motivation of the loss is very well explained. Without having a deep\n mathematical background, I could follow the derivation of the loss function without\n issues.\n- Standard benchmarks and modelling components are used for evaluation, which makes it\n easy to compare the proposed method to previous approaches.\n\n1. It is not described clearly enough what kind of segmentation task is targeted. From\n the introduction and method section it seems to me that multi-object segmentation is\n adressed, only at the very end of the method section it is mentioned that the\n predicted segments are merged into a binary segmentation in some cases.\n - To my understanding the task is multi-object segmentation for MOVi and binary\n segmentation for all other datasets. This should be clearly stated in the\n experiment section.\n - It should be stated in the introduction and method section more clearly that the\n main task is binary segmentation.\n\n2. The proposed method is not compared to models that do not use optical flow for\n self-supervision. It would be interesting to see how the proposed method compares to\n other self-supervised segmentation approaches. For example\n - CutLER ([Wang et al. 2023](https://arxiv.org/abs/2301.11320)) and VideoCutLER ([Wang et al. 2023](https://arxiv.org/abs/2308.14710))\n - DINOSAUR ([Seitzer et al. 2023](https://www.amazon.science/publications/bridging-the-gap-to-real-world-object-centric-learning)) and VideoSAUR ([Zadaianchuk et al. 2023](https://proceedings.neurips.cc/paper_files/paper/2023/hash/c1fdec0d7ea1affa15bd09dd0fd3af05-Abstract-Conference.html))\n \n The masks predicted by these models could be merged to obtain a binary segmentation\n in the same way as for the proposed method.\n\n- How do the predicted segments look like before merging? Visualization would help to\n better understand the capabilities and limitations of the proposed method.\n\n- The principle of common fate is not cited in the paper, a reference to the literature\n on Gestalt psychology would be appropriate (e.g., Wertheimer 1912).\n\n- How well does the proposed method perform on MOVi when estimating trajectories using\n RAFT and CoTracker? This would allow for better judging how much the proposed method\n could be improved in the future by using more accurate trajectory estimation methods."
},
{
"confidence": 4,
"rating": 5,
"review_id": "lOtW3LlMkN",
"review_text": "This paper proposes a model to process long-term motion and short-term motion simultaneously to achieve motion-based segmentation. Specifically, motivated by subspace clustering, this work proposes a loss function that enables training a neural network to learn motion grouping from both optical flows and point trajectories. It outperforms the previous method in the unsupervised video segmentation task. The qualitative comparison also shows obvious improvement, giving a clearer boundary.\n\n1. The motivation and method explanation seems to be clear. The paper writing is easy to follow.\n2. Using a simple sample to introduce the low-rank intuition is convincing and reasonable. Based on this core idea, other smoothing losses and regular loss from optical flow make learning more effective.\n3. Experiments show the strength of the proposed strategy. A comprehensive ablation study has been performed to illustrate the impact of each factor.\n\n1. As mentioned in the limitation, the paper's principle assumes that the object is rigid. However, the task that this paper works on not only includes rigid objects -- it's a general video segmentation task. Then it seems that the low-rank theory can not extend to a general setting. And why not consider local rigid like ARAP loss? (SpatialTracker)\n2. Do not give some corner cases or failure cases, especially for non-rigid objects. I hope to see some corner cases like multiple objects, where they behave similarly in the short term but different in the long term. Then it can better demonstrate the motivation of the paper.\n\nWhy does solely using long-term loss get worse performance than solely using optical flow loss (7 percent drop in Table 4)? Though the paper gives a short explanation that it is due to the sparse set of points and noise, long-term motion also has its advantage like it's more stable than short-term information."
},
{
"confidence": 5,
"rating": 6,
"review_id": "fGqW9QxIQb",
"review_text": "The paper tackles video object segmentation by incorporating into the loss function not only instantaneous optical flow information but also long term pixel tracking information. Raft was used for optical flow and CoTracker was used for long term pixel tracking in the experiments. The experiments show a marginal improvement in performance when combining the two information sources in the loss function.\n\nThe paper flows quite well, it addresses that video object segmentation is the problem space, the focus is on loss function, Figure 2, the layout appears clear as well. There are a handful of datasets and comparing methods used in the experiments.\n\nTable 2 where the experimental results are presented lists a collection of methods categorized into different groupings. Perhaps these groupings and methods could be better discussed in the lit review, it appears that the categories in the lit review do not correlate nicely and I do not know the difference of these methods unless I look at the references and read the papers myself.\nThe improvement is incremental. IT is expected that there would be some improvement however what cases do we actually get the improvement in, a bit of more depth in the analysis would make this a better paper.\nI assume that the camera is static?, correct? if not, perhaps making this clearer would help.\nI have no idea how long the long term point trajectories were, perhaps analyzing this would help. Also depending on the trajectories, were there occlusions or other interesting factors that contribute to the loss function would be interesting.\n\n1. I found the related works were like a laundry list. You divided the categories into unsupervised video object segmentation, motion segmentation, trajectory based motion segmentation and subspace clustering. That is find however your focus is only video object segmentation, why is that and how can you address the other problem areas?\n2. I would imagine if we had 3d scene flow, by perhaps combining monocular depth and optical flow would result in good results without long term tracking?\n3. why not incorporate appearance information as well? \n4. Appearance information for segmentation in the examples would suffice, it would be interesting to focus on cases where appearance info is not sufficient for segmentation and we require motion information."
}
] |
vpEq2bzsS0 | MoTE: Reconciling Generalization with Specialization for Visual-Language to Video Knowledge Transfer | Transferring visual-language knowledge from large-scale foundation models for video recognition has proved to be effective. To bridge the domain gap, additional parametric modules are added to capture the temporal information. However, zero-shot generalization diminishes with the increase in the number of specialized parameters, making existing works a trade-off between zero-shot and close-set performance. In this paper, we present MoTE, a novel framework that enables generalization and specialization to be balanced in one unified model. Our approach tunes a mixture of temporal experts to learn multiple task views with various degrees of data fitting. To maximally preserve the knowledge of each expert, we propose Weight Merging Regularization, which regularizes the merging process of experts in weight space. Additionally with temporal feature modulation to regularize the contribution of temporal feature during test. We achieve a sound balance between zero-shot and close-set video recognition tasks and obtain state-of-the-art or competitive results on various datasets, including Kinetics-400 \& 600, UCF, and HMDB. Code is available at https://github.com/ZMHH-H/MoTE. | https://openreview.net/pdf/70da6cead9fc894a60ab0f52a2a9b0e9149a95a3.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "8xG8m2Ft9T",
"review_text": "The paper introduces a novel framework called MoTE. This framework addresses the trade-off between zero-shot generalization and close-set performance in video recognition tasks by tuning a mixture of temporal experts. The key contributions include:\n\n- Introducing Weight Merging Regularization to balance generalization and specialization.\n- Proposing temporal feature modulation to improve generalization during inference.\n- Demonstrating state-of-the-art or competitive results on various video datasets such as Kinetics-400, Kinetics-600, UCF-101, and HMDB-51.\n\n- The introduction of Weight Merging Regularization and temporal feature modulation provides a novel approach to balancing generalization and specialization in video recognition.\n- The experimental results are thorough, demonstrating the effectiveness of the proposed methods on multiple datasets.\n\n- The framework's text space is confined to video category names, which limits the richness of textual representations. Expanding the semantic space using large-scale generative models could enhance performance.\n- The method currently explores limited forms of additional parameters. Extending the approach to other forms could improve generality and versatility.\n- While results on certain benchmarks are promising, the model's performance on more diverse and challenging datasets needs further validation.\n- The additional complexity from Weight Merging Regularization and other components can slightly increase training time, which may be a barrier for real-time applications.\n- Extensive fine-tuning required for different tasks can be computationally expensive and time-consuming.\n\n- Can you provide more details on how expanding the text space with large-scale generative models might improve the model's performance?\n- How does the performance of MoTE vary with different numbers of temporal layers and experts? Are there optimal configurations for specific tasks?\n- What measures can be taken to reduce the computational overhead introduced by the additional components such as Weight Merging Regularization?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "UESTGeMUK2",
"review_text": "This paper addresses the issue of Video-Language Models (VLMs), such as CLIP, experiencing reduced generalization performance to unseen categories when learning domain-specific knowledge for video understanding tasks. The authors propose the MoTE framework, which introduces temporal experts and employs a Mixture of Experts (MoE) approach to effectively learn domain-specific knowledge for videos. Additionally, a soft stochastic routing policy is utilized to further enhance the learning efficiency of the experts. To guarantee the discrepancy in knowledge learned by different experts while maintaining a flat loss landscape, the paper incorporates weight merging regularization, which improves the generalization performance of the learned features. Moreover, the paper presents a temporal feature modulation method that leverages the semantic relevance confidence of proxy text features to modulate features.\n\n1. The paper introduces the Mixture of Experts (MoE) approach in zero-shot video classification tasks based on Video-Language Models (VLMs). By utilizing weight merging regularization and other methods, the approach ensures effective learning of domain-specific knowledge in videos while maintaining strong model generalization.\n\n2. The study effectively combines temporal modeling of visual content with the MoE approach. During downstream task adaptation, it leverages multi-perspective data bias learning to avoid overfitting, thus enhancing the learning effectiveness of domain-specific knowledge in videos.\n\n3. The paper analyzes model generalization from the perspective of loss landscape flatness. By improving the flatness, weight merging regularization enhances the generalization performance of the learned features.\n\n1. There is ambiguity in the use of certain symbols within the paper. For example, the symbol L is used to represent both the loss function of CLIP and the number of layers in the Transformer introduced in MoTE. This issue is particularly evident in Equations (4) and (7). The paper should consider adjusting the usage of these symbols to avoid confusion.\n\n2. There seems to be a problem with the calculation in Equation (5). The notation \"exp\" typically represents the exponential function of e, but this is not clearly explained. According to the equation, the probability of selecting an expert increases with i, which seems to contradict the intended randomness of stochastic. This requires clarification or correction.\n\n3. In the Introduction and Section 3.4, the paper emphasizes the plug-and-play characteristic of the modulation module. However, the subsequent experiments only demonstrate the improvement in model performance without introducing additional training parameters (Play). They do not showcase the flexibility and usability of the module regardless of the upper model structure (Plug). Therefore, it would be beneficial to add experiments validating the plug-and-play effect or adjust the relevant descriptions in the paper.\n\n1. What is the design basis for the candidate set of the temperature hyperparameter in weight merging? The paper does not provide a reference for the design of this candidate set, nor does it further validate its superiority over continuous space selection schemes in the experimental analysis.\n\n2. What is the connection between the modulation method proposed in Section 3.4 and the paper's overall motivation? 
The issue of constrained semantic space it addresses does not seem to be related to the MoE method or the maintenance of feature generalization.\n\n3. What is the specific idea behind the trade-off metric mentioned in Section 4.3? Considering the balance between the two, the arithmetic mean does not seem to be a good metric. If a model achieves 100% ZS performance but 0% close-set performance, its trade-off metric result would be the same as if both values were 50%. How is this issue addressed?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "fwN1fUtu6Z",
"review_text": "This paper introduces MoTE (Mixture-of-Temporal-Experts) to improve the generalization and specialization capabilities of visual-language models (VLMs) when adapting to video tasks. MoTE addresses two main questions: how to enhance the generalization of additional parameters during fine-tuning, and how to balance generalization and specialization in a unified model. The approach uses multiple feedforward network (FFN) experts in each Transformer layer to capture various data bias views, improving generalization. A routing algorithm based on multinomial distribution maximizes knowledge diversity among experts, while Weight Merging Regularization effectively combines generalized and specialized knowledge in the final model.\n\nTo further improve generalization at test time, MoTE incorporates a Temporal Feature Modulation module. Notably, the approach maintains the same computational cost and final structure as conventional methods. The paper contributes to the field by offering a new perspective on enhancing parameter generalization and balancing it with specialization in the context of adapting VLMs to video tasks. Extensive experiments demonstrate that MoTE achieves an optimal trade-off between zero-shot and close-set performance, with thorough ablation studies showing the scalability and effectiveness of the proposed method.\n\n- The manuscript is well-written and easy to follow.\n\n- It is interesting to observe that the introduction of a mixture of experts can enhance the balance between acquiring generalizable knowledge and learning video-specific features. The motivation is intuitive, and the extensive experiments effectively validate the method’s efficacy.\n\n- The design of weight merging regularization and temporal feature modulation harmonizes the pursuit of the two learning objectives. The temporal feature modulation is particularly noteworthy, as it takes into account the categorical relationships between the training and test sets to inform the integration of features.\n\n- The primary motivation for this study stems from two objectives: (1) mitigating the catastrophic forgetting that emerges with the integration of trainable parameters, and (2) striking a balance between generalizable knowledge and video-specific learning within one single model. However, these objectives bear considerable resemblance to the work presented in the paper FROSTER (ICLR 2024), which has not been discussed by the authors. While I acknowledge that the current paper and FROSTER employ distinct methodologies to address these issues, their close relevance necessitates a thorough discussion and a direct performance comparison.\n\n- According to the description in the paper, the baseline model utilizes a clip encoder equipped with several temporal transformer layers. This leads me to question whether the model can be effectively integrated with alternative network architectures, such as adapter-based networks, X-CLIP, and ST-adapter, particularly given their noted efficiency in training.\n\n- I would also request that the authors provide details regarding the additional computational and training time costs associated with implementing their method in conjunction with the baseline model.\n\n- I believe it would be beneficial to delve deeper into the specific types of actions that each expert excels at recognizing. 
Providing a more detailed analysis in this area would enhance our comprehension of the distinct roles played by various experts, as well as the unique temporal knowledge they contribute in comparison to one another.\n\n[1] FROSTER: Frozen CLIP Is A Strong Teacher for Open-Vocabulary Action Recognition. ICLR 2024.\n\nPlease refer to the weaknesses."
},
{
"confidence": 4,
"rating": 6,
"review_id": "ri6UHLEETR",
"review_text": "To preserve the generalization ability of the model trained on general visual-language model (VLM) with task-specific data, while boost the performance on specific task, this paper propose a new framework and training strategy to learn a unified model with specific performance and generalization ability. Three techniques are introduced. Mixture temporal experts to avoid overfitting on the task-specific data. A weight merging regularization to enlarge the loss flat region such that optimization on generalization ability will not introduce perturbation that drops the close-set performance. A temporal feature modulation to reuse the feature of VLM model when the target category label is not fitted during task-specific finetuning. The proposed method is evaluated on four benchmark datasets. K400 for close-set finetuning and UCF-101, HMDB-51and K600 for zero-shot evaluation.\n\n1.\tTo train a model with both task-specific performance and zero-shot generalization ability is a interesting topic, and it is less explored in the community. \n2.\tThe proposed method achieves competitive performance compared with the similar methods.\n3.\tBalancing between the zero-shot and the task-specific ability is always hard to handle. Considering the wide application of general VLM, this method bears practical value in the industry.\n\n1.\tThe experimental setting may hide the weakness of the proposed method. The method is only trained on K400 and evaluated its zero-shot ability on UCF-101, HMDB-51and K600. Considering K400 is already a large-scale dataset, the MoTE may still have good performance on UCF-101 and HMDB-51. Besides, K600 is an extension of K400, therefore they may have similar data distribution. It would be great to also finetune the model on small-scale dataset and evaluated generalization ability on large-scale dataset, for example, train the model on UCF-101 and evaluate it on K400.\n2.\tA simple solution to handle the zero-shot / task-specific balancing issue is to use a finetuned model such as Text4Vis for specific task and to use its temporally mean-pooled clip feature when facing out-of-distribution task. This baseline is missing in the comparison. If the performance of this baseline is acceptable, is it really necessary to train a unified model with such much cost?\n\n1.\tThe second question in the weakness section\n2.\tThe VLM Clip is actually trained on noisy data, and there are also VLM trained with selected data to boost its cross-modality alignment [1]. Therefore, the selection of K in line189 may have influence on the final performance. Besides, for different text-query, the influence of noisy data is diffrerent, and one fixed K may not be optimal. Is there any solution for this issue. Is the selection of K have large influence on the performance?\n[1] Bulat, Adrian, Yassine Ouali, and Georgios Tzimiropoulos. \"FFF: Fixing Flawed Foundations in contrastive pre-training results in very strong Vision-Language models.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024."
}
] |
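Illustrative sketch for the MoTE record above: the reviews describe per-layer FFN experts whose weights are merged so the final model keeps the cost of a single plain FFN. The snippet below is a minimal, hypothetical rendering of that merging step in NumPy — the shapes, the uniform mixing coefficients, and all function names are assumptions for illustration, not the authors' Weight Merging Regularization or routing algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, n_experts = 8, 32, 4

# Hypothetical per-expert FFN weights obtained during fine-tuning.
experts = [
    {"W1": rng.normal(size=(d_model, d_ff)), "W2": rng.normal(size=(d_ff, d_model))}
    for _ in range(n_experts)
]

def merge_experts(experts, coeffs):
    """Merge expert FFNs into one FFN by combining their weights,
    so the merged model has the cost of a single FFN at inference."""
    W1 = sum(c * e["W1"] for c, e in zip(coeffs, experts))
    W2 = sum(c * e["W2"] for c, e in zip(coeffs, experts))
    return {"W1": W1, "W2": W2}

def ffn(x, params):
    # Plain two-layer FFN with ReLU, as in a standard Transformer block.
    return np.maximum(x @ params["W1"], 0.0) @ params["W2"]

merged = merge_experts(experts, coeffs=np.full(n_experts, 1.0 / n_experts))
x = rng.normal(size=(2, d_model))
print(ffn(x, merged).shape)  # (2, 8)
```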
voJCpdlw53 | UltraPixel: Advancing Ultra High-Resolution Image Synthesis to New Peaks | Ultra-high-resolution image generation poses great challenges, such as increased semantic planning complexity and detail synthesis difficulties, alongside substantial training resource demands. We present UltraPixel, a novel architecture utilizing cascade diffusion models to generate high-quality images at multiple resolutions (\textit{e.g.}, 1K, 2K, and 4K) within a single model, while maintaining computational efficiency. UltraPixel leverages semantics-rich representations of lower-resolution images in a later denoising stage to guide the whole generation of highly detailed high-resolution images, significantly reducing complexity. Specifically, we introduce implicit neural representations for continuous upsampling and scale-aware normalization layers adaptable to various resolutions. Notably, both low- and high-resolution processes are performed in the most compact space, sharing the majority of parameters with less than 3$\%$ additional parameters for high-resolution outputs, largely enhancing training and inference efficiency. Our model achieves fast training with reduced data requirements, producing photo-realistic high-resolution images and demonstrating state-of-the-art performance in extensive experiments. | https://openreview.net/pdf/17e36670338ced1f6346b71e16283ec8543c38f0.pdf | [
{
"confidence": 4,
"rating": 8,
"review_id": "5xRQxntKXW",
"review_text": "The paper presents UltraPixel, an innovative architecture for ultra-high-resolution image generation that tackles semantic planning, detail synthesis, and high resource demands. UltraPixel uses cascade diffusion models to generate images at multiple resolutions within a single model, efficiently guiding high-resolution generation with lower-resolution semantics. It features implicit neural representations for continuous upsampling and scale-aware normalization layers. Moreover, it requires less than a 3% increase for high-resolution outputs, boosting efficiency.\n\n1、The paper demonstrates impressive results, with generated high-resolution images exhibiting remarkable detail. The proposed method outperforms existing approaches in terms of speed and flexibility, supporting arbitrary resolution image generation with a single model. This represents a significant advancement in the field.\n\n2、The authors present a clear and well-motivated approach. They provide compelling evidence (Figures 2 and 6) to support their argument that the absence of low-resolution (LR) guidance can lead to suboptimal generation results.\n\n1、 The manuscript's layout requires some refinement. For instance, Figure 4 extends beyond the page margins, and the text adjacent to Figure 9 appears overly condensed.\n\n2、 Given that this is a text-to-image generation work, the paper would benefit from a more comprehensive set of visual results, including additional comparisons with state-of-the-art methods.\n\n1. The discussion in Section 4.3 regarding the timesteps of LR guidance extraction is intriguing. It would be valuable to see final generated images using different timesteps for guidance, rather than just attention visualizations.\n2. The authors' use of LR guidance bears similarities to recent diffusion-based image super-resolution methods. A comparative discussion of these approaches could provide valuable context.\n3. Given the method's design, it should theoretically support even higher resolutions (e.g., 5K, 6K). Have the authors explored this possibility?\n4. The visual results demonstrating the limitations mentioned in the paper could be included in the supplementary materials to provide a more comprehensive understanding of the method's constraints.\n5. Will the code and pre-trained models be made publicly available to facilitate reproducibility and further research in this area?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "VP7U3aFaqE",
"review_text": "This paper introduces UltraPixel, a method for generating high-quality ultra-high-resolution images. It utilizes the semantics-rich representations of lower-resolution images in a later denoising stage to guide the overall generation of highly detailed high-resolution images. The method incorporates implicit neural representations for continuous up-sampling and scale-aware normalization layers that are adaptable to various resolutions. The experimental results show that it has excellent ability in generating high-resolution images of different sizes.\n\n1. The introduction of implicit neural representations for continuous up-sampling and scale-aware normalization layers adaptable to various resolutions is a creative solution that addresses a challenge in the scalability of image generation models.\n\n2. The methodology is well-articulated, with a clear explanation of how the model manages to generate high-quality images while maintaining computational efficiency.\n\n3. The ablation experiments are thoroughly conducted, systematically revealing the contribution of each component to the overall performance.\n\n4. The paper proposes an innovative method for generating high-quality, ultra-high-resolution images efficiently, tackling a major challenge in image synthesis.\n\n1. The explanation of the implicit neural representation (INR) requires further clarity regarding its ability to enable continuous upscaling. Moreover, an in-depth analysis and dedicated ablation study of the Scale-Aware Normalization (SAN) feature would provide insights into its role in resolution adaptability.\n\n2. To underscore the advantages of the proposed framework, the experiments should be expanded to include comparative analyses with Latent Diffusion Model (LDM)-based and pixel-based image synthesis methods, showcasing the superior performance of the framework in high-resolution image generation tasks.\n\n1. Why is the perturbed version $𝑧_{1}$ preferred over the $𝑧_{0}$ from the low-resolution synthesis for guidance purposes?\n\n2. Regarding Figure 4, could you clarify why additional learnable tokens are integrated with the guidance tokens for the self-attention mechanism, instead of solely relying on the guidance tokens? What unique function do these learnable tokens serve?\n\n3. Can you outline the computational steps involved in the implicit neural representation? Is there a need for manually specifying positions?\n\n4. What justifies the forms of Equations (3) and (4), which amalgamate terms with distinct physical interpretations? Is there an underlying principle that supports their direct summation, as it seems to go against intuitive reasoning?\n\n5. In the context of line 255, the use of 𝑡=0.5 and 𝑡=0.05 is ambiguous. Are these intended to denote specific sampling stages within the low-resolution synthesis—fixed and terminal steps, respectively? Consequently, is 𝑡=1 encompassed within the scenario where 𝑡=0.5?"
},
{
"confidence": 5,
"rating": 4,
"review_id": "4axynMPINH",
"review_text": "This paper presents a method for Ultra-High-Resolution image generation from text prompts. The method is based on StableCascade. The original StableCascade can generate 1024x1024 images. This paper proposes another HR latent diffusion model that can utilize the guidance from 1024 x 1024 images and generate 4096 x 4096 images. Unlike previous methods that directly use the low-resolution output, the method chooses to use the features of the base model as guidance and proposes an implicit-based method to upsample the low-res guidance features.\n\n- The idea of guidance feature and implicit-based upsampling is simple but effective.\n- The paper reads well, and the presentation is clear.\n- The results are very impressive. \n- The proposed method only needs light-weight finetuning from StableCascade.\n\n- More validation and analysis are needed. In the comparison, a traditional image upsampler is used, but the traditional image upsampler is often smaller and also trained on much smaller datasets. For a fair comparison, it will be good to compare with the state-of-the-art generative image upsampler such as StableSR and Stable Diffusion Upscaler.\n- A comparison of this baseline is missing: instead of using guidance features, the HR latent model can directly use the LR images / latents from the base model.\n- It would be good to have visual results of the ablation on LR guidance timesteps.\n- Ablation on scale-aware normalization is missing.\n\n- Is the base model frozen from StableCascade?\n- Is the implicit model jointly trained with the HR latent model?"
}
] |
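A minimal sketch of one ingredient named in the UltraPixel abstract and reviews above — a scale-aware normalization whose affine parameters are predicted from the target resolution, so one model can adapt to 1K/2K/4K outputs. Everything here (the sinusoidal scale embedding, the shapes, the parameter names) is an illustrative assumption rather than the paper's actual layer.

```python
import numpy as np

def scale_embedding(height, width, dim=16):
    """Toy sinusoidal embedding of the target resolution (the paper's
    exact resolution conditioning is assumed to differ)."""
    s = np.log(np.array([height, width], dtype=np.float64))
    freqs = np.exp(np.linspace(0.0, 3.0, dim // 4))
    angles = np.outer(s, freqs).ravel()
    return np.concatenate([np.sin(angles), np.cos(angles)])

def scale_aware_norm(x, height, width, W_gamma, W_beta, eps=1e-5):
    """Normalize token features, then apply a gain/bias predicted from
    the output resolution."""
    e = scale_embedding(height, width, dim=W_gamma.shape[0])
    gamma, beta = 1.0 + e @ W_gamma, e @ W_beta
    mu, var = x.mean(-1, keepdims=True), x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps) * gamma + beta

rng = np.random.default_rng(0)
tokens = rng.normal(size=(10, 32))                       # 10 tokens, 32 channels
W_g = rng.normal(size=(16, 32)) * 0.01
W_b = rng.normal(size=(16, 32)) * 0.01
print(scale_aware_norm(tokens, 4096, 4096, W_g, W_b).shape)  # (10, 32)
```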
vo5LONGAdo | Remix-DiT: Mixing Diffusion Transformers for Multi-Expert Denoising | Transformer-based diffusion models have achieved significant advancements across a variety of generative tasks. However, producing high-quality outputs typically necessitates large transformer models, which result in substantial training and inference overhead. In this work, we investigate an alternative approach involving multiple experts for denoising, and introduce RemixDiT, a novel method designed to enhance output quality at a low cost. The goal of RemixDiT is to craft N diffusion experts for different denoising timesteps, yet without the need for expensive training of N independent models. To achieve this, RemixDiT employs K basis models (where K < N) and utilizes learnable mixing coefficients to adaptively craft expert models. This design offers two significant advantages: first, although the total model size is increased, the model produced by the mixing operation shares the same architecture as a plain model, making the overall model as efficient as a standard diffusion transformer. Second, the learnable mixing adaptively allocates model capacity across timesteps, thereby effectively improving generation quality. Experiments conducted on the ImageNet dataset demonstrate that RemixDiT achieves promising results compared to standard diffusion transformers and other multiple-expert methods. | https://openreview.net/pdf/1b07fc3963a665cf6f8f91e36966e55baf261cdb.pdf | [
{
"confidence": 5,
"rating": 6,
"review_id": "uRlQWBcbUk",
"review_text": "This paper proposes Remix-DiT, which creates multiple experts by mixing fewer basis diffusion transformers, allowing each expert to specialize in the denoising task for corresponding timestep intervals. It achieves performance improvements by having each expert responsible for a larger number of timestep intervals with fewer total trainable parameters than previous multi-expert methods. Also, the paper analyzes the coefficients of how much each expert uses bases, demonstrating the denoising task similarity for adjacent timesteps, as well as the use of specialized bases for lower timesteps.\n\n* The paper is structured well, making it easy to understand and follow.\n\n* The proposed mixing basis strategy is interesting as it achieves better performance with fewer parameters compared to existing multi-expert methods.\n\n* Ablation studies on mixing methods are comprehensive.\n\n* **Lack of experiments.** The authors have to validate the performance of Remix-DiT by reporting comparisons with previous methodologies on the FFHQ or MS-COCO datasets. It would make the manuscript more solid if Remix-DiT achieves consistent performance improvements on multiple datasets.\n\n* **Lack of comparison.** There are two methods, DTR [1] and Switch-DiT [2], to address the multi-task learning aspect of diffusion training by designing distinct denoising paths for 1000 timesteps in a single model. These are more parameter-efficient methods where they use no additional parameters or 10%, respectively. The authors should analyze them with respect to Remix-DiT.\n\n[1] Park et al., Denoising Task Routing for Diffusion Models, ICLR 2024.\n\n[2] Park et al., Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts, ECCV 2024.\n\n* Is the Exponential Moving Average (EMA) model used to further train a pre-trained diffusion model?\n\n* It would be better that the authors provide an affinity matrix between 20 timestep clusters based on the learned mixing coefficients. I think the affinity matrix could explain the similarity between denoising tasks."
},
{
"confidence": 4,
"rating": 7,
"review_id": "EZld1XOId6",
"review_text": "The paper introduces Remix-DiT, a modification to the diffusion transformer architecture that incorporates the multi-expert denoiser framework during both training and inference. Unlike traditional multi-expert methods that train $N$ separate individual experts independently for each time interval, Remix-DiT employs $K$ base models combined with $N$ mixing coefficients to dynamically compute time-specific experts. This approach enhances efficiency and leverages task similarities between adjacent intervals more effectively. Experiments on ImageNet demonstrate that Remix-DiT improves the performance of DiT across various model sizes.\n\n- The paper is well-motivated and represents a valuable step towards integrating the multi-expert denoising framework into standard diffusion models. \n\n- The main idea of the paper (using global mixers to compute the final experts) is novel and interesting to me in this context.\n\n- The method is simple and effective, making it more suitable for practical use cases. \n\n- The experiments are well-designed, and the ablations clearly illustrate the impact of various aspects of Remix-DiT. \n\n- The paper is well-written and generally easy to understand.\n\n- While the authors show the benefits of Remix-DiT on finetuning a pretrained DiT model, it would be interesting to see its effect when training all components from scratch. If the compute budget allows, I suggest that the authors also add this experiment for better insights into what happens if one uses the remixing scheme from the beginning of training (perhaps after a small warmup)\n\n- The performance gain seems to diminish as the size of the base model increases. Hence, a more detailed discussion on this issue is needed for the final version. For example, the performance gain is almost 30% for DiT-S, while it drops to only 15% for DiT-L.\n\n\n**Minor comments:**\n\nPlease fix the following issues in terms of writing in your draft:\n- L114 \"refer to\" -> \"refers to\"\n- L144 -> citation is missing\n- L215 -> I assume 100M steps should be 100K steps\n- L290 -> it seems that it should be written as N experts because K is the number of base models\n- L295 -> \"can found\" should be \"can find\"\n\nPlease also cite GigaGAN [1] as the mixing part of the paper is related to their method of mixing different convolution kernels during training.\n\n[1] Kang M, Zhu JY, Zhang R, Park J, Shechtman E, Paris S, Park T. Scaling up gans for text-to-image synthesis. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023 (pp. 10124-10134).\n\n1. The EDM paper [1] suggests that for denoising, only the middle noise levels are important, while this paper suggests that the noise levels towards 0 are more crucial. Do you have an intuition on the difference between these two conclusions?\n\n2. Is the performance of Remix-DiT more sensitive to the number of sampling steps compared to a normal DiT? In other words, how do the experts perform when using a deterministic sampler with low NFEs (<50)?\n\n3. Can you also visualize some examples generated by DiT and Remix-DiT? While the metrics are valuable, a qualitative evaluation is interesting as well.\n\n[1] Karras T, Aittala M, Aila T, Laine S. Elucidating the design space of diffusion-based generative models. Advances in neural information processing systems. 2022 Dec 6;35:26565-77."
},
{
"confidence": 4,
"rating": 7,
"review_id": "1tbPXEydjf",
"review_text": "The paper proposes Remix-DiT, a model architecture designed to enhance the capacity of a standard DiT model without significantly increasing inference costs. This is accomplished by training mixing coefficients to adaptively fuse multiple DiT models and developing specialized experts for multi-expert denosing. A key advantage highlighted in this paper is that Remix-DiT achieves better generation quality while maintaining inference speed comparable to that of a standard DiT. Experimental results on ImageNet-256 demonstrate favorable outcomes compared to baseline methods.\n\n1.\tThe visualization results in Figure 4 are very interesting. It seems that the model has a certain preference in allocating the capacity of basis models, with clear segmentation across the timesteps. Additionally, a high coefficient is observed at early timesteps, such as 0-150. Does this imply that those steps are more challenging for the diffusion model to learn?\n2.\tThe idea of mixing multiple basis models is clear and easy to implement. It does not requires the expensive training of independent experts for different steps.\n\n1.\tUsing multiple base models may introduce more training costs. However, in Table 3, the GPU memory usage only slightly increases from 13G to 16G for DiT-B. Can the authors provide more details about the reason? Will Remix-DiT introduce a substantial backward and forward footprint?\n2.\tThis method utilizes the pre-trained model as the initialization. This might make the mixed experts always the same after mixing since they are working on the same basis model initially. Will this be a problem?\n3.\tWhy does the proposed method outperform naively training independent experts? In this method, the experts are crafted by mixing, which should theoretically be upper bounded by the naïve method mentioned above.\n\nPlease refer to the weaknesses."
},
{
"confidence": 4,
"rating": 5,
"review_id": "ae5rwXtJ07",
"review_text": "To improve the generation quality of diffusion transformers, Remix-DiT proposes to enhance output quality at a lower cost and aims to create N diffusion experts for different denoising timesteps without the need for expensive training of N independent models. Remix-DiT achieves this by employing K basis models (where K < N) and using learnable mixing coefficients to adaptively craft expert models. This approach offers two main advantages: although the total model size increases, the model produced by the mixing operation shares the same architecture as a plain model, maintaining efficiency comparable to a standard diffusion transformer. Additionally, the learnable mixing adaptively allocates model capacity across timesteps, effectively improving generation quality. Experiments on the ImageNet dataset show that Remix-DiT achieves promising results compared to standard diffusion transformers and other multiple-expert methods.\n\nNovelty: Model mixers for efficient multi-expert diffusion model training is innovative and unique.\n\nSignificance: Addressing the challenge of efficient training of multi-expert diffusion transformers is significant in the field of diffusion models.\n\nMethodology: The proposed algorithm is well-formulated and clearly explained.\n\nResults: Experimental results demonstrate promising improvements over existing methods such as DiT.\n\n1. Lack of Visualization Results: The paper does not include any visualization results. Providing visual examples of generated outputs is crucial for qualitatively evaluating the effectiveness of the proposed method.\n\n2. Insufficient Motivation for Multi-Expert Training: The rationale behind adopting a multi-expert training approach is not fully well-motivated, particularly in the context of quantitative comparisons. A more detailed explanation of why multi-expert training is beneficial and how it compares quantitatively to other methods would strengthen the argument. Clarifying the advantages and potential trade-offs in performance and efficiency would provide a more compelling case for this approach.\n\n3. High Training Cost: The training cost associated with the proposed method is substantial. It would be beneficial to provide a thorough analysis of the computational resources, time, and energy required for training compared to other existing methods. Discussing potential ways to mitigate these costs or offering insights into why the increased training cost is justified by the performance gains would add valuable context for evaluating the practicality of the method.\n\n1. Performance Comparison Between Multi-Expert and Single Larger Models: Is it possible for the multi-expert small models to outperform a single, larger model? To fully validate the potential of the multi-expert approach, it is crucial to provide a thorough performance comparison. This should include quantitative metrics and benchmarks that demonstrate the advantages, if any, of using multiple experts over a single larger model in terms of both output quality and computational efficiency.\n\n2. Scalability and Efficiency of Increasing the Number of Experts: If the number of experts is increased for the same basis models, how easily can the system be scaled, and does this lead to more efficient training? It would be important to discuss the scalability of the multi-expert framework, including any potential challenges or limitations in transferring the model to a larger number of experts. 
Additionally, insights into how the efficiency of training might be affected by increasing the number of experts would be valuable."
}
] |
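To make the mixing mechanism described in the Remix-DiT abstract and reviews above concrete, here is a toy sketch: K basis weight vectors are combined with softmax-normalized, timestep-dependent coefficients to craft one of N experts, each the size of a plain model. The flattened-parameter view, the timestep binning rule, and all variable names are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, P = 4, 20, 1000        # basis models, timestep experts, params per model (toy)

bases = rng.normal(size=(K, P))        # flattened weights of the K basis models
mix_logits = rng.normal(size=(N, K))   # learnable mixing coefficients

def craft_expert(t, num_timesteps=1000):
    """Pick the expert index for timestep t and mix basis weights into a
    single plain-sized model, so inference cost is unchanged."""
    idx = min(int(t / num_timesteps * N), N - 1)
    w = np.exp(mix_logits[idx])
    w /= w.sum()                        # softmax over the K bases
    return w @ bases                    # (P,) mixed weights

expert_early = craft_expert(t=50)      # low-noise timesteps
expert_late = craft_expert(t=900)      # high-noise timesteps
print(expert_early.shape, np.allclose(expert_early, expert_late))
```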
vjw4TIf8Bo | PaDeLLM-NER: Parallel Decoding in Large Language Models for Named Entity Recognition | In this study, we aim to reduce generation latency for Named Entity Recognition (NER) with Large Language Models (LLMs). The main cause of high latency in LLMs is the sequential decoding process, which autoregressively generates all labels and mentions for NER, significantly increasing the sequence length. To this end, we introduce Parallel Decoding in LLM for NER (PaDeLLM-NER), an approach that integrates seamlessly into existing generative model frameworks without necessitating additional modules or architectural modifications. PaDeLLM-NER allows for the simultaneous decoding of all mentions, thereby reducing generation latency. Experiments reveal that PaDeLLM-NER significantly increases inference speed, which is 1.76 to 10.22 times faster than the autoregressive approach for both English and Chinese. Simultaneously, it maintains prediction quality, as evidenced by performance on par with the state-of-the-art across various datasets. All resources are available at https://github.com/GeorgeLuImmortal/PaDeLLM_NER. | https://openreview.net/pdf/d41c719a3d75bbd4f587ed89d649f8de4444d47f.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "spexdGTOXl",
"review_text": "This paper introduces a novel approach to reduce generation latency in Named Entity Recognition (NER) using Large Language Models (LLMs). The primary issue addressed is the high latency caused by the sequential decoding process in LLMs, which significantly lengthens the sequence by autoregressively generating all labels and mentions for NER. To tackle this, the authors propose Parallel Decoding in LLM for NER (PaDeLLM-NER), which integrates into existing generative model frameworks without requiring additional modules or architectural changes. PaDeLLM-NER enables simultaneous decoding of all mentions, effectively reducing generation latency. Experimental results show that PaDeLLM-NER can improve the inference speed than the traditional autoregressive approach.\n\n- The parallel decoding strategy is well-designed and experimental results prove the effectiveness.\n- The authors provide comprehensive experiments with different setting and with furthre analysis.\n- The paper is easy to follow.\n\n- The proposed method cannot improve the inference speed in scenarios where only one type of entity is predicted.\n- Since the method focuses on the inference efficiency of LLMs-based NER, it is better to report both inference speed and performance compared to zero-shot (Table 3) and supervised (Tables 4 and 5) methods. Notably, Table 3 only reports performance without considering the efficiency of different LLMs. Furthermore, why not report the performance of AutoReg_aug and AutoReg_struct in Table 3?\n- For better understanding of the training resource usage when compared with other methods, it is better to report the base language models used (SOTA methods) in Tables 4 and 5.\n- The writing of this paper could be further improved. For example, Line 219, “As per Ning et al...” appears to be a typo; the meanings of the underline (second performance) and bold (best performance) are not provided; and there is no explanation for why “*” indicates that results are not directly comparable in the Table 5 caption.\n- Comparing with fixed few-shot in-context learning of LLMs may also be worth considering, as caching the fixed prompt could improve the inference speed of LLMs.\n\nPlease see the weakness."
},
{
"confidence": 4,
"rating": 9,
"review_id": "wKA2XWPwHg",
"review_text": "They create an NER system where an LLM first outputs the number of mentions there are of a given type (for all possible types). Then all mentions can be generated in parallel.\n\nThis results in faster inference times as each generation is short, and they can be done in parallel.\n\nThey compare to several different baseline on multiple NER datasets in multiple different settings.\n\nTheir method is much faster than others.\n\nTheir reformulation of NER as predicting (label, mention) pairs removes a critical component of classical NER, the actual alignment of the mention to the tokens. Polysemous words are often mentions in some context and not in others and it if often important to know which one was the actual mention, especially if it is used for things like editing downstream.\n\nThe deduplication strategy is very aggressive and removes the possibility that some surface text is a label for multiple types in a single sentence. For example, \"It is England vs. Italy on this sunny day in England\", England is both a place (LOC) and a sports team (ORG) this would get filtered by their setup.\n\nThe prose's definition of \" prediction quality [...] that is on par\" is rather loose, with their model being 6 points behind on average for zero-shot (table 3) and behind by a point of two on most supervised datasets.\n\nHow do you expect this to scale to NER datasets like Ontonotes where a there are 20+ different mention category? Similarly what about long documents that could have 10-30+ mentions of a given type?\n\nDid you see inconsistencies in the model outputs? For example a model that output that there is `1` person but then generated <mention 2> ${person name}?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "yhyCeH4zVj",
"review_text": "This paper presents PaDeLLM-NER, a novel approach for accelerating Named Entity Recognition (NER) inference in Large Language Models (LLMs) through parallel decoding. A reformulation of the NER task that enables parallel generation of label-mention pairs, significantly reducing inference latency. A two-step inference process involving mention count prediction and parallel mention generation.\nExtensive experiments demonstrated significant speedups (1.76x to 10.22x faster) compared to autoregressive approaches while maintaining or improving prediction quality across multiple datasets and two languages.\n\nThe parallel decoding strategy for NER is innovative and addresses a significant bottleneck in LLM inference speed, which is important in some speed-sensitive applications. The authors conduct extensive experiments across multiple datasets, languages, and settings (zero-shot and supervised), proving the method's effectiveness. The reported speedups are substantial and could have a meaningful, practical impact on NER applications. The method is compatible with existing LLM architectures and can be integrated with other acceleration techniques. The methodology is well-explained with helpful diagrams and examples.\n\nSome details and corner cases are not well explained. For example, I didn't see the token location information in Figure 2. If the input has multiple and same mentions (e.g., \"Donald Trump owns the Trump Organization\" ), how does this framework distinguish with the same mentions? (e.g. Trump in the above example)\n\nIn addition, it is not clear how the de-duplicate model processes the partially duplicated mentions. For example, in the above case, the \"Trump organization\" was recognized as ORG, and what if the person module predicted the \"Trump\" in the \"Trump organization\" as a person? Will the de-duplicate model filter this case?\n\nNo"
},
{
"confidence": 4,
"rating": 6,
"review_id": "vZKjT60W3a",
"review_text": "This paper proposes an interesting extension of the parallel text generation paradigm, where the authors tackle the NER task and propose to generate the labels independently. For each label prediction, the proposed method first predicts the number of mentions and then predicts the exact entity. The results show that the proposed model performs reasonably well, while achieving faster inference.\n\n1. The proposed method is a pioneer work to accelerate LLM generation following the parallel generation paradigm. \n2. We do observe significant speed-up empirically, which suggests the proposed method may be of value in real word applications.\n\n1. The importance of the two-step prediction for each entity is not justified. I feel there should be a baseline such that the multiple mentions can be predicted together in an autoregressive fashion. For example, I can predict \"entity type: LOC Italy English\" as a whole.\n2. Fundamentally, parallel predictions should be weaker than autoregressive predictions due to the drop in dependency capturing. However, we observe from Table 4 that AR models are noticeably worse than the parallel approach. Since these results contradict common wisdom, there needs more effort to justify them. For example, the authors may need to reveal the full training/testing configurations of both the AR and parallel models, and there could be some more detailed error analysis to show how AR models are making more mistakes than the parallel approach.\n3. The proposed approach may face difficulty when a word is used multiple times with different types. For example, in \"Washington lives in Washington,\" the proposed approach may predict \"LOC\" and \"PER\" for both \"Washington\"; however, it can not align them because the parallel approach is ordered agnostic among entities. \n4. The proposed method needs finetuning to adjust the LLMs, which can be difficult when it comes to very large LLMs.\n\n1. Is the duplication issue, as mentioned in Figure 2, very common? Do you have statistics for this? \n2. Do you know if the testing datasets are in the training data of the LLM?"
}
] |
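The PaDeLLM-NER abstract and reviews above describe a two-step scheme: first predict the number of mentions per label, then decode every (label, mention-index) pair independently and de-duplicate the results. The sketch below mimics that control flow around a dummy stand-in for the LLM; the prompt formats, the thread-based parallelism, and the naive de-duplication rule are all assumptions for illustration, not the paper's exact procedure.

```python
from concurrent.futures import ThreadPoolExecutor

LABELS = ["PER", "ORG", "LOC"]

def llm_generate(prompt: str) -> str:
    # Dummy stand-in for the fine-tuned LLM so the sketch runs;
    # a real system would issue a (batched) model call here.
    return "1" if prompt.endswith("count:") else "Example Mention"

def predict_entities(text: str):
    # Step 1: one short generation per label to obtain the mention count.
    counts = {lab: int(llm_generate(f"{text}\n{lab} count:")) for lab in LABELS}
    # Step 2: decode every (label, mention index) pair independently,
    # which is what permits parallel/batched decoding.
    jobs = [(lab, i) for lab in LABELS for i in range(1, counts[lab] + 1)]
    with ThreadPoolExecutor() as pool:
        mentions = list(pool.map(
            lambda j: (j[0], llm_generate(f"{text}\n{j[0]} mention {j[1]}:")), jobs))
    # Step 3: naive de-duplication by surface form (the paper's rule,
    # which keeps the higher-probability label, is more involved).
    seen, result = set(), []
    for lab, mention in mentions:
        if mention not in seen:
            seen.add(mention)
            result.append((lab, mention))
    return result

print(predict_entities("Donald Trump owns the Trump Organization"))
```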
vjsd8Bcipv | $\epsilon$-Softmax: Approximating One-Hot Vectors for Mitigating Label Noise | Noisy labels pose a common challenge for training accurate deep neural networks. To mitigate label noise, prior studies have proposed various robust loss functions to achieve noise tolerance in the presence of label noise, particularly symmetric losses. However, they usually suffer from the underfitting issue due to the overly strict symmetric condition. In this work, we propose a simple yet effective approach for relaxing the symmetric condition, namely **$\epsilon$-softmax**, which simply modifies the outputs of the softmax layer to approximate one-hot vectors with a controllable error $\epsilon$. Essentially, ***$\epsilon$-softmax** not only acts as an alternative for the softmax layer, but also implicitly plays the crucial role in modifying the loss function.* We prove theoretically that **$\epsilon$-softmax** can achieve noise-tolerant learning with controllable excess risk bound for almost any loss function. Recognizing that **$\epsilon$-softmax**-enhanced losses may slightly reduce fitting ability on clean datasets, we further incorporate them with one symmetric loss, thereby achieving a better trade-off between robustness and effective learning. Extensive experiments demonstrate the superiority of our method in mitigating synthetic and real-world label noise. | https://openreview.net/pdf/f2d333e0e79783e10cbe29dab26d04b300ab7d1c.pdf | [
{
"confidence": 5,
"rating": 5,
"review_id": "S9UoASaNhu",
"review_text": "This paper proposes $\\epsilon$-softmax to deal with label noise. $\\epsilon$-softmax modifies the outputs of the softmax layer to approximate one-hot vectors with a controllable error $\\epsilon$. Both theoretical and empirical studies show the effectiveness of the proposed method.\n\n1. The writing of this paper is good.\n2. The robustness of the proposed loss is theoretically proved.\n\n1. I think the motivation or the underlying reason for the effectiveness needs further explanation.\n2. In experiment, the advantage of the proposed method over the competitors is probably not statistically significant.\n\n1. The term \"symmetric condition\" in abstract needs further explanation.\n2. In the implementation steps in Line 61, is $\\mathbf{p}(\\cdot)$ the same as ${p}(\\cdot)$? Or what is the relationship between these two notations? I think the mathematical notations should be strictly used and defined.\n3. The robustness of the proposed $\\epsilon$-softmax loss is theoretically justified. However, I'm curious to know the insight, or the underlying reason for its robustness. The authors wrote that \"The distinctive attribute of $\\epsilon$-softmax lies in its guarantee to possess a controllable approximation error $\\epsilon$ to one-hot vectors, thus achieving perfect constraint for the hypothesis class.\" But, I cannot figure out why controlling approximation error $\\epsilon$ to one-hot vectors and achieving perfect constraint for the hypothesis class are useful for handling label noise? What is the direct reason? It would be better if the authors can provide some intuitive explanations.\n4. From the experiments, I note that the performance is a bit sensitive to different selection of m. Therefore, is it possible to give some guidance in choosing m for practical use? Besides, will better performance be obtained if we add the $\\epsilon$-softmax loss functions with different m?\n5. From the experimental results in Table 2, 4, I can see that the improvement of the proposed method over other methods is quite marginal. I guess if we do statistical significance test, maybe such improvement will not be statistically significant."
},
{
"confidence": 4,
"rating": 7,
"review_id": "Py8fejDZWi",
"review_text": "This submission proposes a enhanced softmax layer for label-noise learning, namely $\\epsilon$-softmax. By incorporating with the well-known $\\epsilon$-relaxation, the proposed $\\epsilon$-softmax can regularize the outputs of the model and avoid fitting the label-noise sample. This simple and plug-and-play method theoretically bounds the output logits to be an approximated one-hot vector. Extensive experiments demonstrate the effectiveness of the proposed method.\n\n- The proposed method is simple, plug-and-play, and effective. Unlike other label-noise robust losses, the proposed method not only works well by itself, but also can be integrated with other label-noise robust method such as DivideMix. To the best of my knowledge, this could one of the first works endow such property.\n- The theoretical analysis is comprehensive and make sense. The theoretical results suggests the proposed method possesses the Top-K error consistency and label-noise robustness.\n- The analysis between the most related previous works is in reason. The basic idea that balances the label-noise robustness and learning effectiveness has been researched for a long time, e.g., GCE. This submission clearly presents the connection between the proposed method and other symmetric losses.\n- The empirical results is effective and enough. This submission presents the comparison results between many label-noise robust losses and the proposed method achieves the best performance in most cases. Additionally, this submission provides the experimental results that demonstrated the plug-and-play property of the proposed method on sample-selection based method and loss-correction based method.\n\n- The ablation studies on the gradient clipping should be conducted and providing experimental results with different backbones would be better.\n- It is exhaustive and labor-expensive to find the optimal $m$ for diverse datasets.\n\n- Why choose the MAE as the loss to incorporate with ${CE_\\epsilon}$? Following GCE and SCE, the MAE is indeed a practical and evaluated choice. Do there have any alternatives?\n- What is the connection or relationship between logit adjustment and $\\epsilon$-softmax?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "bEFTq3DDkf",
"review_text": "This manuscript proposes a novel method to approximate the symmetric condition of the loss function, which is necessary for robustness to label noise. Specifically, the proposed method, named \\\\( \\\\epsilon \\\\)-softmax, can adjust the model output to approximate one-hot vector. However, the proposed method alone suffers from underfitting, so the authors combined it with MAE to achieve better performance. The authors evaluated the proposed method on datasets with different noise types and rates.\n\n1. This manuscript proposes a novel and simple method to approximate the symmetric condition of loss function.\n2. The theoretical analysis focuses not only on robustness to label noise but also on the top-k consistency of the loss function.\n3. The proposed method was evaluated on various noise types and rates, including class-dependent noise and real-world noise.\n4. It good to see the authors compare their proposed method with temperature-dependent softmax combined with MAE. The experimental results demonstrate the superiority of their proposed method compared to temperature-dependent softmax.\n\n1. The theoretical discussion with temperature-dependent softmax is missing. As the authors mentioned in L42 to L53, there are other output restriction-based methods in the literature, such as temperature-dependent softmax. Although the authors claim that these methods “lack predictability, fail to achieve a quantitative approximation to one-hot vectors, and exhibit limited effectiveness,” there is no detailed discussion on why the proposed \\\\( \\\\epsilon \\\\)-softmax has superior properties.\n2. A direct comparison with sparse regularization [1] is missing. Sparse regularization utilizes temperature- dependent softmax, which this manuscript has already compared, to approximate one-hot vector output. However, sparse regularization also employs an additional regularization term, \\\\( \\\\ell_p \\\\)-norm \\\\( \\\\| p(\\\\cdot | x) \\\\|^p_p \\\\) to enhance performance, and this regularization term is equivalent to MAE only if \\\\( p = 1 \\\\). It’s necessary to highlight the advantages of the proposed method compared to this highly relevant approach.\n3. There is no ablation study on \\\\( \\\\alpha \\\\) and \\\\( \\\\beta \\\\). As the authors mentioned in L218, \\\\( \\\\epsilon \\\\)-softmax alone suffers from a loss in fitting ability, and they combined it with MAE to balance the robustness and effective learning. However, without the relevant ablation study, it’s unclear how this “trade-off” is achieved.\n4. The theoretical discussions and experiments regarding instance-dependent label noise are overlooked. In recent years, the instance-dependent label noise has attracted increasing attention [2,3,4]. Experimenting the proposed method on instance-dependent label noise can provide a better understanding of how the proposed method performs with different types of label noise. I encourage the authors to include related discussion in the revised manuscript.\n\n[1] Learning with noisy labels via sparse regularization, ICCV, 2021.\n\n[2] Part-dependent Label Noise: Towards Instance-dependent Label Noise, NeurIPS, 2020.\n\n[3] Learning with Instance-Dependent Label Noise: A Sample Sieve Approach, ICLR, 2021.\n\n[4] Instance-Dependent Label-Noise Learning With Manifold-Regularized Transition Matrix Estimation, CVPR, 2022.\n\n1. Is \\\\( \\\\epsilon \\\\)-softmax + MAE still All-\\\\( k \\\\) calibrated and All-\\\\( k \\\\) consistency?\n2. 
Can the proposed method perform better compared to sparse regularization?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "cwEb4uunDn",
"review_text": "The paper introduces “ϵ-softmax,” a method to adjust softmax outputs for better approximation to one-hot vectors, thereby mitigating the impact of label noise in classification tasks. The approach modifies the softmax layer outputs to include a controllable error term ϵ, aiming to improve noise robustness without extensive alteration to the network architecture. The authors provide theoretical backing for the effectiveness of ϵ-softmax in achieving noise-tolerant learning across various loss functions. Extensive experiments with both synthetic and real-world noisy datasets are conducted to validate the claims.\n\n1.ϵ-softmax is presented as a plug-and-play module compatible with any existing classifier that uses a softmax layer, enhancing its practical utility.\n\n2.The paper proves that ϵ-softmax can achieve a controlled approximation to one-hot vectors, which is significant for learning with noisy labels.\n\n3.The methodology is backed by extensive experimental results showing its superiority over existing methods in handling label noise, with detailed results across multiple datasets and noise configurations.\n\nThis paper should pay attention to the axis labels of its figures. In Figure 1, the x-label is Epoch and y-label is Test Accuracy. In Figures 2 and 3, the axis labels are missing.\n\n1.This paper seems to focus on classification tasks. Does ε-Softmax also work well for regression tasks?\n\n2.Is ε-Softmax computationally efficient in terms of training time compared with baseline methods?\n\n3.This paper shows the excellent performance of ε-Softmax for label noise. Does ε-Softmax work for input (features) noise as well?\n\n4.I assume that in this paper, for the label noise models, both training and testing labels are noisy. I am curious about the performance when the training labels are clean and the testing labels are noisy."
},
{
"confidence": 4,
"rating": 5,
"review_id": "FS3mp6lxaN",
"review_text": "The author proposes the epsilon-softmax technique as a method to address label noise. Epsilon-softmax facilitates peaky predictions by increasing the value of the highest prediction, and it also functions to reduce the magnitude of the gradient when the prediction aligns with the given label. The author introduces the concept of All-k consistency to interpret this paradigm and presents experiments on prominent real-world benchmark datasets in the field of label noise learning, specifically WebVision and Clothing1M.\n\nThe proposed epsilon softmax by the author takes a different approach compared to existing symmetric-like functions, which aim to reduce the gradient value for entirely incorrect predictions. From my understanding, the author’s approach reduces the gradient value for predictions that match the given label. By providing the value and interpretation of this novel approach, the author has significantly broadened the scope of the label noise learning field with their straightforward yet impactful idea. The proposed method has the advantage of simple gradient computation without requiring additional high-cost operations, making it applicapable to other Label Noise Learning (LNL) methods. The author demonstrates the experimental value of this approach by applying it to both the cross-entropy loss function and the focal loss function.\n\nThis section addresses two major concerns. For minor concerns, please refer to the \"Questions\" part.\n1. The author mentions in line 40 the necessity for a method that can achieve both effective learning and robustness. While the proposed method offers a different perspective compared to symmetric-like loss methods, it is challenging to assert that it fully meets this necessity. Ironically, to balance the trade-off between effective learning and robustness, the author combines CE_(epsilon) loss and MAE loss. This ability to manage trade-offs is also found in other symmetric-like loss-based methods. In this context, I am interested in understanding why the proposed method might offer a better trade-off and whether it truly provides a better trade-off. I attempted to verify this through experimental comparison, but several issues arose: (1) There are no experiments that allow for a comparison of trade-offs. Experiments demonstrating the trade-off by varying alpha and beta are necessary. (2) The performance of existing methods is reported to be lower. For example, refer to the SCE paper.\n2. The proposed method introduces two additional hyperparameters: m and alpha / beta. Unfortunately, based on the recorded experimental results, the proposed method appears to be sensitive to these hyperparameters. If this is not true, providing comprehensive experimental results that show the effects of varying these hyperparameters would enhance the perceived value of the proposed method. \nAnd if the proposed method is indeed sensitive to changes in hyperparameters, I would like to see evidence that it is not sensitive to hyperparameter variations within the in-distribution domain. I recommend performing validation and test processes to identify optimal hyperparameters (refer to the processes outlined in the GJS and JS papers).\n\n1. Based on the gradient analysis, it appears that when m is sufficiently large, the sensitivity of performance to m would not be significant. However, as shown in Table 3, the difference in performance is quite notable. Does the author have an explanation for this discrepancy? 
Additionally, has the author investigated the results when m is infinite, meaning the CE loss is not used when the prediction matches the label?\n2. Has the author ever checked the results of using only the CE_(epslion) loss function? Sections 3.1 to 3.3 and Lemma 2 suggest the importance of the single CE_(epslion) loss function."
}
] |
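Based on the ε-softmax abstract and reviews above (outputs pushed toward one-hot vectors with an error controlled through a constant m, then combined with MAE), here is a toy sketch of one plausible form. The exact way m enters the softmax, the loss weights, and the variable names are guesses for illustration from the reviewers' descriptions, not the paper's definition.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def eps_softmax(logits, m=10.0):
    """One plausible reading from the reviews: add a constant m to the
    largest logit so the output approximates a one-hot vector; larger m
    corresponds to a smaller approximation error epsilon."""
    z = logits.copy()
    z[np.argmax(z)] += m
    return softmax(z)

def robust_loss(logits, target_onehot, alpha=1.0, beta=1.0, m=10.0):
    """Combined objective alpha * CE_eps + beta * MAE, mirroring the
    paper's trade-off between robustness and fitting ability."""
    p = eps_softmax(logits, m)
    ce = -np.sum(target_onehot * np.log(p + 1e-12))
    mae = np.abs(p - target_onehot).sum()
    return alpha * ce + beta * mae

logits = np.array([2.0, 1.0, 0.5])
print(eps_softmax(logits, m=10.0))                    # close to [1, 0, 0]
print(robust_loss(logits, np.array([0.0, 1.0, 0.0])))
```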
vjCFnYTg67 | Bileve: Securing Text Provenance in Large Language Models Against Spoofing with Bi-level Signature | Text watermarks for large language models (LLMs) have been commonly used to identify the origins of machine-generated content, which is promising for assessing liability when combating deepfake or harmful content. While existing watermarking techniques typically prioritize robustness against removal attacks, unfortunately, they are vulnerable to spoofing attacks: malicious actors can subtly alter the meanings of LLM-generated responses or even forge harmful content, potentially misattributing blame to the LLM developer. To overcome this, we introduce a bi-level signature scheme, Bileve, which embeds fine-grained signature bits for integrity checks (mitigating spoofing attacks) as well as a coarse-grained signal to trace text sources when the signature is invalid (enhancing detectability) via a novel rank-based sampling strategy. Compared to conventional watermark detectors that only output binary results, Bileve can differentiate 5 scenarios during detection, reliably tracing text provenance and regulating LLMs. The experiments conducted on OPT-1.3B and LLaMA-7B demonstrate the effectiveness of Bileve in defeating spoofing attacks with enhanced detectability. | https://openreview.net/pdf/6ec48e6544bb032898b579c21036f7d2dead471a.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "6Zj9Wye6Zj",
"review_text": "The robustness of previous watermark algorithms would lead to a type of spoofing attack where attacker would modify the watermarked text to contain harmful contents while ensuring the watermark can still be detected. This paper introduce a bi-level signature scheme called bileve to mitigate spoofing attacks and enhance detectability. Bileve could recognize 5 scenarios during detection, compared to only 2 for previous methods. Using bileve, the LLM owners could verify if the source of the given texts. From experiments, the effectiveness of bileve against spoofing attack is validated.\n\n1. The 3 types of spoofing attacks are clearly listed in this paper, with exploited vulnerabilities explained, which makes the motivation reasonable.\n2. The paper provides comparsion between single-level signature and bileve, which is good for understanding.\n3. Bileve could produce a total of 5 different detection results, which meets real-world cases.\n\n1. Although with the proposed WRA, the text quality is improved as compared to SLS, the difference between bileve and unigram is still noticable.\n2. For case 4&5, if the watermarked text is inserted into a long document (copy-paste attack), then the global alignment test would not produce a small p-value while the detected text does contain watermarked text.\n\n1. what is the generation/detection complexity? The statistical test during detection seems quite time-consuming.\n2. Are all 5 cases tested during evaluation? Details of how local alignment test is conducted, e.g., chunk size, can be clarified in the paper."
},
{
"confidence": 4,
"rating": 7,
"review_id": "4wGhzhXv9o",
"review_text": "The paper presents a novel approach to secure the provenance of texts generated by large language models (LLMs) through a bi-level signature scheme. This method aims to mitigate spoofing attacks—where malicious actors alter the content generated by LLMs to forge harmful content or misattribute blame—by integrating fine-grained signature bits for integrity checks and a coarse-grained signal for source tracing when signatures are invalidated.\n\n1. This paper reveals a spoofing attack that takes advantage of the robustness features of state-of-the-art watermarking schemes.\n\n2. This paper improves the ability to trace the provenance of text and regulate LLMs by differentiating between five detection scenarios. \n\n3. This paper introduces a novel bi-level signature scheme that enhances text provenance security for large language models (LLMs). It combines fine-grained signatures for integrity checks with coarse-grained signals for source tracing.\n\n1. While the experiments demonstrate effectiveness in specific settings with OPT-1.3B and LLaMA-7B models, the generalizability and scalability of the Bileve scheme to other models are somewhat uncertain. Authors could consider using larger or more powerful LLMs to demonstrate the effectiveness of the proposed algorithm.\n\n2. The authors could consider using a more powerful LLM to measure the perplexity, like GPT-3/GPT-4.\n\n3. I suggest reporting TPR scores at fixed low FPR (FPR = 10% or 1%).\n\n4. This paper demonstrates detectability by modifying 10% of the tokens. It would be good to test with a higher rate of token modification, like 20%, 30%, to further validate the detectability.\n\nPlease see above."
},
{
"confidence": 4,
"rating": 3,
"review_id": "s27rEvWJWY",
"review_text": "This paper proposes to consider spoofing attack, where an attacker wants to prove the proposition like \"The person holding this watermark private key used an LLM to write this text A.\" where text A is constructed by the attacker. The paper proposes a defense against spoofing attacks.\n\nThis paper points out the fundamental trade-off between defending against removal attacks and spoofing attacks.\n\nI have doubt on the significance of spoofing attack. It is important to first clarify a potential misunderstanding. Authors may believe that a watermark in the text proves \"the person holding this watermark private key used an LLM to write this text A.\" But that's not accurate.\n\nHowever, the watermark only proves that the probability of text A being generated by a process independent of the watermark key holder is very low. It does not conclusively prove the key holder generated that specific text A.\n\nTherefore, I believe the spoofing attack lacks real significance from the outset. If someone wants to prove they said certain things and nothing else, they can just use a traditional digital signature.\n\nThe problem is also framed as \"How to avoid an LLM being wrongly blamed?\" But what can we really blame an LLM for? Sure, there may be instances where a single LLM inference generates a token sequence that is interpreted as harmful by humans.\n\nHowever, LLMs are probabilistic models that can potentially generate any harmful content given enough inferences. We can only blame an LLM for having a high average probability of generating harmful content, not for the existence of individual harmful inferences.\n\nMoreover the paper appears hard to read to me. For example, \"instead of ranking them based on probability like conventional methods [13]\" doesn't specify what conventional methods mean in paper [13] Pre-trained language models for text generation: A survey.\n\nFurthermore, t's unclear if the signature preservation attack requires constructing two messages with the same hash, as implied by \"replaced token hashes to the same signature bit.\" If so, that would be extremely difficult for modern hash functions.\n\nMore importantly, the paper does not provide any rigorous theoretical guarantees that Bileve actually solves the spoofing attack issue as claimed. The key assertion is that \"it is less likely to simultaneously align well with $\\Xi$ sequences, thereby effectively mitigating such attacks.\" However, this statement is quite vague and unconvincing on its own.\n\nWhat does it mean for a method to be \"less likely to simultaneously align well with $\\Xi$ sequences\"? How much less likely is it quantitatively? Under what assumptions or conditions does this property hold? The paper does not provide clear answers to these crucial questions.\n\nIn the definition of \"signature preservation attack\", I saw that \"replaced token hashes to the same signature bit\". Does it mean that signature preservation attack requires constructing two messages with the same hash? If so, that would be extremely difficult for modern hash functions."
},
{
"confidence": 4,
"rating": 3,
"review_id": "VFF69ZHPgK",
"review_text": "The submission proposes a spoofing attack on LLM watermarks and a new bi-level scheme meant to protect against spoofing by distinguishing five possible scenarios. The scheme is based on signature bits for integrity checks and rank-based sampling on top of a Kuditipudi-like random key sequence.\n\n- The paper takes a somewhat original approach compared to most contemporary methods. \n- On a high-level, the problem of preventing spoofing is well-motivated and important for the community.\n\n- Weak experimental results, bringing the practical value of the defense into question: \n - The provided quality evaluation, despite its limitations (see below), clearly shows an order-of-magnitude increase in perplexity which strongly suggests that produced text are of impractically bad quality; there is no evaluation that would test this. This is the most important weakness in my opinion.\n- Limited experimental evaluation, in ways that make it hard to evaluate the merit:\n - Text quality is measured only as PPL of Llama-13B and only on one small 1.3B model; there is no qualitative evaluation of text quality so the negative effect on text quality can't be well understood.\n - Only Unigram and SLS are considered as baselines, while self-hash and other variants of the KGW scheme are generally considered more promising, esp. from the perspective of spoofing.\n - Watermark removal is evaluated only as 10% editing attack which ruins text quality, no paraphrasing attack is evaluated.\n- Bigger framing issues around Table 2 and the attack:\n - The framing of Table 2 seems inappropriate. \"Knowing the secret key\" is not a spoofing attack but simply an application of the watermark, this seems to be introduced as a way to suggest that symmetric schemes are flawed by design, which is not necessarily true in cases where there is no detector access. \n - The attack is framed as a \"novel advanced spoofing attack\" while it is (1) in the opinion of this reviewer a direct result of scheme robustness and very limited in scope and thus hardly advanced (2) more importantly, already proposed in a different form in prior work [1] which was not cited, making this an overclaim. To elaborate on (1), for example, [7, 9] would be able to produce a detailed watermarked response to a harmful query such as \"Teach me how to steal someone's identity\" while there is no way to produce such a response by a few token modifications of a non-harmful response. \n - This attack type is used as a key motivation, setting aside the true spoofing attacks from [7,9], which are much more relevant. This is evident in claims such as \"anti-spoofing requires perturbation-sensitivity\". Further, the robustness of Bileve to such approaches based on learnability is claimed but not substantiated.\n- Poor writing: The paper is often quite hard to read and understand. On top of that there is a very large amount of typos. I advise the authors to work on improving the writing for the next version. Here is a list of some examples that I found, in hopes this helps. 
\n - \"Symmetric characteristic\" and \"learnability\" in Introduction are unclear without being defined\n - Paper keywords typo: \"provence\"\n - L50: unforgettable \n - L285 L325 L50: tempering / temper-evident\n - Table 2: model' \n - L87: simply \n - Algo1: $h$ is undefined, although $H$ (a different symbol) is defined outside in the main text \n - L211: \"associate\"\n - L283: resulted \n - L284: \"the source are\"\n - L284: the failure verification\n - L308: \"tokens also\"\n - L311: \"return\" \n - L312: \"the rest segments\"\n - L314: \"shows\" \n - L315: \"t0\"\n - L316: \"cause\" \n - L327: \"limitaition\" \n - L456: \"neucles\"\n\n[1] Attacking LLM Watermarks by Exploiting Their Strengths, Pang et al. arXiv 2402.16187\n\n- Can the authors provide evidence of practical text quality of Bileve texts?\n- Can the authors include the missing experiments discussed above?"
}
] |
vjAORqq71s | Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms | When training neural networks with custom objectives, such as ranking losses and shortest-path losses, a common problem is that they are, per se, non-differentiable. A popular approach is to continuously relax the objectives to provide gradients, enabling learning. However, such differentiable relaxations are often non-convex and can exhibit vanishing and exploding gradients, making them (already in isolation) hard to optimize. Here, the loss function poses the bottleneck when training a deep neural network. We present Newton Losses, a method for improving the performance of existing hard to optimize losses by exploiting their second-order information via their empirical Fisher and Hessian matrices. Instead of training the neural network with second-order techniques, we only utilize the loss function's second-order information to replace it by a Newton Loss, while training the network with gradient descent. This makes our method computationally efficient. We apply Newton Losses to eight differentiable algorithms for sorting and shortest-paths, achieving significant improvements for less-optimized differentiable algorithms, and consistent improvements, even for well-optimized differentiable algorithms. | https://openreview.net/pdf/db338206d5b05120e97346114d0d20170da316fe.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "bPka2PtUyt",
"review_text": "The paper presents an alternative backpropagation scheme for deep learning with algorithmic losses that combines a preconditioned step on the loss with a gradient step on a least square objective. Two preconditioning methods are investigated: using the Hessian, or the empirical Fisher. Experiments demonstrate that the proposed plugin consistently improve performance of algorithms across architectures and losses on two benchmarks. An ablation study of the potential additional tikhonov regularization is give, as well as a discussion of runtime comparisons.\n\n- The main strength of the paper is its experimental evaluation. Two relevant benchmarks are analyzed. 4 losses are considered in the frist benchmark, 3 in the second benchmark. Ablation studies and runtime comparisons provide a rather full picture on the algorithm.\n- Overall the proposed method clearly provides gains across settings. Its theoretical motivation may be unclear but such experimental evidence invites for further research on the subject.\n\n- The soundness of the approach from a theoretical viewpoint is lacking. However, it is probably better to have clear experimental evaluations than wobbly theoretical explanations. And theoretical explanations can be given later.\n\n- Why do the authors multiply an inverse of the Hessian with the gradients? Such operation amounts to solve a linear system and is always best handled with matrix-free linear algebra solvers that can make use of Hessian vector products.\n- Can the approach be cast as appropriate block-coordinate minimization of the Lagrangian associated to the composition of the loss and the network? As the authors acknowledge if both steps in 2.a, 2.b are gradient steps, we retrieve the classical gradient back-propagation as demonstrated in [57]. Such a setting may help uncover the principles underlying the improved performance.\n- I don't see how the learning rates were tuned in the experiments. Since it is generally a crucial hyperparameter for optimization methods, can the authors comment on this? \n- The authors keep on mentioning vanishing/exploding gradient phenomena. Could the authors actually demonstrate this issue numerically? I suppose the vanishing/exploding gradient phenomena depends on the number of steps taken by the algorithm defining the loss. Maybe one could correlate the number of steps taken by the underlying algorithm and the improved performance of the proposed approach."
},
{
"confidence": 5,
"rating": 4,
"review_id": "RnJOOKw3Ip",
"review_text": "The paper proposes second-order optimization with splitting for hard objectives that arise as smoothing of such hard problems as sorting and ranking to address the problem of vanishing/exploding gradients.\n\nIt is a well-written and very complete description of algorithms for reproducibility, which is a very good thing in itself.\n\n1. Insufficient experiments. I'd appreciate adding a comparison here with the SFA technique from there, as it will rely only on first-order information: https://arxiv.org/pdf/2003.02122\n\n1. Considering Newton's method in the case of non-convex objectives is a mistake. No matter how much it is regularized, as long as regularization is L2, had authors considered cubic regularization? E.g., https://link.springer.com/article/10.1007/s10107-006-0706-8\n2. Adding to the first question, for ranking objectives, Hessians are expected to converge to zero. Have you considered an increasing learning rate schedule? I am somewhat sure that this hessian/fisher type of method, due to vanishing gradients, also vanishes, resulting in effectively increasing the learning rate up to $\\propto \\lambda^{-1}$. I will appreciate experiments against the first-order method with the learning rate scheduler growing up to that value."
},
{
"confidence": 2,
"rating": 5,
"review_id": "dMFzDesEyL",
"review_text": "The paper proposes a new method to optimize complex possibly non-smooth and algorithmic losses for neural networks. The approach is based on splitting the problem into two-step procedure, where in the first step we construct and optimize the so-called Newton loss and the second step is based on SGD-type procedure for MSE loss with the first step. The authors present a wide experimental comparison of the proposed Fisher and Newton approaches with existing methods.\n\nThe paper has a strong literature review and motivation for solving different applications. The experimental section is well described, it contains 4 different non-trivial problems to solve. For the presented cases, the proposed methods outperform the baselines.\n\nThe paper does not contain any proofs or convergence guarantees. The mathematical formulation of the main problem is also quite confusing for me.
For example, is vector $x$ fixed for all steps or is it a batch of data? Is it a sum-type problem or an expectation problem? What are the properties for $l(\\cdot)$? Is it differentiable, smooth? Because some parts of the text said that the loss is non-smooth and later we calculate the Hessian of such a function.
In Formulas 1 and 2, it is not clear what are the fixed parameters or data. Should $\\theta$ in 2a be $\\theta_{t-1}$? Also, I think the mention of Lemma 2 in the main text could be very helpful.\n\nFor the experimental section, personally, it feels that the most of space is taken by the description of the problems and the setup and not the actual comparison. As the paper is mostly experimental and empirical, one would expect a better comparison of the proposed methods with the multiple benchmarks. There are no convergence figures with the per-iteration or per-gradient performance. As the authors claim, the main issues in the existing approaches are vanishing and exploding gradients. However, I didn’t find any clipping method for the comparison, which are the possible solutions for exploding gradients.\n\nSee Weaknesses"
}
] |
vieIamY2Gi | Improved off-policy training of diffusion samplers | We study the problem of training diffusion models to sample from a distribution with a given unnormalized density or energy function. We benchmark several diffusion-structured inference methods, including simulation-based variational approaches and off-policy methods (continuous generative flow networks). Our results shed light on the relative advantages of existing algorithms while bringing into question some claims from past work. We also propose a novel exploration strategy for off-policy methods, based on local search in the target space with the use of a replay buffer, and show that it improves the quality of samples on a variety of target distributions. Our code for the sampling methods and benchmarks studied is made public at [this link](https://github.com/GFNOrg/gfn-diffusion) as a base for future work on diffusion models for amortized inference. | https://openreview.net/pdf/444c02396fa1bcdf5db528a1e89da47cdafed517.pdf | [
{
"confidence": 5,
"rating": 7,
"review_id": "MLrSnnfzKL",
"review_text": "The paper studies the problem of training diffusion models to sample from a target distribution. The contributions are summarized as follows: \n\n1. A codebase is provided for the study of diffusion-based samplers, due to the issue of inconsistent experimental settings in previous research; \n\n2. Exploration in the target space can be enhanced by GFlowNet-based off-policy training objectives and local search with the use of replay buffer. \n\n3. Experimental results validate the effectiveness of the proposed approach.\n\nSampling from a target distribution can be challenging in high-dimensional spaces, especially when the distribution of interest has many separated modes. This paper explores diffusion models to address this challenge. Unlike existing reverse KLD-based methods, such as PIS and DDS, this paper considers GFlowNet-based training objectives (e.g., trajectory balance, sub-trajectory balance), which enable off-policy training. This means that training trajectories are not necessarily from the current forward process, thus enhancing exploratory capability. Additionally, local search using a replay buffer can further enhance exploration in the target space. In general, the paper is well-written and well-organized.\n\n---\n\n**After rebuttal:** I will increase my score to 7. Typos or incorrect writing should be corrected upon acceptance.\n\n---\n\nPlease see the below questions.\n\n- In terms of Table 1, which proposed methods perform best according to $\\log Z^{LB}$ and $\\log Z^{RW}$? Both evaluation metrics reflect different aspects of performance. For example, $\\log Z^{LB}$ should indicate mode collapse behaviour, with lower values suggesting more serious mode collapse (e.g., PIS+LP: 13.19 vs. TB+Expl: 4.01, as illustrated in Figure 1)? In contrast, lower $\\log Z^{RW}$ values suggest a closer approximation to the ground-truth $\\log Z$.\n\n- In terms of C2 variance-preserving noising process, \n\n - As far as I undertand from the paper, GFlowNet uses PIS architectures, where the initial state is a point mass at 0. How did you design it under VP settings, where the marginal distribution $p_{t}^{ref}$ is an invariant Gaussian distribution, i.e., the initial state is Gaussian distributed, not a point mass at 0? \n\n - In VP settings, we gradually add more and more noises to data (data --> Gaussian: $\\beta$ increases). However, the sampling process starts with simple Gaussians and ends with samples, so should $\\beta$ decrease, when Gaussian --> data?\n\n- I am curious if the authors have ever tried the detailed balance objective, due to its superior generalization capability (https://arxiv.org/pdf/2407.03105, different settings though)?\n\n- The results for DIS and DDS over Manywell are still missing compared to the previous version. However, the DIS paper does include the results. Any reasons?\n\n- Missing reference - Beyond ELBOs: A Large-Scale Evaluation of Variational Methods for Sampling. Both papers share the same goal. It would be better to discuss it in the paper.\n\n\nPotential typos:\n\n- Eq.7: To define the reverse KL, a distribution is missing between $\\int$ and $\\log$? 
Why does $\\mu_{0}(x_{0})$ appear before $d x_{\\Delta t}$?\n\n- Eq.8: $d x_{0}$ is missing?\n\n- Eq.10 & 11: $p_{target}(x_{1})$ --> $R(x_{1})$?\n\n- Line 181: $Z_{\\theta}$ is missing in the equation?\n\n- Eq.12: $f(x_{m})$ --> $f(x_{m \\Delta t})$, as well as $f(x_{n})$?\n\n- Line 209: $I_{d}$ is missing in $p_{t}(x_{t})$?\n\n- Line 290 & 294: $\\log$ is missing before $\\int$, and $P_{F}(\\tau | x_{1})$ --> $P_{F}(\\tau | x_{0})$?"
},
{
"confidence": 3,
"rating": 3,
"review_id": "s6aMp82sTW",
"review_text": "This paper proposes an off-policy diffusion-based sampler training method to match a target distribution and a corresponding exploration strategy and credit assignment to improve it.\n\n1.\tThe proposed idea of this paper is interesting, which connects the Euler-Maruyama sampler and GFlowNets.\n\n1.\tAlthough the authors mention that traditional MCMC have high cost in sampling, the proposed method based on neural sde seems to still have this problem. To the reviewer’s knowledge, the solving procedure of neural sde is time-consuming as well.\n2.\tThe experimental target distribution also seems relatively simple. In the reviewer’s opinion, for GMM, we can first sample a mode according to the weights of different modes and then obtain a sample in this mode. Hence, it seems unnecessary to use complex model like diffusion.\n3.\tIn many real-world applications like image generation, the pdf (may be unnormalized) of the target distribution is unavailable and we can only achieve data samples from the target distribution. Hence, the application scenarios of the proposed model are limited. \n4.\tBesides, as mentioned in the conditional sampling case, the proposed method seems to need an extra trained vae to perform sampling. However, the vae can directly do the image generation. In that case, what is the real contribution of the proposed method?\n\n1.\tThe biological sequence design seems more appropriate to be the validation benchmark for the proposed model, which is also considered in GFlowNets. So the reviewer wonders how the proposed method is compared with GFlowNets in such tasks.\n2.\tCould the authors explain why they use the log-partition function estimation error as the metrics rather than the log-partition function itself? Similar to MLE (Maximum Likelihood Estimation), the model can be considered better with higher $\\log Z$."
},
{
"confidence": 2,
"rating": 4,
"review_id": "n4w6hwI5Rv",
"review_text": "This paper focuses on the problem of sampling with distributions defined by a black-box and unnormalized energy function. This work provides a comprehensive review of existing works, including both variational methods and policy-based methods, and offers a codebase and benchmark to replicate and evaluate the existing works. Additionally, this work proposes a method to improve existing policy-based methods via local search and a replay buffer.\n\n1. The studied problem of sampling from a distribution is an important issue with a long history in statistical inference. The paper provides a good review of recent works on this topic by leveraging diffusion models. The codebase that unifies existing methods is certainly useful to the community for continuing research on this topic.\n\n2. The experiments are comprehensive in baselines, including not just diffusion-based methods but also classical MCMC algorithms. The results clearly show the advantages of diffusion-based methods and the techniques proposed in this work.\n\n1. It appears to me that this work only tests the algorithm on relatively simple and manually-constructed scenarios. Are there any real and important applications within the field? I am not very familiar with this field, but I think that only conducting experiments on synthetic datasets makes this topic less practical. I believe the main advantage of the diffusion-based method over classical methods is in modeling complex distributions, making experiments on synthetic examples less meaningful.\n\n2. Additionally, the tested scenarios are all low-dimensional cases. I wonder how this algorithm performs on high-dimensional cases, such as when the energy function is learned through neural networks. For example, is it possible to apply this algorithm to image generation where the energy function is represented by an image classifier? Testing the algorithm on high-dimensional tasks like these would provide a better understanding of its scalability and practicality in more complex and realistic settings.\n\nIs there any benchmark in this field involving a real application and complex high-dimensional distribution?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "n3BXFQ6tik",
"review_text": "The paper presents a variety of improvements to off-policy strategies for training diffusion models to sample from unnormalized densities. Equation 13. These include maintaining a replay buffer (obtained with Langevin sampling) to enable efficient off-policy exploration and incorporating an inductive bias into the neural network which estimates the SDE drift term. They also present a software library containing unified implementation of these techniques and e.g. diffusion model training.\n\n- Enabling diffusion models to be efficiently applied to sampling from unnormalized probability distributions is a problem with high potential for impact\n- Thorough experimental analysis and comparison of different alterations to the training procedure. On most problems considered, the authors' contributions are necessary to achieve good results with trajectory balance. \n- The contribution of a software library could be valuable to the community.\n\n- The results are not overwhelming - although the proposed contributions are helpful compared to a basic version of TB, there is only one modeling task in Tables 1-2 (25GMM) where they provide a statistically significant improvement over the baselines.\n- The experiments are on synthetic energy functions and MNIST VAE. Including more real-world data or models would be informative.\n\nSee weaknesses. My main concern is the underwhelming results when compared to DIS, DDS, PIS, PIS+LP. Is there any reason why the proposed method(s) should be preferred to these? Or e.g. expected to scale better to more complex problems?"
}
] |
vh9yEPLeyD | Can We Leave Deepfake Data Behind in Training Deepfake Detector? | The generalization ability of deepfake detectors is vital for their applications in real-world scenarios. One effective solution to enhance this ability is to train the models with manually-blended data, which we termed ''blendfake'', encouraging models to learn generic forgery artifacts like blending boundary. Interestingly, current SoTA methods utilize blendfake $\textit{without}$ incorporating any deepfake data in their training process. This is likely because previous empirical observations suggest that vanilla hybrid training (VHT), which combines deepfake and blendfake data, results in inferior performance to methods using only blendfake data (so-called "1+1<2"). Therefore, a critical question arises: Can we leave deepfake behind and rely solely on blendfake data to train an effective deepfake detector? Intuitively, as deepfakes also contain additional informative forgery clues ($\textit{e.g.,}$ deep generative artifacts), excluding all deepfake data in training deepfake detectors seems counter-intuitive. In this paper, we rethink the role of blendfake in detecting deepfakes and formulate the process from "real to blendfake to deepfake" to be a $\textit{progressive transition}$. Specifically, blendfake and deepfake can be explicitly delineated as the oriented pivot anchors between "real-to-fake" transitions. The accumulation of forgery information should be oriented and progressively increasing during this transition process. To this end, we propose an $\underline{O}$riented $\underline{P}$rogressive $\underline{R}$egularizor (OPR) to establish the constraints that compel the distribution of anchors to be discretely arranged. Furthermore, we introduce feature bridging to facilitate the smooth transition between adjacent anchors. Extensive experiments confirm that our design allows leveraging forgery information from both blendfake and deepfake effectively and comprehensively. Code is available at https://github.com/beautyremain/ProDet. | https://openreview.net/pdf/c5f94fbd9d60ce8c8dc283b8970f02e3f631bd22.pdf | [
{
"confidence": 5,
"rating": 4,
"review_id": "Bz01k7m02V",
"review_text": "This study introduces a novel training strategy for Deepfake detection using real, blendfake, and deepfake datasets. By designing an oriented progressive regularizer and a feature bridging module, the proposed approach effectively extracts forgery information from the training data, resulting in enhanced generalizability.\n\nThe proposed method categorizes forgery faces into several types: SBI, CBI, and Deepfake faces, each containing distinct forgery artifacts, such as blending clues, identity inconsistencies, and generative fingerprints. The fine-grained learning scheme encourages the model to learn representative features from the training data, thus achieving robust and general face forgery detection.\n\n1. The method employs a progressive transition from real to blendfake to deepfake samples. However, the necessity of continuity in these features remains unclear. The transition from real to fake faces, as depicted in Fig. 2, appears conceptually weird. The rationale behind the feature bridging and transition design is not well-explained. The progressive transition between adjacent anchors seems unusual, and the reasoning for a continuous rather than discrete transition is not justified.\n2. Despite the generative artifacts present in deepfake data, it remains ambiguous why directly incorporating blendfake and deepfake data during training degrades performance. The authors suggest that direct VHT may fail to disentangle the learned representation in the latent space, but no experiments support this claim.\n3. Fig. 1(b) does not appear to be an experimental result, which is crucial for validating the work's motivation.\n4. In Line 44-45, the authors raised a question “Can the blendfake face entirely substitute the actual AI-generated deepfake face in training deepfake detectors?” However, this question has already been addressed by Face X-ray and SBI, which successfully use blendfake data to train general models.\n5. The term A^T in Eq. (6) is not explained.\n6. It is unclear why features augmented with noise should be mapped to a more fake distribution.\n7. More Deepfake datasets, such as WildDeepfake and DeepForensics-1.0, should be included in cross-dataset evaluations.\n8. For robustness evaluations in Table 6, the method should be compared with recent state-of-the-art deepfake detection methods, and more severity levels for each perturbation type should be included to mimic complex real-world scenarios.\n\n1. In Table 4, is R2B2D the same as D2B2R?\n2. What is the real/fake label of the mix of F_r and F_s?"
},
{
"confidence": 5,
"rating": 5,
"review_id": "J1zimi86zm",
"review_text": "The authors introduced a method aimed at detecting deepfakes. Their approach, known as Oriented Progressive Regularizor (OPR), employs a progressive transition strategy. This strategy is designed to enable the model to effectively train on a combination of blendfake and deepfake data, ultimately leading to improved performance. The experimental results indicated that this method surpasses current state-of-the-art (SOTA) approaches when tested on deepfake datasets.\n\nThe paper provides a fresh perspective on the well-known problem of deepfake detection, which should be appreciated.\n\nThe paper is mostly well-written. The arguments and results presented are easy to understand.\n\nThe authors performed an extensive evaluation.\n\nArgument on Blendfake: The argument that blendfake data alone is sufficient for training deepfake detectors is based on empirical observations on certain datasets or benchmarks with some particular deepfake detection models and may not hold universally. I would suggest toning down that claim or providing the exact conditions when this argument holds.\n\nCDFv1 vs CDFv2: I believe that using CDFv1 for evaluation may not be necessary. It would have been more beneficial to utilize a different deepfake benchmark dataset such as FakeAVCele, DFD from Google/Jigsaw, or RWDF-23 (please refer to this repository for additional information https://github.com/Daisy-Zhang/Awesome-Deepfakes-Detection). The same applies to DFDC and DFDCP. I advocate for incorporating more diversity in the selection of benchmark datasets. In essence, the authors compared against three datasets instead of five, which is still an acceptable number.\n\nDatasets: The authors utilized widely known deepfake datasets from 2019 in their research. However, considering the rapid advancements in deepfake technology since then, I believe these datasets may no longer accurately represent the current landscape. It would be valuable for the authors to include an assessment of their method using real-world deepfake videos sourced from social media and other online platforms. By doing so, they can demonstrate the effectiveness of their proposed solution in addressing contemporary and future iterations of deepfakes.\n\nProgressive transition: Currently, the Progressive transition goes like this \"Real --> Blendfake (SBI) --> Blendfake (CBI) --> Deepfake\". I could imagine it being further extended to have addition of compression or adversarial artefacts (i.e., \"Real --> Blendfake (SBI) --> Blendfake (CBI) --> Deepfake--> compression and other artefacts\"). That way one could really see a generalisable pipeline that could incorporate the variance in the types of deepfakes available on social media and will greatly increase the quality of the work.\n\nSee the above comments."
},
{
"confidence": 3,
"rating": 4,
"review_id": "R2H7yNcK6w",
"review_text": "This paper investigates the generalization ability of deepfake detectors and proposes a novel training approach using \"blendfake\" data to enhance the model's learning of generic forgery artifacts. The authors point out that existing state-of-the-art methods do not incorporate deepfake data in their training process, which contradicts previous empirical observations. The paper introduces an \"Oriented Progressive Regularizor\" (OPR) to establish constraints on anchor distribution and proposes feature bridging to facilitate smooth transitions. Experimental results indicate that the proposed method effectively utilizes forgery information from both blendfake and deepfake.\n\n- Proposes a new training method that may enhance the generalization capability of deepfake detectors.\n- Introduces OPR and feature bridging techniques to improve the model's recognition of forgery features.\n\n- The attribution of the unorganized latent-space distribution lacks comprehensive experiments.\n- There are some minor writing issues, such as the consistency of using SOTA and SoTA.\n\n- In ablation Table 2, the AUC of VHT is lower than that of BF-only in cross-dataset comparison, which is opposite to the statement in line 230.\n- Are all the training sets for the SOTA model comparisons the same? How do you control the blendfake and deepfake training datasets for different models?"
},
{
"confidence": 4,
"rating": 7,
"review_id": "2MSilcigTq",
"review_text": "The paper explores the utilization of blendfake and pseudo-fake data in training deepfake detectors. It argues that the significance of deepfake samples has been underestimated due to insufficient exploration. To better exploit both pseudo-fake and deepfake data, the paper introduces a progressive transition from \"real to blendfake to deepfake\" and proposes a hybrid training scheme. This scheme includes an oriented progressive regularizer (OPR) to model the transition and a feature bridging strategy to simulate a continuous transition.The paper explores the utilization of blendfake and pseudo-fake data in training deepfake detectors. It argues that the significance of deepfake samples has been underestimated due to insufficient exploration. To better exploit both pseudo-fake and deepfake data, the paper introduces a progressive transition from \"real to blendfake to deepfake\" and proposes a hybrid training scheme. This scheme includes an oriented progressive regularizer (OPR) to model the transition and a feature bridging strategy to simulate a continuous transition.\n\n1.The paper is well-motivated, and the proposed solution is both intuitive and effective.\n2.The experiments robustly demonstrate the rationality and effectiveness of the proposed design.\n\n1. Choice of Blend Algorithms: The paper does not provide sufficient explanation or discussion on the choice of blendfake image algorithms (SBI and CBI). As mentioned in Section 2.2, there are many other methods for crafting blendfake images. Would these methods be effective as well?\n2. Interpolation Strategy: In Section 3.2, the paper introduces an interpolation strategy to achieve a smoother transition from real to deepfake. Why was interpolation performed at the feature level, and would setting multiple mixing parameters (alpha) for more interpolations further improve performance?\n3. Possible Typos: There might be a typo on line 149, $M_a$.\n\nSee in weakness"
}
] |
veMnGKXvTx | Homology Consistency Constrained Efficient Tuning for Vision-Language Models | Efficient transfer learning has shown remarkable performance in tuning large-scale vision-language models (VLMs) toward downstream tasks with limited data resources. The key challenge of efficient transfer lies in adjusting image-text alignment to be task-specific while preserving pre-trained general knowledge. However, existing methods adjust image-text alignment merely on a set of observed samples, e.g., data set and external knowledge base, which cannot guarantee to keep the correspondence of general concepts between image and text latent manifolds without being disrupted and thereby a weak generalization of the adjusted alignment. In this work, we propose a Homology Consistency (HC) constraint for efficient transfer on VLMs, which explicitly constrains the correspondence of image and text latent manifolds through structural equivalence based on persistent homology in downstream tuning. Specifically, we build simplicial complex on the top of data to mimic the topology of latent manifolds, then track the persistence of the homology classes of topological features across multiple scales, and guide the directions of persistence tracks in image and text manifolds to coincide each other, with a deviating perturbation additionally. For practical application, we tailor the implementation of our proposed HC constraint for two main paradigms of adapter tuning. Extensive experiments on few-shot learning over 11 datasets and domain generalization demonstrate the effectiveness and robustness of our method. | https://openreview.net/pdf/5cab7ed9375bde4b2a85485640533b73a623b658.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "4cOdEh2D12",
"review_text": "A Homology Consistency (HC) constraint for efficient transfer on VLMs is proposed in this paper, which explicitly constrains the correspondence of image and text latent manifolds through structural equivalence based on persistent homology in downstream tuning.\nThe proposed method tracks the persistence of the homology classes of topological features across multiple scales and guide the directions of persistence tracks in image and text manifolds to coincide each other. Additionally, a deviating perturbation is applied to generalize the persistence coincidence to unseen data. Experiments on recognition and generalization tasks show the superior performance.\n\n1. The paper is well-written with a straightforward motivation.\n2. A theoretically well-founded homology consistency (HC) constraint based on persistent homology is proposed for efficient transfer on VLMs.\n3. Experiments on recognition tasks show the superior performance.\n\nThe hyper-parameters η, λ, ω should be determined at 16 shots and then migrated to other few-shot settings. If the number of samples is less than 16, how should the aforementioned hyper-parameters be set, and will there be a significant difference in performance?\n\nThe hyper-parameters η, λ, ω should be determined at 16 shots and then migrated to other few-shot settings. If the number of samples is less than 16, how should the aforementioned hyper-parameters be set, and will there be a significant difference in performance?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "Ao6bvzlaa8",
"review_text": "The paper identifies a key issue with existing methods for tuning pre-trained vision-language models to downstream tasks with limited data: they adjust the alignment between image and text based solely on observed samples, which may not generalize well beyond the training data. To address this issue, the paper proposes a novel constraint from the perspective of topological data analysis.\n\nThis constraint employs persistent homology to ensure the structural equivalence of image and text latent manifolds during tuning.\n\n1. The paper offers a new way of looking at model tuning through the lens of topological analysis, with a focus on understanding the structure of data spaces for better semantic alignment in vision-language tasks. I appreciate this perspective on the issue.\n\n2. The proposed method exhibits a thoughtful theoretical underpinning, using persistent homology to enhance the generalizability of image-text alignment adjusting. \n\n3. The paper is well-written and the reason for leveraging topological data analysis to enhance semantic alignment during the tuning process is reasonable and easy to follow up.\n\nThe paper does not adequately discuss how it relates to existing image and text alignment techniques, including those based on distance metrics, mutual information, adversarial training, and attention mechanisms. This lack of comparative analysis creates a gap in fully appreciating the distinctive contributions and potential advantages.\n\n1. Could you please provide insights into the fundamental differences and advantages of topological data analysis in your method over other alignment methods?\n\n2. Where do you foresee potential challenges in extending your proposed method to tasks outside the few-shot learning domain, particularly in scenarios such as zero-shot learning, or applications involving detection and segmentation?\n\n3. Could you elaborate on how incorporating higher-dimensional homology classes into the tuning process might impact the model's performance and behavior, beyond the computational cost?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "1pqTG3AkzE",
"review_text": "The paper introduces a Homology Consistency (HC) constraint for efficient transfer learning on vision-language models (VLMs), ensuring task-specific image-text alignment while preserving general knowledge by using structural equivalence based on persistent homology. This approach mimics the topology of latent manifolds and tracks the persistence of topological features.\n\n1. This paper is well motivated, and the motivation of using homology consistency is interesting. \n2. This paper has a good theoretic support.\n\n1. The performance of the proposed method is worse than the baseline method in low-shot (1-shot and 2-shot) tasks. \n2. The improvement in Table 2 is marginal. Is the comparison fair with the same random seed? How many runs did you conduct? Could the authors also report the standard deviation of the score? \n3. Moreover, is 16-shot common in this benchmark? 16 shot seems a lot in few-shot learning. \n4. Can you also elaborate more why with only DP, the performance drops in Table 3? \n5. In addition, could you elaborate more why choosing 0-th homology classes? What are the potential effects of using other homology classes?\n\n1. Can you visualise the topology of the data before and after adaptation with the HC constraint? It will be very interesting to see how actually the HC constraint can preserve the topology of the manifold during transfer learning."
},
{
"confidence": 4,
"rating": 3,
"review_id": "xSnMaWnDtt",
"review_text": "This paper proposes Homology Consistency (HC) constraint for transfer learning on VLMs, and it explicitly constrains the correspondence of image and text latent manifolds by structural equivalence based on persistent homology in downstream tuning.\n\n1. The proposed method is well-founded and clearly explains the proposed homology consistency (HC) constraint.\n\n2. Extensive experiments are performed on 11 benchmark datasets.\n\n1. The paper lacks discussions on the computational cost of the proposed techniques.\n\n2. The proposed method for constraining the structural equivalence of image and text latent manifolds seems generalizable to other learning tasks for vision-language models. However, the proposed method is only evaluated for few-shot learning of vision language models.\n\n3. Although the model outperforms other methods in most cases, the improvements are relatively marginal.\n\n4. The paper only applies the method to a limited number of adapter models (TaskRes and Tip-Adapter-F).\n\nsee weaknesses."
}
] |
vcGEV6m5m2 | Template-free Articulated Gaussian Splatting for Real-time Reposable Dynamic View Synthesis | While novel view synthesis for dynamic scenes has made significant progress, capturing skeleton models of objects and re-posing them remains a challenging task. To tackle this problem, in this paper, we propose a novel approach to automatically discover the associated skeleton model for dynamic objects from videos without the need for object-specific templates. Our approach utilizes 3D Gaussian Splatting and superpoints to reconstruct dynamic objects. Treating superpoints as rigid parts, we can discover the underlying skeleton model through intuitive cues and optimize it using the kinematic model. Besides, an adaptive control strategy is applied to avoid the emergence of redundant superpoints. Extensive experiments demonstrate the effectiveness and efficiency of our method in obtaining re-posable 3D objects. Not only can our approach achieve excellent visual fidelity, but it also allows for the real-time rendering of high-resolution images. | https://openreview.net/pdf/2f9205ca247e1a22333796012f9e65cf0b3497a6.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "fLj6ps6UTo",
"review_text": "The authors propose to combine a reposable 4D reconstruction from multi-view video based on a skeletal LBS model with 3D Gaussian splatting. To this goal they introduce a novel strategy for estimation of the skeletal model from a superpoint clustering. The results demonstrate a superior image quality and, thanks to the representation, also fast rendering.\n\nS1) Implementation details provided for reproducibility.\n\nS2) The claims are validated on common datasets.\n\nS3) The result quality is visibly better than the prior work.\n\nS4) The skeleton construction is novel, technically sound and produces reposable models in variety of scenes.\n\nS5) The quality of exposition is good.\n\nW1) The skeletons are over-segmented and unnatural and likely would not be very friendly for a human animator. They may still be suitable for data fitting but do not provide nearly as much regularization as an \"optimal\" (ground truth) skeleton would.\n\nW2) The limitations and broader impacts are only discussed in the Appendix which I do not see as a responsible practice. It suggests that the authors do not give downsides of the method the same importance as to the upsides.\n\nW3) The authors claim to report Statistical Significance without further comments (checklist item #7), but I cannot see any such features in the paper.\n\nW4) It may be a good idea to consider a higher quality captured dataset than ZJU-Mocap. It does not seem to allow for a useful comparison between the methods.\n\n\nOther minor issues and suggestions:\n\n- Figure 1: Each superpoints -> superpoint\n\n- L145: related rotation matrix -> relative?\n\n- $\\mathbf{W}$ is overloaded in Eq. 1 and Eq. 6 for two distinct things which is not ideal.\n\n- Eq. 13: $\\mathcal{L}$ without suffix is not defined.\n\n\n------------------------\n**Justification of recommendation**\nA solid paper with its incremental but non-trivial contribution stemming mainly from the novel skeleton construction. The experimental results are convincing and the main downside is the clutter and complexity of the recovered skeleton. Despite this, I am currently comfortable recommending acceptance under the assumption that the exposition issues are addressed (especially limitations). My final decision might change based on the rebuttal.\n\nQ1) The Eq. 13 could use more discussion. Does the formulation avoid over-segmentation of large but correctly rigid parts to multiple sub-parts? It would be useful to see the LBS visualized for the final shapes.\n\nQ2) Why are the skeletons so complex and noisy? Is that perhaps related to my other question about over-segmentation in Eq. 13? Are such skeletons practical for re-animation? How was re-animation done in the video? How many parameters / joint motions had to be defined? \n\nQ3) Why are there different resolutions reported for each method (Table 2 & 3). Does it mean each method was validated against a different reference image resolution? Or does it mean the methods were all tested against a full resolution reference but some were only trained using low-resolution data? That could potentially introduce considerable bias.\n\nQ4) The authors also do not provide any failure case examples/images. Does the method work perfectly for all tested scenes? What about the robots from the limitation Figure 6 of AP-NeRF? Does the new method handle these cases better?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "s3tLvBcG15",
"review_text": "This paper presents a novel approach for learning articulated objects, alongside the skeletons/kinematic trees directly from input videos, eliminating the need for pre-defined meshes or hand-crafted skeleton priors.\n\nSpecifically, the paper introduces a hierarchical 3D Gaussian representation, where a set of superpoints are used as guidance for deforming the sub-points using linear blend skinning (LBS). The skeletal structure is further derived from the superpoints, providing articulated structures that enable explicit pose control. Rendering is done by Gaussian Splatting, enabling real-time performance.\n\nOverall, the paper tackles a very challenging and useful problem for 3D modeling, the manuscript is easy to follow, and the approach is interesting.\n\nThe strengths of this paper lie in how it brilliantly leverages 3D Gaussians for capturing the underlying articulated structures in a video, without the need for 3D annotations or pre-defined structure priors.\n\n The design of superpoints naturally models prominent candidates that can serve as control points. Furthermore, as the control points are “learned” automatically, it can potentially arrive at a representation that better suits the possible motion of the articulated subject.\n\nOverall, the approach presented in the paper is pretty neat, and the experiments show pretty promising qualitative and quantitative results.\n\nThe approach does have some room for improvement. Specifically,\n- Limited reposability. As mentioned in L372-373, the approach is limited to the motion space in the input video. It would be great if the papers could include visual results for these failure cases. It will be interesting to see how good the learned LBS weights are.\n- Evaluated on datasets with limited motions: the videos used in the paper mostly contain repetitive motion sequences, and/or with small motions. It will be interesting to see how the proposed method performs on videos with complex/diverse/large motions (e.g., AIST datasets). Also, it is similarly unclear how the method can perform on in-the-wild videos with uncontrolled lighting, or with only a single view. \n\nOverall, these weaknesses are very common among template-free approaches, not specifically to the proposed method itself. Nevertheless, it would be great if the paper could include more figures, visual results, and analysis regarding these cases.\n\nThere are also some issues regarding the experiments, which I detailed in the Questions section below.\n\nSome comments regarding the evaluation sections:\n- Tab 1, 2: does the performance gain come mainly from using a higher resolution, or does it come from “capturing better articulated structures”? While Gaussian splatting enables us to use higher resolution due to its rendering speed, it would be great if the paper could also include results with resolutions comparable to other setting (400x400 in Tab 1, and 800x800 in Tab 2). \n- Is there a way to properly evaluate how good the learned skeleton structure is? E.g., training a skeleton-based 3D articulated model using the skeleton from AP-NeRF v.s. WIM v.s. the proposed method. \n\nAlso, one small issue:\n- L147: should be (R^t_b)^-1 instead of R^t-1_b"
},
{
"confidence": 4,
"rating": 5,
"review_id": "GRc6wCTu66",
"review_text": "The paper introduces a method combining 3D Gaussian Splatting and superpoints for dynamic object modeling, achieving real-time rendering and high visual fidelity. Empirical results show that the proposed method achieves state-of-the-art results on several benchmarks.\n\n1. The paper is well-written and easy to follow. The main contribution and methodology are well illustrated.\n2. The use of an adaptive control strategy to manage superpoints is innovative and helps in optimizing the model, avoiding redundancy, and maintaining efficiency.\n\n1. Although this paper achieves real-time rendering compared to AP-NERF, I find it somewhat incremental and lacking in innovation since most parts of the method are existing concepts.\n2. This paper emphasizes the concept of \"Reposable,\" but the related experiments are very limited. A thorough analysis of this aspect could effectively distinguish this paper from AP-NERF.\n3. This method compares fewer baselines, and the quantitative results do not show significant improvements in rendering effects and speed compared to the baselines, as shown in Table 3, Table 4, and Table 5.\n\n1. The number of $M$ is not given in the paper, the author should do an ablation study on it.\n2. The proposed method and results should be discussed with the latest relevant methods as referenced in [1].\n\n[1] SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes."
},
{
"confidence": 4,
"rating": 7,
"review_id": "ZAxkyjGwky",
"review_text": "The paper proposes a novel approach for reconstructing reposable dynamic 3D objects from RGB videos using Gaussian Splatting, without requiring any template as input.\n\nTo achieve this, the paper suggests grouping Gaussians around superpoints, which are intended to represent rigid parts of the scene. By optimizing and analyzing these superpoints, a full skeleton model of an articulated object in the input video can be built, refined, and used for reposing purposes.\n\n**Details**\n\nThe approach consists of two main stages:\n\n1. *Dynamic Stage*. After optimizing a canonical 3D Gaussian Splatting (3DGS) representation for a few iterations, a set of superpoints is initialized in the scene. A deformable field mapping each superpoint to a time-variant 6DoF transformation is optimized. These transformations are used to derive the motion of Gaussians by interpolating transformations with neighboring superpoints through linear blend skinning (LBS). The paper also proposes a gradient-based strategy to control (prune, merge, or densify) the number of superpoints in the scene. Toward the end of the dynamic stage, a skeleton structure with joints is enforced and discovered in the scene by analyzing the distance between and configuration of superpoints.\n\n2. *Kinematic Stage*. After discovering the skeleton model of the scene, the number of Gaussians and superpoints is fixed and optimized along with a new MLP mapping skeleton joints to time-variant rotation matrices. These matrices are used to compute the motion of each Gaussian along the kinematic chains using LBS. After full optimization, the skeleton can be used for reposing and editing the reconstructed object.\n\nThe paper presents extensive experiments and demonstrates higher rendering performance and speed compared to concurrent reposable dynamic Radiance Field methods.\n\n1. The paper is well-written and easy to follow.\n\n2. The task addressed by the paper is challenging but crucial for many applications in both graphics and robotics. I appreciate the proposed strategy, which successfully retrieves kinematic chains from RGB videos.\n\n3. The quantitative evaluation presented in the paper is convincing and clearly demonstrates the superiority of the approach over concurrent methods.\n\n1. The paper may lack sufficient skeleton examples to effectively demonstrate that the proposed approach can recover meaningful structures from RGB videos. Indeed, the primary goal is to recover skeleton structures and enable reposing capabilities, but only a single skeleton example is provided (Figure 5). Including more qualitative examples would likely make the paper more convincing.\n\n2. The paper does not provide details on the optimization time and required resources (e.g., VRAM) for the proposed approach. It appears that a large number of training iterations is needed; a comparison with previous state-of-the-art models would be valuable.\n\n3. The limitations of the approach are interesting but are only discussed in the supplementary material, which is problematic in my opinion. These limitations are crucial for further research and should be included in the main text.\n\n1. What are the optimization time and required resources (e.g., VRAM) for the proposed approach? How does it compare to state-of-the-art methods?\n\n2. Do the authors have insights on how the method would perform in the context of a monocular video with a moving camera, which is a more realistic setting than multi-view videos?\n\n3. 
In Figure 5, the skeleton appears to be quite accurate, but some bones are located outside the geometry. Would it be possible to enforce the skeleton to be located “inside” the geometry, perhaps by applying a penalty that encourages superpoints to be close to the centroid of their associated Gaussians?"
}
] |
vYmvgxpgwH | An Empirical Analysis of Compute-Optimal Inference for Problem-Solving with Language Models | The optimal training configurations of large language models (LLMs) with respect to model sizes and compute budgets have been extensively studied. But how to optimally configure LLMs during inference has not been explored in sufficient depth. We study compute-optimal inference: designing models and inference strategies that optimally trade off additional inference-time compute for improved performance. As a first step towards understanding and designing compute-optimal inference methods, we assessed the effectiveness and computational efficiency of multiple inference strategies such as Greedy Search, Majority Voting, Best-of-N, Weighted Voting, and their variants on two different Tree Search algorithms, involving different model sizes (e.g., 7B and 34B) and computational budgets. We found that a smaller language model with a novel tree search algorithm typically achieves a Pareto-optimal trade-off. These results highlight the potential benefits of deploying smaller models equipped with more sophisticated decoding algorithms in end-devices to enhance problem-solving accuracy. For instance, we show that the Llemma-7B model can achieve competitive accuracy to a Llemma-34B model on MATH500 while using 2× less FLOPs. Our findings could potentially apply to any generation task with a well-defined measure of success. | https://openreview.net/pdf/61a858e80c83eb5855d0822eb52485e24f27bdac.pdf | [
{
"confidence": 2,
"rating": 6,
"review_id": "WX7AsB89Oo",
"review_text": "This paper explores compute-optimal inference for large language\nmodels (LLMs), focusing on designing models and strategies that\nbalance additional inference-time computation with improved\nperformance. The study evaluates the effectiveness and efficiency of\nvarious inference strategies, including Greedy Search, Majority\nVoting, Best-of-N, and Weighted Voting, across different model sizes\n(e.g., 7B and 34B) and computational budgets. Experimental results\nindicate that smaller models with advanced tree search algorithms can\nachieve a Pareto-optimal trade-off, offering significant benefits for\nend-device deployment. For example, the Llemma-7B model matches the\naccuracy of the Llemma-34B model on the MATH500 dataset while using\nhalf the FLOPs. These findings suggest that smaller models with\nsophisticated decoding algorithms can enhance problem-solving accuracy\nacross various generation tasks.\n\n- The paper focuses on an interesting topic and should be of interest\n to the audience of NeurIPS.\n- It considers a comprehensive experimental investigation to confirm\n the claims.\n- The proposed tree search algorithm is interesting and seems to\n outperform the competition.\n\n- Although the paper offers quite thorough experimental analysis, it\n does not look deep in terms of theoretical ideas (although there are\n 2 theorems), which may be a problem for a flagship venue like\n NeurIPS.\n- Overall findings on the possibility to train an equally accurate\n model with fewer computational resources do not look surprising.\n- The paper would benefit from additional proof-reading as there are a\n large number of typos present.\n\nN/A"
},
{
"confidence": 2,
"rating": 5,
"review_id": "ua1ffyQhGR",
"review_text": "The paper presents an approach to select an optimal inference strategy for LLMs and empirical analysis on Math problem solving tasks. The main idea is to select an inference strategy based on a computational budget (FLOPs). The underlying policy model samples solutions by generating tokens based on the budget and a ranking model consumes these tokens. A new reward model is developed to explore the solution space more effectively. The reward acts as a weighted majority function over the solutions.\nExperiments are performed on Math problem solving benchmarks. Some of the key insights from the experiments is that a smaller LLM can outperform the larger LLM in terms of using a smaller computational budget while maintaining similar accuracy. They also show that the proposed approach with a smaller budget has comparable accuracy than sampling with a larger budget.\n\n- The insights that inference time strategy can compensate for using smaller LLMs in generation seems to be interesting\n- The experiments also provide a basis for analyzing scaling properties of inference which can be significant\n\n- In terms of the method itself, I was not sure if it is very novel. It seems to be a smaller variation on the tree search methods that search for solutions in the generated space\n- In terms of comparisons, I was not sure about the significance of the benchmark, i.e., are there some properties that make the proposed reward reranking more optimal in Llema model specifically (due to the structure of math problems, etc.). In general, since the main contribution of the paper is empirical, I think there should be experiments or discussions different LLMs to make the contribution more significant. \n-Overall, the empirical conclusions seem very tied to the specific benchmarks, so I was a little unsure regarding the significance of the conclusions.\n\n- Is the comparison based on state of the art inference strategies for compute-optimal inference? Specifically, the other methods are all agnostic of the computational limits, so I was wondering if there are other approaches that do take computational limits into account (the related works do not mention any so it is possible there are not)?"
},
{
"confidence": 4,
"rating": 4,
"review_id": "2URCmdHgkV",
"review_text": "This paper investigates the optimal training configurations of large language models (LLMs) during inference. The proposed inference strategy, REward BAlanced SEarch (REBASE), combines the strengths of Monte Carlo Tree Search (MCTS) with reduced inference costs, resulting in improved performance on math-domain tasks.\n\n1. This paper provides a comprehensive overview, i,e, the inference scaling law, of the performance of different sampling strategies under various inference configurations.\n2. The novel REBASE inference strategy achieves better downstream task performance under the same computational budget or even less.\n\n### Major \n\n1. Did you take into account the inference cost of the reward model (RM) in your analysis? As the REBASE frequently uses RM to judge the quality of immediate solutions than other sampling strategies, such as, weighted major voting, It's crucial to consider this aspect to provide a holistic view of the efficiency and practicality of your proposed strategy.\n\n2. The base model with post-training techniques such as SFT and RLHF inherently limits the upper bound of performance. It seems that adding more tricks during inference could improve performance, but the marginal effect may result in diminished returns when using models already tuned by the RLHF process. Could you compare the performance gains of REBASE between the base model, the SFT model, and the Chat model? Is the performance gain only significant in models that have not been tuned?\n\n3. In Section 4.2, the observation in \"Scaling law of compute-optimal inference\" indicates that the optimal inference strategy is invariant to the amount of compute but depends on the model size, i.e., the model's inherent capacity. This raises a concern: does the inference strategy significantly improve the model's performance, or does it only take effect in certain scenarios, such as with base models that have not been aligned?\n\n4. The paper focuses solely on the math domain. To strengthen your claims, a more comprehensive evaluation across general domains using widely adopted benchmarks, such as MMLU, SuperGLUE, HumanEval, etc, is necessary. \n\n5. There appears to be no significant improvement in the GSM8K datasets than MATH500 dataset. \n\n### Minor\n\n1. Figures. 2 and 3 are not referenced in the main manuscript. \n\n2. Figures. 2 and 3 appear to be in draft form and are somewhat vague.\n\nSee Weakness."
}
] |
vYUx8j5KK2 | Curriculum Fine-tuning of Vision Foundation Model for Medical Image Classification Under Label Noise | Deep neural networks have demonstrated remarkable performance in various vision tasks, but their success heavily depends on the quality of the training data. Noisy labels are a critical issue in medical datasets and can significantly degrade model performance. Previous clean sample selection methods have not utilized the well pre-trained features of vision foundation models (VFMs) and assumed that training begins from scratch. In this paper, we propose CUFIT, a curriculum fine-tuning paradigm of VFMs for medical image classification under label noise. Our method is motivated by the fact that linear probing of VFMs is relatively unaffected by noisy samples, as it does not update the feature extractor of the VFM, thus robustly classifying the training samples. Subsequently, curriculum fine-tuning of two adapters is conducted, starting with clean sample selection from the linear probing phase. Our experimental results demonstrate that CUFIT outperforms previous methods across various medical image benchmarks. Specifically, our method surpasses previous baselines by 5.0\%, 2.1\%, 4.6\%, and 5.8\% at a 40\% noise rate on the HAM10000, APTOS-2019, BloodMnist, and OrgancMnist datasets, respectively. Furthermore, we provide extensive analyses to demonstrate the impact of our method on noisy label detection. For instance, our method shows higher label precision and recall compared to previous approaches. Our work highlights the potential of leveraging VFMs in medical image classification under challenging conditions of noisy labels. | https://openreview.net/pdf/d5aa08cc841e6f20d9d81a173a38781d31ff4224.pdf | [
{
"confidence": 4,
"rating": 4,
"review_id": "M37OWgl3Lm",
"review_text": "The authors propose Cufit, a curriculum fine-tuning method for improving the performance of medical image classification under the noisy labels setting. The method shows strong performance against other baselines on several medical datasets. The authors have also provided results on a non-medical dataset.\n\n**Strengths**\n\n1. The proposed strategy Cufit outperforms other baselines included in the evaluation\n2. The strategy is agnostic to models (CNNs and ViTs) and images (medical and natural)\n3. Cufit does not require knowledge about certain hyperparameters which are required in other methods.\n4. The authors have presented results on non-medical images as well.\n\n**Weaknesses**\n\n1. The paper is very poorly written with spelling and grammatical errors throughout the text. Certain elements in the text are written in a convoluted way that confuses the reader.\n2. The entire paper is about the proposed method “Cufit” but no section in the paper describes the algorithm in detail. Section 6.1 (“How does Cufit work?”) does not talk about the method but only about the results. \n3. It is important to cite previous work on PEFT in medical image analysis such as [1] in the text.\n4. The paper lacks an explanation of the baseline methods used in the evaluation. Methods like CoDis are briefly mentioned in the Related Work section but have not been defined anywhere else. People unfamiliar would not be able to understand Cufit and how it differs from previously proposed methods.\n5. The experiments should include the CheXPert dataset. Apart from being frequently adopted for medical image analysis problems, it would also evaluate the proposed methodology for multi-label classification problems. Furthermore, CheXPert is supposed to contain noisy labels due to automatic labelling from free-text reports. Hence, it would adequately test the proposed method Cufit under a noisy label setting.\n6. To make the experiments more extensive, more datasets and PEFT methods (see [1] for reference) should be included. LoRA is one of the most popular PEFT strategies used for transformer-based models (especially in the case of medical image classification [1]) and should be included in the experiments.\n7. For natural image classification, the authors have adopted the CIFAR dataset. Firstly, in order to provide conclusive results, several natural imaging datasets should be included. Secondly, there are many datasets much more appropriate than CIFAR that should have been used instead.\n\nReferences\n1. Dutt, Raman, et al. \"Parameter-Efficient Fine-Tuning for Medical Image Analysis: The Missed Opportunity.\" *Medical Imaging with Deep Learning*, 2024, https://openreview.net/forum?id=LVRhXa0q5r.\n\n**Questions**\n\n1. After reading the paper several times, I am yet to understand how Cufit actually works. There is no section dedicated in the text that explains this.\n\n2. Why are methods like LoRA that are very frequently used for doing PEFT in transformer-based models and datasets like CheXpert excluded from the analysis?\n\nPlease address the **Weaknesses** section for more questions."
},
{
"confidence": 5,
"rating": 6,
"review_id": "adl3p4QGD4",
"review_text": "In this paper, the authors propose a curriculum learning strategy for fine-tuning on noisy medical datasets. The key insight comes from that linear probing with limited training samples can be more robust to label noise. The performance is good compared to the former methods.\n\nGenerally, I think this is a good paper. Noisy label is a severe and practical problem for medical scenarios due to diagnosis uncertainty and the intuition behind the proposed method is clearly stated. The performance looks good and the proposed method is clean and efficient.\n\nHowever, I still have to point out some details are not clearly demonstrated. Please check the questions for more details. I will change my score based on the authors' responses.\n\n1. Figure 2 is confusing. The core content in the method may fall in the Curriculum order and the agreement criteria. But for now, it is not clearly reflected in the figure which makes section 4.2. I suggest the author reorganize the figure and detail the purple block.\n\n2. I am not very clear about how the mode is trained. Commonly, when referring to curriculum learning, I guess the model will be trained with multiple steps but in equation 6 authors also modify the loss function which makes it also like a multi-task training pipeline. This confuses me a lot and the authors have to clarify it more clearly.\n\n3. For the noise simulation. The authors have to state the details more, e.g., how are the correct labels changed just randomly or with some rules? and for multi-label or multi-class medical classification tasks, are there any differences? Similarly, this also confuses me in the method part. If a case with both pneumonia and bone fracture labels but the pneumonia part is wrong, will the whole case be dismissed for the latter training?\n\n4. From lines 203 to 221, in introducing the dataset, it is important to specifically point out what datasets are multi-label tasks and what are multi-class, since in medical these two types all wide exist.\n\n5. In section 6.2, it is also encouraging to explore more performance differences between general VFM and medical-wise VFM like PMC-CLIP and BiomedCLIP. While considering the rebuttal time limitation, this is just a bonus suggestion.\n\n6. A minor suggestion, I suggest the authors clarify the definition of noise in their introduction part more formally as though for ML domain noise is clear, in medical domain, the noise types are complex which may be hard to understand for the audience in clinical background."
},
{
"confidence": 4,
"rating": 7,
"review_id": "5pjxPzXXkF",
"review_text": "This paper presents a curriculum fine-tuning paradigm called Cufit. This method is designed to fine-tune Vision Foundation Models (VFMs) for medical image classification tasks under the presence of noisy labels. The approach leverages the robustness of linear probing and the generalization capabilities of fine-tuning adapters to improve model performance.\n\n- Cufit is technically sound and is well-validated through extensive experimental results.\n- The paper is well-structured and effectively conveys the motivation, approach, and outcomes.\n- Demonstrated significant improvements in medical image classification performance under label noise.\n- Applicability to both medical and natural image classification enhances the relevance of the framework.\n\n- The training process seems to be complex and computationally intensive.\n- I have concerns about the scalability of the proposed method. It may not scale well for very large datasets or in resource-constrained environments.\n\n- How does the computational cost of Cufit compare to other noise-robust training methods?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "Vp5KkP2xaZ",
"review_text": "The paper presents Cufit, a curriculum fine-tuning paradigm for Vision Foundation Models (VFM) aimed at improving medical image classification under label noise. This method leverages the robust feature extraction capabilities of pre-trained VFMs and employs a linear probing strategy to mitigate the impact of noisy labels. The curriculum fine-tuning process then utilizes clean sample selection to enhance the classification performance. The experimental results demonstrate that Cufit outperforms existing methods on various medical image benchmarks, showing significant improvements in classification accuracy.\n\n1. The presentation is good.\n\n1. The paper includes experimental comparisons with methods like JoCor and CoDis, but the discussion about these methods' performance is insufficient. The authors should provide a more detailed analysis of why JoCor and CoDis do not perform as well as Cufit. Understanding the strengths and weaknesses of these methods in comparison to Cufit would offer valuable insights. \n\n2. The paper should further discuss the impact of noisy labels on different types of biomedical images. For some image types, noise may be less detrimental, while for others, it could significantly affect diagnostic accuracy. A more detailed exploration of how noise impacts various biomedical image datasets would enhance the comprehensiveness of the study.\n\n1. The paper includes experimental comparisons with methods like JoCor and CoDis, but the discussion about these methods' performance is insufficient. The authors should provide a more detailed analysis of why JoCor and CoDis do not perform as well as Cufit. Understanding the strengths and weaknesses of these methods in comparison to Cufit would offer valuable insights. \n\n2. The paper should further discuss the impact of noisy labels on different types of biomedical images. For some image types, noise may be less detrimental, while for others, it could significantly affect diagnostic accuracy. A more detailed exploration of how noise impacts various biomedical image datasets would enhance the comprehensiveness of the study."
}
] |
vWSll6M9pj | Unified Speech Recognition: A Single Model for Auditory, Visual, and Audiovisual Inputs | Research in auditory, visual, and audiovisual speech recognition (ASR, VSR, and AVSR, respectively) has traditionally been conducted independently. Even recent self-supervised studies addressing two or all three tasks simultaneously tend to yield separate models, leading to disjoint inference pipelines with increased memory requirements and redundancies. This paper proposes unified training strategies for these systems. We demonstrate that training a single model for all three tasks enhances VSR and AVSR performance, overcoming typical optimisation challenges when training from scratch. Moreover, we introduce a greedy pseudo-labelling approach to more effectively leverage unlabelled samples, addressing shortcomings in related self-supervised methods. Finally, we develop a self-supervised pre-training method within our framework, proving its effectiveness alongside our semi-supervised approach. Despite using a single model for all tasks, our unified approach achieves state-of-the-art performance on LRS3 for ASR, VSR, and AVSR compared to recent methods. Code will be made publicly available. | https://openreview.net/pdf/10f24815426948411d0ebd5225a50f68f81cf70b.pdf | [
{
"confidence": 2,
"rating": 7,
"review_id": "C9JloFz0hw",
"review_text": "This paper proposes a unified architecture and training method for auditory/visual speech recognition. Building upon this model, the authors introduce a semi-supervised pseudo-labeling method to leverage unlabeled audio-visual data, as well as self-supervised pre-training to enhance model performance. Experiments indicate that the model achieves state-of-the-art performance on A/V/AVSR.\n\n1. This work for the first time proposes an effective model and training procedure for unifying auditory and visual speech content recognition, which is of high novelty and practical significance. \n\n2. The author conducted comprehensive and extensive ablation studies, verifying the characteristics of the model and the effectiveness of each step in the training paradigm. The experimental results are robust and credible, offering significant guidance for related research.\n\nThe article has no obvious flaws, but there are some questions that I hope the authors can clarify (see questions).\n\n1. How is the weight of the teacher model in self-supervised pretraining initialized? Is it initialized randomly or with pretrained weight on another task?\n\n2. Did the author make a comparison between the teacher-student self-supervised pretraining in the paper with masked-autoencoding training of audio and/or visual features? Is the proposed pretraining method superior?\n\n3. Did the author investigate the effect of different masking ratios?"
},
{
"confidence": 3,
"rating": 7,
"review_id": "8CyQfJzuuT",
"review_text": "This paper proposes a training methodology for a *single* model which can use *either* audio, visual, or audiovisual features as input for automatic speech recognition. This is done by enforcing a training batch always includes (feature,label) pairs of all three modalities, using a 1D/2D ResNet-18 feature extractor for audio and video, respectively. These features are processed by a Transformer encoder-decoder model to obtain an ASR prediction. Furthermore, the authors explore a semi-supervised fine-tuning approach and a self-supervised initialization stage, both using a student-teacher approach, and within the same unified methodology. This allows the authors to produce a model which is competitive with state-of-the-art models while using a significantly less data.\n\nI think the proposed method is interesting for researchers in the audio-visual ASR domain and will spur future work. The paper is well-written with clear English, barring some questions I have stated below. The authors do a good job presenting their results, referring to details in the appendix where required. The ablation experiments clearly show readers how their proposed methodology behaves and why certain design decisions were made. The authors also shared their code and model checkpoints, which significantly increases the reproducibility and impact of this paper.\n\nThe model architecture seems a bit unclear to me. Specifically, line 88 states the use of a transformer encoder-decoder model. However, line 104 states a single FC layer on top of the encoder for vocabulary predictions, while line 107 states to use the decoder output sequence, which is subsequently not used as $1 - \\lambda_{ctc}=0$. So the decoder is not actually used during fine-tuning? How is inference actually done?\n\nI see no mention of a fixed random seed for running experiments, are all model initialized equal? This seems important as the paper does not have error bars/does not run experiments multiple times\n\nMinor editing comments:\n* Table titles must appear above the table as per the formatting instructions. \n* The table/figure combinations on Page 6 are confusing. Could you separate the figures as not part of a (sub)table?\n* A small description of LRS3 would be desirable for those not familiar with the dataset (e.g., how many hours does the unlabeled portion have (line 190), what is the data source, how was it collected, how large is the test set?)\n* line 97: 0.4 and 0.6 seconds for each second of ...\n\nIn which settings/experiments is the transformer decoder used?\n\nIn table 3 (A), is there a reason for not trying targets A + V + AV, as during fine-tuning?\n\nYou state in line 103 that features from the 3 modalities are concatenated along the batch dimension for efficient processing. However, Table 1 (B) shows that random sampling of modalities performs much worse, requiring 3x more epochs for similar performance. So it seems to me it's not only done for efficient processing, but also for effective optimization? \n\nAlso, do [13, 15] in line 179 share parameters for each task or not? According to Table 4 they do not, but if you use random sampling of modalities, how does this explain their relevance to Table 1 (B)?\n\nWhat is the CTC attention in Table 2 (C)? Is this simply equation 3 with $\\lambda_{ctc} < 1$? I might have missed it, but it seems to me the method section does not explain these 2 different loss types?"
},
{
"confidence": 5,
"rating": 5,
"review_id": "QHUA4lUi4x",
"review_text": "This paper proposes USR, a unified speech recognition model that leverages pseudo labels during fine-tuning. It introduces a single model capable of handling three tasks—ASR, VSR, and AVSR—simultaneously, delivering state-of-the-art performance.\n\n1. The paper is well-organized. Although the USR system is relatively complex, the paper presents each module with detailed descriptions and clear illustrations, making it easy for readers to follow.\n\n2. The experiments, including ablations, are extensive. All experimental details are included, making it easy to reproduce the results.\n\n3. The USR system leverages pseudo labels during the fine-tuning stage. While pseudo labeling is not a novel technique in ASR or AVSR, USR enhances the performance of ASR, VSR, and AVSR through carefully designed training procedures. The illustration of the pseudo labeling process is also clear.\n\n4. The system achieves nearly state-of-the-art performance across all tasks.\n\n5. The literature review is thorough.\n\n1. While not a unique weakness to this paper, the complexity of training current SSL-based VSR or AVSR systems remains a challenge. Introducing additional modalities significantly increases complexity compared to speech-only SSL systems. Notably, the reduction in GPU hours is minimal compared to previous works, and the convergence speed is exceedingly slow. Future work should address these issues.\n\n2. Performance is highly sensitive to certain configurations, such as the ratios of pseudo labels and the use of EMA. However, the paper lacks an analysis of why this sensitivity occurs or suggestions on how to mitigate it. These are common weaknesses in related work.\n\n3. The results do not consistently achieve state-of-the-art performance. The authors should experiment with other hyperparameters, such as learning rates, during fine-tuning to improve outcomes.\n\n4. Failure cases were not discussed too much.\n\n1. During pretraining, have you explored using audio-only targets? If so, what was the performance like compared to AV targets? How does it compare to AV-HuBERT?\n\n2. Why do you incorporate all three features (audio, video, audio-visual) during fine-tuning? Is there a rationale or experimental evidence supporting this approach?\n\n3. There's no need to adhere strictly to the architectures like AV-HuBERT or AV-data2vec. Consider experimenting with more advanced video encoders since visual features are often not well-extracted in previous studies.\n\n4. For pseudo label sampling, why opt for a greedy search? Have you considered trying soft sampling instead?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "UEyo2TCWtG",
"review_text": "This paper unifies the ASR, VSR, and AVSR tasks in a single model and shows the performance benefits of a single model in LRS3 data. There are several attempts at unifying these three models, but I think this is the first successful trial of realizing it. The paper proposes an effective training strategy to avoid losing performance on each task. Together with their self-supervised training, the model archives SOTA performance in a similar range of the training data.\n\n- the first successful method of realizing the ASR, VSR, and AVSR tasks in a single model while maintaining/improving the performance for each task\n- Good reproducibility based on the code release, use of the public data, and detailed experimental configurations/analyses.\n- Easy to read. Although the technique is a little bit complicated with a lot of terms depending on the architecture (CTC, attention, modality, training modes (self-supervised/supervised), the paper always provides some rationales (e.g., from the reference or experiments) to justify their methods\n - detailed ablation experiments support their design choices and strategies. \n- The paper also shows the effectiveness with multiple databases (LRS3, LRS2, and WildVSR)\n\n- the technical novelty is not very strong. Most techniques are well-known or straightforward (e.g., the use of CTC, pseudo-label filtering, etc.).\n\n- Page 4, line 110: I'm a bit confused about \"We set $\\lambda_{\\text{ctc}}$ to 1.\" Do you mean that you always set $\\lambda_{\\text{ctc}}$ to 1? No attention weights? Is it related to Table 2-d? Please clarify it.\n- Equation (4): Why didn't you prepare a different weight for a and av?\n- Section 3.2, Filtering: Did you use the same threshold for CTC and ATT? The dynamic range of c and a could be different, and I'm not sure that using the same threshold is optimal.\n- Section 4: Did you only use a Transformer architecture? How about using a Conformer architecture?\n- It is not a question but a suggestion. I recommend you emphasize the results of the multiple databases in the abstract to claim the generalization of this work across the database."
}
] |
vUrOuc6NR3 | DynaMo: In-Domain Dynamics Pretraining for Visuo-Motor Control | Imitation learning has proven to be a powerful tool for training complex visuo-motor policies. However, current methods often require hundreds to thousands of expert demonstrations to handle high-dimensional visual observations. A key reason for this poor data efficiency is that visual representations are predominantly either pretrained on out-of-domain data or trained directly through a behavior cloning objective. In this work, we present DynaMo, a new in-domain, self-supervised method for learning visual representations. Given a set of expert demonstrations, we jointly learn a latent inverse dynamics model and a forward dynamics model over a sequence of image embeddings, predicting the next frame in latent space, without augmentations, contrastive sampling, or access to ground truth actions. Importantly, DynaMo does not require any out-of-domain data such as Internet datasets or cross-embodied datasets. On a suite of six simulated and real environments, we show that representations learned with DynaMo significantly improve downstream imitation learning performance over prior self-supervised learning objectives, and pretrained representations. Gains from using DynaMo hold across policy classes such as Behavior Transformer, Diffusion Policy, MLP, and nearest neighbors. Finally, we ablate over key components of DynaMo and measure its impact on downstream policy performance. Robot videos are best viewed at https://dynamo-ssl.github.io. | https://openreview.net/pdf/a80285940d66984b6d99e1990c79614edb3af61b.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "TPArBz3Pfa",
"review_text": "This paper presents a way of pre-training vision encoder for robot control. Specifically, instead of using vanilla contrastive or masked autoencoder approaches, this method creates two models: 1) an inverse dynamics model that estimates the transition latent (actions) and 2) a forward dynamics model that takes in the current encoded visual latent and the transition latent and predicts the next latent observation. The results suggest that the method improves upon existing visual pre-training methods for robotics.\n\n1. Few of the past works on visual pretraining for robotics consider the time / action but only focus on the visual observation aspect. This work presents a method that attempts to improve visual pretraining by modeling the dynamics present in the dataset. \n2. The results suggest that the method improves upon existing visual pre-training baselines\n\n1. Prior visual pre-training for robotics operates under the premise that we have an image or video dataset, where we pre-train on these datasets and then finetune for a particular task. However, this method performs pre-training on the task-specific dataset, which is better aligned with the downstream tasks. Instead of having pre-training and fine-tuning using the same dataset and solving the same task, the objective of visual pretraining (exemplified by MAE, MoCo, etc.) is that if we train on a mass amount of data, we can finetune to a specific task (i.e. ImageNet pre-training then COCO segmentation finetuning). \n2. A few prior works [1,2] have tried to model forward and inverse dynamics concurrently. [1] also uses forward and inverse dynamics to train a visual encoder. The key difference between these works is that here action is modeled as a latent variable. Why ground truth action values are not used in pre-training (especially when pre-training and fine-tuning happen on the same task) is not justified in the manuscript. It would be quite convincing if pre-training is done on natural videos, or large-scale robot datasets where action spaces cannot be standardized, and then shows improved finetuning performance. \n\n[1] Agrawal, Pulkit, Ashvin V. Nair, Pieter Abbeel, Jitendra Malik, and Sergey Levine. \"Learning to poke by poking: Experiential learning of intuitive physics.\" Advances in neural information processing systems 29 (2016).\n\n[2] Fragkiadaki, Katerina, Pulkit Agrawal, Sergey Levine, and Jitendra Malik. \"Learning visual predictive models of physics for playing billiards.\" arXiv preprint arXiv:1511.07404 (2015).\n\nThe reviewer wants to ask for two sets of experiments to address weakness 1:\n1. How do masked pre-training methods compare to DynaMo when they are trained on the same data? I.e. train two networks analogously with the method provided in VC-1 and MVP on the task-specific dataset, and evaluate their task performance. This experiment would demonstrate that even for specific tasks, pre-training with DynaMo outperforms existing visual pre-training methods on task-specific datasets. \n2. How does DynaMo generalize to unseen tasks (in the sense that it can generalize to tasks outside of the dynamics seen in training)? I.e. pre-train DynaMo on (1) Put Yogurt (2) Get yogurt (3) Get Tea and evaluate on (1) Put ketchup (2) Get Water."
},
{
"confidence": 4,
"rating": 6,
"review_id": "mfh7M71rVv",
"review_text": "This paper presents a self-supervised model, DynaMo, for pretraining visual encoders adopted for visuo-motor control. The targeted downstream task is imitation learning for robotic manipulation. Instead of using an out-of-domain dataset for pretraining and then transferring to a new domain using alternative techniques, the authors propose exploiting sequences of observations from in-domain demonstrations to pretrain three different models, a visual encoder, a forward and an inverse model. Once this is done a policy can be learned with observations encoded using the pre-trained visual encoder.\n\nThe most important benefit of DynaMo is that a visual encoder can be trained with limited risk of suppressing data dimensions necessary for visuomotor control, an otherwise frequently occurring problem.\n\nEven if similar models that combine training of forward and inverse models have existed in literature before, the action representation is assumed unobserved in the proposed model, which has rarely been the case before. The literature on imitation learning from observed state sequences is vast, with little cited in the paper. However, the way this is done for pretraining in the proposed model is innovative and easily applicable to a practical scenario. \n\nThe experiments are rather exhaustive with five different settings and embodiments tested, two of which are real-world scenarios. In experiments that compare to alternative self-supervised methods and pretrained representations, the proposed visual embeddings are shown to be very competitive. It is also shown that DynaMo can be used to finetune an encoder pre-trained on ImageNet for even better results while being relatively insensitive to the choice of policy class.\n\nThe paper is written as if there were no research in the area before the deep learning boom. Only one citation out of 70 citations is older than 10 years. The paper suggests that training exclusively on in-domain data is new, even if this used to be the way it was typically done before the arrival of data-hungry deep-learning-based models, models that forced people to a greater extent to rely on offline training on out-of-domain data with data augmentation, contrastive learning, etc. \n\nThe idea to train pairs of inverse and forward models online has existed in psychology and robotics for at least 25 years, such as in the works of Wolpert et al [1]. Using similar models, imitation learning has been a common theme over the years, with [2] being just an example. Without this connection back to earlier research, this paper gives the impression of trying to reinvent the wheel, and it becomes unclear what the contributions really are. \n\nEven if the experiments suggest that DynaMo can be beneficial also in real-world settings, the presented experiments are too few to be conclusive. The real world is way more diverse with more than just a small selected set of objects that can be manipulated. However, this weakness is pointed out in the conclusions, which makes it less problematic. \n\n[1] Wolpert and Kawato, “Multiple paired forward and inverse models for motor control”, Neural Networks, 11, 1998.\n\n[2] Demiris and Hayes, “Imitation as a dual-route process featuring predictive and learning components: a biologically plausible computational model”, in Imitation in Animals and Artifacts, MIT Press, 2002.\n\n* Are the visual encoder, inverse and forward models only trained on the demonstrations from the respective datasets? 
Even if this is only for pretraining, the demonstrations, at least the real-world ones, are very few compared to the complexity of the tasks learned. Why not exploit all possible sequences available on the same embodiment, even for tasks that will eventually not be of interest? \n* How restrictive is the assumption that forward models are unimodal? Has this become a weakness during the experiments?\n* Since both the inverse and forward models seem to be ignored after pretraining, what is the motivation for a separation between the two? Why not train a network to predict the next encoded observation from earlier ones, essentially with the inverse and forward models merged into one? Why is the latent action representation needed at all?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "Cj6jAkKuMw",
"review_text": "This paper presents a self-supervised learning method for robot learning that learns representations by using data from demonstrations. The objective is based on learning latent actions from inverse dynamics, and learning forward dynamics model that uses such latent actions as inputs. Several techniques are utilized to prevent the model from finding trivial solutions and thus collapsing. Experiments are conducted in both real-world and simulation environments.\n\n- Clear writing with good figures\n- Real-robot experiments!\n- Focuses on the important problem of pre-training representations from demonstrations, as utilizing such limited but in-domain data can be crucial in the context of robot learning where in-domain data is scarse but important especially for fine-grained control tasks.\n\n- As other self-supervised learning models are trained on non-standard robotic datasets, it is not clear whether they are trained well with good hyperparameters -- for instance without collapses -- is there a way to ensure that baseline methods are well-tuned?\n- I understand that the main focus of this paper is to introduce a self-supervised learning method and compare its performance to other baselines. But what would the performance look like if you consider the full fine-tuning setup that uses gradients from behavior cloning for updating the encoder? Can we squeeze more information and maybe performance boost from fully fine-tuning the encoder? How would all the methods perform in this setup? This could further strengthen the claims of this paper that we should focus on extracting more information from demonstrations.\n- One important missing baseline is [1] that pre-trains (optionally causal) transformer with masked modelling objective. Even though it uses a pre-trained visual encoder, using features from the causal transformer can be still a baseline Moreover, it's a bit awkward that MAE trained on demonstrations is missing from the baseline even though MVP is selected as a pre-trained representation baseline. Including MAE, maybe optionally its multi-view variant [2], can make results be more convincing.\n\n[1] Radosavovic, Ilija, Baifeng Shi, Letian Fu, Ken Goldberg, Trevor Darrell, and Jitendra Malik. \"Robot learning with sensorimotor pre-training.\" In Conference on Robot Learning, pp. 683-693. PMLR, 2023.\n\n[2] Seo, Younggyo, Junsu Kim, Stephen James, Kimin Lee, Jinwoo Shin, and Pieter Abbeel. \"Multi-view masked world models for visual robotic manipulation.\" In International Conference on Machine Learning, pp. 30613-30632. PMLR, 2023.\n\nSee Weaknesses"
},
{
"confidence": 5,
"rating": 4,
"review_id": "ggroeyzvk5",
"review_text": "This paper presents DynaMo, using in-domain data for self-supervision. It jointly learns a latent inverse dynamics model and a forward dynamics model over a sequence of image embeddings. The\n\nThis paper is easy to follow.\n\nSimplified Real-World Setup:\nThe real-robot experiments appear overly simplistic. Objects seem to be fixed in place, indicated by the red marker on the table, suggesting a lack of randomization in object placement. This setup makes the task easier for conventional imitation learning methods like diffusion policy and act, potentially allowing them to achieve a 100% success rate.\n\nSuggestion: Introduce spatial randomization to the scene. Conduct additional experiments under these conditions to demonstrate Dynamo's superiority in more complex and varied scenarios.\n\n2) Unfair Comparisons in Simulation: \nIn Table 1, Dynamo is compared with several baselines that use different backbones, which makes the comparison potentially unfair.\nImpact: The difference in backbones could skew the performance results, making it difficult to accurately assess Dynamo's relative performance. \n\nSuggestion: Include more experiments of Dynamo with various backbones such as ViT and ResNet-50. Compare these results against the baselines to provide a fairer and more comprehensive evaluation.\n\n3) The motivation for using SSL in this context is unclear. Typically, SSL is advantageous due to its ability to learn from massive datasets without human labels. However, in the field of robotics, in-domain data are often scarce. This could make the application of SSL less persuasive and potentially less effective.\n\nCurrently, I believe this paper has significant flaws in its experimental design, both in simulation and real-robot settings. As such, my initial score is 4, with the real-robot experiments being a notable strength. However, the existing experiments do not sufficiently support the claims made in the paper. If the authors can provide additional experiments based on my suggestions above, and if the results substantiate their claims, I would be willing to raise my rating."
}
] |
vU512K8vrR | Unveiling LoRA Intrinsic Ranks via Salience Analysis | The immense parameter scale of large language models underscores the necessity for parameter-efficient fine-tuning methods. Methods based on Low-Rank Adaptation (LoRA) assume the low-rank characteristics of the incremental matrix and optimize the matrix obtained from low-rank decomposition. Although effective, these methods are constrained by a fixed and unalterable intrinsic rank, neglecting the variable importance of matrices. Consequently, methods for adaptive rank allocation are proposed, among which AdaLoRA demonstrates excellent fine-tuning performance. AdaLoRA conducts adaptation based on singular value decomposition (SVD), dynamically allocating intrinsic ranks according to importance. However, it still struggles to achieve a balance between fine-tuning effectiveness and efficiency, leading to limited rank allocation space. Additionally, the importance measurement focuses only on parameters with minimal impact on the loss, neglecting the dominant role of singular values in SVD-based matrices and the fluctuations during training. To address these issues, we propose SalientLoRA, which adaptively optimizes intrinsic ranks of LoRA via salience measurement. Firstly, during rank allocation, the salience measurement analyses the variation of singular value magnitudes across multiple time steps and establishes their inter-dependency relationships to assess the matrix importance. This measurement mitigates instability and randomness that may arise during importance assessment. Secondly, to achieve a balance between fine-tuning performance and efficiency, we propose an adaptive adjustment of time-series window, which adaptively controls the size of time-series for significance measurement and rank reduction during training, allowing for rapid rank allocation while maintaining training stability. This mechanism enables matrices to set a higher initial rank, thus expanding the allocation space for ranks. To evaluate the generality of our method across various tasks, we conduct experiments on natural language understanding (NLU), natural language generation (NLG), and large model instruction tuning tasks. Experimental results demonstrate the superiority of SalientLoRA, which outperforms state-of-the-art methods by 0.96\%-3.56\% on multiple datasets. Furthermore, as the rank allocation space expands, our method ensures fine-tuning efficiency, achieving a speed improvement of 94.5\% compared to AdaLoRA. The code is publicly available at https://github.com/Heyest/SalientLoRA. | https://openreview.net/pdf/212291a21697f8ab6df33a882c29090800c75abd.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "ijb6DR2EGY",
"review_text": "The work presents an algorithm for adapting the rank of the LORA matrices according to a novel “saliency metric” assigned to each singular value of the LORA matrices. \n\nThe saliency measure is computed taking into account a sequence of steps (time window) during training and computing two quantities at the end of each time window: the orthogonality-aware singular values and the domain influence of each singular value. The orthogonality-aware singular value is a weighted average of the singular value where the weight takes into account the orthogonality of the SVD decomposition at that step. The domain influence takes into account the correlation between singular values within each time window. \nAt the end of each step sequence, the ranks are adjusted based on this salience measurement.\n\nThe authors propose a novel and interesting algorithm. The chosen setup speeds up the LoRA fine-tuning while maintaining accuracy or slightly outperforming other methods on the reported benchmarks. The experimental evaluation is convincing since the authors compare the proposed algorithm with other LoRA improvements on a reasonable number of tasks.\n\nThe main weakness of the work is the clarity of the exposition, which is obscure in some parts. \n\nFor example, one of the methods' core building blocks is the decycling operation of the dependency graph mentioned between lines 164-165: the authors must reference the algorithm they use for “de-cycling” the graph, describing its steps at least in the appendix.\\\nI leave other statements that require clarification in the question section below.\n\nIn addition, the paper does not discuss the limitations of the methods.\n\n> 1. *line 143 to 146: The authors claim that the weight assigned to singular values of high loss should be small. However, from equation (2), it seems that the steps with higher loss receive the larger weight in the time window.* \n\n> 2. *line 148. What do the authors mean by “we normalize the weights from 0 to 1”? An equation would be helpful to clarify the operation here.*\n\nA key performance measure that is not discussed is memory consumption. \n> 3. *Given a fixed parameter budget what is the amount of VRAM consumed by SalientLORA compared to for instance AdaLora?*\n\nOther minor remarks:\n\nWhen the authors state that AdaLORA has been adopted *in numerous research studies* (line 50) they should cite at least the most relevant ones to support the claim.\n\nline 56 The sentence is somewhat obscure. \n> 4. *What do the authors mean by “dominant role of singular values in the SVD matrix”? What is the precise meaning of “dominant role” in this context?*"
},
{
"confidence": 3,
"rating": 5,
"review_id": "wDnA8DDVSQ",
"review_text": "The paper introduces SalientLoRA, an approach designed to optimize the intrinsic ranks of LoRA components in LLMs through salience measurement. The method first utilizes salience measurement to analyze the variations and inter-dependencies of singular value magnitudes over time, which helps assess matrix importance while mitigating instability and randomness. This analysis informs the adaptive adjustment of the time-series window used for significance measurement and rank reduction during training. This adaptive mechanism allows for rapid and stable rank allocation, permitting an initially higher rank setting to expand the allocation space for ranks.\n\n1. SalientLoRA's use of salience measurement to analyze and utilize the variations of singular values effectively addresses the challenges of instability and randomness in rank optimization. The adaptive adjustment of the time-series window for significance measurement during training enhances the efficiency and stability of rank allocation.\n\n2. Demonstrating substantial performance gains over state-of-the-art methods on diverse NLU and NLG tasks highlights the effectiveness of SalientLoRA in practical applications.\n\nThe proposed method incorporates a sophisticated multi-stage process that involves several critical hyperparameters, such as $\\beta$, $\\gamma$, $T_i$, and $T_f$. However, the paper currently lacks a detailed analysis of these hyperparameters, which is crucial for understanding their roles and optimal settings within the methodology. Systematically exploring how each hyperparameter impacts the model's performance, including sensitivity analyses or hyperparameter tuning results, would greatly enhance the paper's scientific rigor.\n\nTo fully evaluate the robustness of the proposed method, could you provide detailed ablation studies and analyses for the hyperparameters, including $\\beta$, $\\gamma$, $T_i$, and $T_f$?"
},
{
"confidence": 4,
"rating": 6,
"review_id": "VsvtqH4uNo",
"review_text": "This paper proposes SalientLoRA, a new method for adaptively optimizing the intrinsic ranks of low-rank adaptation (LoRA) matrices. The key ideas are:\n\nUsing singular value decomposition (SVD) to decompose the LoRA matrices and measure the salience/importance of each singular value based on its magnitude, orthogonality constraints, and influence on other singular values within a time window during training.\n\n- Novel salience measurement technique that considers singular inter-dependencies and temporal variations.\n- Comprehensive evaluation across many datasets and model types (encoder, decoder, encoder-decoder).\n- Achieves new state-of-the-art results on multiple benchmarks while being more efficient than prior LoRA methods.\n\n- The article contains some details that are not clearly explained, such as how the R function on line 145 is calculated, and what specifically is done in the de-cycling process introduced on line 165.\n- More analysis could be provided to interpret why the salience measurement works well. For example, are the average of influence domains consistent across models fine-tuned on different types of datasets?\n\n1. Taking the last row of Table 1 as an example, initially, the model uses a total rank that is 7.5 times the target rank, so the gpu memory usage is roughly equivalent to that of LoRA with r=8*7.5=60. Although the memory usage may decrease during the model optimization process, can you consider comparing with methods like LoRA and DoRA with r=60?\n2. Based on my understanding, the constructed influence domains form an undirected simple graph. If this graph forms a single cycle, how do you perform de-cycling?\n3. Do you calculate influence domains starting from vertices with a degree of 0, similar to topological sorting, and then update the degrees of the vertices connected to it, then repeat the process?"
}
] |
vU1SiBb57j | Learning Multimodal Behaviors from Scratch with Diffusion Policy Gradient | Deep reinforcement learning (RL) algorithms typically parameterize the policy as a deep network that outputs either a deterministic action or a stochastic one modeled as a Gaussian distribution, hence restricting learning to a single behavioral mode. Meanwhile, diffusion models emerged as a powerful framework for multimodal learning. However, the use of diffusion policies in online RL is hindered by the intractability of policy likelihood approximation, as well as the greedy objective of RL methods that can easily skew the policy to a single mode. This paper presents Deep Diffusion Policy Gradient (DDiffPG), a novel actor-critic algorithm that learns from scratch multimodal policies parameterized as diffusion models while discovering and maintaining versatile behaviors. DDiffPG explores and discovers multiple modes through off-the-shelf unsupervised clustering combined with novelty-based intrinsic motivation. DDiffPG forms a multimodal training batch and utilizes mode-specific Q-learning to mitigate the inherent greediness of the RL objective, ensuring the improvement of the diffusion policy across all modes. Our approach further allows the policy to be conditioned on mode-specific embeddings to explicitly control the learned modes. Empirical studies validate DDiffPG's capability to master multimodal behaviors in complex, high-dimensional continuous control tasks with sparse rewards, also showcasing proof-of-concept dynamic online replanning when navigating mazes with unseen obstacles. Our project page is available at https://supersglzc.github.io/projects/ddiffpg/. | https://openreview.net/pdf/c1d49e0a18ba30e91f8393bb7381467ee4dc0bc6.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "wHQlxr08RP",
"review_text": "This paper introduces DDiffPG for online reinforcement learning with multi-modal behaviour discovery. DDiffPG consists of two parts: 1) a new policy improvement method to stabilise the diffusion policy by cloning a target action; 2) a mode discovery mechanism to train mode-specific and intrinsic Q functions. In their experiments, the authors have shown that DDiffPG can achieve comparable performance with the baselines while producing multi-modal behaviours, which provides a series of benefits like avoiding mode collapse.\n\nThe paper has introduced an interesting idea. To the best of my knowledge, this is the first work that allows diffusion policy to learn multi-modal behaviours during online RL. According to the experiments, the proposed method has produced a reasonable performance with nice multi-modal behaviours. Besides, the paper has provided nice visualisations and discussions to help understand the proposed approach.\n\nThere are several main weaknesses of the paper.\n\n- The paper is hard to follow and the presentation has a certain room for improvement.\n\n- In section 3, the formal theoretical derivation of the newly introduced policy improvement objective is missing. Although it shows that this method worked empirically, it remains unclear how the resulting policy theoretically maximises the expected return in general.\n\n- I feel the paper is a bit over-claiming for certain aspects. In section 5.3, the authors claimed that DDiffPG can *overcome* local minimum issues and encourage exploration. However, the exploration comes from the use of RND when learning $Q_\\mathrm{explore}$, rather than the architecture itself. In addition, it is a very strong claim that DDiffPG **overcomes** the local minimum issues. The experiments are conducted on only 8 state-based tasks, from my point of view, which is insufficient to support such a general claim. I understand that by capturing multi-modal distributions, DDiffPG allows better generalisation, but I would suggest the authors moderate the claims a bit.\n\nMinor issues:\n- In line 157, is this a typo? In $r^\\mathrm{intr}(s, a, s’) = \\max(\\mathrm{novelty}(s’) - \\alpha \\mathrm{novelty}(s’), 0)$, should this be $r^\\mathrm{intr}(s, a, s’) = \\max(\\mathrm{novelty}(s’) - \\alpha \\mathrm{novelty}(s), 0)$?\n\n- In section 4.2, line 183, I’m not fully convinced that the RL objective can skew the policy towards a single mode. Suppose we have a Q function that nicely captures two modes. During policy improvement, let’s say we sampled a batch of trajectories that equally captures both modes, and we perform policy improvement by $\\max \\mathbb{E}\\left[Q(s, a)\\right]$. Given that our Q function already nicely captures both modes, why does such an objective cause mode collapse? Could you provide more explanations? Considering the success of DDiffPG on capturing the multi-modality in the policy space, is this really because of the way you perform policy improvement in Eqn. 1, or is it because the DDiffPG used multiple Q functions for separate modes, which just better fits the multi-modal distribution?\n\n- Regarding the use of mode-specific Q functions, it is a bit unclear to me how to stabilise the training. One issue is that during online exploration, the dataset is continuously being updated and modes are being updated. In this case, how do we fix the correspondence between the Q functions being learned and the mode? 
Besides, according to line 167, DDiffPG requires no predefined number of clusters, and the number of modes could be dynamic. However, we have to initialise a fixed number of Q functions. This seems a bit contradictory to me. How to define the number of Q functions during training?\n\n- It seems to me the exploration is only guaranteed by the training of $Q_\\mathrm{explore}$ using RND. However, how do we balance the exploration and exploitation during RL?"
},
{
"confidence": 4,
"rating": 7,
"review_id": "hbzpzeWplQ",
"review_text": "This paper addresses the challenges associated with employing diffusion policy in online reinforcement learning (RL), particularly the intractability of policy likelihood approximation and the bias towards a single mode. The author introduces the Deep Diffusion Policy Gradient (DDiffPG) method, which decouples exploration from exploitation. For exploration, novelty-based intrinsic motivation and hierarchical clustering are utilized to identify modes, while for exploitation, the author describes the mode-specific Q-function and a multimodal data batch. Empirical evaluations demonstrate that DDiffPG effectively masters multimodal behaviors.\n\n+ The application of diffusion policy for multiple modes in an online setting is promising and addresses a previously unexplored area in the literature.\n+ The introduction of a diffusion-based policy gradient method is novel and represents a significant contribution to the field.\n+ The work is well-motivated, and the visualization of multimodal behaviors using antmaze examples effectively enhances understanding and illustrates the practical utility of the approach.\n\n+ Several claims require additional support. For instance, the author asserts that standard exploration-exploitation strategies may easily converge towards a single mode (Lines 25-27) without providing theoretical or experimental evidence. Similar issues are present in Lines 35-36 and Lines 52-53. These statements are crucial for constructing the paper's motivation and thus require more substantial support to enhance their reliability.\n\n+ In Lines 263-267, the author explains that DDiffPG could learn suboptimal paths. Is this statement intended to justify the suboptimal performance compared to TD3? The author suggests that this suboptimal issue can be mitigated by the mode embeddings. It would be more effective to present the best performance and use the suboptimal trajectories as ablations, specifically when blocking the optimal path, to highlight the significance of multiple trajectories.\n+ Why does directly using the action gradient to optimize the policy lead to vanishing gradients and instability? Is this due to the large denoising steps? Including corresponding ablation studies would provide a better illustration.\n+ Unlike the offline setting where trajectories are stable, the replay buffer with the updated Q function results in changed pairs of $(s, a^{target})$. Does training the diffusion model with a supervised framework on continually changing pairs lead to instability in learning? (Lines 126-129)\n+ Why does DIPO, which uses the original $a$ from the buffer, not know the true outcome? I understand that the replay buffer contains the past trajectories $(s,a,r,s')$ (Lines 134-137).\n+ Are the mode-specific Q-functions also applicable to other standard policies?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "Wmj98qI544",
"review_text": "This paper aims to solve online RL problems with diffusion policy. It includes 1. a diffusion policy optimization method for diffusion online training. 2. A combination of intrinsic rewards motivated skill discovery method and model-seeking Q-learning to facilitate exploration and prevent mode-collapse behavior. 3. Several self-designed environments where there might be multiple optimal solutions and thus require expressive exploration policy.\n\n1. The paper shows diffusion policy has a big potential in online RL because it enables multimodal exploration.\n2. The self-designed environments are a good contribution to the research field by showcasing the necessity of diffusion exploration.\n3. Performance clearly surpasses several baselines.\n\nfollow up:\n\nThe experiments basically support comments in the paper. The paper sets out to handle the single-mode exploration problem in online RL, and the self-designed environments, unlike most previous classics, allow diverse optimal behaviors and can benefit from multimodal exploration. The experiments show that the proposed method outperforms several classic baselines including some diffusion-based methods.\n\n1. The proposed diffusion training objective seems handcrafted and requires a lot of tunning. This may limit the algorithms' further application.\n2. Besides the diffusion optimization methods. Other proposed techniques are more like a good combination of previous work. This indicates limited theoretical novelty.\n3. Code is not provided. For this style of paper, I think code quality is essential, and a mere promise to release the code is not convincing.\n\nfollow up:\n1. The ablation studies are not strong enough to prove the improved performance number actually comes from multimodal exploration. I cannot be certain which part of the method works from the experiments. More visualization/empirical results/analyses should be given.\n2. The formatting of the table/figure can be greatly improved. For instance, the title of Figure 4 is wrong/incomplete. Table 3/4 is referenced as the main results in the paper but only put in the appendix. \n3. The diffusion optimization results also lack very strong novelty. The loss function is basically a supervised learning loss adapted for online RL, without strong convergence or policy improvement guarantee. Still, the diffusion+online RL theories are a known unsettled and hard problem, so this kind of exploration is fine and meaningful.\n\nNone"
}
] |
vS5NC7jtCI | AdaNeg: Adaptive Negative Proxy Guided OOD Detection with Vision-Language Models | Recent research has shown that pre-trained vision-language models are effective at identifying out-of-distribution (OOD) samples by using negative labels as guidance. However, employing consistent negative labels across different OOD datasets often results in semantic misalignments, as these text labels may not accurately reflect the actual space of OOD images. To overcome this issue, we introduce \textit{adaptive negative proxies}, which are dynamically generated during testing by exploring actual OOD images, to align more closely with the underlying OOD label space and enhance the efficacy of negative proxy guidance. Specifically, our approach utilizes a feature memory bank to selectively cache discriminative features from test images, representing the targeted OOD distribution. This facilitates the creation of proxies that can better align with specific OOD datasets. While task-adaptive proxies average features to reflect the unique characteristics of each dataset, the sample-adaptive proxies weight features based on their similarity to individual test samples, exploring detailed sample-level nuances. The final score for identifying OOD samples integrates static negative labels with our proposed adaptive proxies, effectively combining textual and visual knowledge for enhanced performance. Our method is training-free and annotation-free, and it maintains fast testing speed. Extensive experiments across various benchmarks demonstrate the effectiveness of our approach, abbreviated as AdaNeg. Notably, on the large-scale ImageNet benchmark, our AdaNeg significantly outperforms existing methods, with a 2.45\% increase in AUROC and a 6.48\% reduction in FPR95. Codes are available at \url{https://github.com/YBZh/OpenOOD-VLM}. | https://openreview.net/pdf/7d0df0f32fd41d7ace28bcfe957e1aeb0ad53117.pdf | [
{
"confidence": 5,
"rating": 6,
"review_id": "RIIFNogE9p",
"review_text": "The paper \"AdaNeg: Adaptive Negative Proxy Guided OOD Detection with Vision-Language Models\" presents a novel approach to out-of-distribution (OOD) detection using pre-trained vision-language models (VLMs). The primary innovation is the introduction of adaptive negative proxies, which are dynamically generated during testing by exploring actual OOD images. This method addresses the semantic misalignment issues of previous approaches that use static negative labels. AdaNeg utilizes a feature memory bank to cache discriminative features from test images, creating task-adaptive and sample-adaptive proxies that better align with the specific OOD datasets. The approach combines static negative labels with adaptive proxies to enhance the performance of OOD detection, achieving significant improvements in benchmarks like ImageNet. The method is training-free, annotation-free, and maintains fast testing speeds.\n\n1.\tInnovative Approach: The introduction of adaptive negative proxies to address semantic misalignment is a significant advancement. This dynamic generation of proxies during testing offers a novel solution to improve OOD detection.\n2.\tEffective Use of Vision-Language Models: Leveraging VLMs to integrate textual and visual knowledge enhances the robustness and accuracy of OOD detection.\n3.\tPerformance Improvement: The method shows substantial improvements in standard benchmarks, particularly a 2.45% increase in AUROC and a 6.48% reduction in FPR95 on the ImageNet dataset.\n4.\tTraining-Free and Annotation-Free: AdaNeg does not require additional training or manual annotations, making it highly efficient and practical for real-world applications.\n5.\tScalability and Efficiency: The method maintains fast testing speeds and can dynamically adapt to new OOD datasets without significant computational overhead.\n6.\tComprehensive Evaluation: Extensive experiments and analyses demonstrate the effectiveness and robustness of the proposed approach across various benchmarks.\n\n1. Potential Overhead in Memory Management: The implementation of a memory bank for caching features may introduce significant overhead in memory management, especially when dealing with large-scale datasets or high-dimensional feature spaces.\n2. Generalization to Other Domains: Although the approach demonstrates promising results on existing public datasets, its effectiveness in other domains or with different types of data remains uncertain and requires further investigation.\n3. Testing Phase Dependency: It is unclear whether the approach can maintain the same level of reliable performance when only a small number of images are tested in practical applications. This dependency on the number of test images warrants additional examination.\n\nSee weakness."
},
{
"confidence": 5,
"rating": 5,
"review_id": "kXHqgABTra",
"review_text": "In this paper, the authors propose AdaNeg, a test-time adaption method for CLIP-based post-hoc OOD detection. AdaNeg is an extension of NegLabel and introduces a class-wise memory bank for each ID and negative labels. The memory bank is gradually filled with ID and OOD features during the model deployment. The author design a margin-based approach to select positive and negative samples with high confidence. And they propose a cache elimination mechanism to update the memory bank. Besides, AdaNeg uses cross attention between the input sample and the memory bank to reweight the cached features. The experimental results show the proposed method outperforms the baseline methods under various benchmarks.\n\n<1> AdaNeg uses dynamic OOD proxies instead of the static design of NegLabel, achieving SOTA performance in CLIP-based zero-shot OOD detection. \n\n<2> The multi-modal score is an interesting design and explanation that demonstrates the improvement brought by using both text and image encoding capabilities in a multi-modal model.\n\n<3> The paper is well organized and easy to follow.\n\n**Major concerns**\n\n<1> AdaNeg is a test-time adaption approach that caches features w.r.t. ID labels and negative labels by maintaining a class-wise memory bank. For OOD detection, the biggest problem of the test-time adaptation method is that the arrival time of the OOD sample is uncertain. Compared with non-TTA methods, AdaNeg has greater uncertainty in its performance during the deployment phase and may even risk causing model collapse. \n\nFor example, when the model is deployed to a close-world environment, almost all input samples are ID samples (I believe this is a very common scenario). In this case, the memory banks of negative labels will gradually be filled with ID samples (in long-term deployment, there will always be misclassified ID samples that enter the negative memory banks). Since the number of negative labels is much greater than that of ID labels, more and more ID samples will be misclassified as OOD over time. I suggest the author conduct an experiment using the 1.28M training set images of ImageNet-1k as input (this still meets the zero-shot setting of CLIP) and observe how the proportion of samples misclassified as OOD changes with the number of input samples. If 1.28M images are repeatedly input into multiple rounds, will the misclassification rate increase further? In contrast, the other case is that the OOD samples are far more than the ID samples. Will this cause a greater false positive risk? I hope the authors can test their method with different ID and OOD sample mixture ratios, such as 1:100, 1:10, 1:1, 10:1, 100:1.\n\nIn summary, I suggest the authors to further study the setting of TTA in OOD detection to improve the motivation of the work, since the input samples may come from two different distributions, ID and OOD. How to ensure the stability of TTA OOD detection algorithm when the input stream is a non-stationary process is a problem worth studying.\n\n<2> The negative labels provide the initial memory bank slots for AdaNeg, but it seems to me that the negative labels are not necessary. This suggests that we need to rethink AdaNeg's motivation for negative labels. Why do samples that are judged as negative need to be placed in the memory bank w.r.t. the negative label? What if the authors directly use the MCM score to judge negative samples and then let them organize themselves into OOD proxies? 
The authors need to provide a more detailed analysis (preferably theoretical analysis) to prove that the **negative label-based** memory bank design is necessary.\n\nFurther, negative labels simply select words that are semantically far from ID labels. For some OOD samples, they may be far away from both ID labels and negative labels. According to the mechanism of AdaNeg, they cannot enter the memory bank of negative labels. Is this a negative impact of designing a memory bank based on negative labels?\n\n**Minor concerns**\n\n<1> The authors need to provide more detailed experimental settings. The paper mentions that memory banks are task specific. When evaluating the model, taking the ImageNet-1k benchmark as an example, do the authors maintain an independent memory bank for each OOD dataset (precisely, each ID-OOD pair), or did the four OOD datasets share one memory bank?\n\n<2> There seem to be some typos and symbol issues in the paper.\n\na) L247: temperature $\\tau = 100$ seems to be $\\tau = 0.01$ because $\\tau$ is in the denominator.\n\nb) The subscript NL is not case-inconsistent, e.g., Eq. (4) and Eq. (8).\n\nsee Cons"
},
{
"confidence": 3,
"rating": 5,
"review_id": "Nx44vVzMAL",
"review_text": "This paper introduces a new algorithm for Out-Of-Distribution (OOD) sample detection. First, it analyzes the shortcomings of previous Vision-Language OOD detection methods and proposes improvements based on these findings. Specifically, the paper presents a scheme for online updating of the memory bank during testing to design better negative proxies. The authors conducted experiments on datasets such as ImageNet and CIFAR. According to the experimental results, the newly proposed method can enhance OOD detection performance.\n\n1. Currently, vision-language models are developing rapidly, and using them for OOD sample detection is a promising direction. Approaching from this perspective may yield better results.\n2. The experiments in this paper are relatively thorough, encompassing both large datasets based on ImageNet and smaller datasets based on CIFAR. According to the authors' experimental results, the newly proposed method can improve the accuracy of OOD detection.\n\n1. The motivation in this paper is not very clear. Specifically, in Figure 1(a), it is not evident why the newly proposed AdaNeg is better than NegLabel. On the contrary, the distribution of OOD samples seems to be closer to NegLabel.\n\n2. The method proposed in this paper is based on the features and results of test samples during testing, which limits the upper bound of the method. In my opinion, the effectiveness of the proposed method relies on the vision-language model's strong inherent OOD detection capability, meaning that most test samples can be correctly processed. Based on these correctly processed samples, the method can further improve the detection accuracy of other samples. However, if in a certain scenario, the model itself cannot correctly estimate most of the samples, this method might actually make the results worse.\n\n3. This paper merely performs optimizations based on the NegLabel framework, without many innovative points. The novelty of this improvement is insufficient to support a NeurIPS paper.\n\nAs shown in weakness. What is the meaning of Figure 1, and is the proposed method effective when the base model predicts most samples incorrectly?"
},
{
"confidence": 5,
"rating": 8,
"review_id": "a8RDzZ2FOt",
"review_text": "The authors introduce a new approach to leverage the pre-trained vision-language model for identifying out-of-distribution (OOD) samples. Compared to prior works that employ consistent negative labels across different OOD datasets, they introduce adaptive negative proxies to dynamically generate text labels during testing by exploring actual OOD images, thereby aligning more closely with the underlying OOD label space. Empirically, the proposed method demonstrates state-of-the-art performance across various OOD detection benchmarks especially on the large-scale ImageNet benchmark.\n\n- Dynamically generating negative proxies is a simple and effective strategy. \n\n- The setting studied is very natural and this paper can easily stimulate further research in the area.\n\n- The proposed approach performs well, particularly on large-scale datasets such as ImageNet, effectively demonstrating its scalability.\n\n- The paper is nicely written.\n\n- While the proposed AdaNeg shows clear improvements over training-free baselines, its overall performance on ImageNet still lags behind training-based methods. This raises the question of whether there are opportunities for complementarity between the two approaches.\n\n- Can the dynamic update of the memory bank and refinement of OOD proxies during the testing stage be considered a form of test-time training? The authors are requested to clarify the inherent connections and distinctions, especially from the perspectives of training versus training-free approaches.\n\n- If negative proxies can directly identify true out-of-distribution (OOD) test images during the testing phase, is it possible to use the identified OOD samples to update the model parameters online?\n\nPlease refer to the weakness."
}
] |
vP9qAzr2Gw | Supra-Laplacian Encoding for Transformer on Dynamic Graphs | Fully connected Graph Transformers (GT) have rapidly become prominent in the static graph community as an alternative to Message-Passing models, which suffer from a lack of expressivity, oversquashing, and under-reaching. However, in a dynamic context, by interconnecting all nodes at multiple snapshots with self-attention,GT loose both structural and temporal information. In this work, we introduce Supra-LAplacian encoding for spatio-temporal TransformErs (SLATE), a new spatio-temporal encoding to leverage the GT architecture while keeping spatio-temporal information. Specifically, we transform Discrete Time Dynamic Graphs into multi-layer graphs and take advantage of the spectral properties of their associated supra-Laplacian matrix. Our second contribution explicitly model nodes' pairwise relationships with a cross-attention mechanism, providing an accurate edge representation for dynamic link prediction. SLATE outperforms numerous state-of-the-art methods based on Message-Passing Graph Neural Networks combined with recurrent models (e.g, LSTM), and Dynamic Graph Transformers, on~9 datasets. Code is open-source and available at this link https://github.com/ykrmm/SLATE. | https://openreview.net/pdf/acb194ce31b86916495f23d4c82ee0d79949b5cb.pdf | [
{
"confidence": 3,
"rating": 7,
"review_id": "KGO0gnONop",
"review_text": "This paper introduces a new method called Supra-Laplacian Encoding for spatio-temporal Transformers(SLATE) to deal with dynamic graph challenges. Its core approach is to enhance the graph transformer(GT) architecture by integrating spatio-temporal information more efficiently. It deploys a new technology to convert discrete-time dynamic graphs into multiplayer graphs and exploit the spectral properties of their associated super-laplacian matrices. SLATE also implements a cross-attention mechanism to explicitly model pairwise relationships between nodes. SLATE can capture the dynamic nature of graphs more accurately with this implementation. SLATE provides a powerful tool for applications ranging from social network analysis to understanding complex biological networks.\n\n1.SLATE applies spectral graph theory to the dynamic graph domain in a novel way. \n2.The quality of this study is evident in the rigorous experimental setup and the comparion with SOTA methods. It is able to outperform many existing models on nine datasets. \n3.The authors provide a detailed explanation of the method and the underlying theoretical concepts. And the open-source code and the instructions for reproducing the results enhances the clarity and accessiblility.\n\n1. The experimental results of CanParl in Table 2 is not very good.\n2.The permutation setting of SLATE may limit its ability to generalise to unseen nodes and large-scale graph data.\n\nHow does the model perform as the size of the graph increases?"
},
{
"confidence": 4,
"rating": 4,
"review_id": "aZcu8rwbaI",
"review_text": "This paper proposes SLATE, a novel method for link prediction in dynamic graphs. SLATE transforms dynamic graphs into multi-layer networks and generates a unified spatio-temporal encoding by leveraging the spectral properties of the supra-Laplacian matrix. It uses a fully connected transformer architecture to capture long-range dependencies between nodes across multiple time steps. The authors introduce a cross-attention-based edge representation module for dynamic link prediction. They claim that SLATE significantly outperforms existing state-of-the-art methods on several benchmark datasets\n\n1. The idea of transforming dynamic graphs into multi-layer networks and utilizing the supra-Laplacian is innovative.\n\n2. Extensive experiments were conducted on various datasets and baselines.\n\n3. The method shows superior performance compared to state-of-the-art approaches on multiple datasets.\n\nW1. The explanation for adding temporal connections in the supra-Laplacian construction stage seems insufficient.\n\nW2. The description of how to construct the supra-Laplacian is not comprehensive enough.\n\nW3. The characteristics of the SLATE model are not clearly defined. For example, the necessity of each step (a)-(d) in Figure 2 lacks convincing arguments.\n\nW4. This paper discloses all data and code upon acceptance, which limits the ability to verify the reproducibility of this paper.\n\nQ1. The depiction of adding temporal connections in Figure 2 is difficult to understand. The explanation for adding temporal connections seems inadequate, especially the description of AddTempConnection in Algorithm 1.\n\nQ2. When adding virtual nodes, are multiple virtual nodes created? The explanation regarding virtual nodes appears to be insufficient.\n\nQ3. What are the theoretical advantages of Supra-Laplacian encoding compared to existing graph positional encoding methods?\n\nQ4. What would be the impact of using sparse attention mechanisms instead of fully connected transformers?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "PJkEfIZjkx",
"review_text": "This paper proposes a spatial-temporal encoding for transformers on dynamic graphs. Specifically, graphs at each time step are treated as a single multilayer graph and packed into a larger adjacency matrix, with temporal self-connections between each node and its past. Eigenvectors of the constructed Laplacian are used as positional encoding and concatenated with node features. A standard transformer encoder layer then generates all representations for each node at each time step within a selected time window. To predict a link between two nodes, the model does cross-attention between their representations within the time window. Experimental results show that the proposed model performs better than existing methods.\n\n* The positional encoding proposed by this paper aims to jointly model spatial and temporal dependencies, which is a plausible improvement over existing methods.\n* The proposed model shows strong empirical performance compared with existing approaches. In particular, the proposed positional encoding works better than simply concatenating LapPE and sin/cos\n\n* Scalability/efficiency may still be a concern on large graphs, though the paper shows good engineering (e.g., Flash-Attention) can help\n* Some finer-grained ablation studies are missing. For example:\n * Instead of removing isolated nodes in preprocessing, can we keep the disconnected graph and just use eigenvectors corresponding to non-zero eigenvalues?\n * The transformer itself can already get global information and I see no strong reason to use virtual nodes additionally. How would the model behave without virtual nodes? How would \"virtual nodes + GNN\" behave?\n\nPlease see my 2nd point in Weaknesses"
},
{
"confidence": 4,
"rating": 6,
"review_id": "yxdXqqJutt",
"review_text": "This work introduces Supra-Laplacian encoding for spatio-temporal Transformers (SLATE) which aims to learn both spatio and temporal information in a dynamic graph with a transformer architecture. The key is to convert Discrete Time Dynamic Graphs into multi-layer networks and then extract the spectral features of their supra-Laplacian matrix to improve upon existing dynamic graph transformer designs. Additionally, SLATE employs a cross-attention mechanism to accurately model nodes' pairwise relationships, improving dynamic link prediction. The proposed SLATE model performs competitively to both CTDG and DTDG methods on discrete graphs.\n\n- **originality**: connecting DTDGs into a multi-layer graph and then compute spectral properties of a Supra-Laplacian matrix is a novel approach in the literature. The empirical performance also demonstrates that this approach can outperform existing methods with its spatio-temporal reasoning capabilities. \n\n- **extensive evaluation**: The proposed SLATE method compares favorably to both CTDG and DTDG methods on discrete datasets with existing evaluation of testing 1 positive edge against 1 negative edge. In addition, model analysis experiments and ablation studies provides insights into the model components and choices. Additional experiments with hard negative samples are also included in the appendix.\n\n- **clear presentation**: the paper is easy to follow and the main idea is presented well\n\n- **scalability**: my main concern is the scalability of the method as the authors also pointed out as a limitation. Even with the time window (which truncates the history of the temporal graph), the $N^2 w^2$ complexity remains very high and only feasible for networks with up to thousands of nodes, In addition, there is a large amount of precomputation needed for the supra-Laplacian and computing its eigenvectors. \n\n- **window size**: one of the core hyperparameter of SLATE is the choice of window size, as the study in Figure 4 shows that there are some common optimal window size for the CanParl Colab and USLegis datasets. These datasets mostly contains a small number of snapshots thus might be why 4 is a good choice. In practice though, it might be difficult to tell which window size is optimal without extensive experiments to select it. It would also be interesting to see if the length of the window is related to other factors in the architecture, size of the multi-layer network, transformer dimension etc.\n\n- In the ROLAND paper, which is a close competitor to this work, the MRR metrics is used for evaluation, why is the AUROC adapted in this work instead as it has been shown to be very limited and biased towards the 1 negative sample that is compared against each positive test edge. \n\n- there are types in the paper for example \"loose\" on line 5 should be \"lose\"\n\n- how are the CTDG methods applied on the discrete datasets, some performance looks low."
}
] |
vMMzjCr5Zj | Large Pre-trained time series models for cross-domain Time series analysis tasks | Large pre-trained models have been vital in recent advancements in domains like language and vision, making model training for individual downstream tasks more efficient and provide superior performance. However, tackling time-series analysis tasks usually involves designing and training a separate model from scratch leveraging training data and domain expertise specific to the task. We tackle a significant challenge for pre-training a foundational time-series model from multi-domain time-series datasets: extracting semantically useful tokenized inputs to the model across heterogeneous time-series from different domains. We propose Large Pre-trained Time-series Models (LPTM) that introduces a novel method of adaptive segmentation that automatically identifies optimal dataset-specific segmentation strategy during pre-training. This enables LPTM to perform similar to or better than domain-specific state-of-art model when fine-tuned to different downstream time-series analysis tasks and under zero-shot settings. LPTM achieves superior forecasting and time-series classification results taking up to 40% less data and 50% less training time compared to state-of-art baselines. | https://openreview.net/pdf/1fe1368c9227b6a85b1429442439e39607b01c26.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "Bhc8k0BGln",
"review_text": "Training large time series (TS) models is often limited by the scarce data available for a specific application. Existing pretraining methods use a simplistic tokenization scheme where the TS is cut up into equally sized parts, independent of its content. The newly proposed method *Large Pre-trained Time-series Models*, therefore, adaptively segments the input time series into (potentially) overlapping tokens depending on the TS itself. It shows very good forecasting performance in zero-shot and finetuning settings. It can also be used for classification.\n\n- The relevance of the problem and motivation for adaptive segmentation is convincing.\n- The method of adaptive segmentation is an interesting solution to the issue.\n- LPTM is compared against a plethora of appropriate and challenging baselines.\n- It shows promising empirical results.\n\n- I am under the impression that this paper may have found a strong method yet does not sufficiently investigate *why* it works. The interplay between learning the scoring function and training the encoder is not very clear. See below.\n- A lot of experimental claims are not adequately substantiated. It is claimed in question 6 of the checklist that error bars are provided and that statistical significance tests are performed, yet I did not find them. See below.\n- The overall presentation (language and formatting) should be improved.\n- The provided implementation is not accessible. (Error: \"The repository is expired\") In the current state, results are not reproducible since key hyperparameters are missing. The authors claim in question 6 of the checklist that they state how hyperparameters were chose, yet I could not find it in the paper.\n\n1. The problem statement (l. 96-98) only considers in-domain applications. Is it also interesting to consider domain generalization settings?\n2. The term \"Large Pre-trained Time-series Models (LPTM)\" (l. 48-51) is extremely generic and better describes the emerging field than a single method. Would it make sense to focus more on the \"Adaptive Segmentation\" contribution in the title and exposition?\n3. Why did you choose Eq. (2) to be that way? Are all parameters learned?\n4. The scaling of the method is not discussed sufficiently; could you comment on that? There are potentially a lot of tokens -- in the worst case, there are as many as input tokens. Even more, the number of evaluations of $s(i,j)$ scales quadratically in the input length. While the heuristic for selecting a good set of segments is defined well, a discussion of why it is sensible is missing. See also question 7.2.\n5. Why did you use masked reconstruction as the two pretraining tasks? Could you explain why this is more desirable than alternatives, like contrastive methods?\n6. The interplay of training the segmentation strategy and the encoder simultaneously requires a more nuanced discussion. To what extent can this cause something like a \"mode collapse\" (as in Generative Adversarial Networks), where the $s(i,j)$ always chooses the same segments since they were found to be beneficial at some point and stops \"exploring\" others? This should (1) be discussed in more depth and (2) may be a significant limitation of the method.\n7. This leads to the experimental evaluation.\n\t1. Following (6), can you provide insights into the relative convergence speeds of the scorer and the encoder/decoder? How good is the score function $s$ at predicting the final encoder loss?\n\t2. How many segments are there typically? 
A histogram would help judge the typical number of tokens resulting from the adaptive segmentation.\n\t3. L. 281: \"2x to over 10x smaller than other pre-trained time-series models\" -> Where do you provide that comparison of model sizes?\n\t4. Section 5 only discusses (almost) exclusively the forecasting setup. Why were the specific classification datasets chosen? Section 6 mentions 35 datasets (UCI has more time series classification datasets), but Table 4 only contains 32. Were three datasets removed at some point?\n\t5. L. 305: \"We observe that LPTM has highest mean rank\" -> Please provide that rank in (or alongside) Table 4.\n\t6. The caption of Table 4 talks about statistical significance. What are the test's results? Which significance test did you even perform? How were results aggregated (if it was the arithmetic mean, what is their std. dev.)?\n\nMinor comments:\n- The paper would benefit from a language pass.\n- Improvements to formatting: References are formatted confusingly, e.g., in lines 22, 23f, 36f, etc. This problem occurs throughout the paper. Table 4 in the Appendix is too wide. REVIN -> RevIN. Formatting of LASTMASK, etc., is inconsistent (l. 163 vs. l. 335), ...\n- The example in line 40ff could be given more bluntly and convincingly by discussing the same event in different sample rates.\n- L. 91, what is R? The same as in l. 152?\n- L. 89, isn't $\\mathcal{D}_\\text{pre}$ the union over the individual datasets?\n- The abbreviation SSL has never been introduced.\n- Red and green are difficult to distinguish for people with common color deficiencies. Therefore, Figure 1 could be improved with a different color palette.\n- Table 3 should mention that it is about finetuning. Are the baselines trained from scratch or merely finetuned as well?\n- The authors should check whether they intentionally want to cite many preprints (e.g., from Arxiv) or their published variants.\n- The capitalization of the title is inconsistent."
},
{
"confidence": 4,
"rating": 7,
"review_id": "B49KiiD3dO",
"review_text": "The paper introduces a new approach for creating pre-trained models for time-series data, similar to those used in language and vision tasks. The authors propose a model called Large Pre-trained Time-series Models (LPTM), which includes an innovative adaptive segmentation module to handle diverse time-series data from multiple domains.\n\nKey contributions include:\n\n- Developing a framework for pre-training time-series models on multi-domain datasets, using a novel adaptive segmentation module to tokenize inputs effectively. This is achieved via a self-supervised learning objective.\n- Demonstrating that LPTM performs as well or better than state-of-the-art domain-specific models when fine-tuned for various time-series tasks, such as forecasting and classification, with less training data and compute time.\n- Proving that LPTM achieves superior results in both zero-shot and fine-tuned settings across diverse domains like epidemiology, energy, and economics, requiring up to 40% less data and 50% less training time compared to existing methods.\n\nThe paper has the following strengths:\n\n- Well-written, clear, easy to follow. Algorithm is a nice plus.\n- Baseline choice reasonable: most recent methods are considered.\n- Experimental results good, when considered on the set of datasets chosen (more points on that in the weaknesses section).\n\n- It's a bit hard to get a good feel for the relative advantage of the proposed method. In table 2, the approach is clearly better, but we are left to infer that from that fact that it is commonly second or first in the rankings. Could the authors maybe add some for of aggregate metric, e.g. the average rank across datasets of a given method?\n- Despite mentioning code is available, the link does not work (subscript 3 on page 7, time of access 2024-07-12, and previously): \"The repository is expired\".\n- For a paper dealing in large part with forecasting, I was surprised by the absence of almost all of the classical long-term forecasting datasets used by other papers: traffic, electricity, weather, illness... Given that these are by far the most heavily studied ones in the literature, including them (as proposed in the questions section). While I don't find it a critical (but still important) concern, I strongly advise the authors to consider adding them as it will help avoid concerns other readers might have about cherry-picking of results.\n\n- Can the authors add error bars (in the appendix possibly) for their experiments? They mention that they already run 10 experiments per setup so these should be readily available and would give a good idea of the robustness of the findings.\n- Can the authors ensure that code to reproduce their experiments is available as stated? \n- Can the authors run the same experiments on the \"standard\" long-term forecasting datasets, as listed in the weaknesses section?\n\nNote: I feel the paper is definitely interesting and makes valid contributions. Addressing my questions, in particular the one about the long-term forecasting datasets would be a strong argument for me to raise my score.\n\nEdit: I've read the rebuttal provided by the authors, and since my open questions have been addressed I'm raising my score to 7."
},
{
"confidence": 4,
"rating": 6,
"review_id": "DVwkuUpQiR",
"review_text": "The paper proposes Large Pre-trained Time-series Models (LPTM), a novel method designed to improve the efficiency and performance of time-series analysis across multiple domains. \nThe key contribution is an adaptive segmentation module that automatically identifies optimal segmentation strategies for diverse datasets during pre-training. \nThis approach aims to overcome the limitations of fixed-length segmentation, which may not adequately capture the temporal patterns of heterogeneous time-series data. \nLPTM demonstrates superior forecasting and classification performance, requiring up to 40% less data and 50% less training time compared to state-of-the-art models.\n\nS1. This paper focuses on the time series segmentation problem.\nAs the basic semantic unit in time series is not as clear as in text, a proper segmentation is a promising direction towards better series modeling.\n\nS2. The proposed segmentation method is adaptively calculated over each specific input series.\n\nS3. The experiments are extensive.\n\nW1. Although time series has a weaker semantic structure than natural language, it is closer connection to images.\nIn both time series and images, a semantic unit, e.g., a small item or a texture in an image, can have different lengths and scales.\nThis raises a challenge against the main motivation: why a full self-attention-based architecture works for images (e.g., ViT), why for time series the segmentation needs to be explicitly done?\nIt would be interesting if the authors can further discuss this problem and provide their intuitions.\n\nW2. The introduction of the adaptive segmentation module seems to bring instability in the initial model training, as well as requiring longer training time (although the authors propose to backpropagate the gradients every 10 batches).\nSpecifically, the loss function for segmentation is a hard loss based on the selected subset of best segments.\nHowever, the parameters seem to be randomly initialized, which could provide highly random \"best\" segments.\nHence, the convergence stability and the training time with and without the dynamic segmentation modules should be discussed.\n\nW3. The dynamic segmentation modules seem not to be fine-tuned with specific attention.\nHowever, as the author(s) mentioned, different datasets could have very different best segmentation.\nHence, it would be interesting to discuss why this is sufficient and provide theoretical or empirical evidences.\n\nQ1 (cr. W1). Please provide intuitions and empirical evidences on why full self-attention-based architectures work for images while an explicit segmentation module is required for time series.\n\nQ2 (cr. W2). Please discuss the convergence stability, especially the initial convergence stability and the influence of random initialization, as well as the influence of the adaptive segmentation modules on the pre-training speed.\n\nQ3 (cr. W3). Please discuss the influence of fine tuning on the adaptive segmentation, e.g., which the current framework can make a large change to the existing segmentation. \nExperiments with and without fine-tuning the adaptive segmentation would be interesting to report.\n\nQ4. It would be interesting if the authors may show some case studies on the adaptive segmentation results, i.e., whether and how much the adaptive segmentation results conform to the source domain of the dataset, whether some periodic patterns can be well preserved after the adaptive segmentation module."
},
{
"confidence": 4,
"rating": 5,
"review_id": "6HQCzUeDka",
"review_text": "This paper proposes a novel contribution to pretrained time series models for forecasting and classification by paying attention to the fact that currently several transformer models take time series segmentations of the same size, regardless of the particular characteristics of the time series in consideration. For instance, time series that have yearly frequency or minute frequency might require different segmentation lengths, or it might be that dynamics are more complex in certain time intervals requiring a more detailed segmentation. Based on this observation the authors proposed a model that can find a suitable segmentation schema that later on allows to observe where are the time intervals where more complex dynamics are shown.\n\nThe authors perform several experiments and claim empirically that the proposed approach is at least competitive to the state of the art.\n\nThe authors study a clearly interesting problem: how to provide a suitable segmentation scheme for time series so that different time regions are segmented in different ways, depending on their complexity and amount of information. The motivation for this is well stated by the authors, leading to a novel approach to achieve this. \n\nThe authors further set up this in an Self-supervised learning setting, and consider multiple datasets to pretrain their model and further provide several evaluations. This is interesting because depending on the field/topic/area of time series a different segmentation scheme might be more suitable.\n\nSome of the main limitations are as follows:\n- The proposed framework is not differentiable. The authors have acknowledged this in the paper and propose a workaround for this, basically to update the segmentation scores every 10 batches. Yet, this poses challenges like the interpretation of the training loss, and discontinuities in the test loss.\n- It is unclear if the proposed approach is able to handle missing values. If not, is there anyway to overcome this? Missing values are very often present in practice and having a sound way to handle them is relevant.\n- It is unclear if the current evaluation is fair. The authors present a corpus of datasets for which they pretrained the proposed model, but it is unclear which datasets where hold-out from pretraining. This is relevant as several of the pretrained models considered might have not been exposed to these datasets, which gives an unfair advantage to the proposed model. Further, since the amount of pretraining datasets is rather limited, there is the possibility that the proposed model is overly focused on these datasets, whereas other models, like (Ansari 2024) and (Woo 2024) were trained in a larger corpus of datasets.\n\nQuestion: \n\n* Eq-1: is the GRU applied entry-wise to the time series? Does this imply that we apply $GRU_1$ to each entry of $y$ (which has $t$ entries), and then the resulting $t$ values constitute the hidden embeddings?\n* Eq. 2: what is $z_i$? So far we have talked about $z^{(i)}$.\n* Missing closing parenthesis in fig 1: $S(y^{(1...T)$\n* Eq-1: the larger values of S(i,j) the better? Does it mean that the correlation between $z_i$ and $z_j$ is high, or that $z_i$ and $z_j$ are related somehow? \n* Eq-3: index $k$ is never used in this definition of output embeddings.\n* Eq-5: as pointed out by the authors, the selection of segments is not differentiable and hence it can not be directly integrated to the loss function. Does it mean that the segments are updated every 10 batches? 
This means that the loss will not be continuous, and hence it will be unclear if there is progress or not in terms of the training loss. Is this correct? I guess here what nevertheless can hint at improvement is the test loss.\n* In line 225: why are time series with missing values removed? is the proposed model able to handle missing values?\n* The authors claim that their model is a pretrained model. What datasets were used to pretrain the model? Are all datasets used as well for evaluation in Table 1? If yes, then the comparison is not fair. Several of the pretrained models considered might have not been exposed to those datasets in pretraining, giving an unfair advantage to the proposed approach. Further, doing pretraining in such a small amount of datasets further gives more advantage to the proposed model, as the larger the datasets potentially gives a smaller amount of exposure to each dataset."
}
] |
vKwf15M5EE | Weakly-Supervised Cortical Surfaces Reconstruction from Brain Ribbon Segmentations | Deep learning-based cortical surface reconstruction (CSR) approaches typically rely on supervision information provided by pseudo ground truth generated by conventional CSR methods, subject to errors associated with the supervision information and also increasing computational cost of training data preparation. We propose a new method to jointly reconstruct multiple cortical surfaces using weak supervision from brain MRI ribbon segmentation results. Our approach initializes a midthickness surface, which is then deformed inward and outward to form the inner (white matter) and outer (pial) cortical surfaces, respectively, by jointly learning diffeomorphic flows by minimizing loss functions to optimize the surfaces towards the boundaries of the cortical ribbon segmentation maps. Specifically, a boundary surface loss drives the initialization surface to the inner and outer boundaries, while an inter-surface normal consistency loss regularizes the pial surface in challenging deep cortical sulci regions. Additional regularization terms are utilized to enforce edge length uniformity and smoothness of the reconstructed surfaces. Our method has been evaluated on two large-scale adult brain MRI datasets and one infant brain MRI dataset, demonstrating comparable or superior performance in CSR in terms of accuracy and surface regularity compared to alternative supervised deep learning methods. | https://openreview.net/pdf/e218982b73024d892585208a2f670c52177436b5.pdf | [
{
"confidence": 5,
"rating": 2,
"review_id": "MUiblPypDA",
"review_text": "The submission presents a deep learning-based approach for cortical surface reconstruction (CSR) from brain MRI data using weak supervision derived from cortical brain segmentation maps. The claimed contributions are: \n\n1. Weak Supervision Paradigm: The authors introduce a new weakly supervised paradigm for reconstructing multiple cortical surfaces, significantly reducing the reliance on pseudo ground truth (pGT) surfaces generated by conventional CSR methods.\n2. New Loss Functions: Two novel loss functions are designed to optimize the surfaces towards the boundaries of the cortical ribbon segmentation maps. Regularization terms are also introduced to enforce surface uniformity and smoothness.\n3. Evaluation and Performance: The proposed method is extensively evaluated on two large-scale adult brain MRI datasets and one infant brain MRI dataset, demonstrating comparable or superior performance to existing supervised DL-based CSR methods.\n\n1. The paper presents an approach to leverage weak supervision from segmentation maps instead of relying on pGT surfaces, which is a significant departure from traditional methods.\n2. The methodology is explained and the experimental setup is described. The authors conduct evaluations on multiple datasets, evaluating the efficacy and efficiency.\n3. The paper is well-structured, with clear descriptions of the problem, methodology, and results. The figures and tables effectively illustrate the performance and comparisons.\n4. The approach addresses a critical bottleneck in CSR by reducing the dependency on time-consuming and error-prone pGT surfaces, potentially broadening the applicability of CSR methods to more diverse datasets and clinical scenarios.\n\nMethod\n1. It seems that this work combines [1] and [2], and thus has limited technical novelty. The architecture in Figure 1 and the circle consistency loss (Eq. 5) are almost identical to CoCSR [1]. The boundary surface loss and inter-mesh normal consistency loss (Eq. 3-4 and Figure 2) are very similar to the loss functions proposed by [2].\n\n2. Additionally, the customized edge length loss (Eq. 6) has also been proposed by [3]. Considering the large individual differences across human brains, how did the authors choose the area A without knowing the pGT cortical surfaces?\n\n3. It is confusing that the ribbon segmentations are used as both input and pGT. The authors claimed that the ribbon segmentations are inaccurate weak supervision, but still generated the initial surface based on ribbon segmentations according to Figure 1.\n\n4. The velocity field defined in Eq. 1 is time dependent. How did the authors learn non-stationary velocity fields through a 3D U-Net?\n\n5. In line 156, a bijective mapping with continuous inverse is called homeomorphism. A diffeomorphism is defined as a smooth/differentiable bijection with smooth/differentiable inverse.\n\n6. As shown in Figure 2 (b), it is clear to observe that the WM and pial surfaces do not have the same normal directions in some regions. The inter-mesh normal consistency loss could cause inaccurate surface reconstruction. Could the authors provide more insights to solve this problem?\n\n\nResults\n1. The experimental results are unreliable and unconvincing. After careful comparison, it seems that the baseline results (CorticalFlow++, CortexODE, Vox2Cortex, DeepCSR) on the ADNI and OASIS datasets in Table 1 were directly copied and pasted from Table 2 in [1]. This leads to unfair comparisons.\n\n2. 
Furthermore, as reported in Table 1, SegCSR produced no more than 0.061% of self-intersecting faces (SIF), whereas the authors claimed in line 264 that there are ∼0.3% on average for both white and pial surfaces. This is confusing. Which result is correct?\n\n3. In line 263, the authors claimed that DeepCSR and U-Net produced a large number of SIFs without post-processing. However, the Marching Cubes algorithm only produces topological errors such as holes no SIFs.\n\n4. The BCP dataset only includes 19 test subjects. Cross-validation should be conducted to ensure fair evaluation of the performance.\n\n5. The flow ODE was integrated using the forward Euler method with T=5 steps. Such a large step size could cause unstable ODE solutions and failure in preventing self-intersections. The value of the Lipschitz constant should be reported to examine the numerical stability of the ODE solver.\n\n6. The authors reported that SegCSR requires only 0.37s of runtime per brain hemisphere. However, SegCSR adopted a topology correction algorithm, which may take several seconds to a few minutes, to create an initial midthickness surface for each subject. This should be included in the total runtime. A breakdown of runtime should be reported and compared to SOTA baseline approaches. \n\n\n[1] Zheng, H., Li, H. and Fan, Y. Coupled reconstruction of cortical surfaces by diffeomorphic mesh deformation. Advances in Neural Information Processing Systems, 2023.\n\n[2] Ma, Q., Li, L., Robinson, E.C., Kainz, B. and Rueckert, D. Weakly Supervised Learning of Cortical Surface Reconstruction from Segmentations. arXiv preprint arXiv:2406.12650\n\n[3] Chen, X., Zhao, J., Liu, S., Ahmad, S. and Yap, P.T. SurfFlow: A Flow-Based Approach for Rapid and Accurate Cortical Surface Reconstruction from Infant Brain MRI. MICCAI, 2023.\n\n1. Can the authors elaborate on the key differences between their approach and [1,2,3], particularly in terms of methodology and experimental setup?\n2. How does the proposed boundary surface loss function improve upon the traditional bi-directional Chamfer loss used in existing methods?\n3. Can the authors provide more details on the computational efficiency and runtime comparisons with existing CSR pipelines?"
},
{
"confidence": 4,
"rating": 4,
"review_id": "sFmMLkhUYA",
"review_text": "The authors proposed a novel new method to jointly reconstruct multiple cortical surfaces using weak supervision from brain MRI ribbon segmentation results, which deforms midthickness surface deformed inward and outward to form the inner (white matter) and outer (pial) cortical surfaces. The proposed method is evaluated on two large-scale adult brain MRI datasets and one infant brain MRI dataset, demonstrating comparable or superior performance in CSR in terms of accuracy and surface regularity.\n\n1.\tPropose a new weakly supervised paradigm for reconstructing multiple cortical surfaces, reducing the dependence on pGT cortical surfaces in training, unlike existing DL methods.\n2.\tDesign two loss functions to optimize the surfaces towards the boundary of the cortical ribbon segmentation maps, along with regularization terms to enforce the regularity of surfaces.\n3.\tConduct extensive experiments on two large-scale adult brain MRI datasets and one infant brain MRI dataset.\n\n1.\tIt seems overclaim in the manuscript. The ‘pseudo’ ground-truth surface mentioned in the manuscript is actually the ground-truth mesh in other approaches, obtained by Marching cube/Free surfer. Since the chamfer distance is used to guide the network training, why do the authors claim the proposed method is weakly supervised?\n2.\tIt is not clear how the original images are overlaid with the predicted mesh. Is any registration used? Details are missing.\n3.\tIt seems the main contribution of the proposed SegCSR is the boundary loss function?\n\n1.\tWhy not use the total ADNI datasets for network training as what is used in previous research like DeepCSR and voxel2cortex?\n2.\tHow the predicted meshes are overlaid on the original images? Details should be given.\n3.\tWhat does the ‘L-Pial Surface’ and ‘L-WM Surface’ in the tables mean? The Pial and WM surface of the left hemisphere. Why not also present the results for the right hemisphere?"
},
{
"confidence": 4,
"rating": 7,
"review_id": "iUyxJyL19E",
"review_text": "The paper presents a deep learning approach to jointly reconstruct multiple cortical surfaces using weak supervision from brain ribbon segmentations derived from brain MRIs. The method leverages the midthickness surface and deforms it inward and outward to fit the inner and outer cortical surfaces by jointly learning diffeomorphic flows. Regularization terms are included to promote uniformity, smoothness, and topology preservation across the surfaces. Experiments are conducted on large-scale adult and infant brain MRI datasets.\n\n- The approach is novel in its use of weak supervision from readily available segmentation datasets, which reduces the burden of preparing pseudo-ground truth surfaces.\n- The paper is well-written and structured, with a clear motivation for the method.\n- The methodology is explained in detail, and the experiments are comprehensive.\n- The approach has the potential to democratize the use of deep learning in cortical surface reconstruction by leveraging existing segmentation datasets.\n\n- The paper's central contribution of weak supervision is undermined by the fact that the model is trained on pseudo ground truth surfaces for white matter and pial surfaces.\n- The experimentation is limited to brain cortical surfaces and MRI images. Broader experiments involving different anatomies (e.g., bone cortical surfaces, heart walls) and imaging modalities would enhance the paper's impact.\n- Results lack statistical significance analysis to validate sub-millimeter reconstruction errors.\n- There is no evidence showing that improvements in mesh reconstructions correlate with enhanced performance in downstream analysis tasks.\n- The robustness of the method regarding input noise/perturbation and images from multiple centers is not evaluated.\n- There is no analysis of the computational complexity, including the resources and time savings provided by the proposed weak supervision.\n- There is no sensitivity analysis on the choice of weights used to weigh the different components of the overall loss.\n- The impact of ribbon segmentations quality (e.g., voxel spacing) as weak supervision is not investigated.\n\n1. Can you provide evidence or analysis showing that improvements in mesh reconstructions lead to enhanced performance in downstream analysis tasks?\n2. How does the method perform with input noise or perturbations? What is the expected performance under domain shifts?\n3. What are the computational resources and time requirements saved by using weak supervision compared to traditional methods?\n4. How does the quality of ribbon segmentations (e.g., voxel spacing) impact the reconstruction accuracy?"
},
{
"confidence": 4,
"rating": 5,
"review_id": "EpntPeFTHG",
"review_text": "The paper presents a novel deep learning method for the reconstruction of cortical surfaces from 3D MRI. The proposed method follows an approach learning explicit surface deformations, in which a CNN is used to predict three velocity fields, corresponding to the pial, white matter and midthickness surfaces. Unlike previous techniques which use cortical surface pseudo ground truth (e.g., generated using FreeSurfer), the proposed method trains the network with faster-to-obtain segmentation pseudo ground truth. In addition to the standard surface prediction losses (based on Chamfer distance), the method uses 1) an Inter-Mesh Normal Consistency loss that encourages the pial and WM surface to be locally parallel, 2) an Intensity Gradient loss that place the surfaces at regions of high intensity gradients, 3) a Cycle Consistency loss enforcing inverse consistency between the midthickness-to-pial deformation and the midthickness-to-WM one, and 4) a Mesh Quality loss that helps having regular surface meshes (uniform sized triangles and smoothly varying normals). The method is evaluated on the ADNI, OASIS and BCP datasets, where its performance is compared to that of implicit and explicit approaches. Results show that the method obtains a better reconstruction accuracy compared to other techniques trained in a weakly supervised setting (pGT segmentation mask), but a lower performance than those trained with pGT cortical surfaces.\n\n* The proposed method differs from previous approaches that explicit surface deformations by predicting a midthickness surface and incorporating additional loss terms that compensate for the weak supervision of pGT segmentation.\n\n* Experiments, involving three different datasets and comparing against several recent baselines, as well as including various ablation variants, are well designed. Results indicate superior performance in the weakly supervised setting.\n\n* The main motivation of the proposed method is doubtful. Authors motivate the need for their weakly-supervised cortical reconstruction method by the \"prolonged processing time for generating pGT surfaces\". However, as the pGT cortical surfaces can be generated automatically in an offline step, I believe the argument is weak. Moreover, recent pipelines for brain image processing, such as FastSurfer, can extract surfaces with comparable accuracy in a fraction of the time.\n\n* The accuracy of the proposed method is considerably lower than approaches which train on cortical surfaces. Furthermore, while it produces fewer topological artifacts like self-intersecting faces, those can be removed via post-proicessing in implicit methods like DeepCSR. Combined with my previous comment, the advantages of the method are unclear.\n\n* The ablation study in Table 2 indicates that most of the proposed loss terms have limited impact on the overall performance. For example, adding the Mesh quality loss seems to actually degrade performance in terms of CD, ASSD and HD.\n\n* How does your method compare to other approaches in terms of training and inference time ? \n\n* The proposed method has several hyper-parameters (lambda1-5) that need to be tuned. 
How were the values selected for these hyper-parameters, and how sensitive is the method to the chosen values?\n\n* In Figure, why is the pial surface represented with two different colors (orange and purple) ?\n\n* In Eq (4), how do you compute the pial and WM surface normals if the point is on the midthickness surface?\n\n* p6: \"where npG and npW are the normal vectors of the deformed vertex p on SM and SG respectively\": Do you mean on S_G and S_W ?\n\n* p6: \"segmentaions\" \n\n* Section 4.2: Do you mean Table 1 ? \n\n* p8: \"nromal\"\n\n* p9: \"Also, We can\"\n\nSee weaknesses for main comments to answer."
}
] |
vJSNsSFO95 | Flaws can be Applause: Unleashing Potential of Segmenting Ambiguous Objects in SAM | As the vision foundation models like the Segment Anything Model (SAM) demonstrate potent universality, they also present challenges in giving ambiguous and uncertain predictions. Significant variations in the model output and granularity can occur with simply subtle changes in the prompt, contradicting the consensus requirement for the robustness of a model. While some established works have been dedicated to stabilizing and fortifying the prediction of SAM, this paper takes a unique path to explore how this flaw can be inverted into an advantage when modeling inherently ambiguous data distributions. We introduce an optimization framework based on a conditional variational autoencoder, which jointly models the prompt and the granularity of the object with a latent probability distribution. This approach enables the model to adaptively perceive and represent the real ambiguous label distribution, taming SAM to produce a series of diverse, convincing, and reasonable segmentation outputs controllably. Extensive experiments on several practical deployment scenarios involving ambiguity demonstrates the exceptional performance of our framework. Project page: \url{https://a-sa-m.github.io/}. | https://openreview.net/pdf/d9d0ed08e91694b0c1b594f2e8d5bece62aa7179.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "fUQM9wfPxg",
"review_text": "The paper presents a novel approach to handling the inherent ambiguities in the SAM used for image segmentation. SAM, despite its robustness, often exhibits sensitivity to slight variations in prompts and object granularity, leading to inconsistent predictions. The authors propose a new framework leveraging a conditional variational autoencoder to model these ambiguities probabilistically. This approach enables SAM to produce diverse and reasonable segmentation outputs by adapting to the inherent ambiguities in the data. The paper details extensive experiments demonstrating the effectiveness of this framework across various practical scenarios involving ambiguous segmentations.\n\n1.\tThis work addresses a critical challenge in image segmentation, especially in medical imaging and other fields where ambiguous data is common. By turning SAM's sensitivity into an advantage, the paper contributes to the advancement of robust and adaptable segmentation models.\n\n2.\tprovides a thorough analysis of SAM's sensitivity to prompt variations and object granularity, backed by detailed experiments and statistical evaluations.\n\n3.\tThe paper is well-structured, with clear definitions and explanations of the proposed methods. The use of figures and tables enhances the understanding of the framework and its performance.\n\n1.\tThe paper primarily tests the framework on specific medical imaging and synthetic datasets. There is a lack of diverse real-world datasets, such as those from different domains (e.g., natural scenes, industrial applications), which might exhibit different types and degrees of ambiguity.\n\n2.\tI have a concern that the framework might be overfitted to the specific characteristics of the tested datasets. This concern is evidenced by Table 6, where the \"No Prompt Ambiguity\" configuration demonstrated metrics comparable to those of A-SAM. Would it be possible that the test datasets might be biased, exhibiting little ambiguity in prompts?\n\n1.\tEquation 13 mentions learning weights to assemble multiple masks into a final output. Where are these weights predicted from? Does the method obtain multiple results through random sampling or a weighted averaging process? If it's the latter, how does it learn multiple sets of weights? If it's random, how does it correspond to the ground truth?\n\n2.\tWhat is the average inference speed for the entire dataset? What percentage of the images contain reasonable masks?\n\n3.\tCan you elaborate more on why those specific datasets were being chosen?\n\n4.\tPlease refer to the weakness section, can you be more specific on what datasets were used in Ablation and Robustness studies?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "ZCyj4cRCCe",
"review_text": "This paper proposes a SAM-based framework to address the ambiguous image segmentation problem. The authors present an optimization framework based on a conditional variational autoencoder, which simultaneously models the prompt and the granularity of the object using a latent probability distribution. This approach allows the model to adaptively perceive and represent the real ambiguous label distribution, enabling SAM to controllably produce a series of diverse, convincing, and reasonable segmentation outputs. Experiments on multiple datasets and metrics demonstrate the effectiveness of the method.\n\n1. To the best of my knowledge and as indicated by the authors, this paper is the first work that leverages the inherent properties in vision foundation models (SAM) for ambiguous image segmentation.\n2. The experimental results demonstrate impressive advantages. Compared to the original SAM, the proposed method shows significantly better performance in the presence of prompt shifts. This high level of robustness is extremely valuable in practical applications.\n\n1. The task setup of ambiguous image segmentation in this paper is somewhat confusing for me. I have read some referenced and comparative works cited in the paper, such as [a], and found that their task objective is providing multiple segmentation hypotheses for ambiguous images. However, this paper seems to focus more on increasing the accuracy and stability of the model's output when the input prompt has noise or shifts. More explanation about the task setup is needed. Accordingly, it is recommended to include a section in the main text that introduces the task setup, which can help readers who are not experts in this research area understand the paper better.\n\n2. The comparison with conventional ambiguous segmentation models seems unfair because most of the compared methods do not use a network structure as large as SAM. Therefore, it is unclear whether the performance advantage comes from the increased number of network parameters in SAM or from the innovative designs proposed in this paper. I noticed that some of the compared methods, such as [b], can be applied with any encoder-decoder-based segmentation models. Thus, the results of these methods using SAM as the segmentation model should also be reported and compared. This would help evaulate whether the effectiveness of the proposed model is solely due to SAM's larger number of parameters.\n\n3. The writing structure of the paper is somewhat unclear, making it a little difficult to read. For example, the inference method is illustrated in Section 3.1, but the training method is introduced in Section 3.4. It is recommended to create a section titled “Training and Inference,” which contains two subsections that respectively introduce the training and inference methods.\n\nMinor Problem:\n\n1. In Line 169, `Previous research indicates that...' should have corresponding citations added.\n\n[a] A Probabilistic U-Net for Segmentation of Ambiguous Images\n\n[b] MODELING MULTIMODAL ALEATORIC UNCERTAINTY IN SEGMENTATION WITH MIXTURE OF STOCHASTIC EXPERTS\n\n1. Will the proposed method perform better than the original SAM if there is no point/box/mask shift?\n\n2. Why is the proposed method trained from scratch using randomly initialized weights? Would it be better to finetune from the pre-trained SAM?"
},
{
"confidence": 3,
"rating": 6,
"review_id": "QSthAlrX3T",
"review_text": "This paper builds a framework for amigous object segmentation on top of SAM prompted with bounding boxes, which is known to be sensitive to small prompt changes. \n\nThe framework is based on a VAE, and the main idea is to jointly model the prompt and the object granularity with a latent probability distribution to gain more control over SAM’s output. In practice, the prompt embeddings and image embeddings (controlling granularity) are formulated as a distribution.\n \nThe method is evaluated on 3 medical imaging datasets and on a synthetic driving dataset, showing superior performance over the baselines.\n\n1. The method is the first to use a promptable large-scale pretrained model like SAM for ambiguous image segmentation\n2. The methodology is in general clearly written and easy to follow, figure 2 provides a great overview of the method\n3. Extensive evaluation and ablations were performed, showing the method’s superior performance compared to baselines on all of the datasets. (the method is not evaluated on any non-medical real dataset though, see weaknesses)\n4. The joint modeling of promts and image embeddings of the proposed method is efficient since the probability sampling is only performed after the SAM encoder and thus the image embedding needs to be computed only once (SAM decoder is lightweight)\n\n1. The paper contains several unclear statements or missing details, which make the reproducibility of the method difficult.\n2. The evaluation is carried on a niche domain (medical) or on synthetic datasets only. It is hard to judge the performance of this method in general real-world setting.\n\n## General remarks\n1. Evaluation on a real-world (not synthetic) non-medical dataset would help to show the generality of the method.\n2. It would help readability if it was mentioned that the evaluation metrics are defined in the appendix, also it would help to see the related references in the main paper\n3. Is there some intuition/more details on why the granularity is modelled within the image embedding?\n\n## Reproducibility\n3. How were the trade-off coefficients tuned?\n4. More details on how the three masks from overlapping instances on the SIM10k dataset were obtained should be provided.\n5. How was the best checkpoint selected? Was there any hyper-parameter tuning?\n6. What does 'achieving significant segmentation outcome‘ mean on line 97? Improvement in segmentation performance over SAM without adapter?\n\n## Fig. 1:\n7. SAM outputs multiple predictions for a prompt, how is this handled in Fig. 1a and 1b?\n8. Medical domain is not in the training domain of SAM so higher uncertainity/instability of prediction is expected, maybe it is not the best example to showcase the behaviour.\n9. What are canonical box prompts from description of Fig. 1? Ground truth bounding boxes?\n10. I assume granularities in 1c correpsond to the three output masks of SAM, what is full granularity then?\n11. The prompt variation experiment depicted in Figure 1 includes bounding boxes that do not cover the whole region to be segmented. It is not unrealistic to control for that in real scenarios, and it would be interesting to see how the figure would change since SAM seems to be quite sensitive to whether the whole object is covered or not – making the bounding box smaller than an object impacts segmentation more than making it larger.\n12. It would be helpful to see how the experts annotate the example \n\n## Fig 2: \n13. 
Why is image embedding concatenated with the IGN sample, but the prompt embedding is the output of PGN directly?\n14. Incomplete description – ‚by jointly probabilities‘‘\n\n## Add Weakness 1. – unclear statements and missing details \n15. How were the trade-off coefficients set?\n16. How exactly was the three masks generated from overlapping instances on the SIM10k dataset?\n17. How was the best checkpoint selected? Was any hyper-parameter tuning performed (if yes, on what data)?\n18. Line 153 – parameters of axisymmetric Gauss. Distribution „including mean and std“ – the gauss. distirbution does not have any other parameters.\n19. What does 'achieving significant segmentation outcome‘ mean on line 97? Improvement in segmentation performance over SAM without adapter?\n20. What is meant by the 'final integrated SAM output that integrates multiple candidates‘ on line 44? The only part of SAM that integrates multiple predictions I am aware of is SamAutomaticMaskGenerator class provided by the authors (it features non-maxima suppression) but it prompts SAM with a uniform grid of points while the paper discusses bounding box prompts.\n21. The explanation of GT generation for the datasets is confusing since it is incomplete in the paper, it would be nice to at least have a link to the appendix for more details.\n22. On lines 38-42, it would help to see an example of such behaviour – what is meant by SAM amalgamating the candidates at different granularities? AFAIK, SAM outputs multiple predictions for each prompt specifically to deal with ambigous prompts.\n23. What does 'diminutive adapters‘ on line 93 mean?\n24. What is meant by encoder lenght in line 123?\n \n## Add Weakness 2. – evaluation \n25. Evaluation on a real-world (not synthetic) non-medical dataset would help to show the generality of the method.\n26. Why is original SAM not included in the comparison from subsection 4.2?\n\n## Typos: \n27. Line 83 – promotable instead of promptable"
},
{
"confidence": 3,
"rating": 5,
"review_id": "lEVAcm7qmv",
"review_text": "This paper aims to convert the flaws in the vision foundation model (e.g., SAM) into advantages for ambiguous object segmentation. To this end, the authors propose a novel framework that employs latent distribution and an optimization architecture. The authors validated the performance of the proposed methods through comprehensive experiments.\n\nUnlike existing approaches that aim to stabilize the sensitivity to ambiguous objects in SAM, this paper suggests leveraging the vulnerability for ambiguous object segmentation. The proposed approach seeks to harness SAM's sensitivity, redeemed as a weakness, to address ambiguous and uncertain predictions.\n\n1. The explanations are unclear and hard to follow. Specifically, it needs further explanation of how to extract the mean and standard deviation from the convolution blocks and how to utilize the ground truth labels in the posterior version of the prompt generation network.\n2. Some symbols are used without explanation (e.g., Θ, Φ, N_i, N_p).\n3. Missing reference: Previous research at line 169.\n4. Since this paper focuses on clinical scenarios for ambiguous object segmentation, it seems unfair to compare the performance without including existing medical segmentation methods such as OM-Net [1], DC-UNet [2], and CE-Net [3].\n\n[1] https://arxiv.org/pdf/1906.01796v2\n[2] https://arxiv.org/pdf/2006.00414v1\n[3] https://arxiv.org/pdf/1903.02740v1\n\n1. What is the difference between PGN and posterior PGN?"
}
] |
vJMMdFfL0A | The Benefits of Balance: From Information Projections to Variance Reduction | Data balancing across multiple modalities and sources appears in various forms in foundation models in machine learning and AI, e.g., in CLIP and DINO. We show that data balancing across modalities and sources actually offers an unsuspected benefit: variance reduction. We present a non-asymptotic statistical bound that quantifies this variance reduction effect and relates it to the eigenvalue decay of Markov operators. Furthermore, we describe how various forms of data balancing in contrastive multimodal learning and self-supervised clustering can be better understood, and even improved upon, owing to our variance reduction viewpoint. | https://openreview.net/pdf/e19715b92b9edf482506f5332dc738e0bd203da9.pdf | [
{
"confidence": 2,
"rating": 6,
"review_id": "te2iCpT8UN",
"review_text": "This paper introduces a technique called iterative data balancing—altering data distributions to match predefined marginal distributions—that can lead to variance reduction in model predictions. The authors highlight its utility for self-supervised learning, which has been used to train several foundation models. The results demonstrate that iterative rebalancing of data leads to improvements in zero-shot learning performance and a reduction in variance among the empirical marginals with more than one iteration (k>1) of their technique.\n\nThe paper has theoretical contributions that include the derivation of non-asymptotic bounds that quantify the variance reduction achieved through their data balancing technique. The authors also present empirical studies that demonstrate the effectiveness of their proposed balancing technique. The authors discuss the utility of data balancing across different tasks, such as image-caption pair matching and self-supervised clustering, identifying the utility of their approach. Their approach has the potential for adoption in various domains, including in the training of foundation models.\n\nThe authors could expand the range of experiments to include a more diverse set of tasks, which in turn could enhance the generalization of the findings. Furthermore, their iterative data balancing technique relies heavily on predefined (uniform) target marginal distributions (see questions about this in next section). Finally, the iterative nature of the proposed data balancing technique may introduce significant computational demands. The paper could benefit by a more comprehensive overview of how the iterative technique computational overhead is impacted by very large datasets and/or models.\n\n1. In your work, your target marginals were uniform; how would your method respond to non-uniform marginals?\n2. Your target marginals were accurately specified (as uniform). What if the target marginals of the two distributions were less accurately specified (i.e., the true underlying distributions are not well-known)? How do you think that this would influence the empirical results of your technique (e.g., zero-shot average per-class recall)?\n3. Are there existing methods for variance reduction and data balancing? If so, why did you not include an empirical comparison to existing methods?\n4. You mention that the zero-shot evaluation metrics are difficult to produce intervals for (i.e., you are missing error bars). Why is this the case?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "SkHeONy4Ex",
"review_text": "This paper explores the use of data balancing in various self-supervised learning (SSL) frameworks. The authors argue that this iterative algorithm, which is typically used to avoid representation collapse in SSL models, also provides a benefit of reducing the variance of empirical functionals of the distribution over data sources. The paper establishes non-asymptotic bounds quantifying this variance reduction and relates them to the eigendecays of specific Markov operators.\n\n1. the paper provides a new perspective on the benefits of data balancing\n2. provide different examples of data balancing in practice and prove a non-asymptotic bound on the MSE of balanced estimators.\n3. The findings may have implications for improving SSL models\n\n1.the experiments are somewhat limited in scope\n2.adding more visualizations or intuitive explanations may be better for understanding the key finding of the paper.\n3.will the assumptions limit the applicability of the findings?\n\n1. Can the findings be extended to other areas?\n2. Can the authors provide more details on how the assumptions, such as the spectral gap condition, hold or when will they not hold in practical scenarios? Are there specific types of data or models where these assumptions are more likely to be satisfied?\n3. For Figure 2, Can the authors provide experiment results with more iterations?\n4. Are there any other variance reduction techniques and how does data balancing compare to those techniques?"
},
{
"confidence": 3,
"rating": 5,
"review_id": "23pFttbvDD",
"review_text": "This work focusses on data balancing strategies in context of self-supervised learning. The main claim of the paper is that data balancing, commonly used to avoid representation collapse, has a variance reduction effect. The authors introduce an upper bound on the MSE of a balancing estimator, relating it to empirical risk minimisation. The main paper covers the key elements of the proofs, which is given in detail (and is extensive) in the appendix. Experiments are conducted to illustrate the impact of data balancing on examples described in the paper.\n\nThis paper attempts to shed light on SSL training and the role of data balancing. The paper formalises the problem and develops extensive theory. The main results is pretty cool and insightful in the sense that the upper bound on the MSE shows that data balancing has a variance reduction effect. The topic is of interest to the community and the work is focussing on a poorly understood paradigm that is becoming dominant.\n\nI have three main concerns with this work:\n\n1/ The theory is *very* extensive. The Appendix contains several pages of proofs that are difficult to parse and come on top of the formalism presented in the main paper. It seems like the main body could be simplified and made more to the point to convey the main gist of the contribution and make it more accessible.\n\n2/ It is unclear how the data balancing examples in Section 2 map to the formalism introduced in Section 3. For example, what would (4) look like for example 1 and example 2?\n\n3/ It is unclear what the experiments bring to the table and how they provide evidence to the main result. Making the link more explicit and explaining what are the key take aways from these results would help the reader.\n\nI have the following questions for the authors:\n\n- Line 54: What do you mean by \"X and Y are forms of the data that are related to, but distinct from, the form of Z\"\" given that Z is equal to (X,Y)?\n- p4, example 1: What would the target marginals correspond to? What is \\psi_n^(k) and P_n^(k) here? \n- p4, example 2: What would the target marginals correspond to? What is \\psi_n^(k) and P_n^(k) here?\n- Why do we need \\tilde{\\psi}_n^{(k)} in (12) and how does it relate to \\psi_n^{(k)}?\n- How does (15) relate to the clip example introduced earlier in the paper and why is this a valid and sensible simplification to study?\n- Does the main result have implications in practice in terms of design of algorithm?"
}
] |
vJLTcCBZVT | Improving Subgroup Robustness via Data Selection | Machine learning models can often fail on subgroups that are underrepresented during training. While dataset balancing can improve performance on underperforming groups, it requires access to training group annotations and can end up removing large portions of the dataset. In this paper, we introduce Data Debiasing with Datamodels (D3M), a debiasing approach which isolates and removes specific training examples that drive the model's failures on minority groups. Our approach enables us to efficiently train debiased classifiers while removing only a small number of examples, and does not require training group annotations or additional hyperparameter tuning. | https://openreview.net/pdf/a3f46e22e6e41370e2c814be79b1e92e6e971d7c.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "84ZYmQ7e3J",
"review_text": "This paper proposes a data-centric model debiasing technique to identify and remove data which harm worst-group accuracy. This method removes fewer data than standard balancing techniques and can be adapted for settings with and without group annotations. Experiments are provided on standard group robustness benchmark datasets, and the method is shown to promote bias discovery on ImageNet in the absence of group annotations.\n\n1. The differences between the full-information, partial-information, and no-information regimes are clearly delineated, and the advantages of D3M and Auto-D3M in each setting are comprehensively discussed. In the no-information regime, which is important and yet understudied in the literature, the authors propose a novel and elegant Auto-D3M algorithm based on TRAK, which I expect to be a strong baseline for future work in this setting.\n2. D3M and Auto-D3M compare favorably to other common data balancing techniques such as subsampling a class-balanced dataset, removing far fewer points while achieving better WGA performance.\n3. Sections 5.2 and 6 are comprehensive and very useful for developing an intuitive understanding of the proposed group alignment scores and TRAK matrix. The ability to discover spurious correlations in complicated datasets without group annotations is likely to be useful for practitioners.\n4. The explanations of each algorithm -- D3M, Auto-D3M, and TRAK -- are clear and well-written. The mathematics is well-explained and sufficiently technical without being convoluted.\n\n1. In Table 1, the only comparison to previous work provided for the no-information regime is ERM, which is generally understood to be a weak baseline for group robustness tasks. Some examples of comparisons I would expect to see in this setting include MaskTune [1], uLA [2], DivDis [3], or CB-LLR [4]. Similarly, in the partial-information regime, additional comparisons may include AFR [5] or SELF [4]. (I do not expect the authors to include all these comparisons, but it would benefit to discuss the most appropriate ones).\n2. In Section 6, I believe a reference and comparison to [6] is missing. Similarly to this paper, [6] uses a data-centric method to discover and mitigate spurious correlations in the ImageNet dataset.\n3. Tables 1, 2, 3, and Figure 6 lack error bars. It would improve the scientific rigor of the paper to run these experiments over multiple random seeds and provide standard deviations or confidence intervals.\n4. There are a couple typos and grammatical errors in the writing, e.g., on lines 482 and 484. Also, the bibtex could use an update, as some references are out of date (e.g., Kirichenko et al. -- [21] in the paper -- is listed as an ArXiv preprint but appeared at ICLR 2023).\n\n***References***\n\n[1] Taghanaki et al. “MaskTune: Mitigating Spurious Correlations by Forcing to Explore”. NeurIPS 2022.\n\n[2] Tsirigotis et al. “Group Robust Classification Without Any Group Information.” NeurIPS 2023.\n\n[3] Lee et al. “Diversify and Disambiguate: Learning From Underspecified Data.” ICLR 2023.\n\n[4] LaBonte et al. “Towards last-layer retraining for group robustness with fewer annotations”. NeurIPS 2023.\n\n[5] Qiu et al. “Simple and Fast Group Robustness by Automatic Feature Reweighting.” ICML 2023.\n\n[6] Moayeri et al. “Spuriosity Rankings: Sorting Data to Measure and Mitigate Biases.” NeurIPS 2023.\n\n1. 
Is the hyperparameter k from D3M (number of examples to remove) the same as the hyperparameter k from TRAK (dimensionality of the gradient projection)? If not, it would be helpful to use different letters and detail how the k in TRAK is chosen.\n2. Why is random initialization used for CelebA, as opposed to standard ImageNet initialization? Do the CelebA comparisons in Table 1 also use random initialization? \n3. In the appendices, the tables reference proposed methods TRAK and Auto-TRAK. Is this meant to read D3M and Auto-D3M respectively?\n4. While not strictly necessary, I would be curious to see a qualitative comparison of the results from Section 5.2 and Figures 3 and 4 with other data selection techniques from the robustness literature. How do the data with negative alignment scores compare with data selected via misclassification [1], disagreement [2], other influence functions [3, 4], or Shapley values [5]? Are negative alignment scores perhaps more interpretable than these other techniques?\n\n***References***\n\n[1] Liu et al. “Just Train Twice: Improving Group Robustness without Training Group Information.” ICML 2021.\n\n[2] LaBonte et al. “Towards last-layer retraining for group robustness with fewer annotations”. NeurIPS 2023.\n\n[3] Koh and Liang. “Understanding Black-box Predictions via Influence Functions.” ICML 2017.\n\n[4] Feldman and Zhang. “What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation.” NeurIPS 2020.\n\n[5] Ghorbani and Zou. “Data Shapley: Equitable valuation of data for machine learning.” ICML 2019."
},
{
"confidence": 4,
"rating": 7,
"review_id": "gcBL0QjCxO",
"review_text": "The paper introduces a method called Data Debiasing with Datamodels (D3M) that addresses the problem of model bias (using the worst-case loss over groups as the metric). The approach leverages a process known as datamodeling to predict model behavior based on training data influence, focusing on removing data points that contribute heavily to worst-group error. The paper illustrates D3M’s effectiveness across various datasets, showing that it can outperform both traditional model and data intervention strategies. Moreover, the method is adaptable to the setting without explicit subgroup labels.\n\nOriginality: The paper introduces an innovative approach, Data Debiasing with Datamodels (D3M), which creatively combines elements from influence functions and data-centric fairness strategies to address model bias. D3M focuses on optimizing the dataset by identifying and removing specific training instances that disproportionately skew model performance against minority subgroups. This methodological innovation brings high originality to the paper.\nQuality: The authors conduct a thorough analysis across multiple datasets, effectively demonstrating how D3M enhances worst-group accuracy. The use of comparative baselines and the examination of different scenarios (including those without explicit subgroup labels) shows the robustness and reliability of D3M.\nClarity: The paper is relatively well-structured. The effective use of diagrams and necessary mathematical definitions help demonstrate the results. Moreover, case studies help readers understand the use cases of D3M.\nSignificance: The significance of this work is relatively substantial, addressing the issue of the subgroup biases of models. Moreover, by providing a tool that can improve model fairness without needing subgroup labels, the paper contributes to the applications where the group labels are unavailable.\n\nOne weakness of the method is its exclusive focus on improving worst-group accuracy without presenting results on how it might affect the overall accuracy for all groups. This raises concerns about potential trade-offs, where enhancing fairness for the worst-performing subgroup could compromise the model's general performance. Additionally, the paper does not thoroughly explore how different model configurations might influence the outcomes. Understanding how variations in model architectures, initial parameter settings, or training procedures affect the effectiveness of the method is useful for validating its robustness and adaptability to diverse scenarios. Finally, a relatively minor weakness is that the demonstration of the paper could be more organized and coherent.\n\nAfter improving the worst-group accuracy, does the model still maintain good overall accuracy? How does the method impact the performance across all groups? Were the results of the method tested across various model architectures to confirm its generalizability? In scenarios lacking explicit group labels, were there any experiments conducted to assess the effectiveness of the pseudo group labeling approach using the datamodel matrix in the setting or case studies in this paper?"
},
{
"confidence": 5,
"rating": 5,
"review_id": "w3ekAn1X5D",
"review_text": "This paper introduces a new data debiasing technique called Debiasing with Data Attribution (DDA). DDA utilizes data modelling framework to identify and eliminate training examples that negatively impact the accuracy of the worst-performing groups. Additionally, the paper presents AUTO-DDA, an extension of DDA that can identify biases even without prior knowledge of group information. The proposed methods are validated through experiments on various datasets such as CelebA-Age, CelebA-Blond, Waterbirds and MultiNLI.\n\n1. The proposed approach is simple and effectively improves the performance on real-wolrd datasets such as ImageNet.\n2. The paper is presented well and easy to follow.\n\n1. The performance of ImageNet is only reported on selected classes. How are the classes selected for evaluation? Is it based on the amount of bias present in the classes? \n2. I am unsure if the proposed approach is effective when the majority of the data consists of bias-aligned points. For example, if there are only a few conflicting points and the rest are bias-aligned, how will the data be removed? I doubt the approach would still be useful for debiasing since a large part of the data is still going to be majorly biased. Even if the authors claim that the majority of the bias aligned points will be removed, I believe the model would still overfit to the data since the final dataset would be extremely small. Analyzing the performance of the approach with varying numbers of bias-conflicting points (1%, 5%, 10% of CMNIST(10 class classification)) in the dataset would be beneficial to understand this scenario. This experiment would provide insights into how well the approach scales to real-world scenarios where the degree of bias is significantly high.\n\nPlease refer to the questions in the weakness section."
},
{
"confidence": 2,
"rating": 5,
"review_id": "BdgJG24Yj2",
"review_text": "The paper proposes Data Debiasing with Datamodels (D3M), a method to improve machine learning model performance on underrepresented subgroups by removing specific training examples that cause failures. Unlike traditional balancing methods, D3M efficiently debiases classifiers without needing group annotations, significant dataset reductions or additional hyperparameter tuning.\n\n- Significance: This work effectively identifies and removes training samples to improve the worst-group accuracy. As demonstrated by the experiments, this method outperforms both standard model-based and data-based approaches.\n- Comprehensive Datasets: A wide range of datasets is used for image and text classification tasks, with corresponding benchmarks evaluated against existing methods as listed in Appendix B, \"Details of Experiments.\"\n\n- Writing and Format: the presentation of the paper needs readability improvement:\n - Redundant section start: Line 81\n - Excessive parenthetical comments and irregular format: Lines 20, 25, 28, 32, 67-71, 83, etc.\n\n- In addition to isolating problematic training data, experiments should be conducted to assess the impact on the necessity of further hyperparameter tuning and to strengthen the case for the effectiveness of the proposed method.\n\nN/A"
}
] |
vIP8IWmZlN | Speaking Your Language: Spatial Relationships in Interpretable Emergent Communication | Effective communication requires the ability to refer to specific parts of an observation in relation to others. While emergent communication literature shows success in developing various language properties, no research has shown the emergence of such positional references. This paper demonstrates how agents can communicate about spatial relationships within their observations. The results indicate that agents can develop a language capable of expressing the relationships between parts of their observation, achieving over 90% accuracy when trained in a referential game which requires such communication. Using a collocation measure, we demonstrate how the agents create such references. This analysis suggests that agents use a mixture of non-compositional and compositional messages to convey spatial relationships. We also show that the emergent language is interpretable by humans. The translation accuracy is tested by communicating with the receiver agent, where the receiver achieves over 78% accuracy using parts of this lexicon, confirming that the interpretation of the emergent language was successful. | https://openreview.net/pdf/542546bc3b321700b242332d3fe1d91c56e85f07.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "1WQkvViO3O",
"review_text": "This work investigates the presence of spatial deixis (e.g., spatial references in language dependent on the context of the utterance) in a signalling game within the paradigm of emergent communication.\nIt begins by introducing a variant of the signalling game which requires the sender to communicate the relative position of some element in a sequence of integers visible to the receiver.\nAnalysis of the semantics of the emergent communication is performed primarily through looking at normalized point-wise mutual information.\nThese analyses show that the emergent communication regularly uses spatially-referent messages (or sub-message units), validating the presence of spatial deixis in the environment.\n\nIn conjunction with standard criteria, there are three characteristics that are particularly important for emergent communication research: reusability (how easily can another researcher use the products of this research), generalizability (how much do the findings of this research apply broadly to our knowledge of emergent communication), and directedness (does this research contribute concretely to particular questions in emergent communication research).\n\n### Quality\n- (major) The experimental design is of good quality; the methods are in line with the standards of the field.\n### Clarity\n- (major) The language of this paper is easy to read and illustrates the central points effectively.\n### Reusability\n- (major) The provided code appears to be of good quality (have not attempted to run it); this work would be very easy to reuse in subsequent experiments.\n### Generalizability\n- (minor) The environment is relatively simple, with few confounding factors, making it easier to draw conclusions about more general tendencies in EC.\n### Directedness\n- (major) This paper is directed toward an important goal in emergent communication: discovering/engineering more human-like features in EC, namely deixis or context-dependent reference.\n- (minor) Secondarily, this paper also demonstrates some degree of compositional semantics and syntactic features.\n\n### Quality\n- Nothing of note.\n### Clarity\n- (minor) One or two more of the points in Section 5 need to be illustrated (likely with just a table); although the text is mostly clear, some tables which aggregate what is said there would make referencing the paper much easier.\n- (minor) It is a little bit confusing that your hypotheses are what you expect to be false; it might be clearer to state hypotheses in positive (even if the statistical test you are using is rejecting a null hypothesis of a random baseline).\n### Reusability\n- Nothing of note.\n### Generalizability\n- (major) It is not clearly the case that the environment addresses deixis in a way that applies to emergent communication environments more generally (see first question for more).\n- (major) There is not much discussion on how the deixis investigated in this paper is applicable to emergent communication more generally.\n### Directedness\n- Nothing of note.\n\n- The biggest question regarding the paper for me is how do we make the jump for the simple example of deixis presented in the empirical investigation of the paper to a more robust form of deixis. 
It is not wrong (and likely correct, in fact) to have started out with a toy problem, but I believe there either needs to be empirical work and/or some light theoretical work on what exactly is meant by \"deixis\" in this paper, how the environment investigated satisfies that definition, and how this is relevant for further environments. I might raise my score if the authors could respond with a slightly more formal characterization of deixis and how it maps both to the current environment and more sophisticated, \"natural\" environments (e.g., embodied multi-agent environment).\n- In the environment, are the integers actually represented as integers in the neural network or are they encoded as one-hot vectors? If they are OHVs, it is not clear that it is the case; if they are not OHVs, it seems like an odd design choice to feed a scalar into an NN when it is representing something categorical.\n- How are the various inputs to the receiver agent actually fed into the network. Are they just concatenated \"temporally\" and given as a sequence to the GRU?\n\n\n### Comments\n\n- The fact that this paper is inducing a segmentation of emergent communication is (minor) contribution in and of itself, so I think it deserves a mention in the introduction.\n- The \"td\" variable needs to be introduced before the example; I am assuming it is the list of distractors plus the correct answer, but it should be stated explicitly when defining the vectors earlier.\n- Table 2: Don't reuse X; use other variables.\n- Include the actual URL for the (Anonymous) GitHub so that it is obvious it is a link.\n- I don't understand the point being made at Line 337.\n- I do not think it is appropriate to specifically mention \"SVO\" as an interpretation of the language since there is no clear way to distinguish between nouns, verbs, subjects, or objects; I think it is fine to say that there is syntactic structure, but I a skeptical of there being evidence to make any claim further than that here."
},
{
"confidence": 4,
"rating": 7,
"review_id": "RGr2jt0fsK",
"review_text": "This paper proposes a new communication game in the emergent communication framework to analyze the emergence of _deictic reference, i.e. expressions akin to demonstratives like \"this\" and \"that\". These are important expressions in natural language and especially in this emergence literature, since their meaning is context-dependent and \"functional\", i.e. cannot be reduced to objective properties of the object of reference. The paper also introduces an application of normalized pointwise information to the analysis of the emergent communication protocol in order to identify holistic messages (where a message refers to an entire meaning) and compositional ones (where certain n-grams and/or positions refer to specific \"components\" of the meaning). Both of these are welcome contributions and will be of interest to many people working on emergent communication. The core idea in their game is to use integers within longer sequences as the object of reference, provide a _partial observation_ of the true context to the sender (so that absolute positional information cannot be used) and to _mask out_ the target object in a sequence (so that the integer itself cannot be used); what remains as possible information to convey are things like \"two to the right of 13\".\n\n* A carefully designed emergent communication scenario which requires something like spatial deixis to emerge for successful communication. This is an important component of human language that goes beyond what has been done in existing literature.\n* Interesting and useful application of NPMI for the analysis of (non-)compositionality of the resulting messages.\n* Engages well with existing literature to situate the new contribution of this paper.\n* Results also show a robustness to things like random seed, which is not always the case in emergent communication.\n\n* Some experimental details could be more carefully reported and some analyses could be more systematic/quantitative (see questions below).\n* The artificial messages used to validate their NPMI metric does not yield results as strong as one would like (as discussed in the Limitations section); this makes it not entirely clear that the metric does what its intended to do.\n\n* Why did you choose a fixed-length of 3 for the messages (as opposed to either a single token, or variable-length)? \n* Line 92 and 114: should \"target integers\" and \"targets\" both be singular? There's one target integer, correct? Or is the plural here just over a batch of examples? If the latter, the wording is a bit confusing since the worked case in the paper is just one example (\"batch size 1\" so to speak). \n* Can $PMI_c$ and $PMI_{nc}$ be seen as one metric, with the latter a special case of the former (i.e. for the full tri-grams)? The discussion just before Section 5 seems to suggest so, so I would encourage more elaboration on whether these are really two separate degrees or not. For instance: does high $nc$ entail low $c$, and vice versa?\n* \"The analysis provided in this section is based on the messages collected from the test dataset after the training has finished\". What was the train/test split here? Appendix A provides model / optimizer hyper-parameters, but what are the game/environment/data choices?\n* Can the observations in Section 5.1 and 5.2 be made more quantitative? 
I would appreciate a more detailed analysis of the types of composition observed, their frequency, and other factors like that.\n* H2 and Table 1: while the Comp-P case is above chance, if the NPMI method correctly identified \"genuinely compositional\" messages, we would expect nearly perfect accuracy in this case, right?\n* Very minor typographic point: I think that the \"n\" in \"$n$-gram\" and similarly in the main text should be in math mode."
},
{
"confidence": 4,
"rating": 5,
"review_id": "KUqFu9Tidq",
"review_text": "The authors design a referential game environment intended to motivate the emergence of spatial references, cast in the form of a task where the target integer must be selected from an integer sequence. The character vocabulary for the message is smaller in size than the set of integers in the list, and this necessitates an alternative to directly specifying the target integers. Using a traditional GRU-based speaker/listener architecture, the model achieves high task accuracy. Using existing information theoretic measures the authors are able to roughly decode the semantics of the messages and show some degree of compositionality in the messages.\n\n- The experimental design is simple but I think straightforward and correct for what the authors want to test.\n\n- Similarly, it appears from the analysis in sections such as Table 2 that the resulting messages do seem to exhibit a variety of communication strategies, including the desired type in some messages (compositional positional)\n\n- The approach of decoding the meaning and segmentation of the messages via NPMI (though I would have also liked to see some discussion of where this would/would not be appropriate in terms of a general evaluation metric for EC. It seems some strong expectation over what the emergent language needs to say may be necessary? In this case, the presence of the integers, for instance)\n\n- Overall I find the biggest weakness to be in the scope of the paper and the degree to which the design of the environment caters to the type of messages the authors want to elicit here. It comes across as a toy problem, and through the lens of the field as a whole, I think it raises the question of whether there is sufficient novelty in making such small and targetted tweaks to the referential game formula.\n\nThis might be best highlighted by revisiting the motivating example, such as \"a blue vase with intricate motifs on the table\". Why refer to this as \"the vase over there\" or \"that vase near you\"? There are pressures in the referential game to draw out these spatial references, but these feel artificial and devoid of broader understanding about linguistic pressures when we compare them to the pragmatic concerns that would motivate a spatial reference (and what type of spatial reference) in the motivating example.\n\nI also think claims are over-stated. The authors claim this is the first paper in EC to have syntax and make comparisons to SVO ordering. This comes across as a flimsy attempt to connect to human language and to signal a degree of progress in the complexity of ECs, but a lengthier discussion is warranted. Syntax can't be treated as something that does or does not exist, but rather, discussion of what formal language class the emergent language falls into would be relevant, and nothing here would necessitate a CFG or the degree of syntax that is meaningful when it comes to discussing natural language.\n\nDespite the simplicity I do see value in this work but I would have liked to have seen a less contrived environment with a more difficult learning problem / substantial scope in the necessary semantics to feel comparable to the degree of contributions typical of a paper at this venue. I think it would be far more appropriate at a more targetted venue where it can also recieve appropriate attention and discussion.\n\n- There are some fairly trivial solutions to this problem. It seems compositional integer style gets at this -- are there cases where compositional integer would fail? 
I'm not seeing the need to learn a spatial solution to this problem when it seems that a two-character code could cover all possible target integers. Does this vary as the length of sequence or size of alphabet are increased/decreased?\n\n- Similarly, is there any reason to motivate the choice of vocabulary size with respect to latin alphabet? The chunks of the messages are more akin to words than characters. To me it just read as an attempt to have a connection to human language, but that relationship was not meaningful.\n\n- It is mentioned that the hyperparameters for MI are optimized for translation accuracy. I can make some guesses as to what might be done here, but it wasn't clear to me exactly what is being compared here."
},
{
"confidence": 4,
"rating": 5,
"review_id": "3CJQwIUwcb",
"review_text": "This paper shows that emergent communication can learn spatial references. They first create a modified referential game which requires the agents to communicate by messages that indicate the relative position of a number. The proposed agent architecture shows that the GRU-based agents can achieve good performance. The analysis uses NPMI to identify the meaning of the ngrams in the message (i.e., the correlation between the ngram and the referred positions). This paper further shows that the mapping generated by NPMI is correct by generating additional datasets based on the identified dictionaries to show that both non-compositional and compositional messages carry the intended meanings.\n\n- This paper proposes a novel spatial game to study the emergence of spatial references.\n- This paper shows that NPMI is an effective measure to decompose the messages by finding correlations with the intended meanings.\n\n- The paper is not very easy to follow especially the definition and design of different types of sequences, examples of the messages, and how the hypotheses are tested. The presentation can still be improved.\n- It is unclear how much the test set overlaps with the training set when measuring the accuracy. There is no control of generalization tests such as varying full sequence length or observation of certain patterns of sequences. So, it is hard to understand if the learned messages are effective or memorization of part of sequences in training. For example, does the ngram that means “leftmost” can effectively communicate in a longer sequence?\n- The design of the game put high communication pressure on the agents. The agents need to develop messages conveying relative positions in order to succeed. How does the success relate to the communication protocol, for example, when the message length is longer, is it still necessary to develop messages that convey relative positions? It is unclear about the role of channel bandwidth, effective communication, and developed messages.\n- The test in Compositional-NP is to generate the dataset by removing the positional component of the message. This is an extreme case of H2. In reality, the message is most likely to be corrupted rather than removed. To reject the null hypothesis, it will be more convincing to have a corrupted message version.\n\n- In the experiment, the observation is always fixed length which makes the communication easier. What happens when the observation is longer? I can imagine if the longer sequence contains the same number at different positions, it will introduce some ambiguity in the messages."
}
] |
vIOKLMl6wu | LOVA3: Learning to Visual Question Answering, Asking and Assessment | Question answering, asking, and assessment are three innate human traits crucial for understanding the world and acquiring knowledge. By enhancing these capabilities, humans can more effectively utilize data, leading to better comprehension and learning outcomes. However, current Multimodal Large Language Models (MLLMs) primarily focus on question answering, often neglecting the full potential of questioning and assessment skills. In this study, we introduce LOVA3, an innovative framework named ``Learning tO Visual Question Answering, Asking and Assessment,'' designed to equip MLLMs with these additional capabilities. Our approach involves the creation of two supplementary training tasks GenQA and EvalQA, aiming at fostering the skills of asking and assessing questions in the context of images. To develop the questioning ability, we compile a comprehensive set of multimodal foundational tasks. For assessment, we introduce a new benchmark called EvalQABench, comprising 64,000 training samples (split evenly between positive and negative samples) and 5,000 testing samples. We posit that enhancing MLLMs with the capabilities to answer, ask, and assess questions will enhance their multimodal comprehension, ultimately improving overall performance. To validate this hypothesis, we train MLLMs using the LOVA3 framework and evaluate them on a range of multimodal datasets and benchmarks. Our results demonstrate consistent performance gains, underscoring the critical role of these additional tasks in fostering comprehensive intelligence in MLLMs. | https://openreview.net/pdf/c6c15295d63406edbf7ea78fdfc7b0b77523de41.pdf | [
{
"confidence": 4,
"rating": 7,
"review_id": "PSC4de1UOM",
"review_text": "This paper augments presents a data augmentation / multi-task learning technique to improve model quality for Visual Question Answering (VQA). The key idea of the paper, motivated by analogy to humans, is that asking questions and assessing answers are also key skills, apart from just answering questions. The paper seeks to train a model to \"Answer, Assess, Ask\" jointly, relying existing datasets for a variety of answering and asking tasks, and deriving a new dataset called EvalQABench for the assessment task. The introduction of the EvalQABench dataset, initially created by LLMs and later filtered by experts, is another potentially valuable and lasting contribution. Multiple tasks on augmented data are implemented on the LLava backbone which is an existing SOTA model. The paper compares their technique (called LOVA) to several SOTA models on a variety of datasets showing robust gains in a multitude of settings, providing confidence in the technique's validity.\n\nThe paper is well motivated: assessing and asking (evaluation and question generation) are closely associated tasks to question answering, that can be performed on datasets that are easily derived from question answering datasets. The argument that training on these closely related tasks improves generalization on question answering is intuitive though reliant on analogy with humans, which has its own traps. \n\nThe paper evaluates their technique against a variety of SOTA models, and across a multitude of tasks, proving that the gains are robust. The paper also provides ablation studies for various components, showing their utility. In general the experiment section is detailed, extensive and is a highlight of the paper. \n\nThe paper has 100 citations, and extensive references to related work, making it easier to assess the novelty of the work.\n\nAs the authors point out, due to cost considerations, the authors only evaluate the technique on smaller (relative to MLLMs) models. This is important as model size is a confounder when it comes to assessing the usefulness of data augmentation or multi-tasks. A technique useful for a 7B model is not necessarily useful for a 70B model. However, given the cost of inference of larger models, improving smaller models to be competitive with larger models has its own benefit. \n\nThere is prior work that already includes question answering and question generation, for example the InstructBLIP paper. Viewed in that sense, this paper makes an incremental contribution adding the assessing task to the answering, asking combination that was already shown to be useful earlier. However, the EvalQABench dataset is potentially very useful for the whole subfield of visual question answering. One minor but interesting finding in the paper is that in a balanced dataset split rightdown with model with 50% Yes and 50% No answers, not all models predict Yes/No close to 50% of the time.\n\nIn section 1, there's a claim that the EvalQABench datasets is generated via a \"new automatic pipeline\". However, in section 3.2 the authors say \"... acknowledging that the Fuyu-8B model is not flawless and recognizing that no multimodal model, including GPT-4V, is perfect, we have implemented both manual filtering and error correction...\". Do the earlier claims about the pipeline being automatic overstate the case? Are they necessary? \n\nDoes the feedback add value beyond just rephrasing the answer is a longer sentence. A lot of the feedback seems trivial and already captured in the q,a pair. 
For e.g. \"What does the woman have on her back\". \"backpack\" vs \"No, the woman has a backpack on her back\". As another e.g. \"What are the people doing?\", \"motorcycling\", vs \"No, the people in the picture are motorcycling\"."
},
{
"confidence": 4,
"rating": 4,
"review_id": "4jY6XOgKog",
"review_text": "This paper enhances the MLLM's visual understanding capability by training it to ask questions about an image and evaluate the correctness of given question-answer pairs about an image. To achieve this goal, new data is extracted from existing datasets and a new model is fine-tuned on the new data. The experiment shows that the newly added data can improve the MLLM's capability of understanding of images with higher scores on VQA tasks.\n\n- The paper is generally well-written and easy to understand.\n- The argument that training a MLLM to ask questions and evaluate answers can improve its visual understanding is reasonable and, verified by the well-conducted experiments in the paper.\n- The experiment setups are carefully designed to avoid unfair comparisons.\n\n- The three key capabilities of MLLMs covered by the paper--asking, answering, and evaluation--should be characterized in an interactive environment (like in an embodied environment where the MLLM is treated as the high-level perceiver/planner/controller of robots) instead of in the static environment. Consider, for example, an MLLM doing an embodied task that needs asking about some key questions, this is where the asking capabilities really make sense. However, the paper only trains and evaluates the MLLM in simple VQA problems as in previous literature. In the paper's current state, the value of the paper is limited and, from my perspective, does not meet the bar of acceptance if VQA tasks are considered only. The scope of the paper needs to be increased to a significant extent that touches the essence of MLLMs with higher-level capabilities that incorporate iterative/interactive thinking and planning.\n\n- The added synthesized data only gives the model a limited improvement in performance, while adding a large amount of computation overhead. In fact, if we use models like GPT-4(V) to synthesize random VQA data, the performance will increase as well [1], so I do not see the clear benefit of specifically doing the asking and evaluation data augmentation. This problem is relevant to the first problem: the capability added to MLLM should not be evaluated in VQA tasks.\n\n[1] Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs.\n\n(Table 6) Why does adding GenQA-Grounding data improves ScienceQA performance?"
},
{
"confidence": 4,
"rating": 3,
"review_id": "UpaeIyfhP0",
"review_text": "The paper introduces LOVA3, a framework designed to enhance Multimodal Large Language Models (MLLMs) by incorporating not only visual question answering (VQA) but also the capabilities of generating questions (GenQA) and evaluating question-answer pairs (EvalQA). The primary objective is to improve the comprehensive multimodal understanding of AI models. \n\nLOVA3 includes the development of EvalQABench, a benchmark with 64,000 training samples to evaluate VQA data quality. The framework uses the LLaVA-1.5 model as a base, incorporating datasets like VQAv2 and GQA to train these new tasks. Experimental results on ten multimodal benchmarks demonstrate that LOVA3 significantly improves the models' performance, highlighting the benefits of incorporating comprehensive questioning and evaluation abilities into MLLMs. The paper emphasizes the approach and robust results, despite noting the increased computational cost and the need for further testing on larger models and domain-specific tasks.\n\n1. LOVA3 introduces a strategy that extends beyond traditional VQA tasks by incorporating question generation and evaluation.\n2. The creation of EvalQABench provides a rigorous way to test and improve MLLMs.\n3. The multiple perspectives of experimental results provide insights of the proposed framework across multiple benchmarks.\n\n1. Incorporating additional tasks like GenQA and EvalQA, but the two tasks are also the existing steps of the visual language instruction generation for visual question answering (e.g. SEED-Bench) or visual instruction tuning (e.g., LLaVa-Bench). They also used LLMs or MLLMs for the dataset generation and validation. To explained the special novelty or contribution would be better.\n2. The work doesn't provide detailed explanations on how to validate the generated data quality from humans instead of using imperfect models (LLMs or VLMs). It uses Fuyu-8B for data generation but employs a stronger MLLM (LLaVA 1.5) as the base model for instruction tuning. Since LLaVA 1.5 is stronger than Fuyu-8B, the generated negative samples would be less challenging and easier to recognize by stronger models.\n3. The paper lacks a more in-depth analysis of potential data biases and strategies to mitigate them.\n4. The proposed benchmark is relevant to visual question answering and data generation for visual question answering. It would be necessary to survey and discuss the recent existing datasets (e.g., VQA-GEN, CrossVQA, OVQA, STAR) and generated benchmarks (e.g., LLaVA-Bench, SEED-Bench, SOK-Bench, CinePile) as fully considered. \n5. The paper does not provide the generated dataset for review, which is important for the validation of the work.\n\n1. How about the prompt stability of the QA generation and the differences when using the different variants of prompts?\n2. Why does the work apply Fuyu-8B instead of LLaVA 1.5 for the data generation and is there any comparison between the different new VLMs?"
}
] |