paper_id: stringlengths 10 to 10
title: stringlengths 17 to 149
abstract: stringlengths 468 to 2.59k
pdf_url: stringlengths 71 to 71
reviews: listlengths 2 to 7
0VeSCjRDBy
Adversarial Moment-Matching Distillation of Large Language Models
Knowledge distillation (KD) has been shown to be highly effective in guiding a student model with a larger teacher model and achieving practical benefits in improving the computational and memory efficiency of large language models (LLMs). State-of-the-art KD methods for LLMs mostly rely on minimizing explicit metrics measuring the divergence between teacher and student probability predictions. Instead of optimizing these mandatory cloning objectives, we explore an imitation learning strategy for KD of LLMs. In particular, we minimize the imitation gap by matching the action-value moments of the teacher's behavior from both on- and off-policy perspectives. To achieve this moment-matching goal, we propose an adversarial training algorithm to jointly estimate the moment-matching distance and optimize the student policy to minimize it. Results from both task-agnostic instruction-following experiments and task-specific experiments demonstrate the effectiveness of our method, which achieves new state-of-the-art performance.
https://openreview.net/pdf/b457fae677f39b3ff5c9ac1d197f5bc822c3d55b.pdf
[ { "confidence": 3, "rating": 7, "review_id": "CT80M1IVhh", "review_text": "To improve knowledge distillation for large language models, the authors first motivate an RL-based formulation that aims to minimize the imitation gap while matching on and off-policy moment bounds, and then introducing an adversarial training algorithm that achieves this by posing it as a two-player minimax game. They showcase the efficacy of the approach with instruction-following and task-specific experiments.\n\n**S1.** The problem is an important one and, as far as I can tell, the method is novel and well motivated.\n\n**S2.** The experiments are reasonable to establish the efficacy of the method.\n\n**S3.** The paper is well written and clear.\n\n**W1.** *The paper is missing a more thorough comparison of the training costs of each of the compared methods.* The authors briefly mention in Section 5 that their method induces a larger computational and memory cost compared to some of the other baselines. This discussion should be fleshed out more and potentially backed by experimental evidence.\n\nMinor comments:\n- It does not have any practical influence, but the definition of $\\mathbf{y}$ in line 101 is not very elegant as $y_0$ will have a different dimensionality than $y_i$ for $i \\in \\\\{1,\\dots,T\\\\}$. It might be cleaner to define $\\mathbf{y} = \\mathbf{x} || (y_1,\\dots,y_T)$, where $||$ is a concatenation operator.\n\n- Is the comparison between SFT and the other methods fair in terms of compute budget? It might be interesting to see if increasing the number of epochs still leads to a performance improvement, or if it has plateaued.\n- Could there be any practical benefit to having a different $\\alpha$ for on and off-policy optimization in Algorithm 1? The intuition for this comes from the results of Section 4.3 and particularly Figure 2." }, { "confidence": 3, "rating": 6, "review_id": "k9MUPmS7EI", "review_text": "This paper applies a reinforcement learning (RL) framework to the problem of auto-regressive text generation, framing knowledge distillation as a task of minimizing the imitation gap between teacher and student policies. The authors provide a theoretical analysis demonstrating that the proposed momentum-matching method offers a tighter bound on this imitation gap compared to traditional distribution-matching approaches, potentially leading to improved optimization. To efficiently optimize the momentum-matching target, the paper introduces an adversarial training procedure that alternates updates between the student policy parameters and the Q-value functions, which are used to assess the imitation gaps. Experimental results showcased within the paper indicate that the proposed momentum-matching method outperforms existing distribution-matching baselines in terms of effectiveness.\n\n1. The paper is well-structured and presents a clear, enjoyable narrative, facilitating ease of understanding for the reader.\n\n2. Utilizing an RL framework, the paper theoretically demonstrates that the momentum-matching target provides a tighter bound for minimizing the imitation gap compared to conventional distribution-matching targets. Furthermore, an adversarial training procedure is proposed to effectively optimize the momentum-matching target, aiming to approximate a Nash equilibrium between the parameters of the student policy and the Q-value functions.\n\n3. 
Comprehensive experiments empirically demonstrate that the proposed method outperforms existing distribution-matching methods in performance. Additionally, the presented analysis of training loss curves illustrates the stability of the proposed adversarial training procedure.\n\n1. Unlike distribution-matching methods, the RL-based momentum-matching adversarial framework requires significantly more computational resources and runtime due to the necessity of calculating policy gradients and updating the parameters of auxiliary networks involved in Q-value functions. While the authors acknowledge this limitation in the conclusion section, the paper lacks quantitative analysis concerning the computation of policy gradients. It does not detail the resource consumption of the overall procedure. This omission limits the reader’s ability to assess the practical applicability of the method.\n\n2. The presented experiments primarily examines knowledge distillation performance within similar or identical architectural frameworks. However, it does not demonstrate the method's generalizability across models with different architectures, thus leaving the robustness of the approach across diverse settings untested.\n\n1. Could you provide a detailed comparison of resource consumption and memory costs relative to the baseline methods? Additionally, can you discuss the impact of the number of samples used to estimate the policy gradient on the performance of your method?\n\n2. It seems that the performance of the auxiliary models used to compute Q-value functions significantly impacts the effectiveness of your method. However, the paper does not provide details on the training of these models. I am curious to know whether these models are trained from scratch during the adversarial procedure or if they are pre-trained on certain datasets before inclusion. Is it possible to directly fine-tune teacher models to compute Q-value functions?" }, { "confidence": 4, "rating": 7, "review_id": "NbBj8onTXB", "review_text": "The paper introduces a novel approach to knowledge distillation for Large Language Models (LLMs) using an adversarial training method that incorporates both on and off-policy distillation. The method jointly learns a critic that estimates Q-values while updating both the Q-function and the student model to more closely match the teacher model. The authors employ a policy gradient method to update the student model.\n\n+ As far as I am aware, a novel approach to knowledge distillation for LLMs -- although I am not an expert.\n+ A well-presented method, tying together some previous ideas on IRL into the distillation application.\n+ Demonstrates a boost to accuracy.\n\n+ Some crucial details are unclear, particularly regarding the parameterization of the Q-value function (see questions section)\n+ Not very much discussion of the computational complexity or additional overhead in memory of having multiple models and requiring rollouts from the teacher and student model while training\n+ Lack of ablation studies -- the method is evaluated as a single monolithic method, when there are many variants that could be applied, such as a weighted combination of the two upper bounds. In particular, I'd like to see how the method using only the on-policy upper bound and the method using only the off-policy upper bound would compare against the method using the linear combination of the on and off policy upper bounds.\n\n+ How exactly is the Q-value function parameterized? 
Is it an extra head on the model, or a new model entirely?\n+ Regarding the use of policy gradients for training the student function:\n a. Did you use a baseline to reduce variance?\n b. What is the variance of these policy gradients? In applications such as RL, policy gradients typically have quite high variance compared to other methods.\n c. Have you considered lightweight baseline methods, such as those presented in [1]?\n+ Can you provide an ablation of the different elements of the approach, such as investigating the relative importance of the two upper bounds?\n+ Can you provide an analysis -- even if it is brief -- on the computational cost and memory usage of using the additional Q-value critics while training?\n\n\n\n[1] Ahmadian, Arash, et al. \"Back to basics: Revisiting reinforce style optimization for learning from human feedback in llms.\" arXiv preprint arXiv:2402.14740 (2024)." }, { "confidence": 2, "rating": 5, "review_id": "OeGy8W4va1", "review_text": "This paper proposed an adversarial moment-matching approach for knowledge distillation of LLM. The idea is to reformulate the knowledge distillation from an imitation learning perspective and derive both on-policy and off-policy bounds for the imitation gap between the teacher and student models. The authors proposed an adversarial training algorithm to estimate and minimize the on-policy and off-policy moment-matching distances. The moment-matching distance is evaluated by the value function and the student is updated using policy gradients to minimize this distance. Experiments on instruction-following and task-specific datasets show that the proposed approach outperforms other knowledge distillation methods.\n\nIt is novel to reformulate the knowledge distillation as a moment matching problem, where the matching distance is evaluated by the Q-value function.\n\nThe authors derive the imitation gap bound for both on-policy and off-policy setup, and optimize the gap to achieve the knowledge distillation.\n\nThe proposed method demonstrated good performance on both instruction-following and task-specific datasets. The seven baselines are either distribution matching based or supervised finetuned.\n\nI found the connection between the proposed moment-matching approach and distribution distances matching interesting. Specifically, the authors show that minimizing the total variation distance can achieve sub-optimal results for the moment-matching bounds\n\n1. I would recommend having ablation studies and analysis of the impact of on-policy and off-policy objective. And analysis how each of them effect the overall performance.\n\n2. Solving Eq.(9) is involving optimizing the minmax problem. (a) First, the optimization requires additional computational steps for the inner-loop gradient update. How expensive is the computation? such as the time/memory cost (b) Is the optimization robust with respect of hyperparameter changes like $K$ and $\\alpha$.\n\n3. The method requires an auxiliary network for Q-value estimation. It make the training system even more delicated. What network is used for Q-value estimation? any analysis here?\n\nOverall, I think the idea is novel. However, as authors pointed out in the limitation part, the required time/memory cost/training efforts can assumed to be high. I would recommend have a comparison here with other distribution matching methods or knowledge distillation methods.\n\nSee the weakness section for my questions." } ]
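The abstract above describes an adversarial procedure that jointly estimates a moment-matching distance and updates the student policy to shrink it. The sketch below is only a toy illustration of that idea under strong simplifying assumptions (single-step categorical policies instead of autoregressive LLM rollouts, a tabular critic, baseline-free REINFORCE); it is not the authors' algorithm, and all sizes and learning rates are made up.

```python
# Toy sketch (not the paper's implementation): a critic q is trained to widen the
# gap E_student[q] - E_teacher[q], while the student is updated with a
# REINFORCE-style gradient to close it. Single-step categorical policies stand in
# for autoregressive LLM policies; all hyperparameters are arbitrary.
import torch

vocab, steps, n_samples = 20, 500, 64
teacher_logits = torch.randn(vocab)                        # frozen teacher policy
student_logits = torch.zeros(vocab, requires_grad=True)    # student to be distilled
q_values = torch.zeros(vocab, requires_grad=True)          # tabular critic ("moments")
opt_student = torch.optim.Adam([student_logits], lr=0.05)
opt_critic = torch.optim.Adam([q_values], lr=0.05)

for _ in range(steps):
    p_teacher = torch.softmax(teacher_logits, dim=0)
    p_student = torch.softmax(student_logits, dim=0)

    # Critic step: maximize the moment gap (regularized to keep q bounded).
    gap = (p_student.detach() * q_values).sum() - (p_teacher * q_values).sum()
    critic_loss = -gap + 0.1 * q_values.pow(2).mean()
    opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()

    # Student step: policy-gradient estimate of d/dtheta E_student[q], descended
    # so that the student lowers its expected q, i.e. shrinks the gap.
    actions = torch.multinomial(p_student.detach(), n_samples, replacement=True)
    log_probs = torch.log_softmax(student_logits, dim=0)[actions]
    student_loss = (log_probs * q_values.detach()[actions]).mean()
    opt_student.zero_grad(); student_loss.backward(); opt_student.step()

tv = 0.5 * (torch.softmax(student_logits, 0) - torch.softmax(teacher_logits, 0)).abs().sum()
print(f"total-variation distance to teacher after training: {tv.item():.3f}")
```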
0TUMAAb3of
Queueing Matching Bandits with Preference Feedback
In this study, we consider multi-class multi-server asymmetric queueing systems consisting of $N$ queues on one side and $K$ servers on the other side, where jobs randomly arrive in queues at each time. The service rate of each job-server assignment is unknown and modeled by a feature-based Multinomial Logit (MNL) function. At each time, a scheduler assigns jobs to servers, and each server stochastically serves at most one job based on its preferences over the assigned jobs. The primary goal of the algorithm is to stabilize the queues in the system while learning the service rates of servers. To achieve this goal, we propose algorithms based on UCB and Thompson Sampling, which achieve system stability with an average queue length bound of $O(\min\{N,K\}/\epsilon)$ for a large time horizon $T$, where $\epsilon$ is the traffic slackness of the system. Furthermore, the algorithms achieve sublinear regret bounds of $\tilde{O}(\min\{\sqrt{T}Q_{\max},T^{3/4}\})$, where $Q_{\max}$ represents the maximum queue length over agents and times. Lastly, we provide experimental results to demonstrate the performance of our algorithms.
https://openreview.net/pdf/86d730d17255ddad3b2b4d57e58dbd5c3594c0b0.pdf
[ { "confidence": 3, "rating": 4, "review_id": "qf05YoxCpt", "review_text": "This paper proposes an algorithm that learn optimal allocation in a multi-arm bandit problem involving queues.\nThe algorithm ensures the system stability while having a sub-linear regret.\n\nThe algorithm leverages on both max-weight and UCB (or TS) to provide stability and no-regret in a multi-arm bandit problem involving queueing.\n\n- There are notation confusions.\n\n - The use of $\\mu_{n,k} (S| \\theta)$, $\\mu_t(n | S, \\theta)$, $\\mu (n | S, \\theta)$, $\\mu (n|S)$ make reading the technical parts very hard to understand and confusing\n\n-V is the classical Lyapounov function and also used for the norm of $x$.\nThe norm $|| . ||_{V_{k,y}^{-1}} $ is used for both $\\theta$ ( a matrix) and $x_n$ (a vector) while only defined using $x$. This is not clear to me.\n\n\n- Some constants are not well explained, for example, $\\lambda$ is used in line 225 and 227 without any explanation, then disappears in all further results.\n\n1. The authors explain that the notion of stability that they use is the existence of Cezaro limit of the queue sizes. This is weaker than what they call uniform stability (expected queue lengths are bounded).\nThen they introduce $ Q_{max}$ the expectation of the maximum queue length over $T$ steps. $Q_{max}$ appears in the regret bound as a constant independent of $T$. This seems to imply that the stability is uniform. But this is not the case here. Can you explain where is the catch?" }, { "confidence": 1, "rating": 7, "review_id": "356GY9BEJ8", "review_text": "This study examines multi-class multi-server asymmetric queueing systems, where jobs arrive randomly, and unknown job-server service rates are modeled by a feature-based Multinomial Logit (MNL) function. The proposed UCB and Thompson Sampling algorithms aim to stabilize the queues while learning service rates, achieving system stability with an average queue length bound and sublinear regret, and demonstrating their performance through experimental results.\n\nThe authors propose a novel and practical framework for queueing matching bandits, introducing feature-based multinomial logit functions for service rates and preference feedback, which are investigated for the first time. Their UCB and Thompson Sampling algorithms achieve stability with average queue length bounds and sublinear regret, outperforming previously suggested methods, as demonstrated by their experimental results.\n\nThe quality of the figures can be improved.\n\nHow about the performance and calculation efficiency when N, K, L, and d become larger?" }, { "confidence": 4, "rating": 7, "review_id": "nbuSoIcFnC", "review_text": "The work introduces a new bandits framework that handles the problem of matching queuing jobs (agents) with preferential servers (arms). The work extends beyond the match making problem with the objective of stabilizing the queue by learning the preferential nature of agents to arms. The authors propose two new algorithms based on UCB and Thompson sampling algorithms and analyze theoretical guarantees with respect to stabilizing the length of the queues. They also extend their study by analyzing the regret bound for the proposed algorithms.\n\n* The authors develops a new bandits formulation for addressing queueing matching fixing some of the nuances of the earlier developed framework for the same. 
\n\n* The work captures the important aspect of this problem with a structured model using features for estimating service rates in comparison with bandits for queues or earlier works. Also, the consideration of availability of assignment of an agent available only in an non-empty queue is an welcome addition.\n\n* The formulation is reasonable and interesting mathematically and the work studies the stability of the queueing matching problem and analyses the theoretical guarantees with regards to regret as well.\n\n* The authors developed two new algorithms for the framework and showed theoretical bounds for the same highlighting the stability comparing it with the existing work.\n\n* The developed bandits framework is an incremental extension of bandits for queues & match making problem discussed in the work. Though the work highlights some important consideration, the previous algorithm seem to work for the proposed algorithm. \n\n* In the experiments, the authors have only studied the setting when N=KL and One does wonder why the other regime N<KL is not included.\n\n* How does this work in terms of theory compares with some of the existing algorithm in this problem space. A discussion on regret comparison can help draw conclusion on the performance of the algorithms in this work.\n\n* The experimental section compares the authors’ proposed algorithm QCB-QMB and TS-QMB with others algorithms that are applicable to this problem. The experiments compare them in one settings when N=KL. Also, experimental setting when N<KL can help the behavior of the proposed algorithms in different operating regimes. \n\n* The regret analysis in Theorems 2 and 4 provides bounds in terms of the maximum queue length $Q_{max}$. But, the relationship between $Q_{max}$ and the system parameters N, K, \\& \\epsilon is not clearly established. Providing insights on Q_{max} in terms of these parameters would give more insight into the regret performance.\n\n* The experimental results in Section 6 compare the proposed algorithms with previously suggested methods for queueing bandits or matching bandits. However, the choice of baselines seems limited. Providing comparisons with a wider range of relevant algorithms, such as those handling contextual bandits with queueing constraints, would strengthen the empirical evaluation.\n\n* The paper establishes sublinear regret bounds, but it does not provide any lower bounds on the regret. Deriving problem-specific lower bounds would give a better understanding of the optimality of the proposed algorithms and highlight any potential gaps between the upper and lower bounds." }, { "confidence": 2, "rating": 6, "review_id": "2Fo3oLWVdu", "review_text": "This paper studies queueing matching bandits. It proposes a framework that involves multiple queues and multiple servers: in each round, jobs arrive randomly at each queue; the learner assigns jobs to servers; and each server picks and serves its preferred job according to a feature-based linear model. The goal is to stabilize the system, so that the average queue length is finite as the horizon grows to infinity. The paper proposes two algorithms, one based on Upper Confidence Bound and the other based on Thompson sampling. They achieve system stability as well as sublinear regret.\n\n- The proposed framework seems interesting and introduces a few novel elements, including a feature-based service function and a preference-based job assignment model. 
The setting is motivated by important real-world applications, such as ride-hailing platforms.\n\n- The paper proposes two algorithms, and they both achieve the same asymptotic bound on the average queue length as an oracle baseline.\n\n- The paper also provides regret analyses for the algorithms, which are not considered in the closely related works.\n\n- Both algorithms require solving an NP-hard combinatorial optimization problem, which may be impractical in large-scale systems.\n\n- The empirical evaluation is limited to synthetic data.\n\n- The presentation can be improved. Since the setting is rather specialized, the first two sections can be challenging for readers who do not know much about queueing systems. For example, in the introduction section, it would be nice if the authors could provide explanations for key terms such as service rates, stability of systems, multi-class queueing, etc. In addition, in Section 5, there are detailed, step-by-step descriptions of the algorithms, but not a discussion on the intuition behind (which probably would not take too long).\n\n- Different jobs from each queue have a common, known representation $x_n$, which may not reflect real-world scenarios.\n\n- While the paper presents regret bounds, it is unclear to me what they tell us beyond the stability result. Could you provide some intuition behind the definition of cumulative regret, specifically how $Q_n(t)$ is incorporated and what it signifies?\n\n- Could you briefly discuss the technical challenges in the analysis due to having a feature-based service function and only preference feedback, given existing works on bandits for queues and MLN bandits?" } ]
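The scheduling idea in the abstract above (optimism for unknown service rates combined with queue-length-aware assignment) can be conveyed with a deliberately simplified sketch. It replaces the paper's MNL preference model and combinatorial assignment step with independent Bernoulli services and a greedy per-server rule, so it illustrates only the UCB-times-queue-length weighting; arrival rates, horizon, and sizes are arbitrary assumptions.

```python
# Simplified sketch (not the paper's algorithm): a MaxWeight-style scheduler that
# weights optimistic UCB estimates of unknown service rates by current queue
# lengths. Independent Bernoulli services replace the MNL preference feedback,
# and each server greedily takes the heaviest-weighted non-empty queue.
import numpy as np

rng = np.random.default_rng(0)
N, K, T = 4, 3, 2000                                  # queues, servers, horizon
true_rate = rng.uniform(0.3, 0.9, size=(N, K))        # unknown service rates
arrival_prob = np.full(N, 0.3)

queues = np.zeros(N)
counts = np.ones((N, K))                              # observations per (queue, server)
means = np.full((N, K), 0.5)                          # empirical service-rate estimates

for t in range(1, T + 1):
    queues += rng.random(N) < arrival_prob            # Bernoulli job arrivals
    ucb = np.minimum(means + np.sqrt(2.0 * np.log(t + 1) / counts), 1.0)
    weight = queues[:, None] * ucb                    # queue length x optimistic rate
    for k in range(K):                                # each server serves at most one job
        if weight[:, k].max() <= 0:
            continue                                  # no waiting jobs worth assigning
        n = int(np.argmax(weight[:, k]))
        served = rng.random() < true_rate[n, k]       # stochastic service outcome
        counts[n, k] += 1
        means[n, k] += (served - means[n, k]) / counts[n, k]
        if served:
            queues[n] -= 1
            weight[n, :] = queues[n] * ucb[n]         # refresh this queue's weight

print(f"mean queue length at the end of the horizon: {queues.mean():.2f}")
```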
0SRJBtTNhX
IntraMix: Intra-Class Mixup Generation for Accurate Labels and Neighbors
Graph Neural Networks (GNNs) have shown great performance in various tasks, with the core idea of learning from data labels and aggregating messages within the neighborhood of nodes. However, the common challenges in graphs are twofold: insufficient accurate (high-quality) labels and limited neighbors for nodes, resulting in weak GNNs. Existing graph augmentation methods typically address only one of these challenges, often adding training costs or relying on oversimplified or knowledge-intensive strategies, limiting their generalization. To simultaneously address both challenges faced by graphs in a generalized way, we propose an elegant method called IntraMix. Considering the incompatibility of vanilla Mixup with the complex topology of graphs, IntraMix innovatively employs Mixup among inaccurate labeled data of the same class, generating high-quality labeled data at minimal cost. Additionally, it finds data with high confidence of being clustered into the same group as the generated data to serve as their neighbors, thereby enriching the neighborhoods of graphs. IntraMix efficiently tackles both issues faced by graphs and challenges the prior notion of the limited effectiveness of Mixup in node classification. IntraMix is a theoretically grounded plug-in-play method that can be readily applied to all GNNs. Extensive experiments demonstrate the effectiveness of IntraMix across various GNNs and datasets. Our code is available at: [https://github.com/Zhengsh123/IntraMix](https://github.com/Zhengsh123/IntraMix).
https://openreview.net/pdf/2df08c17e88734ce3e56147658624ff49f7b1898.pdf
[ { "confidence": 4, "rating": 5, "review_id": "1fkqznrTf4", "review_text": "The paper introduces IntraMix, a novel method for augmenting graph data to improve the performance of Graph Neural Networks. IntraMix addresses two major issues in graph datasets: the lack of high-quality labeled data and inadequate neighborhood information. The proposed method leverages Intra-Class Mixup to generate high-quality labeled data from low-quality labels and uses a neighbor selection strategy to enrich the neighborhoods. Extensive experiments demonstrate that IntraMix enhances GNN performance across various datasets.\n\n1. IntraMix effectively addresses both the lack of high-quality labels and inadequate neighborhood information, providing a comprehensive solution to two significant challenges in graph data augmentation.\n2. The authors conduct thorough experiments on multiple datasets and GNN models, showing the robustness and effectiveness of the proposed method.\n3. The paper provides theoretical analysis and guarantees for the reduction of noise in generated labels and the correctness of neighborhood enrichment, adding credibility to the method.\n4. IntraMix is designed to be easily integrated into existing GNN frameworks without additional training costs, making it practical for real-world applications.\n\n1. The method involves several steps, including pseudo-labeling, mixup, and neighbor selection, which might complicate the implementation and increase the computational overhead.\n2. The initial pseudo-labeling step is crucial for the success of IntraMix. If the pseudo-labels are of very low quality, the subsequent steps may not generate significantly improved labels.\n3. While the paper shows results on multiple datasets, the performance of IntraMix on graphs with highly heterogeneous or unusual structures is not thoroughly explored.\n4. The method assumes a normal distribution for label noise, which may not hold in all real-world scenarios, potentially limiting the generalizability of the theoretical guarantees.\n\nSee weaknesses." }, { "confidence": 4, "rating": 7, "review_id": "HInjUaYiH4", "review_text": "This paper aims to improve the performance of graph neural networks (GNNs) for node classification problem by generating high-quality labeled nodes and enriching node neighbors. It first uses pseudo-labeling to transform the unlabeled nodes into low-quality labeled nodes and performs mixup to generate high-quality labeled nodes. It then adopts an ensemble technique to establish an information exchange path between nodes of the same classes. Extensive experiments have been conducted to verify the proposed method.\n\n1. The proposed method IntraMix could simultaneously generate high-quality labeled nodes and enrich node neighbors, and the experimental results on seven datasets show the superiority of the proposed method compared with other graph augmentation methods.\n\n2. The authors describe the motivations of the intra-class mixup and neighbor selection (section 3.1 and 3.2) clearly, and provide the corresponding theoretical proofs to guarantee the effectiveness of the method.\n\n3. Time complexity analysis is discussed (section 3.4), which illustrates the complexity of IntraMix is in the same of order of magnitude as the traditional GNN.\n\n4. Since IntraMix is a data augmentation framework, it is could be easily incorporated with existing GNNs.\n\n1. 
In Table 1, the improvements of the node classification accuracy of the proposed method compared with other graph augmentation methods, such as NodeMixup and Local Augmentation, is not significant, which should be discussed more clearly.\n\n2. Although Theorem 3.1 and 3.2 proof that the noise in the generated data is smaller than that in the original data, the useful information for node classification should not be reduced and the theoretical analysis of the generated data containing sufficient information for classification is needed.\n\n1. One of my concerns is that whether the proposed method might loss some useful information when eliminating the noise in the original graph? The proposed method should contain sufficient information for node classification and the corresponding analysis should be proofed or discussed.\n\n2. Since the neighbor selection operation is to enrich the node neighbors, it is similar to graph structure learning (GSL) methods that learn optimal graph structure for GNNs. What is the advantage of the proposed method compared with GSL methods?" }, { "confidence": 4, "rating": 5, "review_id": "ISlpskwOIA", "review_text": "This paper proposes an intra-class mixup generation method to generate high-quality labeled data to improve the performance in the node classification task.\n\n1. The extensive experimental results show that the proposed IntraMixup outperforms most of the baseline methods across several datasets.\n2. The authors provide the theoretical analysis to support the effectiveness of the proposed method.\n3. The presentation of this paper is good and it's easy to follow.\n\n1. I am not fully convinced by the effectiveness of IntraMix by only generating node based on pseudo-labeled nodes and labeled nodes. NodeMixup has both the intra-class mixup and inter-class mixup. NodeMixup has similar node selection method to generate nodes from the same class but the different neighbor selection criteria. Why does the proposed method achieves much better performance than NodeMixup? What's the advantage of the proposed IntraMix over NodeMixup? \n2. The novelty of the proposed method is somehow limited as it is similar to NodeMixup.\n\n1. What are the values of $\\eta_1$ and $\\eta_2$ when the proposed method achieves the best performance on different datasets?\n2. In the ablation study, by saying \"replacing the generated nodes with all-zero vector\" do you mean that the node feature is a all-zero vector?\n3. NodeMixup has both the intra-class mixup and inter-class mixup. Why does the proposed method achieves much better performance than NodeMixup? What's the advantage of the proposed IntraMix over NodeMixup?\n4. What's the performance of Vanilla Mixup on the heterophilic graphs? Does the proposed method has better performance than Vanilla Mixup?" }, { "confidence": 4, "rating": 5, "review_id": "PV9MSrsUh1", "review_text": "This paper propose IntraMix, a data augmentation approach for node classification with graph neural networks. IntraMix effectively mixes node features of nodes in the same class based on pseudo labels to generate new nodes, and then link the generated nodes to selected nodes in the graph. The authors conduct some mathematical analysis on the method and demonstrate its effectiveness empirically.\n\n- The proposed data augmentation approach is interesting.\n\n- This paper conduct experiments on many datasets to justify the effectiveness of the proposed data augmentation approach. 
The authors report both accuracies and error bars in the results.\n\n- Sec. 4.5 & 4.6 additionally assess the over-smoothing problem and performance on heterophilic graphs, broadening the scope of the study.\n\n1. The statement of theoretical results looks problematic.\n- Theorem 3.1:\n - Note that $P_{noise}(\\cdot | x)$ and $P(\\cdot | x)$ are both probabilistic distributions. Thus, the noise satisfies $\\epsilon_1+\\cdots+\\epsilon_{|C|}=0$. The authors should clarify the distribution assumption on noises, since the constraint cannot be satisfied if $\\epsilon_1,\\cdots,\\epsilon_{|C|}$ are i.i.d. Gaussian random variables. Besides, the proof seems to have overlooked this fact.\n - As a formal theorem statement, Theorem 3.1 should state line 128 using math equations for clarity.\n - I encourage the authors to discuss how $\\lambda$ should be chosen based on this theorem.\n- Theorem 3.2\n - The first sentence should be clearly stated as an assumption.\n - I'm suspicious about the correctness of the theorem. For example, if one sets $\\eta_1=\\eta_2=-\\frac{1}{8(1+\\lambda^2+(1-\\lambda)^2)}-1$, then $(\\lambda^2+(1-\\lambda)^2)+\\frac{1}{4(2+\\eta_1+\\eta_2)}=-1$. Thus, the formula in line 164 is wrong.\n\n2. Some details of the experiment seem missing in the paper.\n- I believe that semi-supervised learning is the most standard setting for node classification. The authors provide no reference on the inductive learning setting and supervised learning setting. It's not clear what those settings are and why they are important.\n- The method relies on pseudo labels. The experiment section needs to state the way that the pseudo labels are generated and the cost of doing so.\n\n3. Other issues.\n- Line 91: Current definition implies that $|D_l|=|D_u|$, which is not general enough.\n- Line 146: dropout rates $\\to$ dropout probabilities.\n- Line 169: lines 1 $\\to$ line 1.\n\n- Does Eq. (5) mean that for any $(\\hat x, y)\\in D_m$ and any $(x_i, y)\\in D_m$, $\\hat x$ and $x_i$ are connected? If no, Eq. (5) should have been written differently, and the authors need to state how to select $(\\hat x, y)\\in D_m$ and any $(x_i, y)\\in D_m$ and generate new edges." } ]
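As a rough illustration of the generation step described in the IntraMix abstract above (intra-class Mixup of two nodes sharing a pseudo-label, then attaching the generated node to high-confidence nodes of that class), here is a small self-contained sketch. The confidence threshold, Beta parameters, and number of attached neighbors are assumptions, not values from the paper.

```python
# Rough sketch (not the authors' code) of intra-class Mixup node generation: mix
# the features of two nodes sharing a pseudo-label and connect the new node to a
# few high-confidence nodes of the same class. Threshold, Beta parameters, and
# neighbor count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
num_nodes, dim, num_classes = 60, 8, 3
features = rng.normal(size=(num_nodes, dim))
pseudo_labels = rng.integers(num_classes, size=num_nodes)
confidence = rng.uniform(size=num_nodes)              # e.g. max softmax probability

new_features, new_edges, new_labels = [], [], []
for c in range(num_classes):
    members = np.flatnonzero(pseudo_labels == c)
    trusted = members[confidence[members] > 0.8]      # neighbor pool for class c
    if len(members) < 2 or len(trusted) == 0:
        continue
    i, j = rng.choice(members, size=2, replace=False)
    lam = rng.beta(2.0, 2.0)                          # intra-class mixup coefficient
    new_id = num_nodes + len(new_features)
    new_features.append(lam * features[i] + (1.0 - lam) * features[j])
    new_labels.append(c)                              # label stays within the class
    picks = rng.choice(trusted, size=min(2, len(trusted)), replace=False)
    new_edges.extend((new_id, int(p)) for p in picks)

print(f"generated {len(new_features)} nodes and {len(new_edges)} edges")
```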
0NMzBwqaAJ
Not All Tokens Are What You Need for Pretraining
Previous language model pre-training methods have uniformly applied a next-token prediction loss to all training tokens. Challenging this norm, we posit that ''Not all tokens in a corpus are equally important for language model training''. Our initial analysis examines token-level training dynamics of language models, revealing distinct loss patterns for different tokens. Leveraging these insights, we introduce a new language model called Rho-1. Unlike traditional LMs that learn to predict every next token in a corpus, Rho-1 employs Selective Language Modeling (SLM), which selectively trains on useful tokens that align with the desired distribution. This approach involves scoring training tokens using a reference model, and then training the language model with a focused loss on tokens with higher scores. When continually pretraining on the 15B OpenWebMath corpus, Rho-1 yields an absolute improvement in few-shot accuracy of up to 30% in 9 math tasks. After fine-tuning, Rho-1-1B and 7B achieved state-of-the-art results of 40.6% and 51.8% on the MATH dataset, respectively - matching DeepSeekMath with only 3% of the pretraining tokens. Furthermore, when continually pretraining on 80B general tokens, Rho-1 achieves a 6.8% average enhancement across 15 diverse tasks, improving both the data efficiency and the performance of language model pretraining.
https://openreview.net/pdf/322b6309b2e8e9565af8f7bd497dae2d47861bc5.pdf
[ { "confidence": 4, "rating": 7, "review_id": "lH0lVFIg2y", "review_text": "The paper analyzes token-level training dynamics in continued pretraining, identifying four loss patterns: persistent low loss, persistent high loss, increasing loss, and decreasing loss. Motivated by these patterns, the paper proposes a modification to language modeling called Selective Language Modeling (SLM), which only trains on a subset of the input tokens. This subset is selected by training a high-quality reference model and computing the \"excess loss\" of the target model -- i.e., the token-level difference between the target model and reference model loss. The model trained using SLM, Rho, achieves strong performance on math and other benchmarks relative to a model using normal continual pretraining.\n\nS1. The four categories of token-level loss are interesting and (to the best of my knowledge) not a previously noted phenomenon. The authors provide an interesting analysis of this phenomenon, and use it to motivate their method.\n\nS2. The idea of selecting a subset of tokens to train on is clever and appears effective. The results, particularly on math benchmarks, after continual pretraining with SLM are quite strong and compared to sensible baselines. \n\nS3. The analysis is relatively comprehensive and contains several interesting points, especially the section comparing the correlation between token losses and downstream performance for selected/unselected tokens.\n\nW1. *Concerns about training time/cost*. The \"10x faster\"/\"5x faster\" claims in Figure 1 don't factor in the cost of pre-scoring each token by both the reference and training model. Can you measure and report this cost? It seems like what the figure actually shows horizontally is *data efficiency*, not speed of training. More generally, I think the claims about efficiency need to be more clearly explained-- the *data* efficiency claim is well-supported, but \"efficient\" used more broadly (e.g. lines 83, 114-115, 205) generally suggests a time or space efficiency claim, which I don't think the paper supports (or even really intends to claim). \n\nW2. *Scope of claims in title/abstract*. The title and start of the abstract suggest that the method is meant to be applied throughout pretraining, but the paper focuses on continued pretraining. Additionally, the eval focuses predominately on math datasets, which involve many tokens which may be relatively infrequent in pretraining corpora but frequent in-domain. This seems like the ideal domain for this kind of strategy--and, as Figure 5 shows, the gains are much more modest for other tasks. It seems the main finding is that \"SLM is a strong method for continual pretraining for math tasks (and slightly beneficial for general domain tasks)\", but the title/first 10 lines seem to suggest \"SLM should be used instead of CLM for pretraining from scratch,\" which isn't supported or claimed elsewhere in the paper. \n\nW3. The reference model should be included in the results tables as well. Does Rho outperform the reference model used for token selection?\n\nW4. Doing hard selection cutoffs seems a bit heavy-handed; it's possible that weighting examples according to their \"excess loss\" might lead to higher performance. The authors do mention this as a future direction in the appendix.\n\nQ1a. The contents of each token category seem quite important to the paper’s stated motivation of removing noisy tokens from pretraining. Can you provide example sets of tokens / themes?\n\nQ1b. 
It's not really clear from the few examples provided in Figure 11-14 how to interpret these four loss categories. Are there specific tokens which are generally in one category regardless of the document they occur in (e.g. very rare tokens, or numbers in math equations)? \n\nQ1c. Figure 14: What conclusion should we draw from the differences in tokens selected over time? It's hard to interpret this figure. \n\nQ2. What artifacts do you plan to release? In particular, do you plan to release the model Rho? The 0.5 B and 1.9B datasets you compiled? Checkpoints trained on increasing selection percentages? \n\nQ3. I understand if this is not possible to address in the rebuttal period, but I'm curious if using this method has any impact on the downstream memorization of pretraining data.\n\nOther suggestions/line comments (no need to address in rebuttal):\n* Line 33: \"limiting LLM's potential to merely mediocre intelligence\" is a pretty meaningless phrase -- what does \"mediocre intelligence\" mean? I suggest revising to be more specific about the claim here (e.g. \"limiting the model's capabilities\"). \n* Figure 8 is hard to understand\n* Figure 11 is not colorblind-friendly\n* Line 764: typo in spelling of Tinyllama" }, { "confidence": 4, "rating": 7, "review_id": "0TkBRH4ZJl", "review_text": "The authors explore how loss for specific tokens changes in continued pre-training and note that they fall into four categories (high->high, high->low, low->high, low->low) with each category having at least 10%. They run continued pre-training on tokens that are learnable and domain-useful (judged by reference model) and find that this leads to higher accuracy with less tokens used. The main results are in the math domain, but there are also a variety of other results (tool use generalization, general domain, etc.).\n\nFor originality, I'm not deeply acquainted with related work, but it seems that the authors are (based on Related Work section in appendix). This works seems novel and well-contextualized with respect to related work. The experiments are of high quality and explore a few domains/problems. The paper is generally clear and easy to read. I think this work seems significant in that future research/application could use it (especially with a particularly low-quality dataset).\n\n* The greatest weakness (in my opinion) is that \"tokens\" in many cases could refer to \"number of tokens after % filtering\" and \"total number of tokens before filtering.\" This may be making some results misleading. This ambiguity is present throughout the paper. Just one example is in 3.3 - is 80B the total before filtering or after filtering?\n * Relatedly, in the case when it's after the % filtering, the \"x-axis\" should be total number of tokens before filtering in my opinion, because the % filtering isn't making training cheaper. I believe the results will still look good after these changes (the numbers in Table 1, for example, are great). But I think Figure 1, for example, should use total tokens (not after % filtering) if it's not already.\n* It seems like OpenWebMath is very messy. How would this method work on a clean dataset (like the small, high-quality reference dataset). Is the benefit of the method mostly in \"cleaning\" the data, or in selecting useful tokens?\n* How was the % for filtering chosen for the experiments?\n\nPlease see weaknesses above for some explicit and implicit questions." 
}, { "confidence": 4, "rating": 9, "review_id": "MhQWANJqdF", "review_text": "The authors propose a method to train LLMs on the most influential tokens selectively. They suggest training a reference model on a small high-quality corpus using the standard CLM loss. They then compute the excess loss of each token in the training corpus as a difference in losses of the reference model and target model on that token. Finally, the target model is trained of the k% subset of the training corpus with the highest excess loss. The paper describes continued pre-training experiments for 1b and 7b models to demonstrate the effectiveness of this method. The experiments show improvements compared to standard continually pre-trained baselines and some open models in terms of performance on popular benchmarks and training efficiency (number of training tokens required to match the performance of open models).\n\nThe Selective Language Modelling method proposed in the paper is a novel approach to pre-training LLMs. The authors' experiments demonstrate significant improvements in training efficiency which is an important problem in LLM pre-training. The paper also describes a study of LLM training dynamics which could provide useful insights to other researchers working in the field for further exploring efficient token selection strategies for LLM pre-training.\n\nThe experiments in the paper are performed in the continued pre-training setting and the impact of the original pertaining performance is not discussed in the paper. It is possible that the method might not work well if the base model is undertrained.\n\nThe end of section 2 talks about how tokens are selected for training in practice, it says that token selection can be implemented by ranking the tokens by their excess losses and only using the top k% for training. This seems like a crucial detail to ensure that the efficiency gains translate to training wall-clock time. How can this be done while maintaining token sequencing within samples?" } ]
0MXzbAv8xy
GFT: Graph Foundation Model with Transferable Tree Vocabulary
Inspired by the success of foundation models in applications such as ChatGPT, as graph data has been ubiquitous, one can envision the far-reaching impacts that can be brought by Graph Foundation Models (GFMs) with broader applications in the areas such as scientific research, social network analysis, drug discovery, and e-commerce. Despite the significant progress of pre-trained graph neural networks, there haven’t been GFMs that can achieve desired performance on various graph-learning-related tasks. Building GFMs may rely on a vocabulary that encodes transferable patterns shared among different tasks and domains. Unlike image and text, defining such transferable patterns for graphs remains an open question. In this paper, we aim to bridge this gap by rethinking the transferable patterns on graphs as computation trees -- i.e., tree structures derived from the message-passing process. Based on this insight, we propose a cross-task, cross-domain graph foundation model named GFT, short for Graph Foundation model with transferable Tree vocabulary. By treating computation trees as tokens within the transferable vocabulary, GFT improves model generalization and reduces the risk of negative transfer. The theoretical analyses and extensive experimental studies have demonstrated the transferability of computation trees and shown the effectiveness of GFT across diverse tasks and domains in graph learning. The open source code and data are available at https://github.com/Zehong-Wang/GFT.
https://openreview.net/pdf/addf28c235542c44a5f2fcfaf5e172021a4802de.pdf
[ { "confidence": 4, "rating": 5, "review_id": "WdEc9MM4eo", "review_text": "The paper proposes GFT, a graph foundation model based on computation tree. Extensive experiments and theoretical analyses are conducted to show the effectiveness of GFT across diverse tasks.\n\n1. Addresses a significant challenge (identifying transferrable patterns) in graph foundation models.\n2. Includes both theoretical and empirical evaluations on synthetic and real-world graphs.\n3. Conducts extensive experiments including multiple graph tasks and domains.\n\n1. The experiments are unconvincing (and most likely unfair): Only a few outdated supervised baselines are used, and their results are weaker than expected. For instance, the GCN's performance on Cora should surpass 80, and the results for GIANT on Arxiv are notably low. Are all baselines using SentenceBERT embedding? If not and the raw features are used, the improvements might be contributed to better text encoders.\n2. Assumes a common feature space across different graphs, limiting its applicability to non-textual graphs.\n3. The method is complex, involving a broad hyperparameter space, including computation tree parameters and the betas in Equation 3.\n4. Certain sections of the paper are unclear, such as the sampling/construction of the computation trees and the interpretation of the y-axis in Figure 5.\n5. The distinction between the computation tree and subgraph is not clearly defined.\n6. The analysis of time complexity should consider the size of subgraphs, similar to GraphSAGE. Furthermore, the discussion should focus on actual wall time to better demonstrate the efficiency of the GFT.\n\nHow are baseline results obtained? What kind of features are leveraged?" }, { "confidence": 4, "rating": 6, "review_id": "13IwCMgUG5", "review_text": "This paper explores the concept of the transferable token in graph foundation models. Specifically, the paper proposes to use the computation tree as the transferable token for graph learning and prove its efficiency from both theoretical and empirical perspectives. Then, the paper proposes GFT model, GFT first pretrain a graph tokenizer using vector quantization to tokenize text-attributed graphs across different domains through a computation tree. Next, it is fine-tuned on downstream tasks. The model shows great results on generalization ability, especially on few-shot and zero-shot experiments.\n\n1. Investigating transferable tokens in graph learning is important and fundamental to the community. This paper is the first one that demonstrates the potential existence of the transferable token in text-attributed graphs. \n2. The paper is well-structured with solid theoretical and experimental results to demonstrate the advantages of the proposed model.\n\n1. In Theorem 2.2, the distance bound is associated with the distance between the $j$-th neighbor of node $v_1$ and $v_2$. How to define the $j$-th neighbor related to two nodes? What if the two nodes have different numbers of neighbors?\n2. In Theorem 2.2, the distance bound is related to the distance of the feature between two nodes (as well as the distance between node neighbors, as it is defined recursively). However, in the real world, the distance of the feature itself can be unbounded, which makes Theorem 2.2 only applicable when features are close and less useful in real scenarios (Even in text-attributed graphs, sentence embedding from different domains can still be too diverse). \n3. 
For Section 3.1, I am not sure if I understand it wrongly or if I missed something. There are a few parts I think it is not true. For Equation 2, should loss compute between $z_i$ and $c_i$, instead of $q_i$? Otherwise, it does not make sense to me. Meanwhile, what is the $\\delta_j$, I cannot find its definition in the paper. Please correct me if I am wrong.\n\n4. Although the authors did extensive ablation studies, there is one I am particularly interested in about cannot find in the paper. How well can be model be if the pretraining datasets are totally from different domains of the downstream tasks? For example, what is the performance if the model is pre-trained on molecular datasets but tested on citation networks or vice versa?\n\n1. I am curious about Table 23. Since the graph tokenizer needs to tokenize both structural and textual information. The token size should be much larger than 512 even within a single domain from my sense given the abundance of both the graph structure and textural information. Could authors explain more about it? Have authors explore what are these tokens learned exactly?\n\n2. The approach in this paper is somewhat similar to the VQGraph [1]. It is worth discussing the relation and difference between the two methods.\n\n3. The author claims the proposed method does not need to extract subgraphs. I am wondering how is the method deal with the case like the graph is too large to fit into one GPU? \n\n\n[1] Yang et al., VQGRAPH: RETHINKING GRAPH REPRESENTATION SPACE FOR BRIDGING GNNS AND MLPS, ICLR 2024." }, { "confidence": 3, "rating": 6, "review_id": "OiWaKsLTDO", "review_text": "This paper proposed a novel computation tree method to improve the transferability between the pre-train model and downstream tasks. This paper rethinks the transferable pattern in graphs as computation trees and validate their transferability both empirically and theoretically. The proposed GFT leverages computation tree reconstruction to acquire general graph knowledge from cross-domain datasets and uses computation tree classification to facilitate adaptation to various target tasks Comparing with other previous methods, the proposed GFT can improve model generalization and reduce the risk of negative transfer, which is suitable for the cross-domain and cross-task situation.\n\na. Originality: The computation tree has been proposed in several previous works that have been adequately cited in this paper. The author reconstructs the computation tree in a new way and combines it with Vector Quantization method, effective and preventing over-fitting. b. Quality: The paper is technically sound, and the loss function proposed is comprehensive, enabling a deep understanding of the structural and semantical attribute of computation trees. The experiments in the manuscript are very comprehensive and effectively demonstrate the superiority of GFT, and important claims are well supported by theoretical analysis. c. Clarity: The manuscript is well organized and clearly written, although some descriptions could be clearer. d. Significance: The experimental results advance the SOTA methods, and the computation tree method proposed is easy for other researchers to use.\n\nIt is not very clear how this work differs from previous contributions. The explanation of the superiority of computation trees over subgraphs is unconvincing because the computation tree in this paper is very different from some proposed computation trees (e.g. junction tree, H-tree, etc.) but more like the subgraph. 
It would be better if the author could provide detailed comparison and analysis between the computation tree and the subgraph; The quality is a bit limited. The Lfeat is to minimize the discrepancy between the local structure of a node and its feature, but is there any inherent relationship between the two? Besides, the method might not be able to deal with link prediction in a good manner. In a graph, if node va and vb are isomorphic while the links (va, vc) and (vb, vc) are not isomorphic, the vanilla GNN with the same node representations va and vb gives the same prediction to links (va, vc) and (vb, vc), but GFT does not address this issue; The clarity should be improved. The explanation for Ltree should be detailed. For example, what are decoders used for q projection? In line 269, the definition of R(f) is not given; The significance of the paper is slightly limited. It might be difficult to extend the GFT to zero-shot scenarios because the fine-tune process is necessary for GFT but it will be excluded in zero-shot scenarios.\n\n1. In Appendix C.5, the paper argues that the subgraph method incurs additional time and memory costs. Why can the proposed computation tree avoid these problems? \n\n2. The Lfeat is to minimize the discrepancy between the local structure of a node and its feature. Why do you use Lfeat and is there any intrinsic connections between the local structure of a node and its feature? \n\n3. For Ltree, what are decoders used for q projection? \n\n4. In a graph, if node va and vb are isomorphic while the links (va, vc) and (vb, vc) are not isomorphic, the vanilla GNN with the same node representations va and vb gives the same prediction to links (va, vc) and (vb, vc). How does GFT address this issue in link prediction? \n\n5. What is the definition of R(f)? \n\n6. From the experimental results, GFT outperforms other methods. Is this due to the computation tree or the more comprehensive loss function? Would the overall performance be worse if the computation trees were replaced with subgraphs? 7. Is it possible for GFT to be applicable in zero-shot scenarios?" }, { "confidence": 4, "rating": 7, "review_id": "kIm1DJPmVe", "review_text": "The paper proposes a new graph foundation model based on the tree structure, which is called GFT. GFT leverages computation trees to define tokens within the transferable vocabulary, which improves model generalization and reduce the risk of negative transfers. Comprehensive experiments and theoretical analyses are provided to demonstrate the effectiveness of GFT across diverse tasks and domains in graph learning. Overall, this is a good work with good motivation, a novel method with theoretical support, comprehensive experiments, and good writing.\n\n+ The paper is well motivated by the fact that current graph learning models lack identification of a vocabulary that can encode transferable patterns shared among different tasks and domains. Filling the gap is challenging and meaningful. \n \n+ The proposed GFT model is novel. Unlike existing models, it introduces computation tree as transferable patterns and encodes general knowledge of graph into a tree vocabulary, which is new. It aims to improve the model generalization and reduce the risk of negative transfers. The theoretical analyses are also provided to support the design of GFT model, which is solid. \n\n+ Extensive experiments over different graph learning tasks (e.g., node classification, link prediction, graph classification) are conducted across many datasets. 
The model outperforms SOTA baseline methods. As a graph foundation model, it is good to see experiments over cross-domain and cross-task datasets. Many more analytical experiments (including Appendix) are also provided, which is impressive. \n\n+ The paper presentation is good. The organization is clear and easy to follow.\n\n- The proposed model assumes tree structure as transferable patterns. Besides the examples (i.e., basic blocks in Fig. 2) shown in the paper, is there any other structures transferable? It is not clear whether these patterns are transferred during the model training or not. How to demonstrate this?\n- As a graph foundation model, besides general and few-shot tasks, it would be better to see experiments on zero-shot task. Can the proposed model be applied to this scenario and what is performance compared to baseline methods?\n\nPlease see the weaknesses." } ]
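The "transferable tree vocabulary" in the GFT abstract above amounts to mapping computation-tree embeddings onto a discrete codebook. The sketch below shows only the generic vector-quantization mechanics under assumed sizes, with random vectors standing in for real message-passing encodings; it is not the authors' tokenizer.

```python
# Generic vector-quantization sketch (not GFT's actual tokenizer): computation-
# tree embeddings are snapped to their nearest codebook entry ("tree token"),
# with the usual codebook/commitment losses and a straight-through estimator.
import torch
import torch.nn.functional as F

num_trees, dim, vocab_size, beta = 32, 16, 128, 0.25        # assumed sizes
tree_emb = torch.randn(num_trees, dim, requires_grad=True)  # stand-in encodings
codebook = torch.nn.Embedding(vocab_size, dim)

dists = torch.cdist(tree_emb, codebook.weight)              # (num_trees, vocab_size)
token_ids = dists.argmin(dim=1)                             # discrete tree tokens
quantized = codebook(token_ids)

codebook_loss = F.mse_loss(quantized, tree_emb.detach())    # pull codes toward encodings
commit_loss = F.mse_loss(tree_emb, quantized.detach())      # pull encodings toward codes
vq_loss = codebook_loss + beta * commit_loss

# Straight-through estimator so gradients from downstream tasks reach tree_emb.
quantized_st = tree_emb + (quantized - tree_emb).detach()
print("first tree tokens:", token_ids[:8].tolist(), "| vq loss:", round(vq_loss.item(), 3))
```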
0Lr9HQijA1
Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations
Learning with reduced labeling standards, such as noisy labels, partial labels, and supplementary unlabeled data, which we generically refer to as imprecise labels, is a commonplace challenge in machine learning tasks. Previous methods tend to propose specific designs for every emerging imprecise label configuration, which is usually unsustainable when multiple configurations of imprecision coexist. In this paper, we introduce imprecise label learning (ILL), a framework for the unification of learning with various imprecise label configurations. ILL leverages expectation-maximization (EM) for modeling the imprecise label information, treating the precise labels as latent variables. Instead of approximating the correct labels for training, it considers the entire distribution of all possible labelings entailed by the imprecise information. We demonstrate that ILL can seamlessly adapt to partial label learning, semi-supervised learning, noisy label learning, and, more importantly, a mixture of these settings, with closed-form learning objectives derived from the unified EM modeling. Notably, ILL surpasses the existing specialized techniques for handling imprecise labels, marking the first practical and unified framework with robust and effective performance across various challenging settings. We hope our work will inspire further research on this topic, unleashing the full potential of ILL in wider scenarios where precise labels are expensive and complicated to obtain.
https://openreview.net/pdf/f09f985415cde0be7441264926a0629a18d9ef9d.pdf
[ { "confidence": 3, "rating": 7, "review_id": "rlw08JZI9x", "review_text": "This paper introduces imprecise label learning (ILL), a framework for the unification of learning with various imprecise label configurations, such as partial label learning, semi-supervised learning, noisy label learning, and a mixture of these settings. They propose an EM based method with closed-form learning objectives to handle the problem.\n\n● The problem of learning with imprecise labels is with great importance, and the idea of unification of different settings is interesting, impressive, and instructive.\n● The paper is well-written. \n● While some details are omitted in the main text, the comprehensive appendix furnishes ample information. Such thorough work is highly appreciated.\n\nGiven the extensive content, the main text of the paper omits certain details, which may not be immediately straightforward to follow for some parts, e.g., section 3.2.\n\nNA" }, { "confidence": 3, "rating": 7, "review_id": "rajvKqtzw8", "review_text": "This paper introduces a framework that unifies various imprecise label configurations, with an EM modeling for imprecise label information. The framework is demonstrated can be adapted to partial label learning, semi-supervised learning, and noisy label learning, and the combinations of all above. The experiments results show that the framework surpasses existing methods on various settings.\n\n-\tThis paper proposed a unified framework that can unify various imprecise label learning settings, reducing the need for separate designs and solutions for each type of label imprecision.\n-\tPromising performance is achieved in individual settings and mixture settings with the unified framework.\n-\tThe proposed method is highly versatile and can be applied to the setting of a mixture of imprecise labels with robust performance.\n-\tThe framework demonstrates scalability on larger and more complex datasets.\n\n- The implementation of EM over all possible labelings may increase the computation time.\n- More related works need to be discussed. The author considers the ground-truth or Bayes label distribution as the latent variables and leverages variational inference for estimating. I am not should this strategy is novel enough in the field of variational inference. I suggest the author add more related works to highlight the novelty of their technique.\n- The author should give an explanation of why utilizing an EM framework optimizes the variational lower bound. What is its advantage?\n\n- Can the framework be extended to handle other forms of weak supervision (such as imbalanced noisy label learning) and how?" }, { "confidence": 4, "rating": 6, "review_id": "FL5z8CXTGF", "review_text": "The article addresses the challenge of learning with imprecise labels in machine learning tasks, such as noisy or partial labels. Traditional methods often struggle with multiple forms of label imprecision. The authors introduce a novel framework named Imprecise Label Learning (ILL) that serves as a unified approach to handle various imprecise label scenarios. ILL employs the expectation-maximization (EM) technique, viewing precise labels as latent variables and focusing on the entire potential label distribution. The framework demonstrates adaptability to different learning setups, including partial label learning and noisy label learning.\n\n1. The article is well-written, with clear and concise language that effectively conveys the main ideas and contributions of the research. 
Also, comprehensive derivations of the loss functions for the three imprecise annotation configurations, derived from equation 5, are given, which ensures clarity and thorough understanding for the readers.\n2. The article offers a comprehensive solution to the prevalent challenge of imprecise annotations, enhancing the adaptability and applicability of machine learning models.\n3. The inclusion of experimental results across multiple settings provides empirical evidence of the framework's robustness and superior performance.\n\n1. The article's innovation is limited, as the approach of considering ground-truth labels or the Bayes label distribution as latent variables and using variational inference for approximation in weakly supervised learning is already a common method [1-2], which suggests that the presented techniques may not be as novel as claimed.\n\n [1] Xu, N., Qiao, C., Geng, X., & Zhang, M. L. (2021). Instance-dependent partial label learning. Advances in Neural Information Processing Systems, 34, 27119-27130. \n\n [2] Yao, Y., Liu, T., Gong, M., Han, B., Niu, G., & Zhang, K. (2021). Instance-dependent label-noise learning under a structural causal model. Advances in Neural Information Processing Systems, 34, 4409-4420.\n\n2. Some important baselines should be compared, such as [1,2] in SSL.\n\n [1] Nguyen, Khanh-Binh, and Joon-Sung Yang. \"Boosting Semi-Supervised Learning by bridging high and low-confidence predictions.\" *Proceedings of the IEEE/CVF International Conference on Computer Vision*. 2023.\n\n [2] Schmutz, Hugo, Olivier Humbert, and Pierre-Alexandre Mattei. \"Don’t fear the unlabelled: safe semi-supervised learning via debiasing.\" *The Eleventh International Conference on Learning Representations*. 2022.\n\n1. Given that the method of using ground-truth labels or the Bayes label distribution as latent variables coupled with variational inference in weakly supervised learning is highlighted in prior works, how does the presented framework distinguish itself or advance beyond these existing approaches in terms of innovation or application?\n1. Could you provide a detailed analysis explaining why the unified framework demonstrates superiority over specifically designed approaches?" }, { "confidence": 2, "rating": 3, "review_id": "ZNm2YLEJRH", "review_text": "The paper provides a unified view on various imprecise-label learning frameworks, such as semi-supervised-, partial-label-, or noise-label learning, through the lens of the expectation-maximization algorithm. In addition to unifying these existing setups, EM naturally allows treating combinations of the above setups. Experiments show that the proposed method compares favourably with existing methods specialized to just one setup.\n\nThe proposed method generalizes to a wide range of settings, and performs on par with or better than more specialized algorithms in most of the evaluations.\n\n* Neither the abstract nor the introduction makes it clear that the paper is concerned purely with multi-class classification with deterministic labels y=f(x)\n\n* This stands in contrast to the critique in l. 70 regarding competing, `[...] 
they usually require additional assumptions and approximations on the imprecise information for learnability`: Assuming deterministic labels is a _very_ helpful simplification for noisy labels, as, together with some upper-bound on the noise rate, it restores identifiability of the model\n\n* A serious problem with the writing of the paper is that, for an attempt at introducing a probabilistic model that can then be used in EM, it does not actually write down the probabilistic models it considers:\nIn this sentence (l. 158), the paper is extremely vague: \n> If I represents partial labels, then P(Y |I) would have non-zero value over the candidate labels, and be 0 elsewhere. When I represents a set of noisy labels, P(Y |I) would represent the distribution of the true labels, given the noisy labels. When I does not contain any information, i.e., unlabeled data, Y can take any value.\n\nFor partial labels, is the underlying assumption $P(Y|I) = \\mathbf{1}(Y \\in I) / |I|$, i.e., a uniform distribution over all candidates? How about for unlabeled data?\nFor noisy labels, you either need to assume a fixed noise model, or $P(Y|I,\\theta)$ where $\\theta$ are the parameters of the noise model. \nl. 170: `Note that P (X; θ) is omitted from Eq. (5) since P (X) does not rely on θ.` Why? Is this a new assumption? Earlier in the paper, $\\theta$ was introduced as the modelling parameter of the generic joint distribution `Let P (X, I; θ) represent a parametric form for the joint distribution of X and I`. The footnote claims `The actual parameters θ may apply only to some component such as P (Y |X; θ) of the overall distribution` but to me, \"may\" here means that in some situations, such a restriction is possible, whereas I guess the intended meaning is that the model is _always_ supposed to be $P(Y|X,\\theta)$?\nl. 174: `For independent instances setting` again, this reads as if the previous section had situated the paper in the independent instance setting, whereas this is the first time the topic comes up\n\n> The property of the second term log P (I|X, Y ; θ) is dependent on the nature of imprecise label I. If I contains information about the true labels Y , such as the actual labels or the label candidates, it can be reduced to P (I|Y ), i.e., the probability of I is no longer dependent on X or θ and thus can be ignored from Eq. (5).\n\nThis is not correct; just because I contains information about Y, the probability cannot be simplified.\n\nEquation (6) seems to appear out of nowhere: Even following the derivations in C.2, $\\mathcal{A}_s$ and $\\mathcal{A}_w$ just appear out of thin air in these equations. \nl. 223: `Things become more complicated here since the noisy labels $\\hat{Y}$ do not directly reveal the true information about $Y$` I'm not sure what this sentence is trying to say. $\\hat{Y}$ should have some information about $Y$, otherwise, learning is impossible; of course, it doesn't have the full information, but neither do partial labels, so I don't see how this changes the situation compared to the preceding paragraphs.\n\n\nThe (unreferenced in the main paper) section D.7 in the appendix claims\n> Since the settings studied in this work has loss functions derived as close-form from Eq. (5), the time complexity can be viewed as O(1). Thus our method in general present faster runtime without complex design such as contrastive loss.\n\nThis argument doesn't make any sense. 
Solving a system of $n$ linear equations can be written in closed form, yet this is not an O(1) operation.\n\nOverall, after reading the paper, I have almost no idea what the proposed method actually does. Equations (6), (7), (8) contain data augmentations, which I suspect may be important for attaining the performance reported in the paper(?), yet they are not really discussed as part of the proposed method. There are numerous inaccuracies, gaps, and I think even some errors (not fundamental, I think, but in the way the writing describes the math). It is certainly possible that I am misunderstanding something here, but as I see it right now, the paper is not in a shape in which it should be published.\n\n\nTypos/grammar:\nl. 87: our proposed method _generalise_ and subsumes\nl. 148: we consider all possible _labeling_ along\n\nSee Weaknesses." } ]
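As a concrete illustration of the EM view discussed in the abstract and reviews above, the following is a minimal sketch, not the authors' implementation, of a single EM-style update for partial labels: the posterior over the latent true label is restricted to the candidate set and then used as a soft target. The function name and the uniform treatment of candidates are assumptions made purely for illustration.

```python
import numpy as np

def em_step_partial_labels(probs, candidate_mask, eps=1e-12):
    """One illustrative EM-style update for partial-label learning.

    probs:          (N, C) current model predictions (rows sum to 1).
    candidate_mask: (N, C) binary matrix; 1 marks labels in the candidate set.
    Returns the posterior over latent true labels and a soft cross-entropy loss.
    """
    # E-step: posterior over the latent true label, supported only on candidates.
    post = probs * candidate_mask
    post = post / np.clip(post.sum(axis=1, keepdims=True), eps, None)
    # M-step objective: cross-entropy of the predictions against that posterior.
    loss = -(post * np.log(np.clip(probs, eps, None))).sum(axis=1).mean()
    return post, loss
```

Semi-supervised data corresponds to a candidate mask of all ones, and a single clean label to a one-hot mask, which is one way to read the "unification" claim in the abstract.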
0LfgE6kvKZ
Local Superior Soups: A Catalyst for Model Merging in Cross-Silo Federated Learning
Federated learning (FL) is a learning paradigm that enables collaborative training of models using decentralized data. Recently, the utilization of pre-trained weight initialization in FL has been demonstrated to effectively improve model performance. However, the evolving complexity of current pre-trained models, characterized by a substantial increase in parameters, markedly intensifies the challenges associated with communication rounds required for their adaptation to FL. To address these communication cost issues and increase the performance of pre-trained model adaptation in FL, we propose an innovative model interpolation-based local training technique called ``Local Superior Soups.'' Our method enhances local training across different clients, encouraging the exploration of a connected low-loss basin within a few communication rounds through regularized model interpolation. This approach acts as a catalyst for the seamless adaptation of pre-trained models in FL. We demonstrate its effectiveness and efficiency across diverse widely-used FL datasets.
https://openreview.net/pdf/182dcb91550832dd0b9bc7c88457fdc06efe869e.pdf
[ { "confidence": 3, "rating": 5, "review_id": "zpDrB6RD5V", "review_text": "In this paper Local Superior Soups (LSS) is proposed to minimize communication rounds in federated learning (FL) using pre-trained models, specifically tackling data heterogeneity challenges. LSS achieves this by employing sequential model interpolation, maintaining connectivity, and integrating diversity and affinity regularization terms. These innovations enable more local training steps and fewer communication rounds, effectively preventing client drift. Designed for adapting pre-trained models in FL, LSS enhances training efficiency, making it well-suited for deployment in edge computing applications.\n\n1. The proposed method can effectively reduce the communication rounds in federated learning (FL) using pre-trained models.\n\n2. The proposed method seems sound.\n\n3. This paper is well written.\n\n1. Only two small-scale image datasets are used in experiments. More large-scale datasets, especially those in other modalities, should be used.\n\n2. More pretrained models should be explored.\n\n3. More tasks besides image classification should be incorporated into experiments.\n\nNone." }, { "confidence": 4, "rating": 5, "review_id": "jEylmdFSFb", "review_text": "This paper proposes a method called Local Superior Soups (LSS), a novel technique for model merging in cross-silo federated learning aimed at reducing communication rounds while enhancing model performance. This paper introduces random interpolation, diversity term, and affinity term to alleviate the need for time-consuming model selection and redundant model training. Rigorous experiments on 4 datasets with 11baselines demonstrate the effectiveness of LSS.\n\n1. This paper discusses the importance of bridging two low-loss valley to reduce communication rounds.\n2. This paper introduces two quantifiable metrics, diversity and affinity, which serve as indicators of model quality..\n3. This paper conducts extensive experiments to illustrate the effectiveness of LSS.\n\n1. The distinction between LSS and similar federated learning methods such as FedProx, which also incorporates weights from global models to regulate client loss, is not clearly discussed.\n2. The subsection 3.3.1 titled \"Random interpolation conserving connected low-loss region.\" lacks mathematical detail to fully understand the interpolation process.\n3. The requirement for clients to receive the interpolated model pool ($M$) could potentially lead to significant communication overheads, which may not present a clear advantage over simpler methods like FedAvg or FedProx.\n4. The connection between Theorem 3.1 and the core methodology of LSS, specifically the diversity and affinity terms, appears tenuous. These terms do not seem to be directly derived from the theorem, which may weaken the theoretical foundation of the proposed method.\n5. This paper should consider referencing relevant literatures or conducting preliminary experiments to support its statments on the part called \"Limitation of previous model soups methods\".\n\n1. In Figure 3, how do the results of LSS compare with those of FedProx? Given the similarities in the core concepts between LSS and FedProx, such a comparison would be insightful for evaluating the distinct advantages of LSS.\n2. The results presented in Tables 1 and 2 suggest that LSS performs well in the initial training rounds. Can the authors clarify whether LSS is primarily advantageous only during these initial rounds? \n3. 
Furthermore, how should the training process be continued post-initialization? Is it feasible to employ LSS throughout the entirety of the training process, or would alternative methods be more effective in later stages?" }, { "confidence": 4, "rating": 5, "review_id": "6hHQfR0H4E", "review_text": "This paper proposes LSS, a model interpolation-based local training technique to reduce the number of communication rounds required. The intuition is to regularize local models to connected low-loss valleys, so the aggregated model may have lower loss. LSS is empirically evaluated on a variety of datasets and types of distribution shifts.\n\n- Figures 1 and 2, though perhaps not very rigorous, are clear and provide intuition to the readers. \n- The proposed algorithm is tested on a variety of datasets and types of distribution shifts.\n\n- Readability: the notation in section 3 is not clear enough. For example, $n$ is used for both the number of data points (section 3.1) and the number of averaged models (Alg 1), which is confusing. \n- (minor) It might be a slight abuse of notation to use $\\mathcal{D}_i$ for both the distribution and the dataset. I suggest using different notations.\n\n- Although Figure 2 is intuitive, it might not be so rigorous: high affinity and high diversity do not necessarily guarantee an aggregated model with low loss. In the example, the two clients’ local models are different “horizontally”, while the major axis of the loss landscape is also horizontal. However, this is not always guaranteed. If the clients’ models are different “vertically”, the proposed method may just fail. Mathematically, in algorithm 1, the gradients of $dist(f_{p_i}, f_p)$ and $dist(f_{p_i}, \\mathcal{M})$ are very likely to be nearly orthogonal. Could you explain intuitively why LSS is expected to perform well in most of the cases? \n- The scope of this paper is limited to FL with a pre-trained model. I understand that the pre-trained model may be larger on average, which makes convergence more challenging. I am curious how LSS is limited to pre-trained models and whether it can generalize to randomly initialized models." } ]
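To ground the model-interpolation discussion above, here is a minimal, hypothetical sketch of uniform weight averaging ("souping") of candidate models plus a simple quadratic affinity penalty toward an anchor model. The exact diversity and affinity regularizers used by LSS are not reproduced; the penalty form and all names below are assumptions for illustration only.

```python
import torch

def soup_average(models_params):
    """Uniformly average a list of parameter lists (one list of tensors per candidate model)."""
    return [torch.stack(ws, dim=0).mean(dim=0) for ws in zip(*models_params)]

def affinity_penalty(local_params, anchor_params):
    """Quadratic penalty keeping a local model close to an anchor (e.g., the global/pre-trained model)."""
    return sum(((p - a) ** 2).sum() for p, a in zip(local_params, anchor_params))

# Toy tensors standing in for two clients' fine-tuned weights.
m1 = [torch.randn(4, 4), torch.randn(4)]
m2 = [torch.randn(4, 4), torch.randn(4)]
avg = soup_average([m1, m2])        # merged ("souped") model
pen = affinity_penalty(m1, avg)     # how far client 1 drifted from the merged model
```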
0Lb8vZT1DB
Reliable Learning of Halfspaces under Gaussian Marginals
We study the problem of PAC learning halfspaces in the reliable agnostic model of Kalai et al. (2012). The reliable PAC model captures learning scenarios where one type of error is costlier than the others. Our main positive result is a new algorithm for reliable learning of Gaussian halfspaces on $\mathbb{R}^d$ with sample and computational complexity $d^{O(\log (\min\{1/\alpha, 1/\epsilon\}))}\min (2^{\log(1/\epsilon)^{O(\log (1/\alpha))}},2^{\mathrm{poly}(1/\epsilon)})$, where $\epsilon$ is the excess error and $\alpha$ is the bias of the optimal halfspace. We complement our upper bound with a Statistical Query lower bound suggesting that the $d^{\Omega(\log (1/\alpha))}$ dependence is best possible. Conceptually, our results imply a strong computational separation between reliable agnostic learning and standard agnostic learning of halfspaces in the Gaussian setting.
https://openreview.net/pdf/4cdcaf263f9d512b95d31f91c8f83ad1f8c92e4f.pdf
[ { "confidence": 2, "rating": 7, "review_id": "4Ffq1nvIHq", "review_text": "This paper studies the problem of agnostic reliable learning with Gaussian margin. It gives a novel algorithm with improved running time and sample complexity bound and this suggests that agnostic reliable learning is easier than agnostic learning. It also gives a Statistical Query lower bound matching some terms of the upper bound as an evidence that the upper bound is tight.\n\n1. The result of this paper is novel and complete.\n\n a. The running time and sample complexity of the proposed algorithm is a big improvement from the previous $d^{O(1/\\epsilon^2)}$ and the algorithm is completely new.\n\n b. The algorithm, lower bound and analysis are all highly non-trivial and technically sophisticated, showing a deep understanding of the problem. I think there's lots of technical novelties in the paper.\n\n c. The fact that there's a separation between agnostic reliable learning and agnostic learning is interesting and it's a contribution to formally establish this.\n\n d. The paper gives a non-trivial SQ lower bound matching some terms of the upper bound, suggesting its tightness.\n\n2. The writing of the paper is decent. Enough background knowledge is given. Math related parts is clearly defined and the proof is rigorous. The paper is organized and non-technical part is not hard to follow.\n\n1. There are some room for improvement in writing.\n\n a) As someone who is quite familiar with PAC and many classical learning problems but not familiar with reliable learning/learning Gaussian margin halfspaces, I find the introduction to the problem could be more clear, especially I think you can do a better job explaining the adversary's strategy and behavior, the \"corrupt negative labels are free\" part is worth more insights.\n\n b) I do find the paper really technical, but I think it is intrinsic. As someone who is not familiar with your algorithmic framework [DKK+22] and some technical parts (for example, the use of polynomials), I think more explanation could be helpful.\n\n c) Some parts of the writing could be more clear, like \"with high probability\" in the definition.\n\n2. I do find the SQ lower bound a bit weak. It doesn't have a dependence on $\\epsilon$, is there any known lower bounds on $\\epsilon$?\n\n3. The agnostic learning lower bound doesn't have a dependence on $\\alpha$, but the agnostic reliable learning bound has such a dependence, why? You assumed that $\\alpha$ is a constant, if it's not, is there any complications in your conclusion?\n\n1. In this paper and some other papers, the lower bounds are SQ lower bounds. As far as I know, SQ lower bounds are weaker than PAC lower bounds (SQ learnable implies PAC learnable but not the other way around), is there some difficulty to get a PAC lower bound for the problem?\n\n2. As far as I know, active learning doesn't seem to help with agnostic learning halfspaces (Gaussian margin, arbitrary noise), could it improve the sample complexity bound for agnostic reliable learning (with Gaussian margin)." }, { "confidence": 2, "rating": 7, "review_id": "AVPeiP8SHR", "review_text": "This paper considers learning halfspaces with Gaussian marginals in a reliable learning setting, where the learner has to guarantee that the error of the output classifier is less than $\\epsilon$ (and we assume such a classifier exists). 
It is known that the reliable learning problem can be efficiently reduced to agnostic learning, but the sample complexity for agnostic learning is high ($d^{O(1/\\epsilon)}$). This paper proposes an algorithm that has a much better sample complexity (polynomial in d, but still quasi-polynomial in $1/\\epsilon$), and it provides a lower bound showing that one cannot do much better w.r.t. d.\n\n- This paper considers an interesting learning theory problem.\n- The results look nontrivial.\n- The methods look sound, though I'm not familiar with the techniques used here and did not check the proofs.\n- It is written clearly, and explains the intuition well.\n\n- Though this is an interesting theory problem, it is not entirely clear to me how significant the results are. Specifically, what is the importance or implication of the computational separation between agnostic and reliable learning found in this paper, especially given that the proposed algorithm is still super-polynomial.\n\nN/A" }, { "confidence": 2, "rating": 8, "review_id": "ctoq1U68Wk", "review_text": "This work studies agnostic learning of halfspaces in the reliable learning model, which guarantees a halfspace with nearly no false positives and a nearly optimal false negative rate, where the optimal false negative rate is defined relative to a class of halfspaces with no false positives. The authors prove sample and computational bounds for this learning task under a standard Gaussian distributional assumption, dramatically improving the previously known bounds that followed from reduction to general agnostic learning under the same distributional assumption. They also show a statistical query lower bound of $d^{\\Omega(\\log(1/\\alpha))}$.\n\nThis work furthers our understanding of the sample and computational complexity of learning halfspaces under challenging noise models. The techniques used to obtain the algorithmic result are very interesting, and while technically involved, the overview of the proof approach in the main body is well-structured and modular (if still hard to follow as someone with little familiarity with the related work).\n\nWhile the page-limits are restrictive, I would have benefited from some additional handholding even in the overview. The introduction of Lemma 2.5 was particularly opaque, for instance. \n\n\nTypos/suggested edits:\n\nLine 14. “The problem of learning halfspaces is one the classical”\n\nLine 40. “has since been extensively studies”\n\nLine 53. “minimizing a lost function”\n\nLine 69. “as a reliable agnostic learning for Gaussian halfspaces”\n\nLine 84. missing close parens\n\nline 94. “reduce the fully reliable learning”\n\nline 105. “This implies that that”\n\nline 214. “Let D be joint distribution”\n\nAlgorithm 1 caption “General Halspaces”\n\nLine 511/525 “Reliable learning halfspaces”\n\nIn line 222, I’m confused about the signs. Doesn’t the interval $[t^*, \\infty]$ correspond to positive labels, and don’t we want the expectation of p within this region to be negative?\n\nHow does the equality in line 241 follow from Lemma 2.5?" }, { "confidence": 3, "rating": 7, "review_id": "s3akL7HMKx", "review_text": "This paper studies reliable learning of halfspaces in $d$ dimensions. 
Reliable learning is a framework in learning theory in which the learning algorithm is required to output a classifier $f$ satisfying:\n- the probability that $f$ makes a false-positive error is at most $\\epsilon$;\n- the probability that $f$ makes a false-negative error is at most $opt + \\epsilon$, where $opt$ is the smallest error rate achievable by a classifier in the class $\\mathcal{G}$ that has zero false-positive error.\n\nThe work gives a reliable learning algorithm for the class of $\\alpha$-biased halfspaces when the data marginal is the standard Gaussian distribution. A halfspace is $\\alpha$-biased if on a Gaussian input it has probability at least $\\alpha$ of taking either of the two possible output values. The run-time of the algorithm is $d^{O(\\log(\\min(1/\\alpha,1/\\epsilon)))}\\min(2^{\\log(1/\\epsilon)^{O(\\log(1/\\alpha))}}, 2^{\\mathrm{poly}(1/\\epsilon)})$. \n\nThe algorithm first finds a candidate direction by estimating the Chow tensor. Subsequently, the algorithm improves this hypothesis by performing a certain random walk. \n\nIt is shown that any statistical-query algorithm has to take at least $d^{\\log(1/\\alpha)}$ time for this task. Statistical-query algorithms are a wide family of algorithms that includes virtually all algorithms studied in learning theory.\n\n- Reliable learning is a natural framework asking for an approximately-best classifier that rarely makes false positives. Considering halfspaces over Gaussian data is arguably the most natural setting to study. However, prior to this work little was known about this question.\n- The run-time compares favorably with the run-time of $d^{\\mathrm{poly}(1/\\epsilon)}$ that is known to be best for the more challenging agnostic model. This is true for all values of the bias $\\alpha$, but is especially true when $\\alpha$ is a small constant.\n- The methods developed in this work seem potentially interesting in their own right.\n\n- It is not clear that the dependence on $\\epsilon$ is the best it can be.\n- None of the algorithms runs in fully polynomial time in all parameters.\n\n- I think there is a missing parenthesis at the end of Theorem 1.3.\n- Is it possible that for every small constant $c$, if $\\alpha$ is promised to be at least $c$, then there is an algorithm running in time $\\mathrm{poly}(d/\\epsilon)$? \n- Is it correct that for these halfspaces your algorithm runs in time $d^{O(1)} 2^{\\log(1/\\epsilon)^{O(1)}}$?" } ]
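For readers new to the model, the guarantee sketched in the last review above can be stated compactly. This is a paraphrase of the standard reliable agnostic learning requirement in that reviewer's notation, not a quotation of the paper: the learner must output $f$ with
$$\Pr_{(x,y)\sim D}\big[f(x)=+1 \wedge y=-1\big] \le \epsilon \quad\text{and}\quad \Pr_{(x,y)\sim D}\big[f(x)=-1 \wedge y=+1\big] \le \mathrm{opt} + \epsilon,$$
where $\mathrm{opt}$ is the smallest false-negative rate among classifiers in $\mathcal{G}$ that incur zero false-positive error. The asymmetry (false positives must be nearly absent, false negatives only near-optimal) is what distinguishes reliable learning from standard agnostic learning.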
0LXotew9Du
KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
LLMs are seeing growing use for applications which require large context windows, and with these large context windows KV cache activations surface as the dominant contributor to memory consumption during inference. Quantization is a promising approach for compressing KV cache activations; however, existing solutions fail to represent activations accurately in sub-4-bit precision. Our work, KVQuant, facilitates low precision KV cache quantization by incorporating several novel methods: (i) Per-Channel Key Quantization, where we adjust the dimension along which we quantize the Key activations to better match the distribution; (ii) Pre-RoPE Key Quantization, where we quantize Key activations before the rotary positional embedding to mitigate its impact on quantization; (iii) Non-Uniform KV Cache Quantization, where we derive per-layer sensitivity-weighted non-uniform datatypes that better represent the distributions; and (iv) Per-Vector Dense-and-Sparse Quantization, where we isolate outliers separately for each vector to minimize skews in quantization ranges. By applying our method to the LLaMA, Llama-2, Llama-3, and Mistral models, we achieve < 0.1 perplexity degradation with 3-bit quantization on both Wikitext-2 and C4, outperforming existing approaches. Our method enables serving LLaMA-7B with a context length of up to 1 million on a single A100-80GB GPU and up to 10 million on an 8-GPU system. We develop custom CUDA kernels for KVQuant, showing that we can achieve up to ~1.7x speedups, compared to baseline fp16 matrix-vector multiplications, for the LLaMA-7B model.
https://openreview.net/pdf/14defcf80798b0426d9bd05b25ab492c11727c8a.pdf
[ { "confidence": 4, "rating": 7, "review_id": "ufwrapku8u", "review_text": "KVQuant presents a method for applying low-bit activation quantization to the Key-Value Cache, a major bottleneck in long context LLM's generation inference. The authors propose strategies (per-channel, per-token, pre-RoPE) tailored to the distribution characteristics of keys and values, as well as the distribution shift introduced by RoPE. They effectively mitigate the overhead introduced by their approach through kernel optimization.\n\nFor the quantization process, they draw inspiration from SqueezeLLM, employing a non-uniform, dense-and-sparse method to handle outliers efficiently. They also use offline calibration to manage dynamically induced quantization scale and outliers from KV activations.\n\nThe proposed KVQuant method demonstrates the least perplexity degradation in low-bit kv-cache quantization compared to FP16. Additionally, it outperforms the latest KV Quantization baseline (KIVI) based on group-wise quantization in terms of retrieval accuracy and LongBench performance.\n\n- Paper's writing is clear and easy to follow. It clearly explains why the kv-cache is the main bottleneck in long context scenario and highlights the need for quantization in memory-bound situations during LLM generation inference, making the importance of the paper easily understandable for the reader.\n- Beyond merely presenting performance, the paper minimizes the overhead of the proposed methods (Per-Channel Key Quantization, applying RoPE on-the-fly, offline calibration) through kernel optimization.\n- The strengths of this paper are clearly highlighted through comparisons with recent KV quantization baseline, KIVI.\n- The paper shows evaluation results not only with generation style (PPL) but also in long context prefill processing results (longbench, longeval).\n\nSince this paper is very solid, I don't see any major weakness in this paper. However, it is more focused on technical strategies for effectively applying quantization to the KV cache rather than presenting innovative or novel ideas. However, such efforts are crucial for making LLMs more accessible, so it is not considered a major weakness of the paper.\n\nAs a minor weakness, due to the extensive experimental results and content, there are many appendices linked in the main text and frequent parenthetical explanations, which can make the paper somewhat challenging to follow. There is room to improve readability by filtering out and emphasizing key points in the main text.\n\n- Do you have any insights into the cause of the disparate characteristics in the distributions of keys and values?\n- Can the distribution characteristics of Key and Value be generalized to widely used popularized LLMs? (e.g., Mistral, Gemma-2, Phi-3, …)\n- I am curious about the specific method used to measure PPL. I might have missed it, but when measuring generation PPL on wikitext and C4, did the authors use a teacher forcing style by providing all input tokens at once, applying quantization to the KV embeddings, and measuring loss with the final logits for all tokens? It would be helpful to have a detailed explanation of how PPL was measured." }, { "confidence": 4, "rating": 6, "review_id": "ttHdZytFsF", "review_text": "This paper proposes KVQuant, which consists of 4 techniques for improving the performance of low-bit KV cache quantization. 
By observing the distribution of Key and Value cache, it proposes to use channel-wise quantization for Key cache before RoPE and token-wise quantization for Value cache. It also adopts non-uniform quantization instead of uniform quantization to improve performance. It also leverages per-vector dense-and-sparse quantization and attention sink-aware quantization to isolate outliers. Experiments on Wikitext-2 and C4 are conducted to evaluate the proposed method.\n\n1.\tThe analysis of the KV cache distributions provide some insights to KV cache quantization.\n2.\tBased on the observation, the proposed techniques are reasonable for KV cache quantization.\n3.\tLatency of the implemented kernels is reported.\n\n1.\tThe paper seems to be a combination of several existing quantization methods (or minor revision of existing methods), e.g. choosing to use per-channel quantization for Key cache, using non-uniform quantization, and dense-and-sparse quantization to improve performance. It is more like a technical report.\n2.\tBased on the previous point, the paper writing is not clear enough. The paper proposes too many techniques and details are not fully presented in the main text of the paper. For instance, how is the non-uniform quantization used in this paper specifically implemented?\n3.\tThe evaluation section lacks detailed comparisons of the existing KV cache compression methods (like H2O, GEAR, KIVI).\n4.\tThe experiments are only conducted on Wikitext-2 and C4 datasets and reported the PPL. The effectiveness is not very well proven. Other challenging tasks such as reading comprehension, math problem solving, or code generation should be evaluated.\n\n1.\tQuantization signposts appear multiple times in the article, what exactly is this?\n2.\tThe paper uses a calibration set to optimize the quantization parameters. Will this lead to overfitting of the model? Will calibrating a quantized model on one dataset affect its performance on other datasets?" }, { "confidence": 2, "rating": 7, "review_id": "2OCL91j86f", "review_text": "This paper proposes KVQuant, a quantization framework for enabling long context window inference through compressing KV cache activations. Specifically, the KVQuant framework incorporates several techniques including per-channel key quantification, per-RoPE key quantification, non-uniform KV cache quantification, and per-vector dense-and-sparse quantification. Experiments on several datasets and LLMs show that KVQuant can achieve low precision (3-bit) quantization with a relatively low impact on model performance.\n\n- Addressing memory constraints on context window length by compressing KV cache activations is a timely and important research area.\n\n- Handling outliers before RePE and non-uniform quantization by considering sensitivity are good insights.\n\n- Extensive experiments were conducted, covering prominent models such as Llama, Llama-2, Llama-3, and Mistral, on both Wikitext-2 and C4, which verified that KVQuant can achieve 3-bit quantization with < 0.1 perplexity degradation.\n\n- Custom CUDA kernels are developed to accelerate inference.\n\nIn the per-vector dense-and-sparse quantization step, the numerical outliers are stored as high precision in a separate sparse matrix. It is not quite clear how to achieve this in practice in a hardware-friendly way without affecting inference speed. 
Additionally, the offline calibration step implicitly assumes access to data samples drawn from a distribution similar to the one seen at inference time.\n\nPlease refer to weaknesses." }, { "confidence": 3, "rating": 5, "review_id": "13uYx2SqFm", "review_text": "The paper presents KVQuant, a low-precision quantization method for KV cache activations in LLMs to reduce memory consumption during inference with large context windows. KVQuant applies per-channel pre-RoPE quantization to the Key cache, exploits a non-uniform datatype for quantization, and keeps the per-vector outliers in full precision. KVQuant achieves negligible perplexity degradation with 3-bit quantization on Wikitext-2 and C4, outperforming existing methods, and allows serving LLaMA-7B with up to 1 million context length on a single A100-80GB GPU.\n\n+ The paper addresses a crucial issue in the use of LLMs for long-context applications, where memory capacity is a significant limitation. \n+ The paper is well-written and easy to follow.\n+ The observation regarding the distribution of Key activations and its impact on quantization is particularly insightful, contributing valuable knowledge to KV cache quantization.\n+ The paper includes practical, real-world measurements of speedup using specialized CUDA kernels, demonstrating the tangible benefits of their approach.\n+ A thorough ablation study is presented, which helps in understanding the design choices made in KVQuant.\n\n- The paper does not sufficiently clarify the baseline methods and implementations used for comparison.\n\n- What is the FP16 baseline implementation when evaluating the speedup of the KVQuant kernel?\n- Compared to post-RoPE quantization where post-RoPE Keys are stored, why can recalculating the RoPE in the KVQuant implementation achieve faster speeds?" } ]
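To make the per-channel versus per-token distinction discussed above concrete, here is a minimal sketch using plain symmetric min-max quantization. KVQuant itself uses non-uniform, sensitivity-weighted datatypes, pre-RoPE quantization, and outlier isolation, none of which are reproduced here; the tensor shapes and the quantizer are illustrative assumptions only.

```python
import torch

def fake_quantize(x, dim, n_bits=3):
    """Symmetric min-max quantize-dequantize along `dim` (illustration only)."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = x.abs().amax(dim=dim, keepdim=True).clamp(min=1e-8) / qmax
    return (x / scale).round().clamp(-qmax - 1, qmax) * scale

keys = torch.randn(1024, 128)     # (tokens, channels)
values = torch.randn(1024, 128)
keys_hat = fake_quantize(keys, dim=0)      # per-channel: scale computed over tokens
values_hat = fake_quantize(values, dim=1)  # per-token: scale computed over channels
```

The choice of which axis the scale is shared over is exactly what the reviews refer to when contrasting Key and Value cache distributions.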
0KvYLaTBTE
Latent Plan Transformer for Trajectory Abstraction: Planning as Latent Space Inference
In tasks aiming for long-term returns, planning becomes essential. We study generative modeling for planning with datasets repurposed from offline reinforcement learning. Specifically, we identify temporal consistency in the absence of step-wise rewards as one key technical challenge. We introduce the Latent Plan Transformer (LPT), a novel model that leverages a latent variable to connect a Transformer-based trajectory generator and the final return. LPT can be learned with maximum likelihood estimation on trajectory-return pairs. In learning, posterior sampling of the latent variable naturally integrates sub-trajectories to form a consistent abstraction despite the finite context. At test time, the latent variable is inferred from an expected return before policy execution, realizing the idea of planning as inference. Our experiments demonstrate that LPT can discover improved decisions from sub-optimal trajectories, achieving competitive performance across several benchmarks, including Gym-Mujoco, Franka Kitchen, Maze2D, and Connect Four. It exhibits capabilities in nuanced credit assignments, trajectory stitching, and adaptation to environmental contingencies. These results validate that latent variable inference can be a strong alternative to step-wise reward prompting.
https://openreview.net/pdf/0b6e118d3320c87fc9990c0f1752d8af260e3055.pdf
[ { "confidence": 3, "rating": 6, "review_id": "Ql8RDnWwUq", "review_text": "Authors propose a novel method Latent Plan Transformer (LPT). The key idea is to modify Decision Transformer (DT) approach by adding latent variable conditioning instead of return-to-go. This latent variable is assumed to represent a \"plan\" that the agent will follow. The motivation of replacing return-to-go is the fact that in practice return-to-go might be unavailable during the inference.\n\nTo my prior knowledge, authors provide a novel idea of usage of latent variable for decision making with autoregressive generative model. \n\nAuthors provide strong empirical results on a diverse range of tasks which show that the proposed method has much higher performance than other types of the DT.\n\nThere is a chance I did not understand some parts of the work which lead to the lack of intuition behind the behavior that we observe. \n\nThe problem is that we sample some random variable $z$ that is conditioned on the final return and just plug this fixed into the Transformer's cross-attention (if I understood everything correctly). This provides better results into a better performance than DT even when it has access to the step-wise return to go. It seems very strange as intuitively updated return-to-go should let the model to adapt to the situation in which the agent ended up and change the behavior according to it while fixed latent variable might only provide the direction which the agent should follow and not updated. Can it be the implementation/hyperparameters differences? Baselines scores are taken from previous works and not claimed to be reproduced with LPT codebase and it is not provided. Or is it just some good latent representation of the return-to-go?\n\nSome experimental results are omitted.\n\n1) What are results of DT/QDT if the same code is used for their training?\n\n2) What is the performance of the LPT using Antmaze medium/large tasks? Umaze tasks are not very representative. It is claimed that performance there is poor and appendix shows that there are some problems with those datasets. Anyway, what are the results? What if we remove the trajectories of length 1 and train all baselines and LPT using modified datasets?\n\n3) How is the Figure 3 (left) obtained? It is said that $z_0$ is sampled from isotropic normal distribution but why is it in different t-sne space?\n\n4) Ablation: what if we remove the return-to-go conditioning during the training of the U-Net? What if we remove the trajectory conditioning during the training of the U-Net? I find these missing.\n\n5) What kind of distribution over $z$ do we obtain? I would recommend to try some toy example where we have $z$ with dim of 2. Is it much different from gaussian?" }, { "confidence": 3, "rating": 6, "review_id": "oZlm0kgip5", "review_text": "The paper introduces the Latent Plan Transformer (LPT), a novel framework for trajectory generative modeling in the absence of step-wise reward. This framework employs a top-down latent variable model, using a temporally-extended latent variable z to represent a plan for decision-making. The framework comprises three components connected by the latent plan, a neural transformation of a Gaussian noise, a transformer-based trajectory generator, and a return estimator. LPT is optimized through maximum likelihood estimation on offline datasets composed of trajectory-return pairs. 
During testing, a latent plan is inferred based on an expected return, after which the trajectory generator is applied to extract actions. The framework is extensively evaluated across several environments, including gym locomotion (including antmaze), kitchen, maze2d, and connectfour. LPT addresses the challenge of temporal consistency while demonstrating competitive benchmark performance compared to baselines and excelling in various aspects, spanning credit assignment, trajectory stitching, and dealing with environment contingencies.\n\n- LPT creatively enables learning from trajectory-return pairs without any step-wise reward, while resolving the temporal consistency issue.\n- The paper provides a detailed analysis from a sequential decision-making perspective, identifying the significance of plan prediction in enforcing temporal consistency.\n- Exploitation-inclined Inference provides a simple and flexible way to control exploration and exploitation for latent plan sampling, which generally leads to better plans given the evaluation results.\n- The method is thoroughly evaluated, and LPT-EI exhibits superior performance compared to final-return baselines and strong stitching capabilities.\n- The paper is well-written and well-structured.\n\n- Training and inference may suffer from inefficiency because of MCMC sampling in more complex scenarios, especially when modeling high-dimensional distributions (e.g., image-based benchmarks) or requiring high accuracy.\n- As mentioned by the authors, LPT may fail on datasets with a skewed distribution of trajectory lengths (e.g., antmaze-large) due to its sequence modeling nature.\n\n- Lines 77-78: duplicate \"Consider\" sentences.\n- What's the difference between $\\bar{\\theta}$ and $\\theta$ in eq. 9?\n- How is the expected return chosen for each dataset?\n- In Figure 3, are the testing $z_0$s sampled based on unseen returns? For LPT, we need to choose an expected return before the latent plan is sampled; in many cases, we may want to sample a trajectory with a return even beyond the highest one in the training data. Therefore, it would be valuable for the authors to provide an analysis of how LPT performs across a range of returns to assess its robustness when interpolating and extrapolating out-of-distribution returns." }, { "confidence": 3, "rating": 7, "review_id": "BejgPaw3jj", "review_text": "Building on the idea of the decision transformer, the paper introduces a new generative-model-based decision-making agent called the Latent Plan Transformer (LPT). Instead of directly generating the trajectories and returns as in the prior work, LPT would first generate a latent vector, and then generate the trajectory and its return conditioned on the latent vector. The idea of introducing this latent vector is to view it as a plan which provides the agent with a temporal-consistency guideline in the long decision-making process. Experimental results show improved performance of LPT over existing baselines in robotic and navigation tasks.\n\n- The idea of having a plan for decision-making is very interesting. The discussion in Section 4 gives some intuitive reasons why having a plan may help in long-range problems where temporal consistency is an issue.\n\n- Experimental results show strong performance in various tasks. 
Especially in the maze domains with long-range delayed rewards, the performance improvement from DT to LPT highlights the benefits of the latent plan.\n\n- Although there are motivations and some high-level discussions provided, it would probably be more convincing if those ideas had a connection to some theoretical analysis in MDP representation learning.\n\n- The ablation study in A.5 comparing different latent priors seems important, but little discussion is provided. Intuitively, DT may be viewed as LPT with a trivial prior. Are there more comparisons with different priors for the latent vector? Since there is a UNet used in LPT, LPT may end up having many more parameters than DT. Are the sizes of the models adjusted to account for this extra prior network in LPT?\n\n- Are there derivations for Eq. (9)? Why is the second term in (9) between the two parameters of the same model, one with stop_grad( )?\n\n- Is the comparison with baselines fair in the sense that all the models have similar parameter sizes?" } ]
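As a toy illustration of the "plan as a fixed latent" behavior questioned in the first review above, the sketch below maps a target return to a latent once and then reuses it unchanged at every decoding step. All module sizes and interfaces are invented for illustration and do not reflect LPT's actual architecture (which uses a UNet prior, a Transformer trajectory generator, and MCMC posterior sampling).

```python
import torch
import torch.nn as nn

prior = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 16))        # return -> latent "plan" z
policy = nn.Sequential(nn.Linear(16 + 8, 64), nn.ReLU(), nn.Linear(64, 4))   # (z, state) -> action logits

target_return = torch.tensor([[300.0]])
z = prior(target_return)            # inferred once, before execution
state = torch.zeros(1, 8)
for t in range(5):                  # z is reused unchanged at every step (no return-to-go updates)
    action_logits = policy(torch.cat([z, state], dim=-1))
    state = torch.randn(1, 8)       # placeholder for an environment transition
```

The contrast with a Decision Transformer is that the conditioning signal here is not decremented step by step, which is precisely the behavior the reviewer found counter-intuitive.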
0KseSacluJ
CoFie: Learning Compact Neural Surface Representations with Coordinate Fields
This paper introduces CoFie, a novel local geometry-aware neural surface representation. CoFie is motivated by the theoretical analysis of local SDFs with quadratic approximation. We find that local shapes are highly compressive in an aligned coordinate frame defined by the normal and tangent directions of local shapes. Accordingly, we introduce Coordinate Field, which is a composition of coordinate frames of all local shapes. The Coordinate Field is optimizable and is used to transform the local shapes from the world coordinate frame to the aligned shape coordinate frame. It largely reduces the complexity of local shapes and benefits the learning of MLP-based implicit representations. Moreover, we introduce quadratic layers into the MLP to enhance expressiveness concerning local shape geometry. CoFie is a generalizable surface representation. It is trained on a curated set of 3D shapes and works on novel shape instances during testing. When using the same amount of parameters with prior works, CoFie reduces the shape error by 48% and 56% on novel instances of both training and unseen shape categories. Moreover, CoFie demonstrates comparable performance to prior works when using even 70% fewer parameters. Code and model can be found here: https://hwjiang1510.github.io/CoFie/
https://openreview.net/pdf/eed236b796e43c4fcce6c86d46c5cd3896968f4d.pdf
[ { "confidence": 5, "rating": 3, "review_id": "68aVC64Y5X", "review_text": "This paper proposes a new architecture for SDF auto-decoding task. It divides the whole SDF points into voxels and builds local coordinates for each surface patch included in valid voxels. Respective shape coding is learned separately and a generalizable MLP is used for SDF decoding. For MLP, a quadratic layer is proposed to better fit SDF values. Compared to the selected baselines, the proposed method shows great performance on both synthetic and real datasets.\n\nThe paper theoretically proves that ReLU based MLP is not enough for modelling the quadratic surface and thoroughly explains the difficulty in training local coordinate transformation. The results on two datasets show that local shape coding and hybrid representations have good potential for shape decoding.\n\nThe evaluation on the main assertions of this paper is not thorough. The method should be evaluated in three levels – accuracy, scalability and generalizability. \n\n1. The first one has been partially demonstrated by comparing with the selected baselines. However, here a SOTA baseline - Neural Kernel Surface Reconstruction (CVPR’23) - is missing.\n \n2. Scalability is not studied by the paper. Only object level datasets are tried. Large scale datasets such as CARLA should also be evaluated to demonstrate the ability of the model. This is a reasonable experiment for a locally coded model, because it only needs to introduce more and more patches for larger cases. \n\n3. Generalizability is one of the selling points for this paper. However, there are some issues related to the baselines and experiment settings. \n\nFirst of all, the paper states that the baseline NGLOD is a per-scene model, not a generalizable model, so NGLOD naturally could have better performance. However, NGLOD also has generalization ability according to the original paper, only LOD features are trained per-scene while keeping the MLP fixed, which is actually a similar setting to this paper. \n\nSecondly, there are two kinds of generalizability: across domain and across density. In this paper only generalizability across domain, i.e. object level synthetic to real, is tried. Scene level, for example ShapeNet and Synthetic Room to ScanNet or Matterport3D, and generalization across density, that is trained and inferred on different sampling density, are missing.\n\n4. Besides the experiments, there is one problem in theory derivation: \n\nIn the proof of proposition 1, equation 14 seems unreasonable to be equal. It is suggested to explain to what extend these two expressions could be regarded as equal. If equation 14 is not equal, how the final equation could be held?\n\n5. Missing reference(s): Line 114, missing reference(s) for using quadratic surface patch\n\nBesides the weaknesses, there is one more question: currently all points in one voxel are assumed to be related to the surface within the same voxel, what if the points near the boundary of the voxel are related to the surface included in the nearby voxel?" }, { "confidence": 4, "rating": 7, "review_id": "hcCG3cH0u5", "review_text": "### Motivation\n- In the realm of implicit 3D shape representations, local-based solutions [4, 19, 32] (which decomposes the target shape into a set of local surfaces to model) provide higher accuracy but at the cost of a higher number of parameters to optimize. 
\n- The authors argue that this number of parameters can be reduced by disentangling the actual geometry and the transformation (orientation, translation) of these local patches, modeling the latter separately.\n\n### Contributions\n- The authors thus introduce CoFie, a novel approach that utilizes an explicit coordinate field to transform local shapes, aligning them to reduce spatial complexity and improve the MLP-based implicit local representation of 3D shapes.\n- Theoretical and empirical evidence is provided w.r.t. the benefits of their formulation, as well as to justify other technical choices, e.g., the usage of quadratic layers to complement their MLPs.\n\n### Results\n- Experiments show that CoFie reduces the shape error by 48% to 56% compared to traditional generalizable methods (e.g., DeepSF [4], DeepLS [24]) and achieves comparable performance to prior works with 70% fewer parameters.\n\n### Relevance\n- This paper addresses the challenge of balancing accuracy and compactness in neural surface representations, a relevant and generic issue in 3D shape modeling.\n\n_(somewhat ordered from most to least important)_\n\n### S1. Extensive Theoretical Grounding and Methodological Explanations\n- The authors spent much effort formalizing and proving their contributions, providing the readers with extensive theoretical background.\n- The Methodology itself is clear and thoroughly explained. While the actual implementation has not been provided, an expert in the art should be able to re-implement this work.\n- The paper is also well illustrated and easy to follow (pipeline figures + qualitative results).\n\n### S2. Novelty.\n- To the best of my knowledge, the idea of disentangling geometry and transformation of local shapes is both new and interesting, and could benefit the 3DCV community.\n- The authors nicely tie together their original intuition and extensive formalization to justify their contribution.\n\n### S3. Convincing Evaluation\n- The proposed CoFie method outperforms other generalizable methods in terms of shape accuracy and weight compactness.\n- Qualitative results and the ablation study are also convincing.\n\n### W1. Somewhat Limited Comparison to SOTA\n- The authors compare to 4 methods (DeepSDF [24], DeepLS [4], NGLOD [30], 3DS2VS [41]), 2 of which are fairly old for the domain (<2020 for [24] and [4]). The literature has many more recent approaches, e.g. [a,b,c], with code or results pre-available, which could have been considered.\n- Moreover, the authors compare to only one shape-specific method (i.e., performing test-time optimization), NGLOD [30]. While such methods tackle a slightly different task, comparing to a single solution makes it hard to draw solid conclusions.\n\n### W2. Partial Evaluation of \"Compactness\" Claims\n- The authors claim that \"CoFie achieves comparable results with prior work using 70% less parameters\" [L56-57]. However, there is a distinction to be made between parameter number (compactness of the model), convergence speed (number of iterations to convergence), and inference time / computational footprint. I find claiming some improvement along one of those dimensions without studying the impact on the others not so meaningful. \n\n### W3. Code Not Available\n- While the authors claim that they are \"committed to releasing our code\" [L16], no code has been provided yet to reviewers.\n\n\n### W4. 
Minor Remarks\n- Citations are missing (empty brackets) in several places [L114, L128, L197].\n- Typos: \"practival: $\\rightarrow$ \"practical\" [L118] ; \"Perfomance\" $\\rightarrow$ \"Performance\" [L270] ; etc.\n\n\n------------\n### Additional References:\n\n[a] Ye, Jianglong, et al. \"Gifs: Neural implicit function for general shape representation.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\n[b] Wang, Li, et al. \"Hsdf: Hybrid sign and distance field for modeling surfaces with arbitrary topologies.\" Advances in Neural Information Processing Systems 35 (2022): 32172-32185.\n\n[c] Lu, Yujie, et al. \"Unsigned Orthogonal Distance Fields: An Accurate Neural Implicit Representation for Diverse 3D Shapes.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\n_see **Weaknesses** for key questions/suggestions._\n\n### Q1. Computational Cost\n- Besides the points made in **W2** above, I wonder what the computational cost of replacing linear layers with quadratic ones is (c.f. [L178-187])?\n\n### Q2. Grid Size and Surface Complexity\n- What is the impact of the voxel grid size on the surface complexity / ability of the model to learn the surface? Intuitively, the larger each voxel is, the more complex/discontinuous the surface that it intersects can become.\n\n### Q3. Purpose of $w$\n- What is the point of introducing $w$ [L168-171] if in practice it is set to the indicator function (simply selecting the voxel $v$ where $x$ belongs)? Is there a point to keeping Equation 5 this generic?\n\n### Q4. Hyper-parameter $k$\n- The authors mention that setting the number $k$ of quadratic layers (following $L-k$ linear ones) to 1 yields the best performance [L187]. Could the authors share the quantitative evaluation behind this conclusion?" }, { "confidence": 3, "rating": 6, "review_id": "Pv49UOYY99", "review_text": "This paper introduces CoFie, a novel neural surface representation method designed to efficiently learn and represent complex 3D shapes. CoFie addresses the challenges of existing methods by introducing a coordinate field that optimizes the representation of local shapes, significantly reducing shape errors and improving the efficiency of parameter usage. The method is based on a hierarchical representation with coarse and fine-grained geometry, utilizing MLP with quadratic layers to enhance the expressiveness of the model. Experimental results demonstrate CoFie's strong generalization capabilities in shape reconstruction tasks, outperforming previous methods in terms of accuracy and efficiency.\n\nCoFie uses a coordinate field to transform local shapes into an aligned coordinate system, reducing spatial complexity and making the learning of MLP-based implicit representations more efficient. By incorporating quadratic layers into the MLP, CoFie enhances the model's ability to capture local shape geometry, improving the quality of shape modeling. CoFie is trained on a curated dataset and can represent arbitrary shapes from novel categories, demonstrating strong generalization capabilities.\nThe theoretical proofs in the paper are sufficient and the articulation is clear.\n\n1.CoFie's performance in detailing is strongly related to the cell size, which hinders its ability to represent fine-grained details. \n2.Representations based on nonoverlapping local patches may not favor expressing rich colors, a capability that implicit representations possess. 
\n3.It excels in displaying regular shapes, such as furniture, but is weak in showcasing shapes with rich curves.\n4. Line 197, citation is missing\n\nI noticed that the article mentions, \"We train CoFie on 1000 shape instances sampled from ShapeNet [5] consisting of chairs, planes, tables, lamps, and sofas (200 instances for each category).\" Why not train on the entire ShapeNet?" }, { "confidence": 4, "rating": 3, "review_id": "raW2IrHvhg", "review_text": "Paper proposes CoFie — 3D shape representation as a set of latents arranged on a regular voxel grid. Each latent encodes an oriented local quadratic patch. This local oriented patch defines local SDF in a local coordinate frame which is decoded via conditional MLP for which the last layer is quadratic: it defines a bilnear form on input vector instead of linear mapping). SDF value for query point x is decided based on the SDF value of local implicit function belonging to the nearest voxel and transformed from local coordinate frame to global. \n\nAuthors train proposed methods and baselines on a single random subsample of 1000 ShapeNet shapes (5 categories, 200 shapes each) and evaluate on holdout ShapeNet shapes and out-of-distribution shapes from Thingi10K. Proposed method outperforms baselines trained in the same limited data regime. The same ShapeNet model is also evaluated with the Neural Geometric Level of Detail method and DeepSDF fitted in the shape overfitting scenario.\n\n— Proposed representation is more efficient compared to regular latent grids and is comparable to surface latent grids (3DILG) in terms of latents needed to represent shape; \n— Method figure is clear and well done and quantitative results are well formatted;\n— Qualitative results look comparable to NGLOD which is a strong baseline for shape overfitting setting;\n— Design choices are clearly ablated (Table 4);\n\n— Proposed method seems to be very similar to AutoSDF and SDFusion: both methods utilize a regular latent grid that encodes local SDF that is used to infer the global SDF. Main difference seems to be in a local SDF decoder that is quadratic (last layer) for proposed method and also encodes SDF in the local coordinate frame (SDFusion and AutoSDF use global coordinates). Overall, it is fine as long as proposed method compares to these similar methods but evaluation of the paper is limited (see below). \n\n— Similarly to other regular latent grids methods (e.g. SDFusion, AutoSDF), this model scales cubically with respect to grid resolution and thus might not be well suited for representation of topologically challenging shapes with thin parts. \n\n— Evaluation of the paper is extremely limited. For point cloud reconstruction (auto-encoding), proposed methods and baselines are trained on 5 ShapeNet categories with 200 shapes per category (overall 1000 shapes). This evaluation is not enough to support the claim that CoFie “can represent arbitrary shapes that belong to any novel category” (LL52-53). These results might only indicate that CoFie might be better representation in low-data regime but in this case evaluation should be done similar to few-shot learning setting: all models should be trained on several different subsets of ShapeNet (e.g. 5) and average/std of test errors should be reported. I also want to note that 3D2VS trained on full ShapeNet is an extremely strong baseline that achieves almost perfect reconstruction quality for some categories (like airplanes). 
Also, the paper only uses Chamfer distance for evaluation and ignores other common measures like IoU and surface F-Score. \n\n\n— The choice of baselines is also limited. DeepSDF is a 2019 paper that uses a very simple Pointnet (not even PointNet++) encoder. DeepLS is a 2020 paper. 3D2VS seems to be the only recent baseline for 3D point cloud reconstruction. It is a very strong baseline but authors have not used pretrained models and trained their version on limited data instead. Given the fact that 3D2VS uses a very high capacity attention-based decoder, this training regime might not be a fair comparison because attention-based models often struggle in low data regimes. For the same reason, 3D2VS might not be a good baseline if the goal of the paper is to show that their representation is compact. In this case, 3DILG might be a more suitable and strong baseline (see below) because it uses a simpler local patch encoder (Pointnet), so it might work better in a limited data regime. Since the model utilizes local surface patches similarly to AtlasNet (see links below), it also can be a strong baseline, especially in low data setting. \n\n— Evaluation in an overfitting setting does not seem methodologically correct. : NGLOD and DeepSDF overfit to single shapes while CoFie was pretrained trained on 1000 shapes. These are completely different settings: overfitting regime measures capability of the model to efficiently fit one shape with preservation of geometric detail, and fitting and evaluation on collection of shapes evaluates the ability of the model to generalize to unseen shapes. This experiment should either be done if fully overfitting setting (similar to NGLOD paper) or in a reconstruction setting (see concerns above). \n\n— Quadratic MLP contribution seems a little bit weak to me. It looks like it is basically equivalent to linear MLP being run on quadratic feature expansion and in this case commonly fourier embeddings of points can be a very strong alternative (this is not tested). \n\n— Paper writing can be significantly improved in clarity. For example, paper does not specify input to auto-encoder for baselines. Is it point cloud or mesh? If it is a point cloud, what is the density of sampling and what sampling was used (FPS, random, poisson disk sampling, etc). How many codes were used to train 3D2VS? 512? What were hyperparameters for baseline training? On the other hand, paper spends a lot of space describing relatively common knowledge like MLP with ReLU activations (LL172-177). Another example is equation (5). On first glance it appears as weighted average, but since authors use voxel indicator function as weight, it means that for each query point x, global implicit function is based only on implicit function of nearest voxel grid latent. \n\n— Some statements in the paper are not correct. For example, the paper states that 3D2VS “employs transformers to predict the shape latent code” (LL258-260) . This is not factually correct: 3D2VS does not represent shape as one vector, it uses latent code clouds (usually 512 codes per shape). \n\n— Representation seems to be computationally expensive. Training on 1000 shapes takes one day on 4 GPUs with 24 GB VRAM. 
\n\n— Some relevant work is missing\n\nAutoSDF: Shape Priors for 3D Completion, Reconstruction and Generation https://arxiv.org/abs/2203.09516 \n\nSDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation\nhttps://arxiv.org/abs/2212.04493 \n\n4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks https://arxiv.org/abs/1904.08755 \n\n3DILG: Irregular Latent Grids for 3D Generative Modeling https://arxiv.org/abs/2205.13914 \n\n\nAtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation https://arxiv.org/abs/1802.05384\n\n— What is the input for the model for auto-encoding settings? Is it a point cloud? Or model back propagates into learnable latent code like DeepSDF? If it is a point cloud, what encoder model is used? If model back propagates into latent code, how come it is generalizable?\n\n— How were baselines trained? Were they trained on the same inputs as the proposed method? Did all methods use the same SDF sampling? As far as I know, 3D2VS uses occupancy as supervision. Was it trained with SDF or occupancy supervision for the paper? \n\n— What motivated size 1000 for a training dataset? Majority of modern shape auto-encoder train either on full ShapeNet of full ShapeNet categories (e.g. ~6000 ShapeNet chairs). If 5 different subsets of ShapeNet of size 1000 would be selected (same categories), how high would be the variance in evaluation across these subsets? I have strong suspicion that results shown int Tables 1 and Tables 2 might not generalize across training subsamples. \n\n— Have you tried using Fourier embeddings of input coordinates instead of quadratic layers? This might be a very strong alternative since it helps coordinate based models to encode high-frequency details very efficiently." }, { "confidence": 4, "rating": 6, "review_id": "70Vd0upa8z", "review_text": "The paper proposes a local prior based method where the model is trained on a dataset and learns a prior over local patches of the shapes, and then at test time reconstructs patches by optimising to minimise the reconstruction error w.r.t. the trained model's prior. While this has already been done in the literature, they propose to learn a coordinate transformation for each local region and initialise the coordinate transformation based on the geometry in that region. They also use a quadratic layer at the end of the network, inspired by their analysis of representing local patches by a quadratic surface.\n\n- Good ablation study showing the benefits of each component. Interestingly the geometric initialization is very important for performance improvement, suggesting that the latent based model has trouble training efficiently.\n\n- A lot of things are not made clear (asked in the questions) which makes it hard to understand the method\n- As mentioned in the strengths, it seems that the benefit of this method is that the coordinate field (and especially its init) it enables the latent based model to train better, rather than being a better representation. This could be explored more deeply. 
Also it makes the question below (\"It seems the proposed method has no regularization, is there a reason for this?\")\n\n- The method is called coordinate field, but it is not actually a true coordinate field: the coordinate transformation is learned discretely as parameters for each voxel, not continuously throughout the domain?\n- Usually shape autodecoding uses regularization on the latent parameters so then when doing test time latent optimisation the latent parameters can be initialized to zero and be \"close\" to many possible shapes. It seems the proposed method has no regularization, is there a reason for this?\n- Line 219: \"compute the derivatives of the SDF\" however at test time you don't have access to the full SDF (or do you?), so how to you get well oriented normals for initialization?\n- Would like more information about what SDF supervision exactly is given at test time. Is it just the surface points, or is assumed the SDF is known at test time as well?\n- Would like timing performance of test time optimization (general ballpark and if this compares to other methods)\n- Line 243: two types of methods or three types of methods?\n- Missing ref on line 114, line 128 and line 197" } ]
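A recurring technical point in the CoFie reviews above is the paper's use of a quadratic final layer in its local-SDF MLP, which one reviewer notes is roughly equivalent to a linear layer applied to a quadratic feature expansion of the input. Since the paper's code is not available, the sketch below only illustrates that idea under stated assumptions: the class names, layer sizes, and the placement of a single quadratic head on top of a small ReLU backbone are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class QuadraticLayer(nn.Module):
    """Layer computing a bilinear (quadratic) form of its input plus a linear term.

    Roughly equivalent to a linear layer applied to the quadratic feature
    expansion [x, vec(x x^T)], as one reviewer points out.
    """
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Parameter(torch.randn(out_dim, in_dim, in_dim) * 0.01)
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim); quadratic term x^T W_k x for each output channel k
        quad = torch.einsum("bi,kij,bj->bk", x, self.W, x)
        return quad + self.linear(x)

class LocalSDFDecoder(nn.Module):
    """Illustrative local-SDF MLP: linear+ReLU layers followed by one quadratic head."""
    def __init__(self, latent_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = QuadraticLayer(hidden, 1)

    def forward(self, latent: torch.Tensor, xyz_local: torch.Tensor) -> torch.Tensor:
        # xyz_local is the query point expressed in the voxel's aligned local frame
        return self.head(self.backbone(torch.cat([latent, xyz_local], dim=-1)))

if __name__ == "__main__":
    decoder = LocalSDFDecoder()
    z = torch.randn(8, 64)        # per-voxel latent codes
    p = torch.rand(8, 3) - 0.5    # query points in the local coordinate frame
    print(decoder(z, p).shape)    # torch.Size([8, 1])
```

One practical consequence visible in this form: the quadratic head adds on the order of out_dim * in_dim^2 parameters and an extra einsum per query, which is presumably why the reviewers ask about its computational cost relative to plain linear layers.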
0JSKjdePGq
When to Sense and Control? A Time-adaptive Approach for Continuous-Time RL
Reinforcement learning (RL) excels in optimizing policies for discrete-time Markov decision processes (MDP). However, various systems are inherently continuous in time, making discrete-time MDPs an inexact modeling choice. In many applications, such as greenhouse control or medical treatments, each interaction (measurement or switching of action) involves manual intervention and thus is inherently costly. Therefore, we generally prefer a time-adaptive approach with fewer interactions with the system. In this work, we formalize an RL framework, **T**ime-**a**daptive **Co**ntrol \& **S**ensing (**TaCoS**), that tackles this challenge by optimizing over policies that besides control predict the duration of its application. Our formulation results in an extended MDP that any standard RL algorithm can solve. We demonstrate that state-of-the-art RL algorithms trained on TaCoS drastically reduce the interaction amount over their discrete-time counterpart while retaining the same or improved performance, and exhibiting robustness over discretization frequency. Finally, we propose OTaCoS, an efficient model-based algorithm for our setting. We show that OTaCoS enjoys sublinear regret for systems with sufficiently smooth dynamics and empirically results in further sample-efficiency gains.
https://openreview.net/pdf/da0a3d70fa3dca0be69b763308ebef152bfbbc86.pdf
[ { "confidence": 3, "rating": 6, "review_id": "UQbkIxHhqd", "review_text": "The paper focuses on continuous-time reinforcement learning. Typically the control and the measurements of continuous-time systems occur at discrete time points. These interventions generally come with a cost. The paper proposes a time-adaptive approach to reduce the number of these interventions in an optimal manner. Therefore they formulate the problem as an MDP, for which standard RL algorithms can be used. Additionally the authors propose a model-based algorithm for their setting.\n\n1. The paper describes an interesting setting. The problem of choosing the optimal measurement and control time-points for continuous-time systems is relevant for many processes in nature and industry.\n\n2. The reformulation of the original problems to discrete-time MDPs is well described.\n\n3. The various experiments visualize and describe the effects of the hyperparameters nicely.\n\n1. The references in the paper lack consistency in formatting. Conference names are sometimes written in full, sometimes abbreviated, and occasionally omitted altogether.\n\n2. One major issue is the lack of details on the learning process. It would be interesting to compare the learning curves of the equidistant and the time-adaptive approach.\n\n3. The experimental section leaves some questions open -> See Questions\n\n\nMinor Weaknesses:\n- Figure 3 comes before Figure 2\n- Page 5, Line 134 says time discretization splits the whole horizon in T/K discrete time points. I think it should be K time points with length T/K\n- It is difficult to discern the decline in interactions for the greenhouse task in Figure 3 first row, which coincides with the drop in episode reward.\n\n1. Regarding the experiments for Figure 2: After what time fraction are the greenhouse and pendulum swing-down usually in the stable equilibrium? As described, if the systems are in the stable equilibrium, the discretized approach continues with interactions, while the time-adaptive approach needs less. It would be interesting to compare the interactions for the time until the stable equilibrium is reached.\n\n2. Figure 2, Pendulum Swing-up: Why does the equidistant approach perform so poorly? In Figure 1, the task could be solved with K=5 number of interactions, without high-frequency changes. Intuitively the equidistant approach should perform well with more than 10 interactions.\n\n3. How many iterations were used for learning the equidistant policy and the SAC-TaCos policy in Figure 2? It appears that learning the equidistant policy should be easier, as it has smaller input and output spaces.\n\n4. In the first row of Figure 3, the interaction cost increases, yet the reward does not decline significantly. It would be interesting to identify the point at which the reward starts to drop." }, { "confidence": 4, "rating": 6, "review_id": "ngR8DJJgyB", "review_text": "The paper introduces a framework for reinforcement learning named Time-adaptive Control & Sensing (TACOS). The TACOS framework reformulates the problem of continuous-time RL into an equivalent discrete-time Markov decision process (MDP) that standard RL algorithms can solve. Additionally, a model-based version of TACOS, named OTACOS, is proposed to reduce the sample complexity.\n\n1. 
The introduction of the Time-adaptive Control & Sensing (TACOS) framework is a novel approach that creatively combines the challenges of continuous-time dynamics with the necessity of minimizing interactions due to cost considerations.\n2. The paper provides a strong theoretical foundation for the TACOS framework, with clear reformulation of continuous-time RL problems into an equivalent discrete-time MDP. \n3. The empirical results support the theoretical claims, with demonstrations of TACOS and OTACOS outperforming traditional discrete-time counterparts\n\n1. Although the problem posed by the paper is novel and interesting, the solution simply involves incorporating time $t$ into the action space for learning, which a little lacks novelty.\n2. The empirical validation is somewhat limited in diversity, primarily focusing on controlled synthetic environments. This limitation might affect the generalizability of the results to more complex or noisy real-world systems.\n\nThe model learned here differs from the typical models in model-based reinforcement learning, as it incorporates time as a transition variable, making the learning process more challenging. Could the authors demonstrate the learning performance of the model itself?" }, { "confidence": 3, "rating": 8, "review_id": "tcXarkC7Fo", "review_text": "This paper proposes a novel time-adaptive RL method framework (TaCoS) for continuous-time systems with continuous state and action spaces. The framework shows that the settings of interactions having costs or a budget of interactions, can be formulated as extended MDPs, that can be solved with standard RL algorithms. The paper theoretically demonstrates this, and empirically verifies this. The paper also empirically demonstrates that TaCoS works across a range of interaction frequencies, and proposes OTaCoS a model-based RL approach.\n\n* The paper is well written, and the intuitive explanations help the reader.\n* The problem of time-adaptive RL for continuous-time systems with continuous state and action spaces is of high significance to the community and is well-motived throughout.\n* The paper contributions are novel to the framework proposed by TaCoS and provide theoretical and empirical evidence.\n* The method works surprisingly well, with the surprise that TaCoS achieves better control performance even with a small budget of interactions.\n\n* L154: “Intuitively, the more stochastic the environment, the more interactions we would require to stabilize the system.” the argument should either have a reference or be empirically verified.\n* Minor: Missing related work reference of (Nam et al. 2021).\n\nReferences:\n\n* Nam, HyunJi Alex, Scott Fleming, and Emma Brunskill. \"Reinforcement learning with state observation costs in action-contingent noiselessly observable Markov decision processes.\" Advances in Neural Information Processing Systems 34 (2021): 15650-15666.\n\n* Is the action applied from the policy kept constant until the next interaction? I presume the action is kept constant. Have you considered parameterizing the action and enabling non-constant actions or even continuous trajectories of actions between interaction times as a form of open-loop planning?" }, { "confidence": 3, "rating": 7, "review_id": "m7WW34JlMd", "review_text": "Reinforcement learning (RL) is effective for discrete-time Markov decision processes (MDPs), but many systems operate continuously in time, making discrete-time MDPs less accurate. 
In applications like greenhouse control or medical treatments, each interaction is costly due to manual intervention. To address this, we propose a time-adaptive approach, Time-adaptive Control & Sensing (TACOS), which optimizes policies that also predict the duration of their application. This results in an extended MDP solvable by standard RL algorithms. TACOS significantly reduces interactions while maintaining or improving performance and robustness. We also introduce OTACOS, a model-based algorithm with sublinear regret for smooth dynamic systems, offering further sample-efficiency gains.\n\n- Handling continuous control is sort of interesting, since this also bridges the gap between real-world RL and RL in simulation.\n\nMy questions are listed below.\n\nThank you for the opportunity to review your paper. I have a few questions and observations outlined below.\n\n1. Real-world applications and global frequencies.\n\n- The discussion around the requirement for different global frequencies in real-world applications is intriguing (line 32), as it highlights the limitations of discrete-time sampling (lines 30-32). However, it seems this limitation is not fully addressed by continuous-time control (or the TACOS algorithm). The fundamental difference between discrete-time control and continuous-time control lies in the amount of environmental information included to compute the policy. For instance, within the total time interval $[0,T]$, discrete-time control computes the policy based on $K$ sampled data points {$t_1, t_2, \\cdots, t_K$}, resulting in an optimal policy only for those specific sampled times. In contrast, continuous-time control computes an optimal policy over the entire continuous duration $[0,T]$ and then applies it at discrete times (constrained by Equations (2) and (3)). Utilizing the policy derived from a continuous-time formulation in a discrete-time setting would provide better generalization ability to obtain higher rewards for $t \\notin$ {$t_1, t_2, \\cdots, t_K$} but ***does not necessarily address the limitation mentioned in lines 30-32.***\n\n2. Definition of Notations:\n\n- The paper lacks precise definitions of certain notations. For example, the policy $\\pi: \\mathcal{X} \\to \\mathcal{U}$ maps states to control inputs, but the exact meaning of $\\pi_{\\mathcal{T}}$ is unclear. In line 80, it is stated that \"$\\pi_{\\mathcal{T}}$ is a policy that predicts the duration of applying the action.\" This suggests $\\pi_{\\mathcal{T}}$ is a prediction variable rather than a policy, as the term \"policy\" typically refers to decisions or actions taken. Additionally, while $t_{i} = \\pi_{\\mathcal{T}}(x_{t_{i-1}}) + t_{i-1}$ is mentioned, the role of $\\pi_{\\mathcal{T}}$ in the objective function needs further clarification.\n\nI also have some minor issues:\n\n- Lines 65-66 and 68-69 discuss \"real-time inference\" or \"adaptive control approach,\" which is a known challenge in real-world reinforcement learning. Including references to related literature, such as the following, would help readers better understand these contributions.\n\n[1] Dulac-Arnold, G., Mankowitz, D., and Hester, T. Challenges of real-world reinforcement learning. arXiv preprint arXiv:1904.12901, 2019.\n\n[2] Hyunin Lee, Ming Jin, Javad Lavaei, Somayeh Sojoudi. Pausing Policy Learning in Non-stationary Reinforcement Learning. 
In ICML, 2024.\n\n[3] Al-Shedivat, M., Bansal, T., Burda, Y., Sutskever, I., Mordatch, I., and Abbeel, P. Continuous adaptation via meta-learning in nonstationary and competitive environments. In ICLR, 2018.\n\n- Also, please discuss what SDE stands for." } ]
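To make the "extended MDP" of the TaCoS record above concrete, here is a minimal, hypothetical Gymnasium wrapper in which the agent's action is a (control, duration) pair, the control is held constant until the next interaction (as one reviewer presumes), and a fixed cost is charged per interaction. The duration bounds, the per-interaction cost, and the assumption of a Box action space are illustrative choices, not the authors' implementation.

```python
import numpy as np
import gymnasium as gym

class TimeAdaptiveWrapper(gym.Wrapper):
    """Extended-MDP wrapper: the agent's action is (control, duration).

    The control is held constant for the chosen number of simulator steps,
    rewards are accumulated, and a fixed cost is charged per interaction.
    Assumes the underlying environment has a Box action space.
    """
    def __init__(self, env, dt_min=1, dt_max=20, interaction_cost=0.1):
        super().__init__(env)
        self.dt_min, self.dt_max = dt_min, dt_max
        self.interaction_cost = interaction_cost
        low = np.append(env.action_space.low, 0.0).astype(np.float32)
        high = np.append(env.action_space.high, 1.0).astype(np.float32)
        self.action_space = gym.spaces.Box(low=low, high=high, dtype=np.float32)

    def step(self, action):
        control, dt_frac = action[:-1], float(action[-1])
        n_steps = max(int(round(self.dt_min + dt_frac * (self.dt_max - self.dt_min))), 1)
        total_reward = -self.interaction_cost          # pay once per decision
        for _ in range(n_steps):
            obs, r, terminated, truncated, info = self.env.step(control)
            total_reward += r
            if terminated or truncated:
                break
        return obs, total_reward, terminated, truncated, info

# Any off-the-shelf agent (e.g. SAC) can then be trained on the wrapped env:
# env = TimeAdaptiveWrapper(gym.make("Pendulum-v1"))
```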
0HRRNEAQFp
A General Protocol to Probe Large Vision Models for 3D Physical Understanding
Our objective in this paper is to probe large vision models to determine to what extent they ‘understand’ different physical properties of the 3D scene depicted in an image. To this end, we make the following contributions: (i) We introduce a general and lightweight protocol to evaluate whether features of an off-the-shelf large vision model encode a number of physical ‘properties’ of the 3D scene, by training discriminative classifiers on the features for these properties. The probes are applied on datasets of real images with annotations for the property. (ii) We apply this protocol to properties covering scene geometry, scene material, support relations, lighting, and view-dependent measures, and large vision models including CLIP, DINOv1, DINOv2, VQGAN, Stable Diffusion. (iii) We find that features from Stable Diffusion and DINOv2 are good for discriminative learning of a number of properties, including scene geometry, support relations, shadows and depth, but less performant for occlusion and material, while outperforming DINOv1, CLIP and VQGAN for all properties. (iv) It is observed that different time steps of Stable Diffusion features, as well as different transformer layers of DINO/CLIP/VQGAN, are good at different properties, unlocking potential applications of 3D physical understanding.
https://openreview.net/pdf/70fa9e1f034acc058ce9df7978cf685daf6ce6d1.pdf
[ { "confidence": 4, "rating": 5, "review_id": "FzlJZlNyho", "review_text": "This paper aims to evaluate how well large-scale vision models encode 3D properties of the scenes depicted in images. The paper proposes a general and lightweight protocol that involves training discriminative classifiers on the features of pre-trained models to assess their encoding of several physical properties of 3D scenes. These properties include scene geometry, materials, support relations, lighting, shadow, occlusion and depth.\n\nThis protocol is applied to several large vision models, such as CLIP, DINOv1, DINOv2, VQGAN, and Stable Diffusion. The findings indicate that features from Stable Diffusion and DINOv2 are particularly adept at discriminative learning for properties like scene geometry, support relations, shadows, and depth. However, they perform less well for occlusion and material properties. The results also show that different layers and time steps of these models are good at different properties, which could have potential applications for 3D physical understanding.\n\n1. This paper proposes a protocol to assess the 3D awareness for current large-scale vision models. The protocol considers various physical properties and is lightweight.\n\n2. The overall structure and writing are easy to follow.\n\n3. The problem investigated in this paper is very interesting. It provides a new perspective to interpret the features learned by large-scale models, especially their 3D awareness. This is very valuable for future studies.\n\n1. It seems unclear to me whether the proposed probes can really reflect the 3D understanding of large vision models, due to properties to be probed and the way to probe them. For example, the material, support relation and shadow properties, from my point of view, can be well identified with 2D clues (e.g. appearance and 2D spatial location). For occlusion and depth, using linear SVM to answer binary questions may not be enough to assess these properties.\n\n2. There is a lack of baselines for the linear probing. It is unclear how a 3D-aware model trained explicitly with 3D data will respond to probe questions (upper bound). Also, it is better to show how the models with little 3D awareness will react to the probes (lower bound). Otherwise, it is less informative just comparing between large vision models, especially when their scores are close and pretty high as shown in table 4.\n\n3. The paper “Probing the 3d awareness of visual foundation models” [1] shares a very similar goal with this paper. It’s suggested to add comparison and discussion with that paper. \n\n4. It would be more comprehensive to evaluate more large vision models such as SAM [2] and MAE [3].\n\n[1] Mohamed El Banani, Amit Raj, Kevis-Kokitsi Maninis, Abhishek Kar, Yuanzhen Li, Michael Rubinstein, Deqing Sun, Leonidas Guibas, Justin Johnson, and Varun Jampani. Probing the 3d awareness of visual foundation models. CVPR, 2024.\n\n[2] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. ICCV, 2023.\n\n[3] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollar, and Ross Girshick. Masked autoencoders are scalable vision learners. CVPR, 2022\n\n1. Have the large-scale models been trained on the dataset chosen in this paper? Does it have influence on the probe results?\n\n2. 
In Table 4, instead of using a random classifier as the baseline, it is fairer to train a feature extractor from scratch with your probing training set.\n\n3. Following the second point in Weakness, could you provide the upper bound and lower bound of the probe? That’s essential to justify that the probing tasks can actually reflect 3D awareness.\n\n4. Currently, the input to the linear SVM is designed as the difference between the averaged pooled features from two patches. Will pooling and subtraction operations lead to information loss? Is there any better way to formulate the linear probing?" }, { "confidence": 4, "rating": 7, "review_id": "r7PTPu4kPL", "review_text": "This paper investigates several mainstream large vision models for their 3D physical understanding. The authors curated a binary classification benchmark covering a set of important 3D properties based on publicly available 2D image datasets and linearly probed different layers and different time steps of the vision models. Consequently, the authors find that DINOv2 and Stable Diffusion perform the best among the probed models, while some properties such as material and occlusion challenge them all. Consequently, this work provides a general linear probing protocol and dataset to evaluate a vision model's capability in 3D physical understanding.\n\n1. Significance. The reviewer thinks this work studies a significant question. With the emergence of large vision models trained on 2D images, how and how well they could be applied to 3D-related tasks is important and of growing interest.\n2. Originality. This work defines a set of properties and corresponding tasks for evaluating 3D physical understanding. This is a novel benchmark for evaluating vision models.\n3. Clarity. The writing is clear and easy to follow. Section 3 presents clear demonstrations of the datasets and task definitions.\n4. Quality. Through and organized experiments are conducted. Grid search is applied to timesteps and layers.\n\n1. Lack of in-depth analysis and understanding. While the paper spends a big part describing the methods and experiments, an analysis of the performance differences seems to be glossed over. As many of the popular vision models are benchmarked, what are the authors' speculations and thoughts on the cause of the varied performance? Does the difference in data and training objectives play a part? The reviewer believes providing further analysis and understanding of the models will make this paper appreciated by more audiences.\n\n- For curating the dataset, how are the regions selected? For those obtained from annotation masks, are all the regions kept in the final dataset? Is there any human intervention and selection?\n- Does the size of regions affect the task difficulty?\n- Does patch size affect the performance? What are the patch sizes of the evaluated models?" }, { "confidence": 4, "rating": 7, "review_id": "noFlYFOsz0", "review_text": "In order to efficiently examine whether large vision models have explicit feature representations for different properties of the 3D physical scene, the paper proposes to linearly probe the features of different layers (and different time steps) from LVMs on specially designed binary classification problems. Extensive experiments demonstrate the effectiveness of the proposed probing scheme.\n\n1. As far as I am concerned, the paper is the first to investigate the 3D knowledge learned by LVMs in a lightweight manner with linear probing.\n2. 
For quality, the authors conduct extensive experiments on several datasets to validate the effectiveness of their strategy.\n3. For clarity, the paper is well written and easy to follow.\n4. For significance, it is crucial to investigate to what extent the pretrained LVMs understand the 3D physical world and the applications.\n\nAlthough the paper unveils the 3D perception abilities of pretrained LVMs using linear probing, it is still very simple and far from real application in downstream tasks. It would be better if the author could give more downstream applications in addition to the one in the appendix.\n\nPlease see the weakness section for details." } ]
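The probing protocol described in the reviews above, average-pooling frozen features over two annotated regions, taking their difference, and fitting a linear SVM on a binary property question, can be sketched as follows. The feature shapes, the toy data, and the helper names are placeholders; which backbone layers or diffusion time steps to probe is exactly what the paper grid-searches and is not reproduced here.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

def region_feature(feature_map, mask):
    """Average-pool a frozen (H, W, C) feature map over a boolean (H, W) region mask."""
    return feature_map[mask].mean(axis=0)

def probe_input(feature_map, mask_a, mask_b):
    """Probe input: difference of the two pooled region features."""
    return region_feature(feature_map, mask_a) - region_feature(feature_map, mask_b)

def run_probe(X_train, y_train, X_test, y_test):
    clf = LinearSVC(C=1.0, max_iter=10000)
    clf.fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))

if __name__ == "__main__":
    # Toy stand-ins for frozen DINOv2 / Stable Diffusion feature maps (H=W=16, C=384).
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(200, 16, 16, 384))
    masks = rng.random(size=(200, 2, 16, 16)) > 0.5      # two annotated regions per image
    labels = rng.integers(0, 2, size=200)                # e.g. "same support surface?" yes/no
    X = np.stack([probe_input(f, m[0], m[1]) for f, m in zip(feats, masks)])
    print("probe accuracy:", run_probe(X[:150], labels[:150], X[150:], labels[150:]))
```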
0Gl5WxY6es
Grounding Multimodal Large Language Models in Actions
Multimodal Large Language Models (MLLMs) have demonstrated a wide range of capabilities across many domains including Embodied AI. In this work, we study how to best ground a MLLM into different embodiments and their associated action spaces, including both continuous and discrete actions. For continuous actions, a set of learned tokenizations that capture an action at various resolutions allows for sufficient modeling precision, yielding the best performance on downstream tasks. For discrete actions, semantically aligning these actions with the native output token space of the MLLM leads to the strongest performance. We arrive at these lessons via a thorough study of seven action grounding approaches on five different environments, encompassing over 114 embodied tasks.
https://openreview.net/pdf/e049d4a2bea8893fcb7487382f651d8e73dc264d.pdf
[ { "confidence": 3, "rating": 5, "review_id": "aZCSkFCyur", "review_text": "This paper studies how to \"ground\" multimodal LLMs (MLLMs) to the action spaces of agents with various embodiments. The authors examine \"Action Space Adapters\" (parameterization strategies; ASAs) for various embodiments and MLLMs and identify principles for constructing ASAs based on the target action space. The authors consider 5 embodied AI environments (3 continuous and 2 discrete) and over 100 tasks.\n\nThe paper addresses an important issue for the future generalization of MLLMs to embodied environments and different action spaces. The authors conduct a variety of experiments in different environments and action spaces. The motivation for the method is given a theoretical grounding, and the experimental results provide concrete recommendations for adapting MLLMs to novel action spaces and environments. The combination of theoretical motivation and actionable takeaways is a strong contribution.\n\nOverall the paper is very dense. This is a complex topic so understandably there is a log packed into 9 pages and as a result there are many parts of the paper that are hard to follow. There are a fair number of parts of the paper that are not clearly written, which impacts my scores. If the authors' rebuttal clarifies these points satisfactorily, I would be willing to reconsider my evaluation. The major points are given in Questions.\n\nSome other issues are:\nSome terms are not clearly defined for the reader. For example, \"codes\" as used in Figure 3. (Again, if I simply missed where this was provided, please point it out). Similarly for \"codebook\".\n\nNo specific example outputs are provided. Some examples of environments are provided in the appendix but there's no clear discussion of, for example, adaptations that the best performing methods get right or wrong. Providing these would help clarify the definitions issue and make the problem much more grounded (ironically) than the plot and charts in the paper currently allow.\n\n1) Can you be clear about the difference between SemLang and Lang? The way it is written right now it reads as if the only difference between the two is that there are different numbers in the sequence being predicted. But since SemLang is predicting tokens that correspond to words, I assume that there is some underlying embedding that when softmaxed produced the token index. In that case, where to the numbers in the vocabulary of Lang come from and how are they being predicted if it's not using an underlying semantic (embedding) representation like SemLang?\n\n2) Section 3.3: \"In the complex environments we consider, the zero-shot performance of MLLM is poor across all ASAs, even with detailed prompts.\" Is this demonstrated anywhere, either in this paper (in the appendix perhaps?) or elsewhere (if elsewhere, please provide the citation)?\n\n3) \"We train with supervised finetuning for CALVIN, Meta-World, HabPick, and BabyAI. We train with reinforcement learning on Language Rearrangement.\" Can you please explain why this does not result in an invalid comparison between the environments, if they are trained for each environment using a) different methods and b) to the specifications of the environment themselves? Does this not risk overfitting the ASA to the environment and the training method? Or am I misunderstanding the goal here?\n\n4) How precisely do the differences in embodiments affect the results? 
Are the embodiments those provided out of the box in each environment or do you do any adjustment or controlling for the effects of the differing embodiments?" }, { "confidence": 5, "rating": 7, "review_id": "WqCsdlFzAL", "review_text": "In this paper the authors present a way to adapt a Vision and Language model to perform action execution tasks in embodied environments. Specifically, systematically evaluate different ways of predicting actions both on task having both discrete and continuous action spaces. Thanks to this evaluation, it is possible to assess which are the most performant action prediction losses to use for different use cases. According to the authors' results, approaches based on VQ-VAEs are able to obtain the best performance in many tasks.\n\n1. One of the first papers that finally sheds light on the different approaches to performing action prediction in embodied environments. This is the most important contribution of this paper and I believe it will be really useful to refer to this set of experiments for future research. \n\n2. They propose a VQ-VAE variant to generate latent codes for encoding actions. These codes can be intended as a way to learn \"latent bins\" to cluster the action space. Additionally, they propose a variant of this model based on the RVQ-VAE architecture to model a set of codebooks that are used to generate more precise actions.\n\n1. The VQ-VAE variants are indeed really interesting and novel. However, I find the description of this method a bit unsatisfactory because it omits some details regarding \"how\" you train these models. See my question below for details. \n\n2. The authors chose a good set of tasks for their evaluation however I believe that a benchmark that would have been perfect for this work is VIMA-Bench because of its focus on systematic generalisation. CALVIN somehow offers this but I don't think it's as systematic as VIMA-Bench.\n\n3. For discrete action spaces they use BabyAI however they only test with a very limited size of the grids and with few tasks. Please see my questions related to this point as well. \n\n4. Some related work missing: \n\n- Team, Octo Model, et al. \"Octo: An open-source generalist robot policy.\" arXiv preprint arXiv:2405.12213 (2024).\n- Pantazopoulos, G., Nikandrou, M., Parekh, A., Hemanthage, B., Eshghi, A., Konstas, I., ... & Suglia, A. (2023, December). Multitask Multimodal Prompted Training for Interactive Embodied Task Completion. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 768-789).\n\n1. I believe that a reader would appreciate the details of how you have arranged the dataset to train the VQ-VAE variants. The appendix contains some information but it doesn't fully address the nature of the data (e.g., \"What is each example?\").\n\n2. Why did you decide to follow the work from Carta et al. considering that it uses only a subset of the trajectory instead of the BabyAI benchmark which contains a range of different tasks of different complexity? Additionally, when you evaluate, are you sure that 100 episodes are enough to experience different configurations from the ones you used at training time?\n\n3. Could you please clarify what is the language instructions for environments that do not have one such as Meta-World?" 
}, { "confidence": 3, "rating": 6, "review_id": "uWwV7cu1Jp", "review_text": "The paper empirically studies how to properly ground MLLMs into embodiments, with a particular focus on the action representations, including the continuous and discrete solutions. The authors conduct a thorough study on 7 methods, encompassing over 114 tasks. The research indicates that for continuous actions, optimal results are achieved by learning action tokens that precisely represent the action distribution (RVQ). For discrete actions, superior outcomes are obtained by aligning actions with the original semantic token space.\n\n1. How to properly ground MLLMs in embodied tasks is important and under-explored.\n2. The paper is the first to systematically and comprehensively study the optimal recipe for action tokens.\n3. The conclusions drawn from the empirical studies could provide guidance for subsequent research.\n\n1. All experiments are performed on LLaVA and LoRA. Conclusions may not be applicable to other MLLM architectures or scales.\n2. Based on existing conclusions, the quality of the paper could be further enhanced if the authors could suggest whether to use continuous or discrete methods, how to break through the accuracy limit of current methods, or where future work should focus on improvements.\n\nPlease see the Weaknesses." } ]
0G0VpMjKyV
Sketching for Distributed Deep Learning: A Sharper Analysis
The high communication cost between the server and the clients is a significant bottleneck in scaling distributed learning for overparametrized deep models. One popular approach for reducing this communication overhead is randomized sketching. However, existing theoretical analyses for sketching-based distributed learning (sketch-DL) either incur a prohibitive dependence on the ambient dimension or need additional restrictive assumptions such as heavy-hitters. Nevertheless, despite existing pessimistic analyses, empirical evidence suggests that sketch-DL is competitive with its uncompressed counterpart, thus motivating a sharper analysis. In this work, we introduce a sharper ambient dimension-independent convergence analysis for sketch-DL using the second-order geometry specified by the loss Hessian. Our results imply ambient dimension-independent communication complexity for sketch-DL. We present empirical results both on the loss Hessian and overall accuracy of sketch-DL supporting our theoretical results. Taken together, our results provide theoretical justification for the observed empirical success of sketch-DL.
https://openreview.net/pdf/5e1b541569f705928d9970e377e8c4ddc68aa971.pdf
[ { "confidence": 5, "rating": 7, "review_id": "9pVZCQZ1wR", "review_text": "This paper considers the sketch-DL framework for distributed / federated learning studied by prior works such as Rothchild et al., Song et al. The authors identify that for these works, their convergence either has a dependence on the dimension (under rather minimal assumptions) which is unfavorable, or assumes something stronger such as the sketched gradient has heavy-hitter coordinates and it is enough to recover using Top-$r$. The model studied in this paper is a deep, feedforward neural net with a 1-Lipschitz activation (say ReLU). They assume the loss function is PL and the eigenvalues of the predicator Hessian are dominated by a few top eigenvalues. Under these assumptions, they manage to remove the linear dependence on the dimension for convergence from Song et al. They also perform experiments to confirm that using sketching matrices such as CountSketch gives better performance than local top-$k$ or FetchSGD due to Rothchild et al. without error feedback.\n\nThis paper resolves a big issue from Song et al., where the convergence depends linearly on the dimension. They are able to provide a tighter analysis and a sharper bound by examining a different set of assumptions that utilize information of the Hessian, going beyond first-order information. This is quite important as in some sense, this paper obtains the best of both worlds: the algorithm it examines is the simplest form of applying linear sketching due to Song et al., where one could easily integrate extra mechanism such as DP into it effortlessly. The new set of assumptions imposed in this paper are quite reasonable, and it also performs experiments to confirm simply applying sketches are enough to guarantee fast convergence.\n\nThere are two main weaknesses of this paper.\n\n* Fair comparison with the analysis of Song et al. This paper makes 3 assumptions: 1. the loss function is PL, 2. the eigenvalues of Hessian are dominated by a few top eigenvalues and 3. a uniform upper bound on the $\\ell_2$ norm of the gradient. For fair comparison, the corresponding results in Song et al. are assuming 1. the loss function is strongly-convex and 2. the loss function is smooth. Note that Song et al. does not need to assume a uniform upper bound on the size of all gradients, which is commonly criticized as an unrealistic assumption. Moreover, Song et al. provides a slew of results for convex and non-convex loss function. In my opinion, this paper should provide a more comprehensive comparison with Song et al. For example, what is the crux to remove the linear dependence on the dimension? Is it due to PL, restricted strong smoothness (RSS) or uniform bound on the gradient? I would assume it's due to the RSS assumptions, but the authors should explicitly spell it out. Also, is it possible to remove the gradient upper bound assumption?\n\n* Experimental comparison with Rothchild et al. Figure 1 presented in the main body of the paper only compares the sketching algorithm with FetchSGD without error feedback, and hence obtains better dimension reduction ration v.s. accuracy, which is not surprising, as FetchSGD *requires* error feedback correction to make the algorithm correct. In Figure 2 on page 23, authors did compare with FetchSGD with error feedback, and the \"correct\" FetchSGD does have the superior performance. I do understand authors intention here as the vanilla FetchSGD is nonlinear, hence very hard to integrate any other pieces such as DP into it. 
From this perspective, the sketching algorithm in Song et al. slightly trades the performance for a simpler and more general algorithm. Still, I don't think it's fair to put a figure in the main body comparing the sketching algorithm with a \"wrong\" version of FetchSGD. Authors could probably put Figure 2 in the main body, and provide comprehensive justifications on why one would expect FetchSGD to perform better and why sketching algorithm could potentially yield more use cases in practice.\n\nA few typos:\n\n* In References, citation [56] should be Sketching as a tool for numerical linear algebra instead of Computational advertising: Techniques for targeting relevant ads.\n\n* On line 592, end of page 15, what is the matrix $\\\\|\\cdot \\\\|_2$ norm? Is that the spectral norm?\n\nA few questions:\n\n* Can you extend the analysis to non-PL losses (say the loss is only convex, or even non-convex) similar to Song et al.?\n\n* I think it will be greatly beneficial to have a table comparing the set of assumptions posed in this paper and Song et al., together with different convergence results." }, { "confidence": 4, "rating": 4, "review_id": "UXp12NopdN", "review_text": "This paper provided ambient dimension-independent convergence and communication complexity analysis for sketching-based gradient descent algorithms using the second-order geometry of loss Hession.\n\n* This paper proves that the dimensional dependence comes from the global smoothness assumption and provides a convergence analysis of the sketching method independent of the ambient dimensions.\n\n* The analysis of this paper does not require the heavy hitter assumption and the Top-$r$ sparsification operation.\n\n* This paper studies gradient descent (GD) and uses additional assumptions, limiting the application of the analytical results.\n\n* Comparison experiments are not reasonable, and the experimental results are roughly plotted.\n\n1. Why does this paper not study the more widely used SGD algorithm? This paper studies GD but compares it with FetchSGD in theory and experiment, which is not quite reasonable.\n\n2. The theoretical result of the multiple-local step in Theorem 2 does not converge to 0 with the number of iterations, so why do the authors say that the result is similar to the sublinear convergence rate of SGD? Also, the authors should provide more explanatory notes on why the GD algorithm using multiple-local step is similar to SGD.\n\n3. What is the notation $\\\\mathbf{v}\\_{0}$ in Assumption 3.3? It is not defined in the text nor used elsewhere. Is the notation taken from the literature [28]?\n\n4. As the authors state, this paper does not make innovations to the algorithm. Therefore, the authors should focus on describing the theoretical contributions of this paper and should not demonstrate the superiority of the algorithm by unfair comparison with flawed algorithms (biased compression algorithms without error feedback).\n\n5. The authors did not provide code, nor did they describe the experimental setup in detail. The reviewer wonders if it is feasible to simulate 100 clients for full-batch gradient descent (i.e., a batch size of 500 per client) using a single NVIDIA Titan X GPU. Also, it is not reasonable to run the mini-batch based FetchSGD algorithm using the full batch setting.\n\n6. The symbolic notation and written presentation in lines 137 to 153 of this paper are very similar to that of literature [28, Sections 3 and 4].\n\n7. 
The references of the paper are badly formatted, and many of them do not contain information on publication conferences or journals, e.g. [57, 58, 59, 60, 61, 62, 63, 64, 65, 67, 70, 71, 76]." }, { "confidence": 3, "rating": 6, "review_id": "bjpFaQzX4M", "review_text": "This paper presents a novel analysis of sketching-based distributed learning algorithms for deep neural networks. The authors provide a tighter, dimension-independent convergence analysis by leveraging the restricted strong smoothness (RSS) property of deep learning loss functions. This work addresses a gap between existing theoretical analyses (which suggest dimension dependence) and empirical results (which show competitive performance without such dependence). The paper provides theoretical guarantees, improved communication complexity bounds, and empirical validation of their approach.\n\n1. The paper presents the first dimension-independent convergence analysis for sketching in distributed deep learning without requiring restrictive assumptions like heavy-tailed distributions or top-r sparsification.\n2. The work bridges a gap between theory and practice, explaining why sketching-based methods perform well empirically despite previous pessimistic theoretical bounds. The authors present experimental results that support their theoretical findings, demonstrating the effectiveness of their approach on real-world datasets.\n3. The paper covers both single-step and multi-step local update scenarios, providing a thorough theoretical treatment.\n\n1. While the paper removes some restrictive assumptions from previous work, it still relies on certain assumptions about the loss function (e.g., PL condition) and the eigenvalue distribution of the Hessian. The robustness of the results to violations of these assumptions could be discussed further.\n2. While the paper provides a thorough theoretical analysis, more discussion on the practical aspects of implementing the proposed approach in large-scale distributed learning systems could be beneficial.\n\n1. Could you provide insights on how your approach might perform on other types of deep learning tasks beyond image classification (e.g., natural language processing or reinforcement learning)?\n2. What are the main challenges in implementing your approach in practical, large-scale distributed learning systems?" } ]
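As background for the discussion above, the following is a minimal sketch of the linear sketch/desketch primitive (a CountSketch) that sketch-DL methods in the style of Song et al. build on: each client sends a b-dimensional sketch S @ g, the server averages the sketches and applies S^T, which gives an unbiased but noisy estimate of the averaged gradient. The dimensions and the single shared seed are illustrative. The linearity is the point the reviewers emphasize: averaging, secure aggregation, or DP noise compose directly with the sketch, unlike FetchSGD's nonlinear Top-r recovery.

```python
import numpy as np

class CountSketch:
    """Linear sketch S in R^{b x d}: every coordinate is hashed to one of b
    buckets with a random sign. Clients transmit S @ g (size b << d); the
    server applies S.T, which is an unbiased but noisy estimate of g."""
    def __init__(self, dim, sketch_dim, seed=0):
        rng = np.random.default_rng(seed)          # seed shared by server and clients
        self.buckets = rng.integers(0, sketch_dim, size=dim)
        self.signs = rng.choice([-1.0, 1.0], size=dim)
        self.sketch_dim = sketch_dim

    def sketch(self, g):
        s = np.zeros(self.sketch_dim)
        np.add.at(s, self.buckets, self.signs * g)
        return s

    def desketch(self, s):
        return self.signs * s[self.buckets]

if __name__ == "__main__":
    d, b, n_clients = 10000, 1000, 8
    cs = CountSketch(d, b)
    grads = [np.random.randn(d) for _ in range(n_clients)]
    # Because the sketch is linear, the server can average client sketches directly.
    avg_sketch = np.mean([cs.sketch(g) for g in grads], axis=0)
    g_hat = cs.desketch(avg_sketch)
    g_true = np.mean(grads, axis=0)
    # The per-round estimate is unbiased but noisy; the sharper analysis above argues
    # this noise does not slow convergence in the directions that matter.
    print("relative error:", np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))
```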
0EfUYVMrLv
Test-time Adaptation in Non-stationary Environments via Adaptive Representation Alignment
Adapting to distribution shifts is a critical challenge in modern machine learning, especially as data in many real-world applications accumulate continuously in the form of streams. We investigate the problem of sequentially adapting a model to non-stationary environments, where the data distribution is continuously shifting and only a small amount of unlabeled data are available each time. Continual test-time adaptation methods have shown promising results by using reliable pseudo-labels, but they still fall short in exploring representation alignment with the source domain in non-stationary environments. In this paper, we propose to leverage non-stationary representation learning to adaptively align the unlabeled data stream, with its changing distributions, to the source data representation using a sketch of the source data. To alleviate the data scarcity in non-stationary representation learning, we propose a novel adaptive representation alignment algorithm called Ada-ReAlign. This approach employs a group of base learners to explore different lengths of the unlabeled data stream, which are adaptively combined by a meta learner to handle unknown and continuously evolving data distributions. The proposed method comes with nice theoretical guarantees under convexity assumptions. Experiments on both benchmark datasets and a real-world application validate the effectiveness and adaptability of our proposed algorithm.
https://openreview.net/pdf/be25a9406407e0296d8240eb457f27cf4070b837.pdf
[ { "confidence": 3, "rating": 6, "review_id": "CU7gMpqu5i", "review_text": "This paper proposes a test-time adaptation (TTA) method focusing on adapting to a non-stationary environment.\nThe proposed method DART aims at minimizing the cumulative loss over time steps, which is equivalent to minimize the distribution gap between the source and current features.\nTo capture unknown changing periods of the environment, DART employs multiple base learners that are reset at different periods.\nDART incorporates the outputs of the base learners with dynamically learned weights to make final predictions.\nExperimental results show that DART can follow continual environmental changes on image corruption datasets.\n\n- S1: Addressing continual adaptation of non-stational environments is practical.\n- S2: Providing the theoretical insights is beneficial.\n- S3: It is interesting that minimizing the distribution gap results in minimizing the regret.\n\n- W1: Equations should be presented accurately. \n  - The argument of $\\hat{L}_t$ in Eq. (4) seems to be $\\hat{L}\\_t(\\phi\\_t,\\phi\\_0)$. \n  - Softmax seems to be missing in the formulation of $\\ell\\_\\text{entro}^t$. \n  - Ada-ReAlign minimizes the norms of the feature statistics between the source and current distributions, but Fig. 1 says that it minimizes KL divergence.\n- W2: Providing more detailed explanations of theoretical analyses in Sec. 3.4 would be beneficial. For example, where do $\\mathcal{O}(T^{-1/3})$ in L221 and $\\mathcal{O}(T)$ in L227 come from?\n- W3: Ada-ReAlign requires to keep $\\log T$ models, which is more than the baseline methods. Comparing memory consumption would be fair.\n- W4: Can we estimate the distribution gap in Eq. (3) accurately? When the batch size is smaller than the number of feature dimensions, the covariance matrix degenerates, i.e., the covariances cannot be estimated as we can take arbitrary subspaces that include all samples in the batch. Even if numerical stability is improved by adding a small constant to the diagonals of the covariance matrix (L153), this problem still remains unless the batch size is larger than the feature dimensions. It would be beneficial to evaluate the distribution gap with other metrics, e.g., optimal transport or MMD, at each round.\n\n- Q1: How do we determine the number of the base learners when $T$ is unknown?\n- Q2: Can we incorporate the idea of the meta-learner and base learners with other TTA methods?" }, { "confidence": 4, "rating": 7, "review_id": "89nc7Y1ipn", "review_text": "This paper proposes a novel algorithm called Ada-ReAlign for sequentially adapting a model to non-stationary environments. The proposed algorithm uses a set of base learners, each equipped with different learning window sizes, with a meta learner that combines their outputs. This online ensemble allows an adaptive projection of the unlabeled data, which exhibits changing distributions, onto the source distribution. Subsequently, entropy minimization is applied to the unlabeled data, facilitating the refinement of representations according to the updated distributions. 
This paper provides theoretical analysis and empirical experiments to justify and validate the effectiveness of the proposed approach.\n\n+ This paper is well organized and easy to follow, all the problem formulation and details of the proposed algorithm are well discussed, the experimental setup is clear to me and the result is convincing.\n\n+ The transformation of continuous test-time adaptation to dynamic regret minimization for representation alignment is novel to me. Based on this transformation, the proposed approach is versatile and capable of handling non-stationary environments without explicitly defining or identifying distribution shift boundaries. \n\n+ The proposed approach is theoretically sound. The dynamic exploration of different sliding window sizes of the data stream is novel and intriguing to me. The algorithm is sound with nice theoretical guarantees.\n\n+ Empirical results show superiority over competing algorithms on two simulated benchmark datasets and a wildlife species classification task of the iWildCam dataset. Many ablation studies are performed and the results seem convincing.\n\n- The proposed algorithm requires the sketch of the source domain feature embeddings during the adaptation, which may limit the use of the proposed algorithm in practice.\n\n- Although the proposed approach uses only a small number of base learners, $n$ base learners means $n$ times slower than a traditional base learner algorithm, which increases the computational burden.\n\nShould the number of base learners used depend on the downstream problem? How is the number of base models determined and the size of the sliding window chosen? Large windows adapt to gradual changes over time, while small windows adapt to drastic changes. It seems that the window size does not depend on the specific problem.\n\nThe meta-learner seems rather complicated. Why this particular design?\n\nIn the proposed algorithm, the number of base learners seems to be fixed. What is the performance when the number of experts is limited to a fixed number, less than the required number?" }, { "confidence": 5, "rating": 7, "review_id": "NGwzB8DVhB", "review_text": "The paper investigates test-time adaptation in a non-stationary environment where unlabeled data batches arrive sequentially with changing data distributions. The authors propose non-stationary representation learning to project the changing distribution back to the original source data distribution and update the classifier with entropy minimization based on the projected representation. Theoretical analysis shows that the proposed method has solid dynamic regret guarantees. Experiments show that the method of representation adaptation is effective in mitigating the distribution shift.\n\n1. The continuous test-time adaptation problem studied in the paper is a popular and crucial area of research for real-world applications of domain adaptation.\n\n2. The proposed non-stationary representation learning is novel and offers non-trivial technical contributions. The authors adapt techniques from online ensemble learning to test-time adaptation, addressing the challenges posed by unknown distribution changes and the problems of high bias and high variance due to the limited amount of unlabeled data available in each round in the data streams.\n\n3. The effectiveness of the method is validated through experiments on both benchmark datasets and a real-world application. In addition, its computational efficiency is well discussed.\n\n1. 
The paper lacks a sufficient review of related work, particularly in the online learning literature. For instance, the method uses dynamic regret as a performance measure and uses techniques from online ensembles. However, readers unfamiliar with this area or these techniques may find it difficult to understand the underlying motivation. \n\n2. The theoretical guarantees focus primarily on convex models and losses, making it hard to adapt them to deep learning models.\n\n3. In the sequential shift, the cyclicality period is fixed by $M$. The performance under conditions of sudden changes or non-cyclic changes is not well studied. Ablation studies are mainly conducted on benchmark datasets, leaving the performance in real-world applications unclear.\n\n1. How does Theorem 1 apply to deep models, considering that the theory was originally formulated for linear models?\n\n2. In the sequential shift, the cyclicality period is fixed by $M$. How does the performance change when the cyclicality period varies?\n\n3. How stable is the performance of the proposed method on the iWildCam dataset with different hyperparameters? are there any ablation studies?" } ]
0DE1dLMW2b
Quantum algorithm for large-scale market equilibrium computation
Classical algorithms for market equilibrium computation such as proportional response dynamics face scalability issues with Internet-based applications such as auctions, recommender systems, and fair division, despite having an almost linear runtime in terms of the product of buyers and goods. In this work, we provide the first quantum algorithm for market equilibrium computation with sub-linear performance. Our algorithm provides a polynomial runtime speedup in terms of the product of the number of buyers and goods while reaching the same optimization objective value as the classical algorithm. Numerical simulations of a system with 16384 buyers and goods support our theoretical results that our quantum algorithm provides a significant speedup.
https://openreview.net/pdf/fd77b25277d5b673393dcea0bc9eca92698b59f8.pdf
[ { "confidence": 3, "rating": 6, "review_id": "8pIqGLS76y", "review_text": "This paper proposes quantum faulty proportional response (FPR) dynamics, a quantum algorithm that mimics classical proportional response (PR) dynamic with quantum speed-up, to compute the \"part of\" market equilibrium. Specifically, a market equilibrium specifies the allocation of each good to each buyer, and this algorithm aims at computing the allocation of some pre-specified good to some pre-specified buyer. The authors theoretically show that FPR has a square-root speed-up over the classical PR algorithm, which is the square root of the minimum number of buyers and goods. Experimental results show that FPR outperforms classical algorithms prominently, with quantum steps simulated by classical steps.\n\nThis paper combines the problem of market equilibrium computation with a quantum algorithm, which is interesting. Theoretical and empirical results demonstrate the effectiveness of using a quantum algorithm to compute the \"part of\" market equilibrium, marking a significant step towards market equilibrium computation. Additionally, this paper is easy to follow.\n\n1. The goal of FPR is to compute the allocation of some pre-specified good to some pre-specified buyer, which seems fairly limited, since classical algorithms aim to compute the allocation of all goods to all buyers. Moreover, comparing the runtime complexity (as well as other complexities) between FPR and PR is not fair due to their different goals.\n2. For experiments, the goal is optimizing the EG objective value, which\nseems to deviate from the original goal of computing the equilibrium allocation.\n\n1. Following the weakness, can these results be extended if the goal is to compute the allocation of all goods to some pre-specified buyer, or some pre-specified good to all buyers? Note that the memory of this task is $O(\\max(m,n))$, which is still within $O(\\sqrt{mn\\max(m,n)})$ runtime.\n2. Since the quantum steps in experiments are simulated by classical steps, and the simulated FPR performs better than other baselines, does this imply that the classically-simulated FPR is actually a classical algorithm that outperforms other classical algorithms in the Fisher market computation problem? Why?\n3. See above weakness 2." }, { "confidence": 4, "rating": 7, "review_id": "Az2DPwh6a2", "review_text": "The paper studies the problem of computing an equilibrium for the Fisher market. Roughly speaking, this is a fractional matching problem with a particular objective function, namely a sum of logarithms of utilities. In the classical world, this problem is usually approximated with a local search algorithm, but the running time is too big for the applications in mind. One approach is to model the problem as a linear program with a convex objective, the so-called Eisenberg-Gale program. This LP is solved with a linear search approach, by simulating the proportional response dynamics. Here the convergence rate is inverse proportional to the number of iterations.\n\nThis is the state of the art, and the paper proposes to compute the proportional response with a quantum algorithm, obtaining roughly a quadratic speedup relative to standard algorithms. The quantum ingredient is the quantum amplitude estimation algorithm. The contribution of the paper is to reduce the proportional response problem to the quantum amplitude estimation problem, and to do a careful analysis concerning the required precision epsilon. 
Numerical simulations are also provided.\n\nThe paper is carefully written, the experiments well documented, and I really appreciate the discussion section, which gets into a lot of details concerning extensions and variants.\n\nIt would have been nice to report experiments on a real quantum computer (e.g., Google or IBM quantum computers).\n\nDoes the paper fit into an Artificial Intelligence / Machine Learning conference? Some discussion on this matter would be significant.\n\nIn the paragraph of line 32 you eliminate quite quickly the exact and approximation algorithms. I would like to see at this place how they compare to the PR dynamics. But this is done later in the experimental section.\n\nI think that since quantum computers have so little resource today and in the near future in terms of number of qbits and possible number of operations, we should change perspective. The classic approach says if we want a result with this precision/approximation, then we need that much memory and running time. But here, every hidden constant in the complexity matters, so we need to present results in the form, if we have so much memory and so much available running time, we can achieve this performance.\n\nLine 707, do you mean classic or quantum maximum finding algorithm? In the latter case it is quite a simplification to assume success probability 1.\n\nTo make (1) and (2) consistent I would write either =1 and =p or <=1 and <=p." }, { "confidence": 3, "rating": 5, "review_id": "GBeyeye7xP", "review_text": "This paper proposes a quantum-assisted algorithm for market equilibrium computation by first introducing the faulty proportional response dynamics and then constructing its quantum implementation. This algorithm has provable quadratic speedups. Simulation studies show its effectiveness.\n\n1. The paper introduces a new version of proportional response dynamics called faulty proportional response (FPR) dynamics, which helps in applying quantum computing techniques effectively.\n\n2. The quantum algorithm provides a significant speedup (quadratic) and uses less memory compared to classical algorithms, thanks to using quantum RAM (QRAM).\n\n3. To the best of my knowledge, this is the first quantum algorithm that achieves sublinear performance in computing market equilibrium.\n\n1. The proposed quantum algorithm relies on QRAM, which is not widely available and is still under development. This dependency limits the practical applicability of the algorithm with current quantum hardware.\n\n2. The numerical simulations provided in the paper are based on specific conditions and assumptions. Real-world scenarios might introduce complexities that are not addressed by the current simulations, potentially affecting the algorithm's performance.\n\n3. The theoretical guarantees of the quantum algorithm's convergence and error bounds are strong, but the practical implementation might face issues with noise and error rates in quantum hardware, affecting the accuracy and reliability of the results.\n\n4. The paper assumes quantum query access to the data without detailing the process of loading classical data into a quantum system. The practical steps and overheads involved in this data preparation are not fully addressed, which could impact the overall efficiency.\n\n5. Minor points.\n\n - Line 256, \"For our experiments, we generate data the input data $v$\".\n\n - Figure 1, \"We perform a on $n$ = 16384 buyers\".\n\n1. 
How do you plan to address the practical limitations posed by current quantum hardware, specifically QRAM? Are there any near-term developments in quantum technology that you believe will make your algorithm more feasible to implement?\n\n2. The numerical simulations are based on specific conditions and assumptions. How would you handle more complex, real-world scenarios that may introduce additional challenges?\n\n3. Quantum hardware is prone to noise and high error rates. How robust is the algorithm to these practical issues?" }, { "confidence": 1, "rating": 6, "review_id": "7HdVuoC4jw", "review_text": "This paper focuses on market equilibrium computation with linear utility. Traditional algorithms like proportional response (PR) dynamics, despite having nearly linear runtimes relative to the number of buyers and goods, struggle with the massive scale of modern internet-based markets.\n\nThe authors introduce a quantum algorithm to reduce the cost of each iteration of the PR dynamics. They first provide a modified version of the PR dynamics called faulty proportional response (FPR). Leveraging quantum norm and inner product estimation, the algorithm achieves quadratic speedup and reduced memory usage using quantum random access memory (QRAM).\n\nI am not familiar with quantum computing. From the perspective of market equilibrium computation, the proposed algorithm shows significant improvements in efficiency. Simulation results demonstrate that the algorithm can achieve a higher EG objective value than PR dynamics within the same number of queries to the value and bidding matrix.\n\nI am wondering about the practicality of the algorithm, since we need an accurate valuation profile of all buyers and goods, and the current limitations of quantum hardware pose significant challenges. However, I think this work is still valuable as a theoretical study.\n\nNone" } ]
0C3bLHwjsY
Promoting Fairness Among Dynamic Agents in Online-Matching Markets under Known Stationary Arrival Distributions
Online (bipartite) matching under known stationary arrivals is a fundamental model that has been studied extensively under the objective of maximizing the total number of customers served. We instead study the objective of *maximizing the minimum matching rate across all online types*, which is referred to as long-run (individual) fairness. For Online Matching under long-run Fairness (OM-LF) with a single offline agent, we show that the first-come-first-serve (FCFS) policy is $1$-competitive, i.e., matching any optimal clairvoyant. For the general case of OM-LF: We present a sampling algorithm (SAMP) and show that (1) SAMP is of competitiveness of at least $1-1/e$ and (2) it is asymptotically optimal with competitiveness approaches one in different regimes when either all offline agents have a sufficiently large matching capacity, or all online types have a sufficiently large arrival rate, or highly imbalance between the total offline matching capacity and the number of online arrivals. To complement the competitive results, we show the following hardness results for OM-LF: (1) Any non-rejecting policy (matching every arriving online agent if possible) is no more than $1/2$-competitive; (2) Any (randomized) policy is no more than $(\sqrt{3}-1)$-competitive; (3) SAMP can be no more than $(1-1/e)$-competitive suggesting the tightness of competitive analysis for SAMP. We stress that all hardness results mentioned here are independent of any benchmarks. We also consider a few extensions of OM-LF by proposing a few variants of fairness metrics, including long-run group-level fairness and short-run fairness, and we devise related algorithms with provable competitive performance.
https://openreview.net/pdf/61ecba74434d4fc5596cb759256dd94610bf7414.pdf
[ { "confidence": 3, "rating": 7, "review_id": "i9Dboxmeo4", "review_text": "This paper studies an online bipartite matching problem where each offline type has a capacity and the nodes of each online type arrive according to a Poisson process. Each offline type can serve a certain subset of online types, and each online node needs to be served or discarded immediately upon its arrival. The objective is to maximize individual fairness across all online types, i.e., the minimum matching rate across all online types.\n\nWhen there is only one offline type, the paper shows that the first-come-first-serve algorithm is $1$-competitive. When there are multiple offline types, the paper proposes an algorithm that is $(1-1/e)$-competitive, whose competitive ratio asymptotically converges to $1$ in several regimes: (1) all offline types have sufficiently large capacities, (2) all online types have sufficiently large arrival rates, or (3) the total offline type capacity and the total online type arrivals are sufficiently unbalanced.\n\nThe paper complements its positive results by proving the following negative results. Firstly, no randomized algorithm that matches every arriving online node whenever possible can achieve a competitive ratio better than $1/2$. Next, no randomized algorithm can achieve a competitive ratio better than $\\sqrt{3} - 1$. Finally, the $(1-1/e)$-competitiveness of their proposed algorithm is tight.\n\nIn addition, this paper generalizes the individual fairness metric to a group-level fairness metric, where agents are divided into multiple groups, groups are allowed to overlap, and the objective becomes maximizing the matching rates across all groups. For this fairness metric, this paper gives two algorithms whose competitive ratio converges to $1$ when offline types' capacities and online types' arrival rates tend to infinity, respectively. This paper also proposes short-run fairness as another individual fairness metric, which resembles ex-post fairness. For short-run fairness, this paper also presents various positive and negative results.\n\nThis paper studies a well-motivated and interesting problem. Also, this paper is well-written and well-structured in general.\n\nThis paper is conceptually strong. In particular, it initializes the study of fairness with respect to online agents and characterizes some special features underlying this model, whereas all prior papers focus on fairness with respect to offline agents. In addition, this paper proposes various fairness metrics that are interesting on their own.\n\nThe techniques in this paper are non-trivial and seem correct.\n\nSome results in the paper are not very promising. For example, the $1$-competitiveness of the FCFS algorithm when there is one offline type seems immediate, and the bounds for short-run fairness and group-level fairness seem quite preliminary.\n\nMinor:\n- Line 92: $s^*$ is not previously defined.\n\n(1) You claim that all hardness results provided in the paper are independent of any benchmarks. However, this is neither explicit in the proof nor discussed in detail. Can you further elaborate on this point?\n\n(2) In Line 308, you claim that there must exist a rare type $t$ for which $E[P_t] \\leq (n + 1) / 2$. This is not obvious to me. Can you further explain why this is true?\n\n(3) Do you believe that the $(1-1/e)$-competitiveness given by the SAMP algorithm is tight for this problem?" 
}, { "confidence": 3, "rating": 5, "review_id": "SXxUDRsiUp", "review_text": "This paper considers the online matching problem under known stationary arrival distributions. Each offline agent of type i has a matching capacity b_i, and each online agent of type j arrives according to an independent Poisson process of rate \\lambda_j. The objective is to maximize the minimum matching rate among all types of online agents. To defne fairness, the authors introduce three notions FAIR-L (long-run), FAIR-L(G) (group-level), and FAIR-S (short-run).\n\nThe authors show that for FAIR-L with multiple offline agents, the SAMP algorithm reaches a 1-1/e competitive ratio, which is a tight ratio for SAMP. Further, the authors provide a counterexample to show that any non-rejection algorithm cannot exceed 1/2 competitive ratio and any algorithm cannot exceed \\sqrt{3}-1. The authors also establish bounds for group-level and short-run fairness.\n\n1. Conceptually, I think the research direction of incorporating fairness in the online matching problem to be interesting. The fairness notions defined in the paper are natural to me.\n\n2. The results are clean and technically-involved. The competitive ratio for SAMP is tight.\n\n1. Although the proofs of the theoretical results are non-trivial, they mostly use existing techniques that are common in online matching. The algorithms and their analysis are standard. The overall technical contribution of this paper, in my opinion, is incremental.\n\nNo question." }, { "confidence": 3, "rating": 7, "review_id": "IZ84gdgZBK", "review_text": "This paper investigates a variant of the online matching problem, focusing on maintaining long-term fairness at both individual and group levels. A key finding is that the optimal competitive ratio achievable without rejecting any items is capped at 1/2. By implementing a novel sampling algorithm, SAMP, the authors achieve a competitive ratio of 1-1/e, with a matching upper bound demonstrated (for the algorithm specifically), and establish that no randomized algorithm can exceed a competitive ratio of sqrt(3) - 1 under their model.\n\n- The paper is well-written and provides clear and intuitive definitions of fairness, extending these concepts to accommodate both individual and group dynamics among online items. \n- The logical progression from LP-based algorithmic strategies to solid proof constructions aids comprehension and follows a coherent flow.\n- The algorithmic results are robust, supplemented by nearly matching upper bounds, suggesting a comprehensive study of the new matching variant given the model and assumptions used.\n\n- The proof techniques and impossibility results closely resemble those in prior works, potentially limiting the contribution of novel technical methods to the field. However, the intuitive nature of the solutions remains a strength, though it might not significantly expand the community's technical tools.\n- While the focus on theoretical analysis is strong, the inclusion of computational experiments could enhance the understanding of these algorithms' practical implications and performance in real-world settings.\n\n- For the First Come, First Serve (FCFS) for Online Matching with Long-run Fairness (OM-LF) we get a maximal matching thus it achieves a 1/2 approximation ratio in general? (ie the known result of the original online matching problem)\n- An exploration into the number of items that must be rejected to surpass the 1/2 competitive ratio would be insightful. 
Establishing a quantifiable trade-off between the approximation ratio and the number of rejected items could open new avenues for future research." }, { "confidence": 3, "rating": 6, "review_id": "5CTjEdntPN", "review_text": "This work focuses on a fair online bipartite matching problem under Poisson arrivals, where the objective is to maximize the minimum number of matches across groups of the online nodes (a max min objective). It is shown that if rejecting online nodes is not allowed, then any algorithm will have at best a $1/2$ competitive ratio compared to an omniscient oracle. Even when rejecting is allowed, no randomized algorithm can do better than $\sqrt{3}-1$. Finally, they propose an algorithm and prove corresponding guarantees that depend on the Poisson rates, which achieves in the worst case a $1-1/e$ competitive ratio, and the analysis of the algorithm is tight. Some additional results are given for a variation of the fairness objective.\n\n1) The model considered in this paper is a nice addition to the literature of online matching with fairness concerns; the presentation of the model, the algorithms, and their guarantees is clear.\n2) The theoretical results are broad, covering lower and upper bounds for two different metrics, and solid, with non-trivial analysis. They contribute significantly to the literature on online matching, fairness, and online algorithms.\n\nWhile the description of the model and results is clear, the organization and motivation of the paper could be improved.\n1) The paper starts directly with the model without discussing why this problem matters, which is especially crucial when dealing with fairness questions. Some mentions of ride-sharing are made on page $6$, but without discussing the connection with the model. It is not explicitly stated what kind of fairness notion is considered in this paper, other than using a max-min objective.\n2) There are no references to other works until a remark on page $4$. In particular, some works such as [36] which **specifically** deal with online matching with fairness considerations are only briefly mentioned as pertaining to online matching, not to fair online matching, and no comparisons with them are made. Overall, the position of this work in the existing literature of fairness and fair online matching should be more detailed. See also question $3$.\n\nQuestions:\n1) It is mentioned in the related work that Huang et al. showed that similar results can be obtained in the KIID and Poisson arrival models. To what extent can the results from this paper be translated to the KIID setting? Do the same mild assumptions allow to translate results from one model to another?\n2) I am confused about the inherent differences between the long-term \"individual\" fairness, and the group fairness (6). Which one is harder to achieve? If all the protected groups were non-overlapping, is it correct that there would not be any difference in the guarantees provided by the algorithms (by grouping the relevant $\lambda_j$)? When overlapping groups are allowed, why do the guarantees not depend on the group structure, does it consider in some sense the worst-case group overlap? It might be relevant (this is only a suggestion) to discuss briefly connections with the notion of intersectional fairness. \n3) There should be a more involved discussion regarding the paper [28] which seems to consider the converse problem of fairness among offline agents. In what ways are the results different? 
And why is it a more difficult / easier / equally hard problem to consider fairness among online agents?\n4) This is simply a curiosity in case the authors are familiar with the prophet secretary literature (if not, this question can be ignored): in \"Prophet Secretary Through Blind Strategies\", Correa et al. proved a $\sqrt{3}-1$ upper bound on the competitive ratio of their problem. Is there any connection between the two, or is the fact that the same upper bound was obtained in this paper purely a coincidence?\n\nline 143: $\mathcal{G}=\\{g\\}$ this notation feels a bit self-referential. \n\nSmall typos: line 98: outputing -> outputting; line 240: Onine -> Online" } ]
0AwMciNShl
Rethinking Human Evaluation Protocol for Text-to-Video Models: Enhancing Reliability, Reproducibility, and Practicality
Recent text-to-video (T2V) technology advancements, as demonstrated by models such as Gen2, Pika, and Sora, have significantly broadened its applicability and popularity. Despite these strides, evaluating these models poses substantial challenges. Primarily, due to the limitations inherent in automatic metrics, manual evaluation is often considered a superior method for assessing T2V generation. However, existing manual evaluation protocols face reproducibility, reliability, and practicality issues. To address these challenges, this paper introduces the Text-to-Video Human Evaluation (T2VHE) protocol, a comprehensive and standardized protocol for T2V models. The T2VHE protocol includes well-defined metrics, thorough annotator training, and an effective dynamic evaluation module. Experimental results demonstrate that this protocol not only ensures high-quality annotations but can also reduce evaluation costs by nearly 50\%. We will open-source the entire setup of the T2VHE protocol, including the complete protocol workflow, the dynamic evaluation component details, and the annotation interface code. This will help communities establish more sophisticated human assessment protocols.
https://openreview.net/pdf/aa115a2ffe88cc5707a0f711b0ee921175fa9141.pdf
[ { "confidence": 2, "rating": 8, "review_id": "MoreIi7bTJ", "review_text": "This paper introduces the Text-to-Video Human Evaluation (T2VHE) protocol, a standardized approach for assessing the quality of videos generated from text. \nBesides, this paper comprehensively evaluates some famous text-to-video models with their proposed reliable human assessment, which is very exciting and valuable for this community.\n\n1. Standardized human evaluation protocol: T2VHE provides a comprehensive and consistent framework for evaluating T2V models, enabling fair and reliable comparisons between different models.\n2. Reduced annotation costs: The dynamic evaluation module reduces the number of annotations needed by approximately 50%, making it more practical for researchers to evaluate large numbers of models.\n3. Comprehensive evaluation on existing text-to-video models, revealing some interesting findings.\n\n1. The algorithm’s performance and stability may vary depending on the specific prompt types and model characteristics.\n2. The ELO rating [1] mechanism seems to be a competitive baseline for the Rao and Kupper method used in this paper, which has been widely used in the comparsion of LLMs. Could you please describe the difference between Rao and Kupper method and ELO rating, and predict which kinds of method is better?\n\n[1] Chatbot Arena: Benchmarking LLMs in the Wild with Elo Ratings (https://lmsys.org/blog/2023-05-03-arena)\n\n1. How does the performance of the evaluated models vary across different domains or types of prompts?" }, { "confidence": 5, "rating": 5, "review_id": "oHcWVkJaBt", "review_text": "This manuscript presents a novel, standardized human evaluation protocol specifically tailored for Text-to-Video (T2V) models. The protocol encompasses a meticulously designed suite of evaluation metrics supplemented with robust annotator training resources. The effective deployment of Latent Action Ratings (LARs) is a notable feature of this work, enabling the acquisition of high-quality annotations. A dynamic evaluation component is also introduced, which remarkably curtails annotation costs by approximately 50% while maintaining the integrity of annotation quality. The paper further offers an exhaustive evaluation of cutting-edge T2V models. A commitment to open-source the entire evaluation process and associated code is a commendable initiative, which will undoubtedly empower the broader research community to assess novel models using updated data, building upon existing reviews.\n\n+ First of all, I am very grateful to the authors for their great contribution to the open-source community. I believe that this work will help existing research take a bigger step after it is released;\n+ Then, this paper carefully considers the evaluation criteria of T2V. 4 objective and 2 subjective annotation evaluations are used to improve the quality of discrimination, making the annotation results more in line with human preferences and lower error rates or bias;\n+ Furthermore, this work maximizes the effectiveness of comparative evaluation. 
From my point of view, this is very much like doing a reward model that serves for reinforcement learning from human feedback (RLHF), which will help unify and simplify the difficulty of annotation and quantification;\n+ Finally, due to a large amount of human power investment, the authors introduced a dynamic evaluation component to reduce the annotation cost and claimed that the expense could be reduced by 50% at the same quality.\n\n— My biggest concern is that the evaluation metrics currently proposed by the authors cannot be proven reasonable or convincing. Although this manuscript has used several quality standards (4 objective and 2 subjective), are they complete or redundant? For example, I think \"Video Quality\" is a vast range, which may include multiple aspects such as aesthetics, clarity, and even another indicator, \"Text Alignment\" (because if the generated elements do not match the prompt, it will not be considered a good video); I think \"Temporal Quality\" and \"Motion Quality\" are not two completely independent metrics, they should be highly related and coupled.\n— Furthermore, the authors mentioned that this paper adopts the Rao and Kupper model to quantify annotations results. I tried to follow this paper to understand how to quantify these metrics whose weights I am not sure about. Still, unfortunately, I found no similarities between it and this paper. I hope the authors can scientifically and clearly explain how the metrics are involved in the final results, which is one of the core contributions of this paper.\n— In addition, since the annotated results of all metrics are labeled by manual A/B testing, is the human preference the only one that dominates? How can the aesthetics and evaluation systems of different annotators be consistent? The same problem arises in the reward model, where its approach has been verified in the LLM field and cannot achieve high accuracy (only about 70+%). Thus, it needs to be used in conjunction with PPO/DPO. As this manuscript constructs datasets this way, how can we avoid the problem?\n— Finally, although the dynamic evaluation component proposed in this article reduces annotation costs to approximately half, the annotation task is too dependent on manual work; I think its promotion and ease of use in the open-source community are still questionable.\n\nI acknowledge that I appreciate the authors' contributions to the open source community, and reasonable and practical answers to the concerns in the \"Weaknesses\" section will help improve my rating." }, { "confidence": 4, "rating": 5, "review_id": "hjxIskvcbe", "review_text": "This paper introduces the Text-to-Video Human Evaluation (T2VHE) protocol, a standardized approach for evaluating text-to-video (T2V) models, addressing the challenges posed by recent advancements in T2V technology. The T2VHE protocol supposedly offers a solution through well-defined metrics, annotator training, and an effective dynamic evaluation module. \n\nExperimental results suggest that this protocol can offer high-quality annotations and also reduce evaluation costs. The authors plan to open-source the T2VHE protocol setup.\n\nThe paper addresses a timely issue in the rapidly evolving field of text-to-video (T2V) models: the lack of standardized quality assessment. Its strengths lie in providing a comprehensive literature review since 2016 and synthesizing past research into a unified evaluation protocol. 
This effort to aggregate previous work is particularly valuable, given the growing importance of T2V models. By proposing a standardized approach, the authors contribute to filling a critical gap in the field, potentially accelerating progress in T2V model development and evaluation. The highlighted strengths are as follows: \n\n* It introduces a novel, comprehensive protocol (T2VHE) for human evaluation of text-to-video (T2V) models, addressing a significant gap in the field.\n* The paper combines existing ideas (e.g., comparative scoring, Rao and Kupper model) in creative ways to improve the evaluation process.\n* The authors conducted a thorough survey of 89 papers on video generation modeling, providing a solid foundation for their work.:\n* The paper is generally well-structured and clearly written:\n* The introduction provides a clear motivation for the work and outlines the main contributions.\n* The authors commit to open-sourcing the entire evaluation process and code, which is crucial for reproducibility and further advancement of the field.\n\nMajor Weaknesses\n* Lack of comparison with existing evaluation protocols: While the paper introduces a new protocol, it doesn't directly compare its performance or efficiency against existing evaluation methods. This makes it difficult to quantify the improvements over current practices. In addition, it lacks an understanding with evaluation in other domains. ex. T2I, NLP, or CV. With the lack of details or standards in T2V evaluation, it would have benefitted from an expansive understanding of human annotation as a whole. Thus, the ideas although well representative of the current state of evaluation in T2V, lacks the maturity, which affords the novelty or value, that hasn't been surveyed in other domains. \n* Potential bias in dynamic evaluation: The dynamic evaluation module, while innovative, may introduce bias by discarding certain comparisons. As noted in Table 3, comparisons involving Gen2 were frequently omitted. This could potentially lead to less accurate estimations for top-performing models. \n* Lack of error analysis: The paper doesn't provide a detailed analysis of cases where the protocol fails or produces inconsistent results. Understanding these edge cases could help improve the robustness of the evaluation method. This further dives into my understanding that although it is a great survey of existing protocols, lacks a contribution its aggregation. Further evaluation and consideration in other domains, would assure me that it is a standard that can be well accepted vs. one that is just more comprehensive than the past. \n\nMinor Weakness\n* In line 3, there's a small typo where \"pop3 ularity\" is split across lines. It should be \"popularity\".\n* In line 31, \"practicabil32 ity\" is split across lines. 
This appears to be due to line wrapping rather than an actual error in the text.\n\n* Can you provide more details on how the dynamic evaluation module handles potential biases when discarding certain comparisons, especially for top-performing models like Gen2?\n* How does the protocol ensure that the reduced number of annotations in the dynamic evaluation still captures a representative sample across different prompt categories and model capabilities?\n* What steps were taken to validate that the training provided to LRAs effectively mitigates potential biases they may have, especially given their potential lack of diversity compared to crowdsourced annotators?\n* How does the protocol account for potential cultural or linguistic biases in the evaluation of \"Ethical Robustness\"? Can you provide more details on how this metric is defined and measured?\n* What considerations were made in designing the protocol to ensure its applicability across different types of T2V models (e.g., models specializing in different video styles or lengths)?\n* Would the survey of non-T2V evaluation approaches be beneficial when merging with this technology? With the lack of maturity and rapidly evolving field, would it not benefit from a cross-domain understanding from more mature fields?" }, { "confidence": 5, "rating": 3, "review_id": "jzW81U4Zvk", "review_text": "This paper designs an evaluation protocol and surveys a number of video generation papers published over the last few years. The authors point out shortcomings of evaluation protocols in these prior works (e.g. they often limit studies to video quality comparisons with no clear definitions of video quality, small number of videos, annotators etc). The authors promise to publish their trainings for annotators along with an interface. They show that these trainings help significantly to improve reliability of annotator scores.\n\nAnother contribution is a method that doesn’t require all pairs of models to be compared and dynamically is able to select pairs to be compared and show experimentally that this scales better than the naive quadratic approach. \n\nFinally the authors use their methodology to compare 5 recent models and show among other things that Runway and Pika outperform their open source competitors.\n\nOverall the list of surveyed papers in this paper is quite impressive and comprehensive. Moreover, having an evaluation method that can be used without all-pairs of comparisons to be run is clearly important particularly as the number of video generation models increases. I also absolutely agree with the weaknesses of other video evaluations and agree that human evals are the golden standard and so making a public one available would be very beneficial for the community.\n\nWeaknesses include the following:\n* The authors cite works like Evalcrafter/Vbench but are vague about what value this current submission brings over these prior approaches. Specifically they claim that these approaches “may lack diversity to cover all real-world scenarios” but there are no details given. 
As I’ve started to see a number of papers published using the Evalcrafter methodology, I would like to see a clear explanation of the pros and cons.\n* Moreover the number of models actually evaluated in this paper is fairly small (only 5) — given that this is an evaluation methodology paper, it would be much stronger to include more models and perhaps even to show clearly that these human evaluations do not correlate well with things like FID or FVD.\n* I recommend citing and considering using ELO scores which similarly allow for us to score and rank models based on not-all-pairs comparisons and also allow for dynamic selection of model pairs to be compared side by side. See e.g., https://lmsys.org/blog/2023-05-03-arena/ for a recent example using ELO to rank LLMs by quality as perceived by humans. Related is the Microsoft TrueSkill work which has also been used widely.\n* Though the authors make a big deal of the fact that few prior works reveal the details of their interface and how the videos are displayed, I found very few details actually revealed in this paper. The one clear detail I can see is that the authors mention standardizing the height of the video across models. But this may not be the right thing to do! Consider the VideoPoet work which generate videos in portrait aspect ratios — resizing so that heights are fixed would be unfair to these vertical videos. Other details that are important are how to best show videos that are generated at different framerates and even different lengths (for example, how does one compare a 5 second clip from e.g. VideoCrafter to a 1 minute clip from Sora?).\n* Finally some things that are marked as objective clearly are not objective like aesthetic quality. It would be helpful / more convincing to explain clearly what the training looks like for such a dimension. I would presume that one example showing what aesthetic quality means is not nearly enough…\n\n\nMinor quibbles:\n* I recommend being clear that “higher is better” in many plots\n* The authors also use a lot of acronyms which impedes readability in various parts of the paper (e.g. last subsection of 4.3)\n* I somewhat disagree with the characterization of FVD being about temporal consistency… even though this is part of it, the statements about the other metrics also apply — it compares feature representations from real and generated images and assesses diversity and clarity…\n* I recommend seeing/citing the Videopoet paper that had very similar comparisons.\n\nSee weaknesses above." } ]
0AumdfLzpK
A Simple Framework for Generalization in Visual RL under Dynamic Scene Perturbations
In the rapidly evolving domain of vision-based deep reinforcement learning (RL), a pivotal challenge is to achieve generalization capability to dynamic environmental changes reflected in visual observations. Our work delves into the intricacies of this problem, identifying two key issues that appear in previous approaches for visual RL generalization: (i) imbalanced saliency and (ii) observational overfitting. Imbalanced saliency is a phenomenon where an RL agent disproportionately identifies salient features across consecutive frames in a frame stack. Observational overfitting occurs when the agent focuses on certain background regions rather than task-relevant objects. To address these challenges, we present a simple yet effective framework for generalization in visual RL (SimGRL) under dynamic scene perturbations. First, to mitigate the imbalanced saliency problem, we introduce an architectural modification to the image encoder to stack frames at the feature level rather than the image level. Simultaneously, to alleviate the observational overfitting problem, we propose a novel technique called shifted random overlay augmentation, which is specifically designed to learn robust representations capable of effectively handling dynamic visual scenes. Extensive experiments demonstrate the superior generalization capability of SimGRL, achieving state-of-the-art performance in benchmarks including the DeepMind Control Suite.
https://openreview.net/pdf/53b806d768bf005e5431ba8aa3ce049851f69624.pdf
[ { "confidence": 4, "rating": 6, "review_id": "pa4yZXZ8Qk", "review_text": "This paper introduces SimGRL for vision-based DRL tasks. It tackles two challenges: the imbalanced saliency, where an agent repeatedly favors saliency maps from the recent one in stacked frames, and the issue of overfitting to particular background areas rather than concentrating on elements crucial to the task at hand. To overcome the first challenge, the proposed solution modifies the image encoding process, treating each frame separately before combining their features. To deal with the second issue of observational overfitting, this research uses an innovative method based on shifted random overlay augmentation. This method trains the agent to associate rewards with changes in the actual object of interest, thus learning to ignore background motions. Experiments were carried out using the DMControl-GB video benchmarks, evaluated by the TID score and variance metrics. The results show that SimGRL outperforms existing methods and the ablation study reveals each regularization technique independently contributes to notable enhancements.\n\nIn this study, the focus is set on addressing critical obstacles in DRL and the author suggested both straightforward and impactful alteration to the model's structure as well as enhancement in the augmentation techniques. The empirical outcomes are reasonable and the statistical findings detailed in Table 1 provide persuasive evidence of the effectiveness of their method.\n\nThe paper could benefit from an explicit articulation of its motivation and contributions. While multiple mentions are naturally made where the two challenges are being addressed, situating the motivation behind the work and a summary of contributions in a dedicated paragraph would enhance the strength of the paper. Although the devised SimGRL algorithm achieves top-tier performance on the Video Hard benchmark, the analysis of its performance on the Video Easy benchmark is not explored in depth, which presents an area for more detailed examination.\n\n1) The findings concerning Video Easy presented in Table 1 appear to be of lesser importance. Could you provide a more detailed explanation of these findings?\n\n2) Furthermore, Table 1 reveals that 'Cartpole, Swingup' and 'Cheetah, Run' exhibit noteworthy outcomes, although the significance is not observed across all tasks. While you have offered explanations for the strong performance in 'Cartpole, Swingup' and 'Cheetah, Run', can you clarify why there weren't substantial improvements in some other tasks? Does this indicate that the impact of your study might be constrained?" }, { "confidence": 5, "rating": 6, "review_id": "n46v7AMxgY", "review_text": "This paper proposes two simple, yet crucial modifications to the Generalization in the Visual RL pipeline where during the test time, the visual observation consists of various degrees of dynamically varying backgrounds.\n\nConcretely, the authors identify two issues that inhibit RL agents from generalization: (i) imbalanced saliency and (ii) observational overfitting. 
To overcome these issues they propose two simple, yet effective solutions -- (a) Feature level frame stack where instead of stacking frames at the observation level, they stack at a feature level after obtaining spatial features of the images and (b) Shifted Random Overlay Augmentation where instead of applying data augmentation to the stack of frames, a shifted version of the data augmentation is applied to simulate the dynamically moving background effect which in turn helps learn more generalizable policy.\n\nThrough their experiments on the DM-Control (and robotic manipulation in the appendix) with varying backgrounds, the authors show the generalization capability of their proposed SimGRL (Simple framework for Generalization in visual RL) framework. \n\nAdditionally, the authors also propose Task-Identification (TID) Metrics to evaluate how much of the policy can capture the task-relevant objects in each stacked frame.\n\n1. A key strength of this paper is the simplicity of the proposed solutions to address the generalization issue and these simple changes are quite effective as shown in the experiments.\n\n2. The intuition behind both the proposed solutions is clear and straightforward.\n\n1. I do find myself in confusion with respect to the Attribution masking argument. I believe that the proposed solutions of Feature-level frame stack and Shifted Random overlay augmentation are both valid solutions to a generalizable policy irrespective of the authors inclusion of Attribution masking. My specific concern is as follows: \n\n What is the guarantee that a clear thresholded gradient masking of the critic wrt the input, which consists of the complete segmentation of the relevant objects and agent in the mask will lead to a better generalizable policy? It can be possible that a generalizable policy would care about only a few key points that are predictive of the reward -- and hence be able to accurately predict the Value of a state, and in that sense, an accurate segmentation of the attribution mask need not be necessary. To give an example, in the robotic manipulation task of reach -- the reward function is typically shaped as some function of the Euclidean distance between the robot's end-effector and the goal position. In such a situation, the Value function shouldn't really be bothered about where the other parts of the robot are.\n\n2. The weakness in (1) ties into my second concern where if only a few parts of the agent are responsible for achieving (and predicting) reward, then I don't see the value of the proposed TID metric. This assumes that my entire object and agent in the scene is responsible for the policy, which as argued in (1) is not the case in most scenarios.\n\n3. [Relatively minor concern, not taken into account for the final scoring] It would be beneficial to the reader if the proposed components in Figure 4 could be highlighted with say a different color and a distinction between the existing pipeline of SVEA be made.\n\n4. On the experimentation end, I feel that the evaluation is very limited to DM-control tasks. While SimGRL performs quite well on Video-Easy and Video-Hard settings of DM-Control, and 2 other tasks of Robotic manipulation, I would have liked to see a more comprehensive evaluation of various visual RL environments. Specifically, I'd recommend performing evaluation on RL-ViGen [1] (https://gemcollector.github.io/RL-ViGen/) which consists of a collection of diverse visual RL tasks. This would further strengthen this work's results.\n\n5. 
On L263, where authors say: \n> SimGRL demonstrates state-of-the-art performance in 4 out of 6 tasks in the Video Easy benchmark\n\n One has to be careful while making such claims especially when the difference on one such task is +1 -- which in DM-Control doesn't really mean anything. I would ask the authors to look at IQM and Probability of Improvement metrics to compare SimGRL to other methods from [2], which have been shown to be more reliable than taking an average over seeds. This does not involve re-running the experiments; the existing checkpoints and returns can be directly used with their wrapper.\n\n6. Observational Overfitting: For this, one of the core arguments that the authors provide is that because the same augmentation is applied uniformly to all frames the agent can tend to still focus on the background. I'm wondering how a baseline for this where you have different data augmentations for every frame in the history of stacked frames (say jittering, slight rotation, blur, overlay, random convolution, random shift) performs?\n\n7. In the Robotic manipulation results what does Test {1, 2, 3} mean? Does it mean running an evaluation on a _single episode_ for 5 seeds?\n\n----\n**References**\n\n[1] RL-ViGen: A Reinforcement Learning Benchmark for Visual Generalization, Zhecheng Yuan et al., NeurIPS 2023.\n\n[2] Deep Reinforcement Learning at the Edge of the Statistical Precipice, Rishabh Agarwal et al., NeurIPS 2021.\n\n----\n**Reason for current rating**: Because of my two major concerns around Attribute masking (and TID metrics), as well as lack of extensive experimentation on diverse RL environments, I am leaning towards a weak reject. However, my final decision would be based on the authors' rebuttal, other reviewers' comments and the discussion with authors during the rebuttal period. I'd encourage the authors to ask me any clarification questions for any of the experiments if they have any during the rebuttal/discussion period.\n\n-----\n\n**Post rebuttal update** -- As the authors have addressed all my major concerns, I have decided to increase my score to **Weak Accept**." }, { "confidence": 4, "rating": 6, "review_id": "e7TaLtSsYw", "review_text": "The paper first identifies two pitfalls in visual RL via gradient-based attribution masks, i.e., imbalanced saliency and observational overfitting. To address these two pitfalls, the paper proposes two novel modifications. One is for the encoder, where each frame is encoded separately and the encoded representations are stacked, instead of encoding stacked frames directly. The other is a newly proposed shifted random overlay augmentation method to address the observational overfitting.\n\n- By and large, the paper is written well and presented clearly.\n- The section on pitfalls within visual RL algorithms is also interesting, and highlights the authors' attention to the RL robustness and generalization problems.\n\n- One concern I have about the paper is the first pitfall where the imbalanced saliency is specifically applied to visual RL algorithms that apply stacked frames. However, many robust RL algorithms, e.g., DBC, SLAC, TiA, DRIBO, RePo, take only a single frame as the input to the encoder and apply similar ideas to the feature-level frame stack. These algorithms also explicitly take the previous frame stack as input with an encoder $p_\theta(s_t | s_{t-1}, a_{t-1}, o_t)$. 
It would be interesting to investigate whether these methods also suffer from the imbalanced saliency and compare the proposed feature-level stack method with them in terms of performance and efficiency.\n- Though the paper identifies the pitfalls with attribution methods, it would also be interesting to see attribution masks of SimGRL in different settings, i.e., clean, video easy and video hard. This would help validate that the identified pitfalls are sufficiently addressed by SimGRL.\n- Fig 5 and demos showed along the paper may be misleading that the policy is purely learned from the clean environment without any background perturbation. However, the proposed randomly shifted overlay augmentation explicitly introduces background perturbations into the training process.\n\n- Could you comment on whether the pitfalls also apply to robust RL algorithms like DBC, SLAC, TiA, DRIBO and RePo?\n- Could you comment on how different attribution methods may affect the identified pitfalls and TID metrics?" }, { "confidence": 3, "rating": 6, "review_id": "xTc9P85DbO", "review_text": "The paper introduces SimGRL, a framework aimed at enhancing generalization in vision-based deep reinforcement learning (RL) under dynamic scene perturbations. It addresses two critical issues in existing visual RL methods: imbalanced saliency and observational overfitting. SimGRL employs architectural modifications to the image encoder and a novel shifted random overlay augmentation technique. Extensive experiments demonstrate SimGRL's superior generalization capabilities, achieving state-of-the-art performance on the DeepMind Control Suite and other benchmarks.\n\nInnovative Architectural Modification: The proposed modification to stack frames at the feature level instead of the image level effectively addresses the imbalanced saliency issue, ensuring the agent focuses on spatially salient features in each frame .\nNovel Data Augmentation Technique: The shifted random overlay augmentation introduces dynamic background elements during training, which helps the agent to focus on task-relevant features and ignore irrelevant background changes .\nComprehensive Evaluation: The method is thoroughly tested on multiple benchmarks, including the DeepMind Control Suite and DMControl-GB, demonstrating superior generalization performance compared to state-of-the-art methods.\nQuantitative Metrics: The introduction of TID metrics provides a quantitative way to evaluate and understand the method's effectiveness in identifying task-relevant features, correlating high TID scores with improved generalization performance.\nComputational Efficiency: The method achieves significant improvements without requiring additional auxiliary losses or networks, maintaining computational efficiency.\n\nDependency on Augmentation Quality: The effectiveness of the shifted random overlay augmentation relies on the quality and diversity of the natural images used for augmentation, which may limit its performance in certain real-world scenarios where such images are not available or appropriate.\nAd Hoc approach: the authors offer specific fixes to observed shortcomings in vision-based deep RL in simulated environments, and it remains to be seen if these fixes are an hand-crafted overfitting to specific tasks that won't generalize to other tasks.\nFinally, while the method shows impressive results in simulated environments, its applicability and effectiveness in more realistic real-world scenarios have not been thoroughly validated.\n\nHow does the 
diversity of natural images used in the shifted random overlay augmentation impact the generalization performance of the RL agents? Are there specific types of images that are more effective?\nAre there any plans to test SimGRL in real-world environments, and what challenges are anticipated in such scenarios? How does the method handle real-world dynamic perturbations that are not present in simulated benchmarks?\nWhat are the effects of using different numbers of frames in the feature-level frame stack on the performance and generalization capability of SimGRL?\nHow sensitive is the performance of SimGRL to the choice of hyperparameters such as the number of layers in the image encoder and the maximum shift length in the shifted random overlay augmentation?" } ]
09nyBqSdUz
RefDrop: Controllable Consistency in Image or Video Generation via Reference Feature Guidance
There is a rapidly growing interest in controlling consistency across multiple generated images using diffusion models. Among various methods, recent works have found that simply manipulating attention modules by concatenating features from multiple reference images provides an efficient approach to enhancing consistency without fine-tuning. Despite its popularity and success, few studies have elucidated the underlying mechanisms that contribute to its effectiveness. In this work, we reveal that the popular approach is a linear interpolation of image self-attention and cross-attention between synthesized content and reference features, with a constant rank-1 coefficient. Motivated by this observation, we find that a rank-1 coefficient is not necessary and simplifies the controllable generation mechanism. The resulting algorithm, which we coin as RefDrop, allows users to control the influence of reference context in a direct and precise manner. Besides further enhancing consistency in single-subject image generation, our method also enables more interesting applications, such as the consistent generation of multiple subjects, suppressing specific features to encourage more diverse content, and high-quality personalized video generation by boosting temporal consistency. Even compared with state-of-the-art image-prompt-based generators, such as IP-Adapter, RefDrop is competitive in terms of controllability and quality while avoiding the need to train a separate image encoder for feature injection from reference images, making it a versatile plug-and-play solution for any image or video diffusion model.
https://openreview.net/pdf/666abdc45ab3cd2b2837525f6f089ce0643ac59d.pdf
[ { "confidence": 4, "rating": 5, "review_id": "GnrmMlmehF", "review_text": "This paper targets the task of subject consistency in image and video generation. Previous consistency generation methods [53,68] are based on concatenated attention. The authors reformulate concatenated attention in a manner similar to classifier-free guidance, simplifying the constant C matrix to a constant c, leading to the proposed reference feature guidance (RFG). The authors applied RFG to multiple applications, including subject consistency in image and video generation. Qualitative and quantitative results show that RFG outperforms prior works such as IP-Adapter, BLIPD, and SDXL.\n\n1. The authors propose a simple yet effective method for consistent text-to-image generation. The relationship among the proposed method, cross-frame attention, and concatenated attention is an interesting observation.\n\n2. The proposed method is shown to be useful in multiple image and video generation applications, including blending features of multiple images, using negative examples to increase diversity, improving temporal consistency in video generation, and preserving identity in video generation. This method is shown to be generalizable to different techniques.\n\n3. The paper includes comprehensive visual examples demonstrating good visual quality.\n\n1. The generated images appear to have limited diversity. In Figure 13, many examples of humans have very similar poses and styles. In contrast, IP-Adapter generates more diverse images.\n\n2. The paper discusses the relationship between concatenated attention and cross-frame attention, mentioning that the results have similar effects to recent works [53] and [68]. However, visual comparisons are not included.\n\n3. Most results focus on humans. It would be beneficial to see different types of subjects/objects.\n\n4. Although the proposed method successfully generates videos with temporal consistency, the generated videos contain less motion. Could the authors explain the possible reason for this and suggest ways to improve it?\n\nThe text alignment decreases in Figure 14. Could the authors provide comments on why this happens and suggest possible solutions to improve text alignment?" }, { "confidence": 4, "rating": 6, "review_id": "mYHT90sON0", "review_text": "The authors analyzed the current methods of consistent content generation and revealed that the concatenation of reference features in the self-attention block can be reformulated as a linear interpolation of image self-attention and cross-attention between synthesized content and reference features with a constant rank-1 coefficient. Motivated by the analysis, the paper identified the rank-1 coefficient is not necessary and proposes a simplified method by replacing the rank-1 coefficient with a scalar coefficient. Compared to the previous method, this method doesn’t require additional training, while achieving considerable performance, and allowing multiple applications, such as multi-references, concept suppression, and video generation.\n\n1. The method and related work are clear and well-written, easy to follow.\n2. The proposed method is simplified based on previous methods and achieves effective results.\n3. The method is training-free and the authors claim that it’s flexible and plug-and-play for diverse text-to-image models.\n4. The paper exhibited many qualitative results and highlighted its effectiveness with comprehensive experiments for different applications.\n\n1. 
As the paper mentions the proposed method is plug-and-play, if a subsection is added to show the flexibility of the method across different models and potential limitations/considerations when applying it, that would be great.\n2. The quantitative comparisons can be difficult for this task but is it possible to use CLIP or other models to evaluate the consistency of the reference and generated images quantitatively?\n3. A more comprehensive discussion on its limitations, failure cases, or social impact would be great.\n\n1. How many generated images are used for human evaluation, and what do the values for the vertical axis (such as over 300 for consistency score) mean?\n\n2. It seems that the major difference between the proposed method and the previous method is whether to use a rank-1 matrix or a scalar for the interpolation of attention. Could the authors elaborate more on why the previous method requires training while the proposed method does not? How is the rank-1 matrix obtained in the previous method?\n\n3. How to align the parameters for the methods using the rank-1 matrix and the scalar so that the control of the reference is comparable across methods?" }, { "confidence": 4, "rating": 5, "review_id": "oUL7ujMW1U", "review_text": "This paper focuses on the challenge of ensuring consistency in the generation of images and videos. Deep learning and artificial intelligence techniques are utilized in image and video generation for generating new images or generate video frames based on given inputs, such as text prompts or reference images. Ensuring consistency is necessary for maintaining coherence between the subject or object in the generated images or video frames.\n\nThe previous methods mainly consisted of the IP (image prompt)-Adapter and concatenated attention approaches. The IP-Adapter technique retrieves characteristics from reference images and incorporates them into the generation process to improve coherence. More precisely, the method employs an image encoder that has been trained independently to extract features from the reference image. These features are then incorporated into the generation process using attention mechanisms, which involve both self-attention and cross-attention. \nNevertheless, this method necessitated distinct training of the encoder and utilized unchanging weights, leading to restricted adaptability. Furthermore, it is difficult to maintain the coherence between the textual content and the accompanying visual references.\n\nThis paper introduces a novel approach called RefDrop to address these challenges. The major contributions are as follows:\n1.\tA review of methods for generating consistency. The paper demonstrates that current concatenated attention methods can be understood as linear combinations of self-attention and cross-attention. This discovery indicates these techniques merge features from reference images using linear interpolation.\n2.\tReference Feature Guidance (RFG): The RFG method is presented as an approach for directly managing the impact of reference images by linearly combining self-attention and cross-attention. 
Users can enhance adaptability by adjusting the influence of reference images using a scalar coefficient 𝑐.\n3.\tNo prior training necessary: RefDrop provides an adaptable method to efficiently utilize features from reference images without requiring separate training of the encoder.\n\nRefDrop is a novel method that utilizes features from reference images to effectively control consistency in the generation of images and videos. This method controls the impact of reference images by using linear interpolation of self-attention and cross-attention. More precisely, features are derived from both the input and reference images, and subsequently, self-attention and cross-attention are executed independently. The outcomes of these considerations are subsequently combined through linear interpolation to generate the ultimate features, employing a scalar coefficient c to equitably distribute the influences from each attention mechanism. This enables adaptable manipulation of the impact of reference images. RefDrop efficiently employs reference features without requiring additional training, making it suitable for a wide range of generation tasks. Furthermore, it guarantees temporal consistency in video generation by utilizing the initial frame as a reference image, thereby preserving coherence throughout the following frames.\n\n- Integration of RefDrop into Diffusion Models Without Additional Training\n- Controlling Reference Image Impact in RefDrop for Efficient Image Generation\n- High-Quality Image Generation with RefDrop Using Single and Multiple References\n- Temporal Consistency in Video Generation with RefDrop\n\n- The lack of feature-based comparisons using metrics like Kullback-Leibler (KL) divergence in the experiments\n\nThis approach may introduce undesired background elements, complicating the management of specific image components, isn't it?" }, { "confidence": 4, "rating": 6, "review_id": "3n0uw8OkEt", "review_text": "This paper presents RefDrop, a method that allows users to control the influence of reference context in a direct and precise manner. More specifically, the proposed method is training-free, which means it can be used plug-and-play without the need to train a separate image encoder for feature injection from reference images.\n\n- The paper is well written and easy to follow.\n- This paper presents an intuitive way for consistent image/video generation by fusing the features of objects into the diffusion process. \n- Experiments show the effectiveness of proposed method on preserving object appearance.\n\n- One problem of the proposed approach is that it does not dis-entangle spatial control with appearance control. A stronger guidance scale not only means higher appearance similarity, but also higher spatial similarity between the generated image/video and the reference image. This could be an intrinsic drawback of the proposed method.\n\n- Can the authors provide some gifs for generated videos? I'm a little bit concerned that if we enforce high guidance scale, the generated video will probably be quite static without large movements.\n- I guess reference-ControlNet is also relevant? It would be nice to include it as a baseline." } ]
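A recurring point in the reviews above is that concatenating reference features in self-attention is equivalent to interpolating between self-attention and cross-attention to the reference, and that RefDrop replaces the implicit rank-1 coefficient with a single scalar c. The sketch below shows that interpolation for a single attention head; the tensor shapes, the value of c, and where the block sits inside the diffusion U-Net are simplifying assumptions.

```python
import torch

def attention(q, k, v):
    # Plain scaled dot-product attention over (batch, tokens, dim) tensors.
    scale = q.shape[-1] ** -0.5
    weights = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)
    return weights @ v

def refdrop_attention(q, k, v, k_ref, v_ref, c=0.3):
    """Interpolate self-attention with cross-attention to reference features.

    c is the scalar reference-guidance coefficient (0 recovers plain self-attention);
    the value 0.3 here is illustrative, not taken from the paper.
    """
    self_out = attention(q, k, v)           # attend to the image's own features
    cross_out = attention(q, k_ref, v_ref)  # attend to the reference features
    return (1.0 - c) * self_out + c * cross_out

# Usage with dummy single-head features.
b, n, d = 2, 64, 32
q, k, v = (torch.randn(b, n, d) for _ in range(3))
k_ref, v_ref = torch.randn(b, n, d), torch.randn(b, n, d)
out = refdrop_attention(q, k, v, k_ref, v_ref, c=0.3)
```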
09RKw0vXjR
Fast Iterative Hard Thresholding Methods with Pruning Gradient Computations
We accelerate the iterative hard thresholding (IHT) method, which finds \(k\) important elements from a parameter vector in a linear regression model. In the plain IHT, which repeatedly updates the parameter vector during optimization, computing gradients is the main bottleneck. Our method safely prunes unnecessary gradient computations to reduce the processing time. The main idea is to efficiently construct a candidate set, which contains \(k\) important elements in the parameter vector, for each iteration. Specifically, before computing the gradients, we prune unnecessary elements in the parameter vector for the candidate set by utilizing upper bounds on the absolute values of the parameters. Our method guarantees the same optimization results as the plain IHT because our pruning is safe. Experiments show that our method is up to 73 times faster than the plain IHT without degrading accuracy.
https://openreview.net/pdf/a97ce714f40bb23dc2626c2363a019d73add26da.pdf
[ { "confidence": 3, "rating": 7, "review_id": "ur5CdXjmLj", "review_text": "Iterative hard thresholding (IHT) is used to select the k most important features in an ordinary least squares (OLS) linear regression model, that is, the model parameter vector is constrained to have only k non-zero entries. It seems that most practical IHT methods to solve the constrained problem are based on gradient descent. The running time of iterative methods until convergence depends on (a) the computational cost of each iteration, and (b) the number of iterations. If we understand it correctly, previous work has focussed on reducing the number of iterations by using some form of regularization (improving the smoothness of the problem) or by exploiting information from previous iterations, such as, momentum. The paper under review seems first to propose a method for reducing the computation cost of the iterations by avoiding the computation of unnecessary entries in the gradient. This is achieved by maintaining upper and lower bounds for each entry of the full OLS parameter vector at each iteration. The bounds can be used to prune computations of unnecessary entries in the gradient. The method is guaranteed to give the same results as plain IHT.\n\nThe problem is well stated and relevant. The proposed idea to reduce the cost per iteration is, as far as I can tell, novel. Its realization is technically non-trivial, but well described. I have not checked the proofs in the appendix, but intuitively the results make sense. In general, the paper is well written. Actually, I enjoyed reading it.\n\nI do not see any major weakness. \n\nMaybe the presentation can be improved in some parts. Here are some suggestions:\n\nLine 51: m-dimensionAL vector\nLine 63: ... if it uses a heap ...\nLine 137: The ASYMPTOTIC cost ... [in some situations the actual cost should be higher] \nLines 239 and 241: Avoid starting a sentence with references.\nTable 1: The information content is really low.\n\n1. Would it be possible to combine your technique with techniques that aim to reduce the number of iterations? For instance, the techniques described in references 4 and 26, or 8 and 20?\n\n2. Why did you not include the methods from references 4, 26, 8, and 20 in your experiments? Are they superseded by references 2 and 19?" }, { "confidence": 4, "rating": 5, "review_id": "tNaNwbrg69", "review_text": "This paper studies iterative hard thresholding (IHT) as a canonical method for sparse linear regression. With precomputed X^TX and X^Ty, the computational cost of the algorithm is dominated by the gradient updates. To reduce the computational cost, this work proposed a pruning procedure at each step of IHT to only compute certain elements of the gradient vector.\n\nReducing the computational cost of IHT is an interesting and important problem.\nNumerical results indicate a significant reduction in running time by employing the pruning procedure proposed in the paper, without compromising the estimation accuracy.\n\nMajor:\n* It is unclear to me why the pruning strategy is defined the way it is. \n* Consider adding more literature review: it is not clear how big of a gap exists in the literature that this work is trying to bridge. Section 4 Related Work feels out of place. Could consider moving this to the beginning of the paper. \n* A general suggestion: consider adding more explanation on the rationale behind each technical definition/ result and why it is defined/ stated in that particular way. 
For instance, it may be helpful to mention that Lemma 1 is derived from the triangle inequality + Holders inequality.\n* I think some definitions are stated in the form of lemmas which they should not be, e.g. Lemmas 3 and 8. In general, I think some lemmas are unnecessary or can be combined.\n* Presentation is a bit long-winded at places, e.g. third paragraph\n\nMinor:\n* “73 times”: a more precise statement may be more useful; briefly describe the dimensions of the problem etc\n* line 60: it is more consistent to use “problem in (1)” instead of “Problem 1”\n* Notation: \n * I find it more natural to use boldface X as the design matrix\n * it is a bit confusing to denote the hard thresholding operator by Pk, hk is more common\n * \\mathbb{D}^t and \\mathbb{I} are unconventional notations for sets. Use \\mathcal{D}^t and \\mathcal{I} instead.\n\n* I find the terminology \"pruning\" a bit confusing. My understanding is that it refers to setting a particular entry $z_j^t$ to zero depending on whether or not $\\bar{z}_{j}^t $ is less than a certain threshold. Is this correct? If yes, then maybe \"thresholding\" is a better terminology?\n* By \"pruning is safe\", do you mean that the pruning step does not compromise the accuracy of IHT at all? Can you prove this more explicitly?\n* I think X^TX takes O(n^2m) computational cost instead of O(mn)?\n* Did you mean to use f(\\theta) instead of 1/2||y-X\\theta||_2^2 in (1)?" }, { "confidence": 3, "rating": 6, "review_id": "asVHERmcLM", "review_text": "The paper proposes to pruning the computation of marginal gradients in the IHT algorithm to accelerate the updating steps. For that, the upper bound $\\overline{z}_j^{t}$ of the component $z_j^t$ in the gradient step is proposed in Definition 1 and unnecessary elements in $\\mathbf{z}^t$ that must be thresholded to zero can be identified.\n\n- The proposed fast IHT is save in the sense that it can achieve the same output as original IHT.\n- Pruning the unnecessary marginal gradient computation significantly saves computation costs when the sparsity $k$ is small. The idea is simple yet effective. \n- Good empirical performance.\n\n- The proposed upper/lower bounds seem to be very restricted to the structure of the sparse linear regression task. And the proposed method does not work when general convex loss functions are considered.\n- For the sparse linear regression task, there are already efficient algorithms. For example, the cordinate descent algorithm uses in `glmnet`, which only needs to compute one dimension of marginal gradient in per iteration. The comparation beyond the IHT-type algorithm is ignored in the paper.\n\nThere are some minor questions.\n1. Line 42-45 is duplicated with the abstract.\n2. Definition 2 is less informative. It generally says nothing about the construction of the candidate set $\\mathbb{D}^t$. \n3. The performance seems highly sensitive to the selection of $t^\\ast$.\n4. The proposed method seems only suitable when the learning rate $\\eta$ is small and the parameters update gradually. When a larger learning rate is used or the momentum is introduced as in the related work (Section 4), the pruning may no longer take effect." }, { "confidence": 2, "rating": 5, "review_id": "8Vf0R5JPIr", "review_text": "The authors accelerate the iterative hard thresholding (IHT) method, whose purpose is to find the k most important elements from a linear regression model. Specifically, they safely prune unnecessary elements with upper bounds on the element values. 
The experiment shows significant speedup for the proposed method.\n\nThe proposed method comes with theoretical guarantees and exhibits significant empirical speedup.\n\nThe importance of the work does not seem to be clearly conveyed. The evaluation is also limited. While the method provides a significant speedup, at the scale of the problems considered the wall time of the baselines still seems to be within a tolerable range.\n\nIs it possible to include more experiments from the other use cases mentioned in the paper, such as sparse coding, dictionary learning, or compressed sensing, preferably of a larger scale?" } ]
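The record above concerns accelerating iterative hard thresholding (IHT) by safely pruning gradient coordinates with upper bounds. For reference, the following sketch implements only the plain IHT baseline for the sparse linear-regression objective in the abstract; the step size and iteration count are assumptions, and the paper's pruning bounds are indicated by a comment rather than reproduced.

```python
import numpy as np

def plain_iht(X, y, k, eta=None, n_iters=200):
    """Plain IHT for min 0.5 * ||y - X @ theta||^2 subject to ||theta||_0 <= k."""
    _, m = X.shape
    G, b = X.T @ X, X.T @ y                       # precomputed, as in the paper's setting
    if eta is None:
        eta = 1.0 / np.linalg.norm(G, 2)          # conservative step size (assumption)
    theta = np.zeros(m)
    for _ in range(n_iters):
        grad = G @ theta - b                      # the paper prunes entries of this vector
        z = theta - eta * grad
        support = np.argpartition(np.abs(z), -k)[-k:]   # keep the k largest magnitudes
        theta = np.zeros(m)
        theta[support] = z[support]
    return theta

# Usage on a small synthetic instance.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
true_theta = np.zeros(50)
true_theta[:5] = rng.standard_normal(5)
y = X @ true_theta + 0.01 * rng.standard_normal(100)
theta_hat = plain_iht(X, y, k=5)
```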
08oUnmtj8Q
FSEO: A Few-Shot Evolutionary Optimization Framework for Expensive Multi-Objective Optimization and Constrained Optimization
Meta-learning has been demonstrated to improve the sampling efficiency of Bayesian optimization (BO) and surrogate-assisted evolutionary algorithms (SAEAs) when solving expensive optimization problems (EOPs). However, existing studies focus only on single-objective optimization, leaving other expensive optimization scenarios unconsidered. We propose a generalized few-shot evolutionary optimization (FSEO) framework and focus on its performance in two common expensive optimization scenarios: multi-objective EOPs (EMOPs) and constrained EOPs (ECOPs). We develop a novel meta-learning modeling approach to train surrogates for our FSEO framework, and an accuracy-based update strategy is designed to adapt the surrogates during the optimization process. The surrogates in the FSEO framework combine neural networks with Gaussian Processes (GPs): the network parameters and some GP parameters represent useful experience and are meta-learned across related optimization tasks, while the remaining GP parameters are task-specific and capture unique features of the target task. We demonstrate that our FSEO framework is able to improve sampling efficiency on both EMOPs and ECOPs. Empirical conclusions are made to guide the application of our FSEO framework.
https://openreview.net/pdf/d44100470bdf5aaff81b6c009ccb38e5fe83efc1.pdf
[ { "confidence": 3, "rating": 4, "review_id": "FbhPu1kK4g", "review_text": "This paper introduce a meta-learning framework into few-shot optimization to assist the surrogate modelling in expensive evaluation setting. The authors parameterize a mapping function to get the hidden feature of the solution space and then integrate such mapping into a gaussian kernel function as a deep kernel. They then facilitate meta-training of the proposed deep kernel over a group of related tasks to attain an experience model, by maximizing the posterior likelihood. During the online optimziation of the target task, the experience model is firstly adpated to the new task in the same way above and then updated acoording to its accuracy in terms of the predicted objective value. The experimental results show that the proposed framework achieves competitive performance against some strong baselines over EMOPs and ECOPs benchmarks.\n\n1. The idea of integrating meta-learning into the kernel-learning based surrotgate methods is novel, and might improves the surrogate-based optimziation towards generalizable setting.\n\n2. The expriments result show that the proposed FSEO framework is at least competitive with the existing baselines, which is acceptable and should be encouraged for further development.\n\n3. The overall writing is clear.\n\nBefore the next round of author-reviewer rebuttal, following concerns exist:\n1. Given that the likelihhod-based loss function (Eq. 4) should be maximized to fit the samples from all of the related tasks, why its update should follow a gradient descent rather a gradient ascent? Correct me if I was wrong.\n\n2. line 144 ~ 146, the authors state that the U update interations roots from the smaller number of available related tasks. I can not understand the reason behind, can you explain it more?\n\n3. The neural network $\\phi$ used in the deep kernel function is a 2-layer MLP, which limits the FSEO to meta-learn surrogate function among the related tasks with the same slution dimension. However, in practice, related tasks might not share the same dimension. I would appreciate the authors to provide realistic scenarios where FSEO is eefective. Besides, the effectiveness of the FSEO on traditional single-objective tasks should also be verified to make it more convincing that FSEO is a general framework.\n\n4. Although the overall writing of this paper is not bad, it is still difficult for less-skilled readers to understand the whole picture. In particular, the content in Section 3.2 and Section 3.3 should be carefully refined to make sure the potential readers fully understand how the DKL and MDKL operate. For now, it is too simple and ambiguous.\n\nsee Weaknesses." }, { "confidence": 4, "rating": 4, "review_id": "EhYydZVArW", "review_text": "This paper proposes Meta Deep Kernel Learning (MDKL), a new surrogate for SAEAs. MDKL consists of a deep kernel with meta-learning. Empirical studies demonstrate its effectiveness in expensive multi-objective optimization and constrained optimization.\n\n1. This paper is well-written and easy to follow. The technical details are well presented.\n2. This paper extends deep kernel and meta-learning-based surrogates into evolutionary algorithms.\n3. This paper investigated multi-objective optimization and constrained optimization.\n\nMeta-learned deep kernel surrogates have already been well-studied in Bayesian Optimization [1]. The authors are also aware of this as they mentioned in Related Work. 
I think this paper does not present significant new advancements based on the previous work.\n\nFirst, the authors claim that MDKL is specially designed for optimization, while the previous work is not. In this regard, I do not see many differences between MDKL and previous meta-learned deep kernels. The authors claim that the advantage of MDKL lies in continuous adaptation; however, most models support parameter updates or fine-tuning. The authors also did not sufficiently explain the relationship between continuous adaptation and optimization problems, or what significance it has for optimization problems.\n\nSecond, the authors propose that one of the novelties of this paper is taking expensive multi-objective optimization problems (EMOPs) and expensive constrained optimization problems (ECOPs) into account. MDKL, as a surrogate, can be integrated into almost any expensive optimization algorithm. It seems to be able to cooperate with Bayesian optimization as well. The authors simply replaced the surrogate in a multi-objective optimization algorithm with MDKL and conducted some experiments, without providing any new analysis, insights, or proposing any new methods specifically for MOPs or COPs. Therefore, I believe this paper does not make a significant contribution to solving EMOPs and ECOPs.\n\n[1] Martin Wistuba and Josif Grabocka. Few-shot Bayesian optimization with deep kernel surrogates. ICLR 2021.\n\n1. Regarding MOPs, this paper only includes results on synthetic problems. I recommend using some real-world instances, such as NAS and Hyperparameter tuning.\n2. Table 14 and Fig. 9. The HV values are all 0, indicating that the reference point is set too low.\n3. P2, Line 64. MOPs and COPs can also be global optimization problems.\n4. P5, Algorithm 3, Line 3. How are the increments initialized?" }, { "confidence": 4, "rating": 6, "review_id": "PpTxIJC4uY", "review_text": "The authors developed a few-shot evolutionary optimization framework to effectively solve the multi-objective EOPs and constrained EOPs.\n\nThe proposed method can solve the multi-objective EOPs and constrained EOPs with little data, especially for the engineering problems.\n\nThe learning results may rely on the relation degree of different tasks.\n\n(1) How to define the relation degree of the related task T1-Tn. If the tasks are highly related, it is relatively easy to get a good result.\n(2) Though the performance can be improved compared to some basic algorithms such as MOEA/D-EGO and constrained-EGO, the computational cost of the proposed method is also increased. Give more details about the overhead of the algorithm.\n(3) The proposed method uses the same constraint handling method as con_FS. Why can the proposed method find more feasible solutions in the right figure of Figure 4.\n(4) The complexity of the proposed algorithm should be analyzed." }, { "confidence": 3, "rating": 4, "review_id": "x0rMuKUSBh", "review_text": "This paper proposes a new surrogate-assistant evolutionary algorithm that utilizes a Gaussian process with Deep Kernel Learning as the surrogate model. The method employs few-shot meta-learning to learn from multiple tasks to construct the surrogate. It is then integrated with the existing MOEA/D-EGO algorithm to create a new approach. Experiments are conducted on the DTLZ benchmark problems and a gasoline motor engine calibration problem to evaluate the performance of the proposed algorithm.\n\n1. 
The proposed method is instantiated and tested in expensive multi-objective optimization and constrained optimization scenarios.\n2. A real-world problem is considered in the experiments.\n\n1. Many important algorithmic details are unclear. For instance, the main distinction between the proposed MDKL and the existing DKL algorithms is its ability to learn from a set of related tasks, yet its implementation is not clearly explained. How parameters from different source tasks collectively form the experience, and how parameters from both source and target tasks jointly create this experience, are not clearly addressed.\n2. The effectiveness of the algorithm is primarily tested on expensive multi-objective optimization problems, but state-of-the-art algorithms in this field were not selected for comparison.\n\n1. Two variables, theta and p, are used to describe task-independent parameters. What are they, and what is the difference between them?\n2. According to Algorithm 1 and Algorithm 3, task-independent parameters are only used to determine whether the condition in line 2 is met. They have no other function. How do the source tasks improve the algorithm's performance?\n3. How is the initialization of increments done in Algorithm 2 and Algorithm 3?" } ]
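The surrogate debated in the reviews above combines a small neural feature map with a Gaussian process kernel (deep kernel learning), with the network and some kernel parameters meta-learned across related tasks. The sketch below shows the deep-kernel component and a single-task GP marginal likelihood only; the layer sizes, the RBF form, and the omission of the meta-learning and adaptation loops are simplifying assumptions.

```python
import torch
import torch.nn as nn

class DeepRBFKernel(nn.Module):
    """RBF kernel on learned features: k(x, x') = s^2 * exp(-||phi(x) - phi(x')||^2 / (2 l^2))."""

    def __init__(self, in_dim, feat_dim=16):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim))      # small MLP feature map (sizes assumed)
        self.log_lengthscale = nn.Parameter(torch.zeros(()))   # GP hyperparameters
        self.log_outputscale = nn.Parameter(torch.zeros(()))

    def forward(self, x1, x2):
        f1, f2 = self.phi(x1), self.phi(x2)
        sqdist = torch.cdist(f1, f2).pow(2)
        l2 = torch.exp(2.0 * self.log_lengthscale)
        return torch.exp(2.0 * self.log_outputscale) * torch.exp(-0.5 * sqdist / l2)

def gp_nll(kernel, x, y, noise=1e-2):
    """Negative log marginal likelihood of one task (a meta-learning loop would sum this over tasks)."""
    K = kernel(x, x) + noise * torch.eye(len(x))
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y.unsqueeze(-1), L)
    return 0.5 * (y.unsqueeze(-1) * alpha).sum() + torch.log(torch.diagonal(L)).sum()

# Usage: evaluate the NLL on a toy task; minimizing it fits the network and GP hyperparameters.
x = torch.randn(20, 5)
y = torch.sin(x.sum(dim=1))
kernel = DeepRBFKernel(in_dim=5)
loss = gp_nll(kernel, x, y)
loss.backward()
```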
08GbdALmEs
Learning Versatile Skills with Curriculum Masking
Masked prediction has emerged as a promising pretraining paradigm in offline reinforcement learning (RL) due to its versatile masking schemes, enabling flexible inference across various downstream tasks with a unified model. Despite the versatility of masked prediction, it remains unclear how to balance the learning of skills at different levels of complexity. To address this, we propose CurrMask, a curriculum masking pretraining paradigm for sequential decision making. Motivated by how humans learn by organizing knowledge in a curriculum, CurrMask adjusts its masking scheme during pretraining for learning versatile skills. Through extensive experiments, we show that CurrMask exhibits superior zero-shot performance on skill prompting tasks, goal-conditioned planning tasks, and competitive finetuning performance on offline RL tasks. Additionally, our analysis of training dynamics reveals that CurrMask gradually acquires skills of varying complexity by dynamically adjusting its masking scheme.
https://openreview.net/pdf/513f5fca94e83204751e3471b756bc56d9b0cb07.pdf
[ { "confidence": 3, "rating": 6, "review_id": "QbySdZcUZ8", "review_text": "This paper presents CurrMask, a novel masked prediction approach for unsupervised RL pretraining, which learns skills of different complexity through block-wise masking and adaptively adjusts masking schemes in a curriculum for training efficiency. In contrast to previous methods that perform random masking at the token level, CurrMask applies a pool of masking schemes with different block sizes to capture temporal dependencies in various scales. In addition, CurrMask trains a bandit model to schedule the masking schemes to facilitate pre-training using the target loss decrease as the reward. The method is extensively evaluated in three downstream task settings across different environments from the DeepMind control suite. The experimental results demonstrate the strong empirical performance of CurrMask for both zero-shot inference and finetuning, which generally outperforms the compared baselines.\n\n- The proposed framework creatively introduces curriculum skill learning to masked prediction for RL pretraining, which facilitates long-term reasoning and training efficiency.\n- The proposed method generally brings performance improvements across different downstream tasks.\n- Comprehensive analysis is performed to understand the effectiveness of each component in the proposed method.\n- The paper is well-written and well-structured.\n\n- The masked prediction pretraining is interleaved with a bandit training process with non-stationary reward distribution, which could increase the instability of the training process\n- A skill usually refers to a meaningful abstraction beyond simply consecutive states and actions (e.g. a state-action sequence completing a subtask/subgoal). Therefore, not all state-action sequences are useful in downstream tasks. Masking blocks randomly may let the model spend lots of capacity on memorizing arbitrary state-action segments, instead of capturing reusable behaviors that can be efficiently transferred to downstream applications.\n\n- It puzzles me whether block-wise masking is needed necessarily. If mask ratios are randomly chosen from a range, high mask ratios would naturally generate masked blocks of varying sizes, and low mask ratios would also likely ask the model to perform token-level reconstruction, equivalently implementing the mixture of masking schemes, which seems to make the block-wise masking redundant\n- It remains unclear to me how Figure 4(a) serves as evidence of long-term prediction capability. And how would the attention map change if the prediction horizon gets longer? \n- Given that the reward is computed every $I$ training step(s), how stable the reward calculation and the overall training dynamics is with different $I$s?" }, { "confidence": 3, "rating": 5, "review_id": "yANakquX4V", "review_text": "The paper presents a method that learns skills through curriculum masking. Specifically, the approach CurrMask can automatically arranges the order of different masking schemes for training. The algorithm is tested on Deepmind Control Suite tasks, and show positive results in representation learning, zero-shot skill prompting, and zero-shot goal-conditioned planning.\n\n1. The idea of designing different masking curriculum to learn different skills is generally interesting and makes sense.\n\n2. The experiments, although in limited domains do showcase that the method works well for the most part.\n\n3. Overall writing is great. 
Hyperparameters used in the experiments are provided in the appendix for reproducibility.\n\n1. My main concern is that there is no comparison to other existing skill learning methods with offline data, e.g. the two papers (Ajay et al., 2021; Jiang et al., 2022) mentioned in the paper's related work section. As a skill learning/discovery paper, at least one of the existing approaches with the similar setting should be empirically compared with.\n\n2. There is not enough explanation for the proposed \"task selection\" method, which I believe is the central part of the proposed approach. Specifically, in section 4.3, what do \\omega_i, \\omega'_i, K denote? Does \\pi represent the policy? What is the intuition behind the two equations in the task selection subsection? Without explaining these, it is hard for me to understand how the proposed approach learn the masking curriculum.\n\n3. There is no visualization of the proposed masking curriculums. As this is the central contribution of the paper, it would be very interesting to see what the actual masking curriculum is for those continuous control tasks and how is affect the numerical results.\n\nFigure 1 (a), the inputs are all masked?" }, { "confidence": 3, "rating": 5, "review_id": "7xOrrBcLQl", "review_text": "This work proposes a curriculum learning approach to reinforcement learning skills from masked trajectory sequences. Given a set of pre-collected environment samples (here from a TD3 agent in 9 different mujoco domains), the proposed MaskCurr curriculum treats the agent's progress (target-loss decrease) like a two-armed bandit problem, where one of the bandits is the amount of information that is covered up (mask-ratio) and the other bandit is the length of the covered gaps that the agent has to reconstruct (block-size). Evaluation is tested on two tasks, skill-prompting, which requires the trained agent to complete a starting sequence, and goal-planning, where masked trajectories with interspersed checkpoints (goals) have to be filled in such that the intermediate goals are reached. CurrMask is compared to a set of various masking techniques (i.e., variations of random masking) as well as GPT variants, and is shown to perform competitively in both of the above mentioned tasks.\n\n- Well written paper that is easy to follow, with clean formalization and good, readable balance between text and material (plots, diagrams, tables, algorithms). \n- Generally decent evaluation, 9 environments / tasks, each evaluated with 20 runs. Fine-tuning potential included as well, although not quite fairly assessed (see weaknesses).\n- The choice of random-masking baseline variants does cover a good range of the ablation information of CurrMask, and the results analysis (Tab.1, Fig.3&4) provides insightful understanding of the experiments.\n\n- Not clear how \"versatile\" or \"diverse\" skills are classified here. I understand these properties w.r.t. to how different single tasks behaviors are emerging, rather than \"learn multiple tasks\", which I believe is meant here.\n- The fine-tuning experiments (Fig.2) seem to be plotted unfairly against the \"from scatch\" baseline, since pretrainined models do not \"start\" at step 0 (but at -#pretraining-steps) on the x-axis. For the training itself, 25k steps for mujoco domains is rather brief and all training curves seem to be stopped mid-training. 
As such the actual suitability for fine-tuning is rather questionable, or at least not shown with significance.\n- The attention-map result of Fig.4 a) could be better interpreted in the main paper, there is a visual difference but its not quite clear to me how these maps correlate to the claimed useful long-term dependencies skills. The appendix provides some more insight, but all relevant information and explanations of main paper plots should be in the main paper as well. \n- Experiments could use some different domain for comparison (apart from the mujoco domains).\n- While the improvement of CurrMask compared to the baselines is shown, the significance of the results is lowered by the fact on how good the random masking techniques still perform.\n\n\n---\nMinor issues:\n- Repeated mentions to \"following previous/prior work Liu et al.2022\" which give the impression of self-reference. If this is not a self-citation please clarify the wording for the double blind reviewing standard.\n- l26 missing word \"conditioned on the remaining (?), ...\"\n- The title could be more specific, claiming generally versatile skills for (basically only) the mujoco domain feels a bit far-fetched. The scope of this work is not broad enough to warrant such a sweeping claim.\n\n- The \"from scratch\" baseline is TD3?\n- Although CurrMask mostly outperforms the random baselines, did you perhaps try factor in the overhead of the CurrMask learning into the evaluation? I.e., if the random variants would increase their training time by the observed 4.7% wall-clock time overhead, how would your estimation be on the performance comparison given in Table 1?\n\n---\nEdit after rebuttal: Questions have been addressed, I will keep the overall positive score." }, { "confidence": 4, "rating": 5, "review_id": "JdMzN6WxcC", "review_text": "This paper proposes a curriculum masking pretraining paradigm for RL training, which is based on the block-wise masking schemes and is able to decide the block size and mask ratio automatically. Specifically, the authors design a masking pool with different masking scheme of different block size and mask ratio. Given the target loss and the corresponding reward from the environment, this method formulates the masking selection as a multi-armed bandit problem and sample the masking scheme from the masking pool according to the updated policy. The experiments on control tasks demonstrate the effectiveness of this method.\n\n1.\tThe paper is well written and easy to follow.\n2.\tThis work uncovers that the optimal combination of block size and mask ratio requires adaptive selection.\n3.\tThe analytical experiments demonstrate the ability of this method to capture long-term dependencies.\n\n1.\tThe authors have not conducted experiments to compare token-wise and block-wise masking before they choose the block-wise masking. Despite the theoretical analysis in the Introduction section, it would be better to prove this choice through experimentation.\n2.\tThere is no time complexity analysis of this method. If possible, the authors could present how much time each baseline and this method consume respectively.\n3.\tIn figure 6, the authors report the mean block size of the whole training process, so I am curious about the mean mask ratio of the whole training process. Furthermore, is it possible to visualize the impact of mask ratio just like Figure 3(a), so that we can know whether masked prediction benefits from larger mask ratio or smaller ratio.\n\nSee weaknesses." } ]
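CurrMask, as described above, builds block-wise masks over trajectory tokens and uses a bandit over (block size, mask ratio) schemes, rewarded by the decrease of a target loss. The sketch below shows a block-wise mask constructor and a generic exponential-weights scheme selector; the scheme pool, learning rate, and reward definition are assumptions and do not reproduce the paper's exact bandit update.

```python
import numpy as np

def block_mask(seq_len, block_size, mask_ratio, rng):
    """Boolean mask (True = masked) built from randomly placed contiguous blocks."""
    mask = np.zeros(seq_len, dtype=bool)
    n_blocks = int(np.ceil(mask_ratio * seq_len / block_size))
    starts = rng.choice(max(seq_len - block_size + 1, 1), size=n_blocks, replace=True)
    for s in starts:
        mask[s:s + block_size] = True   # blocks may overlap, so the realized ratio is approximate
    return mask

# A pool of (block_size, mask_ratio) schemes and a generic exponential-weights selector.
schemes = [(1, 0.3), (4, 0.5), (8, 0.7)]
weights = np.ones(len(schemes))

def sample_scheme(rng):
    probs = weights / weights.sum()
    return rng.choice(len(schemes), p=probs)

def update_weights(idx, loss_decrease, lr=0.1):
    weights[idx] *= np.exp(lr * loss_decrease)   # larger target-loss decrease -> scheme sampled more often

rng = np.random.default_rng(0)
idx = sample_scheme(rng)
mask = block_mask(seq_len=64, block_size=schemes[idx][0], mask_ratio=schemes[idx][1], rng=rng)
update_weights(idx, loss_decrease=0.05)          # reward would come from the masked-prediction loss
```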
08A6X7FSTs
Director3D: Real-world Camera Trajectory and 3D Scene Generation from Text
Recent advancements in 3D generation have leveraged synthetic datasets with ground truth 3D assets and predefined camera trajectories. However, the potential of adopting real-world datasets, which can produce significantly more realistic 3D scenes, remains largely unexplored. In this work, we delve into the key challenge of the complex and scene-specific camera trajectories found in real-world captures. We introduce Director3D, a robust open-world text-to-3D generation framework, designed to generate both real-world 3D scenes and adaptive camera trajectories. To achieve this, (1) we first utilize a Trajectory Diffusion Transformer, acting as the \emph{Cinematographer}, to model the distribution of camera trajectories based on textual descriptions. Next, a Gaussian-driven Multi-view Latent Diffusion Model serves as the \emph{Decorator}, modeling the image sequence distribution given the camera trajectories and texts. This model, fine-tuned from a 2D diffusion model, directly generates pixel-aligned 3D Gaussians as an immediate 3D scene representation for consistent denoising. Lastly, the 3D Gaussians are further refined by a novel SDS++ loss as the \emph{Detailer}, which incorporates the prior of the 2D diffusion model. Extensive experiments demonstrate that Director3D outperforms existing methods, offering superior performance in real-world 3D generation.
https://openreview.net/pdf/7d0e61c2f52b953842ed9561fdffaa8dd330d197.pdf
[ { "confidence": 5, "rating": 5, "review_id": "BWtoSgYelu", "review_text": "This paper presents Director3D, a text-to-3D generation framework that creates realistic 3D scenes with adaptive camera trajectories. It includes a Cinematographer (Traj-DiT) for generating camera trajectories, a Decorator (GM-LDM) for initial scene generation, and a Detailer (SDS++ loss) for refinement. Using 3D Gaussians for scene representation, extensive experiments demonstrate its effectiveness.\n\nThe results are good with high spatial consistency.\n\nThe task is interesting. It is a good idea to generate scene-level 3D GS directly.\n\nThe writing is clear.\n\n1. The quantitative comparison with object-level methods seems unfair. The given prompts include sence information, which efftets clip score. In quantitative comparisons, the authors should at least compare with scene-level methods like LucidDreamer or others.\n\n2. Why didn't the authors compare their method with camera control video generation methods?\n\n1. Is the number of Gaussians equal to the number of pixels in each view? Why set it that way? It seems that 256*256 Gaussians might not be sufficient for a scene-level representation. What is the total number of Gaussians used?\n\n2. Can this pipeline achieve user-specified camera trajectories?\n\n3. Is it possible to generate a complete 3D scene and then reconstruct it? If the camera trajectory encompasses the entire scene, all 3D information is captured. How does the novel view inference ability perform once the three stages are completed? If only can generate denoised views, the contributions will be weakened." }, { "confidence": 5, "rating": 5, "review_id": "cH0FWhbnEx", "review_text": "The paper proposes a scene-generation method from text input. The framework utilizes three models that first generate a trajectory, then produce 3D Gaussians, and finally refine through SDS loss.\n\n- The design of using a trajectory generator and 3DGS diffusion is novel and impressive. \n- The proposed method outperforms existing object-level methods as shown in experiments.\n\n- The paper's evaluation is limited. The methods compared are object-level generation frameworks. The experiments can be improved by comparing them against scene-level generation methods, e.g. LucidDreamer[16] and ZeroNVS[1].\n- I am confused about how SDS++ loss is presented. The paragraph following Eq. (9) contains numerous undefined variables, making it hard to understand how exactly SDS++ loss is formulated. Can I say it is the loss presented in [67] but integrated with learnable text embedding? What's the difference against the loss proposed in VSD[26] and how does it compare?\n- The proposed trajectory generator is novel. How important is generating the trajectory? Can we just assign some trajectories by retrieving the dataset? It seems that most trajectories in MVImgNet and DL3DV are very similar. How diverse are the generations? I see the results in Fig. 12. But aren't the differences coming from different sampling results of a text-based generator? Since the cameras are already normalized, the orientation of the first frame should not be restricted by the camera trajectory. It seems that the model is overfitting to the orientations in MVImgNet (flying around something on the table). Are the \"randomly generated camera trajectories\" coming from other objects? Is it because GM-LDM only works with one trajectory per scene? 
Does the model support working with multiple trajectories for the same scene?\n- Is the proposed GM-LDM only training the 3DGS scene on observed views? How do the authors avoid overfitting in the rendering-based denoising? Is the SDS++ loss only employed on interpolated camera trajectory?\n- Is the presented video only showing the views presented in the trajectory or they are novel view synthesis results? How do we know the generated 3D scene is not overfitting to these views? Sparse-view 3D reconstruction suffers from over-fitting issues a lot and I'd like to hear the authors' thoughts on this matter.\n\n[1] Sargent K, Li Z, Shah T, et al. Zeronvs: Zero-shot 360-degree view synthesis from a single real image. CVPR 2024, arxiv 2023.\n\nPlease refer to the weakness." }, { "confidence": 4, "rating": 7, "review_id": "PgDKzzw8M1", "review_text": "This paper presents Director3D, a novel text-to-3D generation framework designed to generate both real-world 3D scenes and adaptive camera trajectories. Specifically, the authors propose the Traj-DiT to generate adaptive camera trajectories, which treats camera parameters as temporal tokens and performs conditional denoising using a transformer model. The authors propose the GM-LDM and SDS++ Loss to generate robust 3D scenes by leveraging the 2D diffusion prior. Extensive experiments demonstrate the effectiveness of Director3D.\n\n1. This paper is written clearly and easy to read. \n2. The idea of treating camera parameters as temporal tokens for denoising generation is novel and effective. \n3. The GM-LDM and SDS++ loss functions achieve a fairly high level of realistic scene synthesis. The generated scenarios conform to the textual input as well as being reasonably realistic and consistent. \n4. The proposed Director3D achieves impressive results on both quantitative and qualitative results.\n\n1.\tIt would be more convincing to include more text-to-3D scene generation methods in the Quantitative Comparison in the Qualitative Comparison.\n2.\tFor ablation experiments of SDS++ Loss, please use more detailed evaluation metrics and experimental results to illustrate the effects.\n3.\tFurther implementation of conditionally controllable camera view generation would be helpful for the application of this technology. \n4. Some important work in this area such as 3D-SceneDreamer should be discussed if they are not suitable for the experimental comparison.\n5. The visual quality of the results seems acceptable compared to existing methods like 3D-scene-dreamer, the limitation of the method, including not allowing a large range of the camera movement, should be discussed.\n\nNo additional questions." }, { "confidence": 4, "rating": 5, "review_id": "FvMdKDE3x9", "review_text": "This paper proposes a framework for simultaneous text-to-3D scene and camera trajectory generation. The authors propose a 3-stage pipeline to (1) generate a dense camera trajectory from input text, (2) use multi-view latent diffusion from a sparse subset of the generated trajectory to generate the 3D scene representation (Gaussian splats), and (3) refine the Gaussian splats with a modified SDS loss.\n\n- The paper tackles a practical problem of generating a 3D scene representation while also synthesizing a camera trajectory from text. 
It has implications of potential further applications for video/movie synthesis using explicit 3D representations.\n+ The paper is well-written, the presentation is clear, and the method description is easy to follow and understand.\n\n- While this paper tackles a new problem, I have concerns with the problem statement. First, what is a \"real-world\" camera trajectory? Virtually any kind of camera motion could be created in the real world (be it hand-held shakiness or really smooth orbital trajectories that could be achieved via physical equipment). It seems that the camera motions that Traj-DiT could synthesize are mostly orbital (object-centric) -- why would this be considered as a \"real-world\" trajectory?\n- I don't quite get why 3D scene generation and camera trajectory generation should be a coupled problem. NeRFs / Gaussian splats with good quality are not necessarily created via \"camera trajectories\", but rather via a broad range of covered viewpoints.\n- Using a diffusion model to synthesize trajectories, the number of frames would be fixed. How does one determine such trajectory length? How can one vary the length?\n- It is unclear why Cinematographer generates a trajectory with dense frames while only a sparse subset is ever used subsequently.\n- To evaluate the quality of the synthesized camera trajectory, I believe the method should also be evaluated and compared against with a video generation quality metric (e.g. FVD). The 3D scene representation could be a well-trained NeRF / Gaussian splat, and rendered videos under different trajectories could be quantified and compared.\n\nPlease see the weakness sections." } ]
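Several reviewers above ask how Director3D's SDS++ refinement relates to standard score distillation sampling (SDS). For orientation, the sketch below implements a generic SDS step with dummy placeholders for the renderer and the frozen 2D diffusion model; the timestep range, weighting, and placeholder functions are assumptions, and none of the SDS++ modifications (such as the learnable text embedding mentioned by reviewers) are included.

```python
import torch

def sds_step(params, render, unet, text_emb, alphas_cumprod, optimizer):
    """One generic score-distillation step on a differentiable 3D representation."""
    img = render(params)                                   # differentiable render, shape (1, 3, H, W)
    t = torch.randint(20, 980, (1,))                       # random diffusion timestep (assumed range)
    a_bar = alphas_cumprod[t].view(1, 1, 1, 1)
    noise = torch.randn_like(img)
    noisy = a_bar.sqrt() * img + (1 - a_bar).sqrt() * noise
    with torch.no_grad():
        eps_pred = unet(noisy, t, text_emb)                # frozen 2D diffusion prior
    grad = (1 - a_bar) * (eps_pred - noise)                # common SDS weighting (assumption)
    loss = (grad.detach() * img).sum()                     # surrogate whose backward yields that gradient
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Minimal dummy usage so the step can be exercised end to end.
params = torch.randn(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([params], lr=1e-2)
alphas_cumprod = torch.linspace(0.999, 0.01, 1000)
sds_step(params,
         render=lambda p: torch.sigmoid(p),                # stand-in differentiable "renderer"
         unet=lambda x, t, e: torch.zeros_like(x),         # stand-in frozen denoiser
         text_emb=None,
         alphas_cumprod=alphas_cumprod,
         optimizer=opt)
```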
07N0qoaZ2L
Improved Analysis for Bandit Learning in Matching Markets
A rich line of work studies the bandit learning problem in two-sided matching markets, where one side of market participants (players) are uncertain about their preferences and hope to find a stable matching during iterative matchings with the other side (arms). The state-of-the-art analysis shows that the player-optimal stable regret is of order $O(K\log T/\Delta^2)$ where $K$ is the number of arms, $T$ is the horizon and $\Delta$ is the players' minimum preference gap. However, this result may be far from the lower bound $\Omega(\max\{N\log T/\Delta^2, K\log T/\Delta\})$ since the number $K$ of arms (workers, publisher slots) may be much larger than the number $N$ of players (employers in labor markets, advertisers in online advertising, respectively). In this paper, we propose a new algorithm and show that the regret can be upper bounded by $O(N^2\log T/\Delta^2 + K \log T/\Delta)$. This result removes the dependence on $K$ in the main order term and improves the state-of-the-art guarantee in common cases where $N$ is much smaller than $K$. Such an advantage is also verified in experiments. In addition, we provide a refined analysis for the existing centralized UCB algorithm and show that, under the $\alpha$-condition, it achieves an improved $O(N \log T/\Delta^2 + K \log T / \Delta)$ regret.
https://openreview.net/pdf/8e5c1c3200f4a84de5869e15cc5f115e7afb829b.pdf
[ { "confidence": 3, "rating": 5, "review_id": "5YwdneSFpt", "review_text": "This paper considers two-sided matching bandit problems, where the goal is to minimize regret against a player's optimal stable matching. The state-of-the-art algorithms achieve a regret bound of $KlogT/\\Delta^2$. The authors suggest an algorithm using an adaptive online Gale-Shapley to achieve a regret bound of $N^2\\log T/\\Delta^2$ under $N<K$. Under $\\alpha$-condition, they provide a new analysis for the previous algorithm to achieve a regret bound of $N\\log T/\\Delta^2$.\n\n1. The authors suggest an adaptive online Gale-Shapley algorithm.\n2. The suggested algorithms outperform the previous algorithms when $N$ is much smaller than $K$.\n3. They demonstrate this result using synthetic experiments.\n\n1. Without $\\alpha$-condition, the upper bound has $N^2$ instead of $N$, which may not be tight with respect to $N$.\n2. Even though they provide a new algorithm, it may be hard to see the technical novelty.\n\nWithout $\\alpha$-condition, is the lower bound linear with respect to $N$?" }, { "confidence": 3, "rating": 7, "review_id": "8Be90d8DQ2", "review_text": "The paper studies the bandit learning problem in two-sided matching markets and provide improved regret analysis that nearly match the lower bound (although with slightly different definition of instance-dependent gaps). The bound is particularly useful when the number of players is much smaller than the number of arms. Numerical experiments show significant improvement under some special cases. The paper also improve the regret analysis for an existing UCB algorithm under an additional $\\alpha$-condition.\n\n1. The paper is very well-written. The flow is clear and easy-to-follow.\n2. Strong technical novelty as well as solid theoretical results. The paper gives new insights on how to balance between exploration and exploitation in matching markets.\n3. Numerical experiments show significant improvement compared to existing policies.\n\nAlthough the dependence on $N$, $K$, $T$ is optimal, the main weakness lies in the difference of defining the instance-dependent gap $\\Delta$.\n\nI would suggest the authors add a paragraph in the introduction to explcitly point out the contributions in this paper. In particular, I would like to see the main algorithmic design insight be emphasized from this work. For example, why previous algorithms fail to obtain low regret (due to over exploration)?" }, { "confidence": 4, "rating": 5, "review_id": "6WtNzkUqjG", "review_text": "The paper proposed a new algorithm for bandit learning in matching markets and showed new results on regret bounds. However, the combination of bandits with matching markets is wired. Is there any evidence that practitioners would like to use bandits for learning in markets? The novelty of the theory is also unclear.\n\nThe paper proposed a new algorithm for bandit learning in matching markets and showed new results on regret bounds.\n\nThe combination of bandits with matching markets is wired. Is there any evidence that practitioners would like to use bandits for learning in markets?\n\n1. The key novelty in the paper is unclear. What are the new techniques developed in the paper for proving the upper bound? \n\n2. Do you have any empirical markets that could use the proposed algorithms?\n\n3. How to generalize the techniques to decentralized markets?" 
}, { "confidence": 2, "rating": 5, "review_id": "UzfLH0Mn1k", "review_text": "This work studies the problem of bandit learning in two-sided matching markets, where the number of players $N$ is smaller than the number of arms $K$. The players' preferences $\\mu\\_{i,j}$ are unknown but the arms' utility preferences $\\pi\\_{i,j}$ are known. Two algorithms with improved regret bounds are proposed. The first one is adaptive online Gale-Shapley (AOGS) for general markets, where multiple stable matchings exist. In AOGS, each player follows an adaptive explore-then-commit strategy to identify the best available arm based on the confidence bounds of each arm reward estimates and whether an arm is better matched to other players. The regret bound of AOGS is stated in terms of a new notion of gap. The second algorithm is centralized UCB combined with offline Gale-Shapely for markets with $\\alpha$-condition, where a unique stable matching exists.\n\n- Novelty: The following contributions of the work seem novel: the new definition of the gap, the exploration strategy in Algorithm 1, and the improved analysis of Algorithm 2.\n- Significance: All algorithms appear to be rigorously analyzed. No proofs are missing. The bound in Theorem 6.2 have optimal dependency on $K, N$ and $log(T)$ compared to a known lower bound.\n- Writing quality: The paper is generally well-written.\n\nThe contributions are rather incremental. Details are below:\n- The new bound in Theorem 4.1 depends on a new notion of gap. Given that the existing literature in Table 1 already have at least three different notions of gaps, it is not clear in what sense another type of gap is better than the existing ones. Does the new gap capture the difficulty of practical scenarios better? This was not sufficiently discussed in the paper. \n- A lower bound result for the new gap is missing. As the paper acknowledged, comparing with the lower bound in Sankararaman et al is not entirely meaningful because the lower bound there uses a different kind of gap.\n- In general, a comprehensive comparison with existing results is missing. For example, do any of the existing results in Table 1 imply anything about the new results in the current paper? Another thing is Algorithm 2 was proposed by Liu et al for $\\mathrm{gap}\\_2$, while the paper uses the same algorithm for $\\mathrm{gap}\\_3$ under $\\alpha$-condition, so it is not clear why the analysis in this paper is considered an improved version of the one in Liu et al (the goal is different). Maybe if the proposed analysis implies that Algorithm 2 obtain optimal bounds on multiple types of gaps *simultaneously*, then it would be a significant result.\n\nPlease see the questions in the Weakness section above. One more question below:\n- The assumption that $\\pi\\_{i,j}$ are known seem strong and has been exploited thoroughly in previous works. How are the results impacted if this assumption does not hold?" } ]
06Vt6f2js7
SyncTweedies: A General Generative Framework Based on Synchronized Diffusions
We introduce a general diffusion synchronization framework for generating diverse visual content, including ambiguous images, panorama images, 3D mesh textures, and 3D Gaussian splats textures, using a pretrained image diffusion model. We first present an analysis of various scenarios for synchronizing multiple diffusion processes through a canonical space. Based on the analysis, we introduce a synchronized diffusion method, SyncTweedies, which averages the outputs of Tweedie’s formula while conducting denoising in multiple instance spaces. Compared to previous work that achieves synchronization through finetuning, SyncTweedies is a zero-shot method that does not require any finetuning, preserving the rich prior of diffusion models trained on Internet-scale image datasets without overfitting to specific domains. We verify that SyncTweedies offers the broadest applicability to diverse applications and superior performance compared to the previous state-of-the-art for each application. Our project page is at https://synctweedies.github.io.
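As a rough illustration of the synchronization described in this abstract, the sketch below averages per-view Tweedie estimates through a canonical space inside a deterministic DDIM step. The denoiser `eps_model`, the projection `project`, and the aggregation `unproject` are placeholders (assumptions), and the exact aggregation weighting is not taken from the paper.

```python
import torch

# Hypothetical placeholders (assumptions, not the authors' code):
#   eps_model(x, t) -- pretrained diffusion model predicting noise in an instance space
#   project(z)      -- canonical variable z -> list of instance-space views
#   unproject(xs)   -- aggregate (e.g. average) instance-space tensors back into canonical space

def synctweedies_step(x_views, t, t_prev, alpha_bar, eps_model, project, unproject):
    """One synchronized deterministic DDIM step: denoise each view, average the Tweedie
    (x0) estimates through the canonical space, and resume denoising from that estimate."""
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    eps = [eps_model(x, t) for x in x_views]

    # Tweedie's formula per view: E[x0 | xt] = (xt - sqrt(1 - a_t) * eps) / sqrt(a_t)
    x0_hat = [(x - (1 - a_t).sqrt() * e) / a_t.sqrt() for x, e in zip(x_views, eps)]

    # Synchronization: aggregate in canonical space, then re-project to every view
    x0_sync = project(unproject(x0_hat))

    # DDIM update in each instance space, built on the synchronized x0 estimate
    return [a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * e for x0, e in zip(x0_sync, eps)]
```

In this reading, only the Tweedie (x0) outputs are synchronized while the noise predictions stay per-view, which matches the abstract's description of averaging the outputs of Tweedie's formula rather than the noisy latents themselves.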
https://openreview.net/pdf/359a870b4264337a01a5b8dd7b79979f90e67323.pdf
[ { "confidence": 4, "rating": 7, "review_id": "MQUxmla1rb", "review_text": "SyncTweedie attempts to elucidate the design space for synchronized diffusion. Synchronized diffusion means to optimize some representation (e.g., 3D mesh) by jointly diffusing its lower-dimensional projections (e.g., 2D image). The paper analyzes existing synchronized diffusion approaches on different tasks under the same umbrella on a diagnostic benchmark, and provides an empirical insight that one underexplored design is the best choice. The paper then verifies the insight in a range of tasks, including panorama image generation and 3D texturing.\n\n1. The paper is enjoyable to read. The analysis of the 5 categories of diffusion is very clear. The 3 types of tasks (1-1, 1-N, N-1) are also nicely defined. There have indeed been extensive studies on different tasks based on a similar idea, this paper has done a nice job of reviewing them and elucidating the design space.\n2. The toy experiment is well-designed. It is a simple task, and the authors slightly modify it so it comprehensively covers all challenges in all types of tasks. The toy experiment is also diagnostic enough to provide empirical insights, which makes Case 2 (SyncTweedies) stands out.\n3. The experiments cover a range of tasks including arbitrary-sized image generation, panorama image generation, mesh texturing, and 3DGS texturing. I find these tasks sufficient to support the generalizability of the proposed approach.\n\n1. The depth-to-360$^\\circ$ panorama result is blurry. In the meantime, the generated arbitrary-sized image seems to be fine. Is there any reason for that?\n2. It would be better if generation diversity could be discussed, which I think would be an advantage over SDS-based works. How would different noise initialization affect the generated results and how robust is the approach given different random seeds?\n3. Some implementation details are missing. \n - The paper seems to miss the implementation details of the aggregation function for different tasks. For example, how is the view aggregation for 3DGS performed? Is it a gradient-based optimization or an analytical weighted sum as in MultiDiffusion[4]? This will also help to understand the runtime of the method.\n - What are the number of projected views (i.e., $N$) for each task? How are the views sampled?\n\nMinor:\n- L95: tdeterministic -> deterministic\n- L594 refers to something in L628.\n\n- The paper mentions that the framework cannot jointly optimize geometry and texture for 3D generation, could the authors elaborate more on why there is such a limitation? Does it suggest that the aggregation function $A$ cannot be too under-constrained?" }, { "confidence": 3, "rating": 6, "review_id": "DfXH71snhJ", "review_text": "This paper introduces SyncTweedies, a novel framework to generate diverse visual content such as ambiguous images, panoramas, mesh textures, and Gaussian splat textures. The method uses a synchronization process that averages outputs of Tweedie's formula across multiple instance spaces, eliminating the need for fine-tuning. The authors claim their method preserves the rich priors from large-scale datasets, enhancing generalizability. Experimental results show that SyncTweedies outperforms existing methods in various applications.\n\n(1) From my point of view, the framework's versatility is impressive, as it addresses various visual content generation tasks. This general applicability is a significant advantage over more specialized methods. 
The use of Tweedie's formula for synchronizing diffusion processes across different instance spaces is genuinely novel. This methodological innovation could inspire new directions in the field of generative models. \n\n(2) SyncTweedies' ability to function without fine-tuning is a strong point. This feature ensures that the model retains its generalization capabilities, making it effective on diverse and previously unseen datasets. \n\n(3) Besides, the authors have conducted extensive experiments, comparing their method against several state-of-the-art techniques. These comparisons convincingly demonstrate the superior performance of SyncTweedies across multiple tasks.\n\n(1) The method assumes that the projection and unprojection functions are accurate, but in practice, these operations can introduce errors, especially in complex transformations such as those required for 3D mesh texturing. The author does not address how SyncTweedies handles these projection errors, which can accumulate and degrade the quality of the generated content. \n\n(2) The theoretical basis of SyncTweedies seems to implicitly assume that the data distributions in the instance and canonical spaces are well-aligned. However, this assumption may not hold in practice, especially when dealing with diverse and complex datasets. The paper does not explore the theoretical implications of misaligned data distributions and how they might affect the performance and stability of the synchronization process.\n\n(1) How does SyncTweedies handle scenarios where projection and unprojection functions between instance spaces and the canonical space are not perfectly invertible? What impact does this have on the quality of generated content? \n\n(2) I am also wondering if the synchronization process can be optimized further to reduce computational overhead without compromising the quality of the generated content." }, { "confidence": 4, "rating": 7, "review_id": "ZdCnVNUdvI", "review_text": "This paper investigates content generation within a target space using pretrained diffusion models operating in projected subspaces. The authors analyze five variants of the DDIM procedure, performed separately in each subspace and aggregated in the target space using known projection and unprojection operators. They explore different sequences of projection and aggregation, concluding that the optimal approach is to aggregate each subspace's \"estimated x0 from xt\" (the output of Tweedie's formula).\n\nThe method is evaluated through texturing tasks involving depth, mesh, and 3DGS, each representing distinct projection and unprojection scenarios. The ablation study and comparisons against baselines demonstrate the robustness of the proposed aggregation stage.\n\n+ The paper effectively breaks down the multi-DDIM process, clearly delineating distinct alternatives that differ in the sequencing of the projection and aggregation operations. \n+ The identification of three different tasks, which span a range of projection and un-projection operations, enriches the analysis and allows for a clear identification of the strengths and weaknesses of each aggregation strategy. I also appreciated the toy experiments, which effectively mimic the 3D effect in a simpler 2D setup. 
\n+ The final strategy advocated for in this work aligns with the standard DDIM, is straightforward to implement, requires no additional tuning, and does not incur extra computational costs.\n\n- Missing Explanations: The figures are difficult to understand due to missing information. For example, Figure 3 does not explicitly state the experimental setup, including the definitions of the canonical space and the subspaces.\n- The formulation of cases 1-3 that operate in the subspace does not explain how the recovered signal in z is finally returned. This is related to my previous comment, where it seems Figure 3 shows individual subspace generations rather than the end goal/task.\n- The initialization process is not discussed in the main paper. It is unclear whether the subspaces share the same initial noise. If they do, how is noise being generated in z? This is particularly challenging when z is 3DGS.\n- In the appendix, it states that the 3DGS are optimized towards the predicted clean images. Is this optimization considered the unprojection operation \"g\"? If so, this should be explicitly stated and moved to the main paper. More details on this optimization process are needed, including whether it is performed until convergence at each denoising time during generation.\n- In lines 129-130, phi and psi are used in the target domain. Please explain how this is done properly, given that noise is not injected in that space. Why would the coefficients in eq (3) and (4) hold true?\n\n- The caption of Figure 1, “Diverse visual content generated by SyncTweedies,” and the abstract phrase \"generating diverse visual content\" are misleading. They suggest the generation of general 3D content, while the method, as mentioned in the limitations, relies on known mappings between the subspaces and the target space, limiting it to texturing tasks. The authors should tone down these statements.\n- If the mapping from the canonical space to the instance spaces is simple (say, linear or even the Idnetity mapping), it seems all \"cases\" perform similarly both quantitatively and qualitatively. Do these methods mathematically converge to the same method, or are there still differences? In the 1-1 projection example in Figure 3, it appears all methods produce the same result. Is this correct? Table 10A shows almost identical results for different methods, and in Table 1, the scores for the 1-1 cases are very similar.\n- The claim of being the first to propose this method and that it was previously \"overlooked\" is too bold. Guidance literature shows that guidance in \"\\hat{x_0}\" offers more stable optimization and results. While guidance differs from multi-diffusion, it is similar enough to make the analogy. The authors should discuss similarities and differences, referencing papers like \"Universal Guidance for Diffusion Models\" and motion diffusion papers such as MDM and Trace&Pace. Further, if in degenerate cases of f and g maps (e.g. if they are the identity function) some \"cases\" are indistinguishable, be mindful when attributing aggregation choices to these methods (e.g., MultiDiffusion). Methods requiring tuning, like DiffCollage, should not be dismissed as baselines. The main contribution of this paper is identifying the best \"place\" to aggregate, and tuning is somewhat orthogonal. If \"Case 2\" improves DiffCollage, this should be demonstrated.\n- The term \"1:1 case\" is confusing and suggests disjoint individual subspaces.\n- The integration process of 1:n should also be better explained. 
It may also be useful to add illustration figures. \n- Improve Figure 2 for clarity with a more detailed caption. Connecting the diagram to the notations introduced in the paper would also help readability.\n\nplease see weaknesses." } ]
06JRFVK88O
Mimicking To Dominate: Imitation Learning Strategies for Success in Multiagent Games
Training agents in multi-agent games presents significant challenges due to their intricate nature. These challenges are exacerbated by dynamics influenced not only by the environment but also by the strategies of opponents. Existing methods often struggle with slow convergence and instability. To address these challenges, we harness the potential of imitation learning (IL) to comprehend and anticipate the actions of opponents, aiming to mitigate uncertainties with respect to the game dynamics. Our key contributions include: (i) a new multi-agent IL model for predicting the next moves of opponents - our model works with hidden actions of opponents and local observations; (ii) a new multi-agent reinforcement learning (MARL) algorithm that combines our IL model and policy training into one single training process; and (iii) extensive experiments in three challenging game environments, including an advanced version of the StarCraft multi-agent challenge (i.e., SMACv2). Experimental results show that our approach achieves superior performance compared to state-of-the-art MARL algorithms.
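To make the "single training process" in contribution (ii) concrete, here is a heavily simplified sketch of one joint update that couples an opponent-prediction loss with a policy-gradient loss. The module names, batch keys, the MSE prediction objective, the `policy.ppo_loss` placeholder, and `il_weight` are all assumptions for illustration; the paper's actual IL objective (an IQ-Learn-style adaptation, according to the reviews) is not reproduced here.

```python
import torch
import torch.nn.functional as F

# Hypothetical modules (assumptions): opponent_model predicts the enemies' next
# observations from the allies' local observations; policy exposes a PPO-style loss on
# observations augmented with that prediction.

def joint_update(batch, opponent_model, policy, optimizer, il_weight=0.5):
    obs, next_enemy_obs = batch["obs"], batch["next_enemy_obs"]

    # Imitation branch: anticipate the opponents' next moves / observations
    pred = opponent_model(obs)
    il_loss = F.mse_loss(pred, next_enemy_obs)

    # RL branch: policy trained on observations augmented with the prediction
    aug_obs = torch.cat([obs, pred.detach()], dim=-1)
    rl_loss = policy.ppo_loss(aug_obs, batch)        # placeholder for the clipped PPO objective

    loss = rl_loss + il_weight * il_loss             # one single training process
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss.detach())
```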
https://openreview.net/pdf/5de8e301e7b989f7765b02764db159c3bf7f21e9.pdf
[ { "confidence": 3, "rating": 6, "review_id": "7YSOZFKIOF", "review_text": "This paper addresses the issue of training instability and slow convergence in MARL caused by the changing strategies of other agents. It proposes reducing the uncertainty faced during training by imitating the opponents' strategies. To address the challenge that opponents' actions are usually unobservable, the paper further proposes predicting the opponents' next state, providing corresponding derivation and analysis. Finally, the authors tested their approach in multiple experimental environments, achieving significant improvements in final rewards.\n\nThe paper is well-organized, with clear motivation and straightforward method introduction, making it easy to follow. The adaptation of IQ-Learn is easy to understand, and the derivation process is quite clear and technically sound.\n\nThe experiments included SOTA MARL algorithms and compared results under different imitation learning frameworks. In the majority of tasks across three different environments, the proposed IMAX-PPO algorithm achieved the highest expected rewards, demonstrating the effectiveness of the method.\n\nI have not found any obvious flaws. My concern lies in the experiments. I noticed that the training curves in the SMACv2 and Gold Miner environments are very unstable, i.e., the performance flucuates. Is this due to adversarial imitation learning? The stability of the algorithm needs further validation. Additionally, the default settings have fixed enemy strategies, making the learning easier. If the enemies are also learning agents, the instability may be exaggerated.\n\n1. The performance of many methods in the experiments fluctuate. How many seeds were used in the experiments? Previous experiments on SMACv2 usually uses more than 5 random seeds.\n2. Are the strategies of the enemies fixed in the environment?" }, { "confidence": 3, "rating": 8, "review_id": "psoQ3QbLDt", "review_text": "This paper presents a new framework of multi-agent reinforcement learning (MARL) by modeling opponents’ behaviors through imitation learning.\n\nThe motivation and the method is well described and the performance is tested in extensive experiments with challenging tasks against SOTA methods.\n\nIt is assumed that each agent performs just based on the present observation o_i, but coordinated behaviors often require memory coding the group strategy or opponents’ game plan.\n\nIf I understand correctly, an important feature of the proposed framework is that all ally agents jointly learns a single joint model of the enemies. On the other hand, the SupMAPPO agents learn the enemies’ next states individually. What if the same information sharing is performed for supervised prediction of enemies’ next state?" }, { "confidence": 3, "rating": 6, "review_id": "aqbQqDFjZM", "review_text": "The paper studies cooperative-competive MARL. 
It utilizes imitation learning to comprehend and anticipate the next actions of the opponent agents (enemies), aiming to mitigate uncertainties of the controlled agents (allies) with respect to the game dynamics.\n\n- The paper studies a very interesting problem, central to the MARL community.\n- The paper proposes a novel and interesting method, combining imitation learning techniques and opponent modelling for enhancing the agents' individual policies.\n- The paper provides many experiments on three benchmarks (smacv2, grf, gold miner)\n- The proposed method significantly improves performance in many tasks over the most important baselines: the backbone MAPPO and the supervised baseline, SUP-MAPPO.\n- The authors provide interesting theoretical analysis of the proposed framework.\n\n- Related work needs improvement. In opponent modelling, the authors claim that: \"All the aforementioned related works require having access to opponent’ observations and actions during training and/or execution\". This is not the case in most opponent/agent modelling works (e.g., see Papoudakis et al.), as they model the other allies, not the enemies (which is allowed under the CTDE schema). I believe that the proposed method belongs to the category of opponent/agent modelling in MARL. Moreover, some important references are missing, see [1], [2], [3].\n- The presentation needs improvement, heavy notation in many parts. The authors should remind more often what some quantities represent. Furthermore, more intuition of the proposed framework and the method is needed in sections 4 and 5 and the related work. In a nutshell, why can IL solve the agent modelling problem, why is it important and how? Also, how does the goal of the method (i.e., predicting the next states of the enemies) is connected to the IL objective of section 4.1? \n- Ablation study is missing. Also, it would be interesting if the authors provide experiments of the proposed method on top of other MARL algorithms as well. \n- The framework can be impractical as it is now, as it may need a lot of hardcoding to be implemented to any environment, since it leverages enemy state information within the individual (allies) agents' observations (e.g., information regarding the neighborhoods). In other words, one may need to be able to decompose manually the agents' observations into different parts, some of them are related to enemies.\n\n[1] Papoudakis, Georgios, Filippos Christianos, and Stefano Albrecht. \"Agent modelling under partial observability for deep reinforcement learning.\" Advances in Neural Information Processing Systems 34 (2021): 19210-19222.\n\n[2] J. Sun, S. Chen, C. Zhang, Y. Ma, and J. Zhang, “Decision-making with speculative opponent 388 models,” IEEE Transactions on Neural Networks and Learning Systems, 2024.\n\n[3] R. Raileanu, E. Denton, A. Szlam, and R. Fergus, “Modeling others using oneself in multi-agent reinforcement learning,” in International conference on machine learning. PMLR, 2018, pp. 4257–4266.\n\nThe questions have been intergrated into the weaknesses section." } ]
04EC4ZnZJj
MemoryFormer: Minimize Transformer Computation by Removing Fully-Connected Layers
In order to reduce the computational complexity of large language models, great efforts have been made to improve the efficiency of transformer models, such as linear attention and flash-attention. However, the model size and corresponding computational complexity are constantly scaled up in pursuit of higher performance. In this work, we present MemoryFormer, a novel transformer architecture which significantly reduces the computational complexity (FLOPs) from a new perspective. We eliminate nearly all the computations of the transformer model except for the necessary computation required by the multi-head attention operation. This is made possible by utilizing an alternative method for feature transformation to replace the linear projection of fully-connected layers. Specifically, we first construct a group of in-memory lookup tables that store a large number of discrete vectors to replace the weight matrix used in linear projection. We then use a hash algorithm to retrieve a correlated subset of vectors dynamically based on the input embedding. The retrieved vectors are then combined to form the output embedding, which approximates the result of the matrix multiplication in a fully-connected layer. Compared to conducting matrix multiplication, retrieving data blocks from memory is a much cheaper operation that requires very little computation. We train MemoryFormer from scratch and conduct extensive experiments on various benchmarks to demonstrate the effectiveness of the proposed model.
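A minimal sketch of the lookup idea described above: split the input embedding into chunks, hash each chunk with random signed projections (an assumed locality-sensitive hash; the paper's hash function and its differentiable table-update mechanism are not reproduced), fetch one stored vector per table, and sum the retrieved vectors in place of a dense matrix multiplication.

```python
import torch
import torch.nn as nn

class MemoryLayerSketch(nn.Module):
    """Illustrative stand-in for nn.Linear(d_in, d_out): hash chunks of the input,
    look up one stored vector per table, and sum the retrieved vectors. The random-
    sign-projection hash and the hard (non-differentiable) lookup are assumptions;
    the paper trains its tables with a differentiable mechanism not shown here."""

    def __init__(self, d_in, d_out, n_tables=8, bits=10):
        super().__init__()
        assert d_in % n_tables == 0
        self.n_tables, self.chunk = n_tables, d_in // n_tables
        self.register_buffer("planes", torch.randn(n_tables, self.chunk, bits))  # hash hyperplanes
        self.register_buffer("pow2", 2 ** torch.arange(bits))
        self.tables = nn.Parameter(torch.randn(n_tables, 2 ** bits, d_out) * 0.02)

    def forward(self, x):                                                # x: (batch, d_in)
        chunks = x.unflatten(-1, (self.n_tables, self.chunk))            # (batch, T, chunk)
        codes = torch.einsum("...tc,tcb->...tb", chunks, self.planes) > 0
        idx = (codes.long() * self.pow2).sum(-1)                         # (batch, T) bucket ids
        retrieved = torch.stack([self.tables[t][idx[..., t]]             # gather, no big matmul
                                 for t in range(self.n_tables)], dim=-2)
        return retrieved.sum(dim=-2)                                     # (batch, d_out)

# layer = MemoryLayerSketch(512, 512); y = layer(torch.randn(4, 512))   # y: (4, 512)
```

The only multiplication left with the input is the small per-chunk hashing projection; the output itself is assembled purely from retrieved table entries.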
https://openreview.net/pdf/3182fbee67403ec98f7f114ce0d5398a511c94cf.pdf
[ { "confidence": 4, "rating": 7, "review_id": "qoImygWW8M", "review_text": "In this paper, the authors present a new transformer model named MemoryFormer, which utilizes locality-sensitive hashing to replace the matrix multiplication operation of the fully-connected layer. This paper can minimize the computational complexity of transformer by removing all the computations except for self-attention. The idea is to use hash tables to store pre-computed features in the memory and retrieve them during inference. The authors evaluate the MemoryFormer on six tasks and compare with other efficient transformer models to show the superiority in saving computations.\n\nThis paper is well-written and easy to follow. Existing works that try to make the transformer model computationally efficient mainly focus on the self-attention module and FFN module, while this paper tries to address this issue by focusing on the most fundamental part of transformer, the fully-connected layer. This makes the paper stand out for its novelty. The authors use locality-sensitive hashing to simulate the linear projection and use gradient descent via backpropagation to learn the hash tables where the embeddings are stored. The leverage of the memory resources rather than massive parallel matrix multiplication makes the presented model potentially feasible for CPU deployment and inference.\n\n1. Although the authors experiment on three different model sizes, but they do not provide the exact number of parameters of these models. It would be better if the authors could provide the parameter number.\n2. Missing the ablation study about the non-linearity in the FFN module.\n3. It would be better to provide some discussion about the training cost given that frequently hashing and retrieving would cost much VRAM I/O of GPU for training.\n\n1. What is the training cost of MemoryFormer? Is it fast to train such models?\n2. Can MemoryFormer be scaled up to reach billions of parameters like Llama-7B?" }, { "confidence": 3, "rating": 5, "review_id": "Qtxtw70eLW", "review_text": "A FLOPs-reduction strategy designed for transformer is proposed by this paper. The author introduces MemoryFormer, a transformer model that is built using the Memory Layer instead of fully-connected layer. According to the paper, the author claims that the MemoryFormer has the minimum computation because the Memory Layer uses a locality-sensitive hashing algorithm with low computational complexity to compute feature projection instead of using matrix multiplication performed by Fully-Connected layer. The author also designs a differentiable mechanism for the hash tables to be updated, therefore making the MemoryFormer end-to-end trainable like baseline transformer. Besides, the author keeps the necessary computation of self-attention operation untouched, which makes this work orthogonal to other existing efficient transformer designs. Extensive experiments show that the proposed MemoryFormer performs better than other transformer baselines on 6 tasks when built with the same hyper-parameters, such as number of layers and hidden sizes.\n\n• The heavy computations of decoder model hinders the further development for LLMs. This paper aims at solving this main issue that the LLMs are currently facing. Exploiting storage to reduce inference computation is a fancy idea.\n• The proposed hashing-based Memory Layer requires much less computation than Fully-Connected layer in forward pass. 
The back-propagation mechanism of the hash tables proposed in Sec.2.2 sounds reasonable.\n• The experiment section are comprehensive. The results show that MemoryFormer has better performance while consuming much fewer FLOPs.\n• The content of paper is well-written and well-organized. The equations are clear and easy to understand.\n\n• The proposed method could be further tested with other attention method, for example, incorporating linear attention mechanism into MemoryFormer. However, such experiment is missing.\n• The author didn't report the parameter size of the proposed Memory Layer against the corresponding Fully-Connected layer, and did not report the number of parameters of the MemoryFormer model.\n• Lack of the experiment for the activation function of Memory Block.\n\nSince current LLMs require scaling up in model size for more intelligence, I wonder if the author experimented on a larger size of the proposed model? Scalability is crucial." }, { "confidence": 4, "rating": 6, "review_id": "W6ZH8bj5Af", "review_text": "This paper introduces a novel neural network call MemoryFormer. This is a modified version for transformer that eliminates the dense layer (FC layer). The proposed MemoryFormer tries to address the problem of high computational complexity of decoder-style generative models. Concretely, the author proposes the Memory Layer which uses locality sensitive hashing (LSH) to achieve the embedding projection which is done by the fully-connected layer in normal transformer. This way, the MemoryFormer eliminates the computational cost of linear projections and keeps only the computational cost of multi-head attention. The author conducts sufficient experiments at different model scale on different NLP benchmarks. The experiment results show that MemoryFormer has better performance than other transformer models with the same #layers and hidden dimension while obviously reduce the FLOPs. The method in this paper shows a different but possible way of building future large language models.\n\n1 This paper proposes an interesting way to reduce the computational complexity of transformer neural network. The proposed Memory Layer uses LSH algorithm as a replacement for the linear projections of linear layer. It’s novel to use the memory space to store feature vectors instead of compute them on the fly.\n2 The experiment results demonstrate that the proposed model has better performance than baseline across different sizes, showcasing the scalability of the proposed model.\n3 The alternative plan for FC layer in transformer is underexplored as it is the most basic building component in neural networks.\n\n1 This paper doesn’t have any data regarding the inference latency of the proposed MemoryFormer neither on CPU or GPU. The purpose of this paper is reducing FLOPs of transformer, I think such experiment is important.\n2 The baseline model Pythia[1] uses 8 benchmarks to evaluate Pythia models while this paper uses 6 benchmarks. This is not a big deal but I hope the author can report the results on the other two.\n3 The formulation of Eq.10 might be a little bit confusing if not reading carefully and could be optimized for readability.\n\n[1] Pythia: A suite for analyzing large language models across training and scaling. ICML 2023.\n\n1 Which model size is being used in the ablation study in Section 3.3? Is it MemoryFormer-tiny, or -small, or base? This paper seems to forget to mention about this information.\n2 This paper reports the FLOPs of different models in Tab.1 and 2. 
What is the inference latency achieved by MemoryFormer?" }, { "confidence": 3, "rating": 6, "review_id": "QUkUGlrgI0", "review_text": "This work proposes to replace most linear layers of transformers by trainable hash-tables. The new modules---called memory layers---rely on locality sensitive hashing to obtain relevant indices within several hash tables, and returns a linear combination of the associated vectors. To overcome the non-differentiability of the hashing operation, the weights of the linear combination are computed from the inner products between each input vector and its hashed representation. They use memory layers to replace key, query, and value projections, as well as to replace the down and up-projection matrices of the feedforward blocks. Playing on the coarseness of the hashing, this new module can reduce the FLOPs required when compared to a traditional matrix multiplication. They train modified Pythia models and show their approach is outperforming baseline models on multiple reasoning tasks, while using significantly less FLOPs.\n\nI find the paper well motivated. Tackling the often dominating cost of the FFW operations is an important research direction. The idea developed in this work is novel and the results are surprisingly good. Improving upon the transformer architecture is not an easy feat and the results suggests that the memoryFormer is better on reasoning tasks while requiring less FLOPs. I find it interesting that the non-linear nature of the hashing operation allows to omit the use of activation functions. \n\nAn interesting work overall, but I believe some results/details are missing (see weaknesses).\n\nTo fully evaluate the impact of this work, I am missing some informations:\n- What is, in more details, the experimental setup used to trained Pythia models? How many steps are used during training? How are the loss curves for both the Pythia baseline and the memoryFormer? Which hardware was used? \n- You should do a comparison in terms of number of parameters. The memoryFormer likely uses an order of magnitude more trainable parameters compared to Pythia models. This makes the comparison between e.g. Pythia-70M and MF-tiny questionable. What scores would you get using a Pythia model with a similar number of parameters?\n- Discussion on time complexity are missing from the analysis. I am assuming that the memoryFormer is slower. Speed depends on hardware and I understand that it seems unfair to compare your approach to very optimized matrix-multiplication kernels, but this is still an important limitation to discuss. How many iterations per second during training for a memoryFormer and a Pythia model with the same number of parameters? What is the inference speed? This work seems clearly oriented at proposing a cheaper transformer, and successfully reduces the FLOPs required. Knowing if this translates into speed gains today---or giving ideas on how easy it would be to leverage this gain tomorrow---seems important to evaluate the impact of this work. Given a time budget and a specific machine (cpu or gpu), which size of models would fit the time budget? For those model sizes, would a memoryFormer provide better scores than a baseline Pythia model?\n- At the small models scale, I am not sure how trustworthy the scores on reasoning tasks are. What would be the accuracies obtained when answering at random? For instance, WinoGrande is a binary task, and scores are often close to 50%. 
What are the perplexities reached by the different models?\n\nOverall, you propose an interesting architecture, but I am not entirely convinced by the evaluation. A more faithful account of the implications of using memory blocks on time complexity and on the number of parameters would help.\n\nSee above." } ]
02r24A8doi
Achieving Constant Regret in Linear Markov Decision Processes
We study the constant regret guarantees in reinforcement learning (RL). Our objective is to design an algorithm that incurs only finite regret over infinite episodes with high probability. We introduce an algorithm, Cert-LSVI-UCB, for misspecified linear Markov decision processes (MDPs) where both the transition kernel and the reward function can be approximated by some linear function up to misspecification level $\zeta$. At the core of Cert-LSVI-UCB is an innovative certified estimator, which facilitates a fine-grained concentration analysis for multi-phase value-targeted regression, enabling us to establish an instance-dependent regret bound that is constant w.r.t. the number of episodes. Specifically, we demonstrate that for a linear MDP characterized by a minimal suboptimality gap $\Delta$, Cert-LSVI-UCB has a cumulative regret of $\tilde{\mathcal{O}}(d^3H^5/\Delta)$ with high probability, provided that the misspecification level $\zeta$ is below $\tilde{\mathcal{O}}(\Delta / (\sqrt{d}H^2))$. Here $d$ is the dimension of the feature space and $H$ is the horizon. Remarkably, this regret bound is independent of the number of episodes $K$. To the best of our knowledge, Cert-LSVI-UCB is the first algorithm to achieve a constant, instance-dependent, high-probability regret bound in RL with linear function approximation without relying on prior distribution assumptions.
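Collecting the quantities scattered through the abstract into one display (the minimal-suboptimality-gap definition below follows the standard convention echoed in the reviews and is an assumption rather than a quote from the paper):

```latex
% Regret bound stated in the abstract, with the standard minimal suboptimality gap
% it is parameterized by (gap definition assumed, not quoted from the paper itself):
\Delta \;:=\; \min_{h,s,a}\bigl\{\, V_h^{*}(s) - Q_h^{*}(s,a) \;:\; V_h^{*}(s) - Q_h^{*}(s,a) \neq 0 \,\bigr\},
\qquad
\mathrm{Regret}(K) \;=\; \tilde{\mathcal{O}}\!\left(\tfrac{d^{3}H^{5}}{\Delta}\right)
\;\;\text{w.h.p., provided}\;\;
\zeta \;\le\; \tilde{\mathcal{O}}\!\left(\tfrac{\Delta}{\sqrt{d}\,H^{2}}\right).
```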
https://openreview.net/pdf/c4416b40b8b47e9d8fa8155df573d6a2c68b8f6e.pdf
[ { "confidence": 3, "rating": 4, "review_id": "HVFhN4Jai3", "review_text": "The paper studies constant regret guarantees for linear MDPs. The main result is the first algorithm with this guarantee. The algorithm is a modification of LSVI-UCB with a careful quantization technique and some form of action elimination. This improves upon the regret of previous works by a factor of $\\log K$ where $K$ is the number of episodes.\nUnlike previous works, the algorithm is robust to some degree of model mispecification and does not need to know it in advance.\n\n1. The method is robust to some model mispecification and does not need to know it in advance.\n2. The regret guarantee is tighter compared to previous works.\n3. The algorithm is computationally efficient in the sense that it runs in polynomial time in the problem parameters.\n4. There are significant technical challenges in the work. In particular the careful quantization required to avoid dependence on $K$ and the refined analysis of the regret bound by the suboptimality gaps.\n\n1. While it's challenging to achieve regret that is independent of $K$, I do not see this as a significant improvement over the logarithmic dependence. This is especially true since it is not clear whether the correct polynomial dependence on the remaining parameters has been established. Moreover, this improvement is only relevant in the regime of a fixed success probability $\\delta$, which is often dependent on $K$, e.g., if bounding the expected regret.\n2. The improvement to the regret comes at a significant increase to the algorithm's complexity. It's not clear whether this is necessary. For example, are you certain that the quantization cannot be replaced by a more careful analysis of the value cover? Why do you need both bonuses and action elimination and sample rejection? The runtime complexity also seems higher (though this is a minor concern). If you think these modifications are indeed necessary, perhaps you could demonstrate this with experiments.\n3. While I am not familiar with better results, the dependence on the worst case gap weakens the result. Do you think it is possible to achieve a better dependence? e.g. on the average inverse gap with respect to the optimal policy.\n4. The writing is not very consistent with some places having phrasing and grammatical issues (e.g., quantification should be quantization and many more).\n5. The algorithm needs to be better explained. While it seems the authors made significant efforts in this regard, many parts of the algorithm remain rather unclear. For example, the bonuses are not immediately apparent. As I understand it, you use a quantization of the standard bonus. It would be much easier for the reader if this is made explicit in the algorithm and explained. Other examples are the action elimination and sample rejection whose purpose is unclear.\n\nSee above." }, { "confidence": 3, "rating": 5, "review_id": "QF2M08jXbo", "review_text": "This paper proposed a constant regret learning algorithm for linear MDPs with approximation error (misspecification). Specifically, it proved an instance dependent regret that is independent of the number of episodes. 
In addition, the algorithm does not require prior knowledge of misspecification level or suboptimality gap.\n\n1.\tThe background of the problem is stated clearly with well summarized review of the literature.\n2.\tThere is a comprehensive comparison of the new constant regret results with existing works.\n\n1.\tThe proposed algorithm and analysis follow the algorithm in Vial et al. (2022) closely. The main differences seem to be the layer-dependent quantification in Algorithm 1 and the exit condition 3 in Algorithm 2. It would be better to discuss more clearly how these two modifications contribute to getting rid of the log(K) dependence. \n2.\tSome of notations are used without explanation. For example, $\\lambda$ in algorithm 1, $\\gamma_l$ in Algorithm 2 and Theorem 5.1. Also, how the certified estimator is defined in Algorithm 2 is not clear.\n3.\tSome analysis and results require more clarification: (a) The $\\delta$ condition used in Theorem 5.1 does not match the one used in Lemma C12. (b) It is not very clear how the key results in getting rid of log(k) (Lemma C14 line 765) is derived using $x\\leq a+\\sqrt{bx}$.\n\n1.\tThe new algorithm does not require UniSOFT assumption made by Papini et al. (2021a). However, in Thm 5 of that paper, they showed that UniSOFT is a necessary condition for constant regret. Can you explain how this does not apply to the current setup? \n2.\tIt is not very intuitive to understand the constant regret with no dependency on the number of episodes. Are there any simple examples that can run the algorithm empirically to verify this behavior?" }, { "confidence": 3, "rating": 6, "review_id": "j1WbwoCK33", "review_text": "This paper gives a constant regret (total regret independent of the number of episodes $K$) algorithm for online reinforcement learning of linear MDPs. The environment is considered to be a $\\zeta$-approxinate linear MDP as in (Jin et al 2019), with an assumption that the misspecification level ($\\zeta$) is not too large. The algorithm is a more sophisticated variant of the original LSVI-UCB, which is a further improvement of earlier work that improves the regret of LSVI-UCB (but does not achieve constant regret).\n\nThe regret is parametrized by the instance-dependent minimal suboptimality gap $\\Delta := \\min_{h,s,a} \\\\{V_h^\\ast(s) - Q_h^*(s,a) : V_h^\\ast(s) - Q_h^*(s,a) \\neq 0\\\\}$, which also appears in earlier instance dependent regret bounds for linear MDPs (and also in general episodic MDPs).\n\nTheir algorithm, Cert-LSVI-UCB, has a regret bound of $\\tilde{O}(d^3 H^5 \\Delta^{-1})$. 
The dependence on $\\Delta$ is optimal, whereas the dependence on $d$ and $H$ may be suboptimal.\n\nThis paper is also an improvement on the previous work that provides \"constant regret\" bounds for linear MDPs (Papini et al 2021) in that it does not require the \"UniSoft\" assumption on the feature mapping (only requires the standard assumptions of misspecified linear MDPs as in Jin et al 2019, plus that the misspecification level $\\zeta \\leq \\tilde{O}(\\Delta/\\sqrt{d}H^2)$).\n\n* The algorithm and its analysis gives a substantial improvement over earlier algorithms with instance-dependent bounds for linear MDPs, in that it is the first algorithm with a constant regret bound that requires minimal assumptions, even for misspecified linear MDPs.\n* The algorithm is cleanly based on earlier linear MDP algorithms starting from LSVI-UCB, applied in more sophisticated ways (multiple regression phases per episode etc), and the improvements in the algorithm and challenges in the analysis have been highlighted in the main paper itself (especially, in avoiding the dependency of the regret on the number of phases).\n\n* The contribution of the paper is entirely theoretical, but most of the proof is in the appendix (which is unavoidable).\n* Some experimental results (even on synthetic data) would definitely enhance the paper.\n\nNA" }, { "confidence": 2, "rating": 6, "review_id": "CdPWK3ApgY", "review_text": "The paper presents improved regret bounds for Linear MDPs when the suboptimality gap \\Delta is known. Most crucially (a bit surprisingly), the regret bounds are constant in a number of episodes K\n\nThe contributions are broadly theoretical, and the removal of dependence on the number of episodes K seems to be a good contribution to the academic understanding of the gap-dependent (instance-dependent)learning bounds for reinforcement learning.\n\nThe paper provides a good coverage of related works in this area and describes the setting well.\n\n1) b_h^\\pi seems to be undefined in proposition 3.2\n2) \\kapp_\\ell in Algorithm 1, line 8, as claimed by the author, helps to get rid of log(K). Can you explain more about how it helps to remove log(K)? is this the reason why the regret bounds do not have log(K) dependence?\n3)I would urge the authors to provide a full expression of regret without O notation, including the log factors either in the main text or in the appendix (I could not find a full expression for the result in theorem 5.1). Specifically, it is surprising, perhaps I am missing something, that the regret is O(1) when Delta = 1/(d^3H^5) when K=1. I suspect some lower-order terms are hidden, or maybe the result is only valid for K \\geq something. The asymptotic dependence can be right still.\n\nPlease see above, I am willing to engage during the rebuttal phase." } ]
02HWT9c4Lp
Voxel Proposal Network via Multi-Frame Knowledge Distillation for Semantic Scene Completion
Semantic scene completion is a difficult task that involves completing the geometry and semantics of a scene from point clouds in a large-scale environment. Many current methods use 3D/2D convolutions or attention mechanisms, but these have limitations in directly constructing geometry and accurately propagating features from related voxels; completion is likely to fail when features are propagated in a single pass without considering multiple potential pathways. Moreover, such methods are generally only suitable for static scenes and struggle to handle dynamic aspects. This paper introduces the Voxel Proposal Network (VPNet), which completes scenes from both 3D and Bird's-Eye-View (BEV) perspectives. It includes a Confident Voxel Proposal module based on voxel-wise coordinates that proposes confident voxels with high reliability for completion. This method reconstructs the scene geometry and implicitly models the uncertainty of voxel-wise semantic labels by presenting multiple possibilities for voxels. VPNet employs Multi-Frame Knowledge Distillation based on the point clouds of multiple adjacent frames to accurately predict the voxel-wise labels by condensing various possibilities of voxel relationships. VPNet achieves superior performance and state-of-the-art results on the SemanticKITTI and SemanticPOSS datasets.
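As a rough illustration of the distillation component, the sketch below shows a generic voxel-wise distillation loss in which a teacher fed multi-frame (aggregated) point clouds supervises a single-frame student. The temperature, loss weighting, ignore label, and the single-stage formulation are assumed values; the paper's two-stage, per-branch MFKD scheme is not reproduced here.

```python
import torch.nn.functional as F

def mfkd_loss(student_logits, teacher_logits, labels, tau=2.0, alpha=0.5):
    """Generic voxel-wise distillation: a teacher seeing multi-frame (aggregated) point
    clouds supervises a single-frame student. Shapes: logits (N_voxels, C), labels
    (N_voxels,). tau, alpha, and ignore_index=255 are assumed, not the paper's values."""
    kd = F.kl_div(
        F.log_softmax(student_logits / tau, dim=-1),
        F.softmax(teacher_logits.detach() / tau, dim=-1),
        reduction="batchmean",
    ) * (tau * tau)
    ce = F.cross_entropy(student_logits, labels, ignore_index=255)  # 255 = unlabeled (assumed)
    return alpha * kd + (1.0 - alpha) * ce
```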
https://openreview.net/pdf/3f85a5f95fbad0b4bf7c58bb45f23e0c242c2ba9.pdf
[ { "confidence": 4, "rating": 6, "review_id": "OOc2C0GjaD", "review_text": "This paper introduces Voxel Proposal Network which achieves semantic scene completion from both voxel and BEV perspectives. Beginning with confident voxel proposals, the information is propagated to other voxels, aiming to handle dynamic aspects. Multi-Frame Knowledge Distillation is also incorporated to accurately predict the voxel-wise labels.\n\n1. Reconstructing from the reliable occupied voxels and propagating information from these voxels to others is a good solution for the semantic scene completion. \n2. information distillation from multi-frames is also a normal strategy to improve the performance. \n3. The ablation experiments in this paper are quite comprehensive. \n4. The code provided in the appendix clearly demonstrates the details of the CVP module.\n\n1. The motivation of the proposed components should be provided. At the beginning of the abstract, this paper mentioned that most previous methods use 3D/2D convolutions or attention mechanisms, having limitations in directly constructing geometry and accurately propagating features from related voxels. However, how can the proposed method tackle this problem?The authors are required to give a straightforward analysis.\n2. The writing of this paper needs improvement. The author employs a single branch with multiple subnetworks for single-frame input, which seems processing the input sequentially. But line117 describes it using the “branch”, which can easily be misinterpreted as parallel. Besides, to my understanding, the multi-frame branch distill the knowledge of different timestamps to the results from different blocks, aiming to enhance the performance of feature propagation. It would better to give a overview before introducing the specific design of the network components.\n3. The com. w/o CVP and com w/ CVP are confusing in the Table 1.\n4. Comparison with other knowledge distillation algorithms (at least three algorithms) and the proposed MFKD is necessary, as done in the Table 4 in the SCPNet [1].\n5. SCPNet also distills knowledge from multi-frame inputs to single-frame network, more analysis on the differences between MFKD and SCPNet is necessary.\n\nOverall, the writing of this paper should be improved. Presenting motivation will help the readers understand the method better.\n\nPlease see the weakness." }, { "confidence": 4, "rating": 5, "review_id": "HkO2uq7eak", "review_text": "To directly construct scene geometry and accurately propagate features from relted voxels, the paper proposes VPNet with a voxel proposal mechanism to identify confident voxels for completion and a multi-frame knowledge distillation scheme to fuse information from multi-sweep LiDAR. VPNet achieves better performance than other methods for semantic scene completion on SemanticKITTI and SemanticPOSS.\n\n1. For originality, VPNet predicts offsets to propose confident voxels which is distinct from existing methods.\n2. For quality, the paper conducts extensive experiments on two benchmarks for semantic scene completion, together with ablation studies.\n\n1. For clarity, the paper uses massive symbols to elaborate technical details which is hard to follow. I think there could be some abstraction.\n2. 
For motivation, the paper claims that existing methods have problem with directly constructing scene geometry and accurately propagating voxel features, but I do not see why this is true and how VPNet is better in these aspects.\n\nPlease give some explanations about the weaknesses." }, { "confidence": 5, "rating": 5, "review_id": "LvgBVJYE40", "review_text": "The author introduced the Voxel Proposal Network (VPNet), a dual-branch semantic scene completion method with two key innovations.\nFirst, the Confident Voxel Proposal (CVP) module, which includes offset learning and voxel proposal, generates a confident feature map based on the semantics-embedded feature map, enabling completion in the 3D branch. Second, the Multi-Frame Knowledge Distillation (MFKD) module distills semantic knowledge from each augmented feature map of the multi-frame network into the branches of the single-frame network in two stages, enhancing completion performance.\n\nThis manuscript has clear structure, well-benchmarked qualitative and quantitative results.\n\n1. Lack of Comparison with State-of-the-Art Methods: The performance of VPNet is not compared with current state-of-the-art methods such as SCPNet. Specifically, (mIoU) achieved by VPNet is far lower than that of SCPNet. Or authors can argue and clarify clearly the protocol differences that render the comparison invalid.\n\n2. SCPNet has already implemented multi-frame distillation to enhance performance. Thus, VPNet's use of Multi-Frame Knowledge Distillation (MFKD) is not readily qualified as a core contribution in the title. Or authors can dig into the effects of multi-frame distillation in different frameworks like SCPNet and [A].\n\n[A] MonoOcc: Digging into Monocular Semantic Occupancy Prediction, ICRA 2024\n\nIt is recommended to answer questions whether there is a protocol compatibility with SCPNet and whether there are unique insights about multi-frame distillation in this study over SCPNet and MonoOcc." }, { "confidence": 3, "rating": 6, "review_id": "FgfDxAWmpo", "review_text": "This paper focuses on semantic scene completion. The authors propose a novel voxel proposal network and combine multi-frame knowledge distillation technique to reconstructs the scene geometry and implicitly models the uncertainty of voxel-wise semantic labels by presenting multiple possibilities for voxels. The proposed method has been proven effective on SemanticKITTI and SemanticPOSS datasets.\n\n- The proposed method achieves state-of-the-art results on SemanticKITTI and SemanticPOSS datasets.\n- The Voxel Proposal Network sounds interesting.\n\n- My biggest concern is that the author's motivation for using MFKD is not clearly explained. Why use it to predict voxel-wise labels.\n- Figure 3 is too complicated to understand.\n\nCan authors open source code?" } ]
02CIZ8qeDc
PointAD: Comprehending 3D Anomalies from Points and Pixels for Zero-shot 3D Anomaly Detection
Zero-shot (ZS) 3D anomaly detection is a crucial yet unexplored field that addresses scenarios where target 3D training samples are unavailable due to practical concerns like privacy protection. This paper introduces PointAD, a novel approach that transfers the strong generalization capabilities of CLIP for recognizing 3D anomalies on unseen objects. PointAD provides a unified framework to comprehend 3D anomalies from both points and pixels. In this framework, PointAD renders 3D anomalies into multiple 2D renderings and projects them back into 3D space. To capture the generic anomaly semantics into PointAD, we propose hybrid representation learning that optimizes the learnable text prompts from 3D and 2D through auxiliary point clouds. The collaboration optimization between point and pixel representations jointly facilitates our model to grasp underlying 3D anomaly patterns, contributing to detecting and segmenting anomalies of unseen diverse 3D objects. Through the alignment of 3D and 2D space, our model can directly integrate RGB information, further enhancing the understanding of 3D anomalies in a plug-and-play manner. Extensive experiments show the superiority of PointAD in ZS 3D anomaly detection across diverse unseen objects.
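The render-score-back-project pipeline described above can be sketched as follows. The `render`, `image_encoder`, and prompt-embedding inputs are placeholders (assumptions) standing in for a CLIP-like model and learned text prompts, averaging scores over the views that see each point is an assumed aggregation, and the complementary pixel-level scoring path from the paper is omitted.

```python
import torch
import torch.nn.functional as F

# Hypothetical components (assumptions, not the authors' implementation):
#   render(points, view)  -> (image, pix2point): pix2point[p] = index (long) of the point
#                            visible behind image patch p, or -1 if the patch sees no point
#   image_encoder(image)  -> (P, D) patch embeddings from a CLIP-like vision encoder
#   text_normal / text_anomalous -> (D,) embeddings of the (learnable) state prompts

def pointad_scores(points, views, render, image_encoder, text_normal, text_anomalous):
    """Zero-shot point-level anomaly scores: render multi-view images, compare patch
    embeddings with normal/anomalous text embeddings, and back-project onto points."""
    score_sum = torch.zeros(points.shape[0])
    hits = torch.zeros(points.shape[0])
    text = F.normalize(torch.stack([text_normal, text_anomalous]), dim=-1)   # (2, D)
    for view in views:
        image, pix2point = render(points, view)
        patches = F.normalize(image_encoder(image), dim=-1)                  # (P, D)
        p_anom = (patches @ text.T).softmax(dim=-1)[:, 1]                    # (P,)
        valid = pix2point >= 0
        score_sum.index_add_(0, pix2point[valid], p_anom[valid])
        hits.index_add_(0, pix2point[valid], torch.ones_like(p_anom[valid]))
    return score_sum / hits.clamp(min=1)     # average over the views that saw each point
```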
https://openreview.net/pdf/514c993daadcf9006e79d05e65dcf706d696a9ce.pdf
[ { "confidence": 3, "rating": 5, "review_id": "SEBrw5NVwH", "review_text": "The paper titled \"PointAD: Comprehending 3D Anomalies from Points and Pixels for Zero-shot 3D Anomaly Detection\" introduces PointAD, a novel framework designed for zero-shot 3D anomaly detection by leveraging both point clouds and RGB images. The approach builds on the strong generalization capabilities of CLIP and adapts it to 3D anomaly detection. The framework renders 3D anomalies into multiple 2D views and then projects these 2D representations back into 3D space. This hybrid representation learning allows for capturing generic anomaly semantics across various unseen 3D objects. PointAD also incorporates auxiliary point clouds to optimize learnable text prompts for better anomaly detection and segmentation.\n\n1. The use of CLIP for 3D anomaly detection and the hybrid representation learning approach are novel contributions.\n2. The theoretical foundations are robust, and the empirical validation is comprehensive.\n3. The paper is clearly written, with well-structured sections and effective use of visual aids.\n4. The approach addresses a critical gap in the field, with potential for significant impact on future research and practical applications.\n\n1. The sensitivity of the method to the selection of rendering angles and the number of views could be explored further.\n2. While the experimental results are compelling, additional validation on more diverse and challenging datasets would further strengthen the claims.\n3. The discussion of potential limitations and future work could be expanded to provide a more comprehensive view of the method's applicability and areas for improvement.\n\n1. Could the authors provide more insights into the selection of rendering angles and the impact of different numbers of views on the detection performance?\n2. Have the authors considered the robustness of the method under different environmental conditions, such as varying lighting and occlusions?\n3. Could additional experiments on other industrial or medical datasets help to further validate the generalizability of the proposed method?\n\n\n======== post rebuttal ==========\nThe authors' rebuttal solve most of my concerns, hence I raise my score to borderline accept." }, { "confidence": 5, "rating": 6, "review_id": "mfUyrocIiX", "review_text": "The paper introduces PointAD, the first approach to explore the domain of zero-shot 3D anomaly detection, leveraging CLIP's strong generalization to identify anomalies in previously unseen objects. It offers a unified framework that integrates 3D point clouds with 2D renderings, employing hybrid representation learning to capture the semantics of anomalies. PointAD's key contributions are its pioneering exploration of the ZS 3D anomaly detection domain, its ability to seamlessly integrate multimodal data like RGB information for enhanced detection, and its superior performance over existing methods. The robustness of the framework is confirmed through extensive experiments.\n\nOriginality: The paper introduces PointAD, a novel method in 3D anomaly detection that leverages CLIP for 3D analysis, uniquely combining point clouds and pixel data. 
It expands the application of vision-language models into new domains, showcasing versatility in 3D point cloud analysis.\nSoundness: The paper exhibits methodological soundness through its rigorous experimental setup, including the use of diverse datasets and a thorough ablation study that substantiates the design decisions and effectiveness of the proposed PointAD framework. The state-of-the-art experiments performance confirms the model's outperforming ability. \nClarity: The paper stands out for its clarity, guiding readers smoothly from the problem statement to the final results. It skillfully explains complex concepts in an accessible way, ensuring that a wider audience can follow along. The paper also benefits from helpful visual aids that clearly demonstrate the model's effectiveness.\n\n1.\tThere’s some zero-shot anomaly detection methods with clip on 2D images, so this paper should compare with them, especially, from the view of techniques’ differences, not only the used data differences. For example, those zero-shot methods (WinCLIP, VAND and AnomalyCLIP) mentioned in [R1] and [R2].\n[R1] Cao, Y., Xu, X., Zhang, J., Cheng, Y., Huang, X., Pang, G., & Shen, W. (2024). A survey on visual anomaly detection: Challenge, approach, and prospect. arxiv preprint arxiv:2401.16402.\n[R2] Li, X., Huang, Z., Xue, F., & Zhou, Y. (2024). Musc: Zero-shot industrial anomaly classification and segmentation with mutual scoring of the unlabeled images. arxiv preprint arxiv:2401.16753.\n\n2.\tI also wonder if the new methods of 3D anomaly detection can also be used to deal with this zero-shot task. For example, [R3] and [R4]. \n[R3] Zuo, Z., Dong, J., Wu, Y., Qu, Y., & Wu, Z. (2024). CLIP3D-AD: Extending CLIP for 3D Few-Shot Anomaly Detection with Multi-View Images Generation. arxiv preprint arxiv:2406.18941.\n[R4] Li, W., Xu, X., Gu, Y., Zheng, B., Gao, S., & Wu, Y. (2024). Towards Scalable 3D Anomaly Detection and Localization: A Benchmark via 3D Anomaly Synthesis and A Self-Supervised Learning Network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 22207-22216).\n3.\tMore importantly, there’s some very similar studies with this work and task. Authors should mention and compare them within this study. For example, [R5].\n[R5] Li, Y., Goodge, A., Liu, F., & Foo, C. S. (2024). Promptad: Zero-shot anomaly detection using text prompts. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 1093-1102).\n[R6] X. Chen, J. Zhang, G. Tian, H. He, W. Zhang, Y. Wang, C. Wang,Y. Wu, and Y. Liu, “Clip-ad: A language-guided staged dual-\npath model for zero-shot anomaly detection,” arXiv preprint\narXiv:2311.00453, 2023.\n[R7] Wang, C., Zhu, H., Peng, J., Wang, Y., Yi, R., Wu, Y., ... & Zhang, J. (2024). M3DM-NR: RGB-3D Noisy-Resistant Industrial Anomaly Detection via Multimodal Denoising. arxiv preprint arxiv:2406.02263.\n4.\tComputational Efficiency: Although the authors have commendably addressed computational challenges, the overall computational cost associated with rendering multi-view images and processing them through the vision encoder is still substantial. Since this field is still developing with few comparable studies, it is recommended that the authors provide a broader comparison about the computational cost with existing anomaly detection techniques, including those like Cheraghian, CPFM and 3DSR that are not strictly within the zero-shot category. 
This expanded comparison will help to show that the computational demands of PointAD are reasonable and suitable for practical use cases.\n5.\tClarity and Completeness of Presentation: The paper's writing is generally clear and free of errors, yet minor aspects of the presentation could be polished for better reader comprehension. For instance, the figure overlays, specifically in Figure 1, currently only elucidate part (a), leaving explanations for parts (b) and (c) to be found in the main text. This separation can disrupt the reader's flow and understanding. Enhancing the figure overlays to include all relevant parts and ensuring that each component of the figure is self-explanatory will make the paper more accessible and its findings easier to grasp.\n\n1.\tPlease explain the main differences from the methods mentioned in the Weaknesses and compare against some of them in experiments.\n2.\tGiven the concerns raised in item 1 of the Weaknesses section, does PointAD still retain its computational-efficiency advantage when compared to other methods such as AnomalyCLIP and Cheraghian?\n3.\tCould you elaborate on the multi-view rendering process of point clouds (as described from line 134 to line 140), particularly how the color of the rendered images is determined? Furthermore, at line 254, it's stated that additional RGB information is utilized only in the test dataset. Given the description at line 564, which refers to 2D RGB information, how is this 2D data integrated with the rendered multi-view images? Are there any strategies employed to bridge the gap between the rendered training data and the RGB information used in testing?" }, { "confidence": 3, "rating": 7, "review_id": "edPTVSza3p", "review_text": "The paper proposes a unified framework, PointAD, to detect 3D anomalies in a ZS manner. Hybrid representation learning is proposed to incorporate the generic normality and abnormality semantics into PointAD. PointAD can incorporate 2D RGB information in a plug-and-play manner at test time, which allows it to perform ZS M3D anomaly detection directly. Experimental results demonstrate the superiority of the model in detecting and segmenting 3D anomalies.\n\n1. PointAD introduces a novel method for Zero-Shot 3D anomaly detection by integrating the strong generalization capabilities of CLIP to recognize 3D anomalies from unseen objects. This approach is the first of its kind in ZS 3D anomaly detection.\n2. The paper describes a unified framework that comprehends 3D anomalies from both points and pixels, allowing for a comprehensive understanding of anomalies by rendering 3D objects into 2D renderings and projecting them back to 3D space. It proposes hybrid representation learning to optimize learnable text prompts from both 3D and 2D data, enhancing the model's ability to generalize across diverse unseen objects.\n3. Extensive experiments demonstrate the superiority of PointAD over existing state-of-the-art methods, especially in ZS settings where traditional models struggle.\n\nI do not notice any obvious weaknesses in this paper, but I still have a few questions that I hope the authors can answer.\n1. The methodology, while innovative, is complex due to the need to render 3D objects into multiple 2D views and then re-project them, so the performance of the model may heavily depend on the quality of the 2D renderings of the 3D objects. Therefore, if the renderings are of poor quality, could the model's performance degrade?\n2. The experiments in the paper utilize high-resolution point cloud scans.
Given that the point clouds obtained in practical applications may not always be high resolution, how effective is this method in detecting anomalies in low-resolution, sparse scans?\n3. In the appendix's failure case section, the detection of the tire failed on the point score map. Thus, if a tire, or any object, has noise, how significantly does it affect the recognition on the point score map? I suspect that the possible reason for this failure could be the algorithm's sensitivity to point density and spatial distribution. When anomalies are located in areas where point density or distribution are inconsistent or significantly different from the training data, can this model still generalize well to new, unseen anomalies?\n\nSee weaknesses." } ]
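The reviews above repeatedly ask how PointAD renders a point cloud into multi-view 2D images and then projects per-pixel anomaly scores back onto the 3D points. The sketch below illustrates only the back-projection/aggregation step, under the assumption that rendering has already produced, for each view, a point-to-pixel index map, a visibility mask, and a per-pixel anomaly score map; how PointAD actually renders the views and scores pixels with CLIP is not reproduced here, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def aggregate_point_scores(pixel_scores, point_to_pixel, valid):
    """Average per-pixel anomaly scores back onto 3D points across views.

    pixel_scores   : list of (H, W) arrays, one anomaly score map per rendered view
    point_to_pixel : list of (N, 2) integer arrays, row/col of each point's pixel per view
    valid          : list of (N,) boolean arrays, True if the point is visible in that view
    Returns an (N,) array of per-point scores (NaN for points visible in no view).
    """
    n_points = point_to_pixel[0].shape[0]
    score_sum = np.zeros(n_points)
    view_count = np.zeros(n_points)
    for scores, idx, vis in zip(pixel_scores, point_to_pixel, valid):
        rows, cols = idx[vis, 0], idx[vis, 1]
        score_sum[vis] += scores[rows, cols]
        view_count[vis] += 1
    return np.where(view_count > 0, score_sum / np.maximum(view_count, 1), np.nan)

# Toy usage: 3 views of 5 points on a 4x4 score map.
rng = np.random.default_rng(0)
views = [rng.random((4, 4)) for _ in range(3)]
mapping = [rng.integers(0, 4, size=(5, 2)) for _ in range(3)]
visible = [rng.random(5) > 0.2 for _ in range(3)]
print(aggregate_point_scores(views, mapping, visible))
```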
01s5ODIHKd
FreqMark: Invisible Image Watermarking via Frequency Based Optimization in Latent Space
Invisible watermarking is essential for safeguarding digital content, enabling copyright protection and content authentication. However, existing watermarking methods fall short in robustness against regeneration attacks. In this paper, we propose a novel method called FreqMark that involves unconstrained optimization of the image latent frequency space obtained after VAE encoding. Specifically, FreqMark embeds the watermark by optimizing the latent frequency space of the images and then extracts the watermark through a pre-trained image encoder. This optimization allows a flexible trade-off between image quality and watermark robustness, and effectively resists regeneration attacks. Experimental results demonstrate that FreqMark offers significant advantages in image quality and robustness, permits flexible selection of the number of encoded bits, and achieves a bit accuracy exceeding 90\% when encoding a 48-bit hidden message under various attack scenarios.
https://openreview.net/pdf/9234ff8acbbb7b2334d075f931eeea536868b987.pdf
[ { "confidence": 4, "rating": 6, "review_id": "cDdsIEOfBH", "review_text": "This paper considers the problem of image watermarking. In particular, it introduces a method called FreqMark where the watermark is embedded in the latent frequency space obtained after variational autoencoder (VAE) encoding. Numerical results are carried out on test datasets containing 500 randomly selected images from ImageNet and 500 images from the DiffusionDB dataset. Comparative results against DwtDctSvd, Stable Signature and SSL indicate that FreqMark exhibits a superior quality-robustness tradeoff compared to the benchmark methods.\n\nThe main strengths of this paper are the novelty of the proposed FreqMark algorithm and the superior performance of FreqMark compared to the baseline image watermarking methods.\n\nThe main idea behind FreqMark is hiding the message bits in the FFT outputs of latent vectors coming from the VAE. In numerical experiments on two image datasets (ImageNet and DiffusionDB), the authors show that FreqMark provides a bit accuracy of better than 90% for a 48-bit encoding setting. The ablation studies investigating image quality, number of encoding bits, noise levels and spatial perturbations are reasonably convincing of the benefits offered by FreqMark.\n\nThe appendices contain significant additional information regarding FreqMark, and this is one of the strengths of the paper.\n\nOne of the main weaknesses of the work is that there is no convincing proof or theoretical justification of why hiding watermark bits in the FFT of the latent space should be beneficial. The usual argument for using the frequency domain to hide watermark bits makes sense in that the FFT is applied to images (i.e., image pixels) and captures the spatial relationships between the image pixels. What kind of \"spatial\" relationship is captured by the FFT in the latent space, and why is that beneficial for hiding a watermark?\n\nThe FFT produces complex values, yet there is no discussion of how the watermark bits are embedded in these complex values. In Fig. 1, the outputs of the FFT are displayed as magnitude images; if this is what is intended, what happened to the phase of the FFT outputs?\n\n1. Are the FFT outputs complex, and if so, how are the watermark bits embedded?\n\n2. What do the FFT outputs represent when applied to the latent vectors? When applied to 1D time signals or 2D images, the FFT output describes the frequency content of the signals and images. What does the FFT of the latent space represent and why is it useful?" }, { "confidence": 4, "rating": 6, "review_id": "DyXkoDJetk", "review_text": "This paper introduces FreqMark, a novel invisible watermarking method that enhances digital content protection through optimization in the image's latent frequency space. Experiments have been conducted to demonstrate the robustness against regeneration attacks based on VAEs and diffusion models.\n\n1. The paper is well-written and well-organized.\n\n2. Extensive experiments have been conducted to demonstrate the robustness against various attacks, including VAE and diffusion-model attacks.\n\n1. Lacks some theoretical analysis. In the experiments, do you need to use the same VAE model for the attack as the one used during watermark training? If not, then an analysis may be needed as to why the method is still robust against attacks with different VAE models.\n\n2. Since the proposed method requires case-by-case optimization, what is the watermarking time for each case, and how does it compare to other competing methods?\n\n3.
Why do we need a set of pre-trained N-dimensional direction vectors instead of directly producing the message? \n\n4. In Equation (4), why $k \in N$? $N$ is a number, not a set, so $\in N$ is not appropriate. Besides, using $v^N$ to denote a vector set and $v^k$ to represent a vector can lead to confusion. In Figure 2, there are $K+1$ vectors in the set; I think $K+1$ is the bit length.\n\n5. Minors:\n- $\epsilon_1$ and $\epsilon_2$ (or $\varepsilon_1$ and $\varepsilon_2$) appear in Figure 2, so it is better to mention them in the caption of Figure 2.\n- In line 213, page 7, the word 'We' should be 'we'.\n\nPlease see the Weaknesses." }, { "confidence": 3, "rating": 4, "review_id": "nQaE8Xok7P", "review_text": "This paper proposes a method called FreqMark that is able to protect invisible watermarks from regeneration attacks. By using unconstrained optimization of the image latent frequency space obtained after VAE encoding, the proposed FreqMark achieves better robustness against regeneration attacks and traditional attacks.\n\nThe proposed method achieves better robustness against regeneration attacks. By using a newly proposed optimization strategy, the proposed method achieves a balance between image quality and the desired robustness.\n\n1. The major contribution of this paper is to introduce a kind of image watermark robust to regeneration attacks. However, this is not highlighted in the title. The authors are encouraged to reflect it in the title. \n2. The proposed methodology follows an established paradigm with incremental innovations. The whole approach does not demonstrate enough differences when compared with previous approaches. \n3. For the second contribution, the authors claim that their proposed framework is flexible and then say such flexibility guarantees a trade-off between the number of bits, image quality, and robustness. However, why can such properties be considered flexibility?\n4. The whole pipeline presented in Figure 2 lacks novelty. This is just a very common encoder-decoder framework. The authors need to articulate their contributions for this part more clearly.\n\nI have stated my concerns in the Weaknesses section. The authors are encouraged to address them during the rebuttal." }, { "confidence": 4, "rating": 6, "review_id": "8M0EygXYhY", "review_text": "The authors propose a new post-processing watermark for imagery, FreqMark. FreqMark embeds a binary message into the frequency domain of a VAE-encoded image via a small perturbation, and then, following IFFT + decompression, utilizes a pre-trained image encoder to extract the message. A PSNR + LPIPS metric is used to protect image quality against the message embedding. The dual approach of VAE encoding + Gaussian noise makes the mark resilient against regeneration and Gaussian noise attacks. The method demonstrates high bit accuracy and good image quality when compared against DwtDct, SSL, and Stable Signature on watermarked DiffusionDB and ImageNet images.\n\n-Easy-to-follow narrative and nice motivation in the age of generative imagery. \n\nThe intuition behind the FreqMark algorithm is sound, easy to follow, and presumably simple to implement. \n\n-Method demonstrates robustness against regeneration, which is a very potent attack.\n\nThis work is missing two critical competitor baselines which must be addressed. \n\n-In my opinion, FreqMark is an incremental variant of the StegaStamp [1].
Surprisingly, the authors did not compare against this method even though it is well-known in existing literature that the StegaStamp is robust against many attacks, including regeneration [2]. Like FreqMark, the StegaStamp increases resilience against attacks by incorporating them into the training pipeline and uses a critic loss to preserve image quality (LPIPS + L2, versus LPIPS + PSNR for FreqMark). The resemblance of equation (10) in this manuscript to the loss function equation (2) in [1] begs the question of novelty. It is also incorrectly stated in line 71 that the StegaStamp only relies on differential image perturbations for the training pipelines -- in fact, any attack can be added, as the decoder is trained after image manipulation. \n\n-Again, if the spirit of the paper is to increase resilience against regenerations, the authors also needed to compare against the state-of-the-art Tree-Ring watermark [4], which was noted to be incredibly resilient against regenerations by the authors of the regeneration attack [2]. As the Tree-Ring is an in-processing technique that embeds a message within a diffusion process, one way to set up a comparison is to post-process a collection of Tree-Ring watermarked images via FreqMark, and then independently extract both watermarks. \n\n-As noted in [3], there is no single perceptual metric that is an objective measure of image quality, thus low PSNR or LPIPS distance does not necessarily indicate the method is not introducing artifacts. The authors need to add 1-2 more metrics (maybe L2 and FID, for example) for a more convincing argument. \n\n-500 images is too small a sample size for the tested FPR thresholds. Modern literature in this field such as [3,4,5] are using several thousand images. \n\n-VAE regenerations are far weaker compared to diffusion regenerations. Readers will want to see how the FreqMark holds up against longer, deeper regenerations (>= 100 steps) to see how the decoding accuracy is affected.\n\n-As observed in [3], the use of publicly available VAEs to encode/decode watermarks is easily defeated if the attacker uses a regeneration leveraging encoders/decoders with the same architecture. \n\n\n[1] Tancik, M., Mildenhall, B., & Ng, R. (2020). Stegastamp: Invisible hyperlinks in physical photographs. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 2117-2126).\n\n[2] Zhao, X., Zhang, K., Su, Z., Vasan, S., Grishchenko, I., Kruegel, C., ... & Li, L. (2023). Invisible image watermarks are provably removable using generative ai. arXiv preprint arXiv:2306.01953.\n\n[3] An, B., Ding, M., Rabbani, T., Agrawal, A., Xu, Y., Deng, C., ... & Huang, F. (2024). Benchmarking the robustness of image watermarks. arXiv preprint arXiv:2401.08573.\n\n[4] Wen, Y., Kirchenbauer, J., Geiping, J., & Goldstein, T. (2023). Tree-ring watermarks: Fingerprints for diffusion images that are invisible and robust. arXiv preprint arXiv:2305.20030.\n\n[5] Saberi, M., Sadasivan, V. S., Rezaei, K., Kumar, A., Chegini, A., Wang, W., & Feizi, S. (2023). Robustness of ai-image detectors: Fundamental limits and practical attacks. arXiv preprint arXiv:2310.00076.\n\n1. See weaknesses.\n\n2. Which version of Stable Diffusion was used for the regeneration attack?" } ]
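Several reviewers above ask how bits are embedded in the complex FFT of the VAE latent and how they are later read out. The toy PyTorch sketch below shows one plausible reading of the recipe suggested by the abstract: optimize a perturbation of the latent in frequency space so that projections onto fixed direction vectors encode the message by their sign, while a fidelity term limits the change to the latent. The VAE and the pre-trained image encoder are replaced by stand-ins (a random latent and random unit directions), and the sizes, loss weights, and update rule are assumptions for illustration, not FreqMark's actual implementation.

```python
import torch

torch.manual_seed(0)
n_bits, C, H, W = 8, 4, 16, 16                   # toy sizes; real latents are larger
latent = torch.randn(C, H, W)                     # stand-in for a VAE-encoded image latent
bits = torch.randint(0, 2, (n_bits,)).float()     # hidden message
directions = torch.nn.functional.normalize(       # stand-in for fixed extraction directions
    torch.randn(n_bits, C * H * W), dim=1)

def extract_logits(lat):
    # Stand-in extractor: project the flattened latent onto the fixed directions.
    return directions @ lat.reshape(-1)

delta = torch.zeros(C, H, W, requires_grad=True)  # real-valued perturbation of the spectrum
opt = torch.optim.Adam([delta], lr=0.05)
for _ in range(500):
    freq = torch.fft.fft2(latent) + delta         # perturb the latent in frequency space
    lat_w = torch.fft.ifft2(freq).real            # watermarked latent, back in spatial form
    msg_loss = torch.nn.functional.binary_cross_entropy_with_logits(extract_logits(lat_w), bits)
    fidelity = (lat_w - latent).pow(2).mean()     # proxy for the PSNR/LPIPS quality terms
    loss = msg_loss + 10.0 * fidelity
    opt.zero_grad()
    loss.backward()
    opt.step()

lat_w = torch.fft.ifft2(torch.fft.fft2(latent) + delta.detach()).real
decoded = (extract_logits(lat_w) > 0).float()
print("bit accuracy:", (decoded == bits).float().mean().item())
```

Per the abstract and reviews, the real method decodes through a frozen image encoder applied to the decoded image and injects Gaussian noise and simulated attacks during optimization; none of that is modeled in this sketch.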
01qa1ZJs65
Bridge the Modality and Capability Gaps in Vision-Language Model Selection
Vision Language Models (VLMs) excel in zero-shot image classification by pairing images with textual category names. The expanding variety of Pre-Trained VLMs enhances the likelihood of identifying a suitable VLM for specific tasks. To better reuse the VLM resource and fully leverage its potential on different zero-shot image classification tasks, a promising strategy is selecting appropriate Pre-Trained VLMs from the VLM Zoo, relying solely on the text data of the target dataset without access to the dataset’s images. In this paper, we analyze two inherent challenges in assessing the ability of a VLM in this Language-Only VLM selection: the “Modality Gap”—the disparity in VLM’s embeddings across two different modalities, making text a less reliable substitute for images; and the “Capability Gap”— the discrepancy between the VLM’s overall ranking and its ranking for target dataset, hindering direct prediction of a model’s dataset-specific performance from its general performance. We propose VLM Selection With gAp Bridging (SWAB) to mitigate the negative impact of two gaps. SWAB first adopts optimal transport to capture the relevance between open-source and target datasets with a transportation matrix. It then uses this matrix to transfer useful statistics of VLMs from open-source datasets to the target dataset for bridging two gaps. By bridging two gaps to obtain better substitutes for test images, SWAB can accurately predict the performance ranking of different VLMs on the target task without the need for the dataset’s images. Experiments across various VLMs and image classification datasets validate SWAB’s effectiveness. Code is available at: https://github.com/YCaigogogo/SWAB.
https://openreview.net/pdf/5b952c94f858327b7c28f21c05fb943af4ee8717.pdf
[ { "confidence": 4, "rating": 5, "review_id": "iy2Y4ZI0e5", "review_text": "This paper considers a zero-shot image classification strategy based on selecting the most appropriate Pre-Trained VLM from the VLM Zoo, relying solely on the text data of the target dataset without access to images. Two challenges, i.e., the “Modality Gap” across two different modalities and the “Capability Gap” between the VLM's overall ranking and its ranking on the target dataset, hinder the appropriate selection of the different VLMs. To address the challenges, this paper adopts a transportation matrix to capture the relevance between open-source and target datasets, and also uses this matrix to transfer useful statistics of VLMs from open-source datasets to the target dataset. Extensive experiments on image classification datasets demonstrate the effectiveness of the proposed method.\n\nOriginality: \nThe main novelty of this paper is to utilize Language-Only VLM Selection to address zero-shot image classification. This is the first time that Language-Only VLM Selection is applied in the zero-shot scenario, where the absence of image data makes it difficult to directly select the VLMs.\nQuality: \nThis paper has well-motivated methods and extensive experiments.\nClarity: \nThe paper is well written, with clear descriptions and readable figures.\nSignificance: \nThe two key issues of the modality gap and the capability gap are considered in this paper. In addition, optimal transport-based methods are proposed to alleviate these issues, and the experimental results on multiple VLMs and image classification datasets show their effectiveness.\n\nThe main weakness is that the proposed VLM selection strategy relies on open-source datasets' data, which restricts the practicality of the proposed method. For the zero-shot image classification task, using the aggregated knowledge from multiple VLMs is acceptable, but using open-source datasets may violate the zero-shot data restrictions, or even lead to benchmark leakage. In addition, the impact of the open-source datasets is not analyzed. For example, if the categories in the open-source datasets are very different from the ones in the target zero-shot classification tasks, does the performance of the proposed method remain strong?\n\nA comprehensive analysis of the impact of the open-source datasets would improve the justification of the proposed method for addressing the zero-shot image classification task." }, { "confidence": 4, "rating": 4, "review_id": "Bqsn2ZC6cZ", "review_text": "With the popularity of Vision Language Model (VLM) research in recent years, many versions have emerged, forming the VLM Zoo. This paper aims to select the most appropriate pre-trained VLM from the VLM Zoo, relying solely on texts of the target dataset without access to images. Two challenges are analyzed, namely, “Modality Gap” and “Capability Gap”. One VLM selection method (SWAB) is proposed: first calculate an optimal transportation matrix to capture the relevance between open-source and target datasets; then use this matrix to transfer useful statistics of VLMs from open-source to target datasets for bridging the two gaps and enhancing the estimation of the VLM's capability. Experiments are carried out across some VLMs and classification datasets to validate the effectiveness.\n\n[+] Due to the differences in training data, architecture and pipelines, existing VLMs do have their own strengths in terms of capabilities.
Thus, studying how to better utilize the members of the VLM Zoo under different scenarios is somewhat valuable.\n\n[+] The paper has rich symbols and formalizations, and the space utilization is also reasonable.\n\n[+] Some experiments are conducted to demonstrate the effectiveness of components and designs on a diverse set of image classification datasets.\n\n[-] Problem rationality. Although each member in the VLM Zoo has its own strengths, relying solely on textual descriptions to find the most suitable VLM (LOVM) seems too demanding. Moreover, language is usually compact, which can easily conflate concepts. Why can we not use a combination of images and texts (or even text-image pairs) for VLM selection, as images are very easy to obtain in real-world scenarios? \n\n[-] Selection strategy. Given that the members in the VLM Zoo have different strengths and abilities, it means that they have some degree of complementarity. One natural idea is to follow ensemble learning, that is, vote on the results of each member, or select the top-k VLMs' results for complementary combination. Why only choose the most suitable one? Please provide more comparisons and explanations. \n\n[-] Generalization. This paper carries out experiments on many datasets, but all for image classification. Since VLMs excel in handling various tasks, such as detection, segmentation, captioning, and VQA, the generalization to these tasks is still unclear.\n\n[-] Better visualization. For Fig. 3, there are too many symbols and formulas, which makes it difficult to understand (the point of using a figure is to reduce the reader's effort). To this end, the reviewer believes that this paper could be polished further." }, { "confidence": 3, "rating": 5, "review_id": "BMTyKsoDWJ", "review_text": "The paper proposes SWAB, a modification of LOVM, to mitigate the negative impact of the two gaps of LOVM: the modality gap and the capability gap. The results show the effectiveness of SWAB.\n\n1. The motivation of this paper is clear and the structure is easy to follow.\n\n1. For the modality gap, SWAB uses open-source datasets; could you please report how many image samples are used in the experiment? And could you please provide the accuracy versus the number of open-source image samples?\n2. SWAB seems to be a learning-based model; could you please add an ablation study for a non-learning-based variant of SWAB?\n\n1. Why add Gaussian noise to the generated target embedding?\n2. How is the weighting parameter $\alpha$ in Eq. 9 selected?\n3. The auxiliary texts are generated by ChatGPT; could you please test an open-source LLM like Llama 2?" }, { "confidence": 5, "rating": 7, "review_id": "0nD6uQ7oh3", "review_text": "This article focuses on selecting the best model from a visual-language multimodal model zoo for specific downstream tasks without having images of those tasks. It provides a detailed analysis of two challenges faced in this problem — Modality Gap and Capacity Gap. This paper proposes a method called SWAB, which leverages the statistical measures of VLMs on open-source tasks, based on the category similarity between historical open-source tasks and target tasks, to mitigate the negative impacts of these gaps on Language-Only VLM model selection. Experimental results demonstrate the effectiveness of the proposed method.\n\n1. The problem studied in this paper is novel and interesting, with significant practical value. In recent years, the number of open-source VLMs has been increasing, and their zero-shot capabilities have been improving.
For practical downstream users, especially those who lack computational resources, have limited funding to label extensive evaluation datasets, or are non-machine learning practitioners with little experience in setting up and evaluating various VLMs, the Language-Only Vision Model selection (LOVM) problem studied in this article is important.\n2.This paper presents a clear and well-motivated problem statement. The authors provide a detailed analysis of the negative impacts of the Modality Gap and Capacity Gap on Language-Only Vision Model (LOVM) selection through explanations and experimental validation. They demonstrate that we cannot directly use text samples related to categories in the target dataset as substitutes for test image samples, nor can we use the average performance of VLMs on open-source datasets as proxies for their performance on specific tasks. This problem statement is compelling and well-supported.\n3.The method presented in the paper is novel and interesting. It leverages the category similarity between target tasks and open-source tasks to estimate the corresponding gap vectors and accuracy/ranking of VLMs on the target tasks, based on their gap vectors and accuracies/rankings on the open-source tasks. This idea is grounded in a reasonable assumption: the statistics of VLMs on similar categories are likely to be similar. By doing so, the method effectively utilizes the rich information contained in category similarity to make targeted use of the models' statistics on historical tasks.\n4.The evaluation approach in the article is comprehensive and detailed. The benchmark includes 23 commonly used image classification datasets and 43 widely recognized VLMs (such as CLIP, CoCa, BLIP, BEiT, among others, with different pre-training methods and model architectures), making the experimental results highly convincing. Additionally, the article compares its method with various baseline approaches, including the current state-of-the-art method ModelGPT, and demonstrates its significant superiority over these methods. The article also conducts thorough ablation studies to validate the effectiveness of each component of the proposed method.\n\n1.The performance of ModelGPT is slightly inconsistent with the results in its paper. What are the reasons for this?\n2.The method of setting the \\alpha value in formula 9 is not stated. Is the same \\alpha used for all data sets? What is the exact value?\n\nRefer to the weakness." } ]
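The record above describes SWAB as computing an optimal-transport plan between open-source and target categories and then using it to carry per-class VLM statistics over to the target task. Below is a small NumPy sketch of that general idea, using entropic (Sinkhorn) optimal transport over cosine distances between class-name embeddings; the embeddings, the regularization, the uniform marginals, and the statistic being transferred are placeholder assumptions rather than SWAB's exact procedure.

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iter=200):
    """Entropic optimal transport with uniform marginals; returns the transport plan."""
    n, m = cost.shape
    K = np.exp(-cost / reg)
    a, b = np.ones(n) / n, np.ones(m) / m
    v = np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
src_emb = rng.normal(size=(20, 64))   # stand-in text embeddings of open-source class names
tgt_emb = rng.normal(size=(5, 64))    # stand-in text embeddings of target class names
src_emb /= np.linalg.norm(src_emb, axis=1, keepdims=True)
tgt_emb /= np.linalg.norm(tgt_emb, axis=1, keepdims=True)

cost = 1.0 - src_emb @ tgt_emb.T      # cosine distance between class names
plan = sinkhorn(cost)                 # (20, 5) transport matrix

# Transfer a per-class statistic of one VLM (e.g., its class-wise accuracy or gap
# vector on open-source data) to the target classes as a plan-weighted average.
src_stat = rng.random(20)                          # placeholder per-class statistic
weights = plan / plan.sum(axis=0, keepdims=True)   # normalize per target class
tgt_stat = weights.T @ src_stat                    # estimated statistic per target class
print(tgt_stat)
```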
01XV5Za56k
Testing Calibration in Nearly-Linear Time
In the recent literature on machine learning and decision making, calibration has emerged as a desirable and widely-studied statistical property of the outputs of binary prediction models. However, the algorithmic aspects of measuring model calibration have remained relatively less well-explored. Motivated by Blasiok et al '23, which proposed a rigorous framework for measuring distances to calibration, we initiate the algorithmic study of calibration through the lens of property testing. We define the problem of calibration testing from samples where given $n$ draws from a distribution $\mathcal{D}$ on $(\text{predictions}, \text{binary outcomes})$, our goal is to distinguish between the cases where $\mathcal{D}$ is perfectly calibrated or $\epsilon$-far from calibration. We make the simple observation that the empirical smooth calibration linear program can be reformulated as an instance of minimum-cost flow on a highly-structured graph, and design an exact dynamic programming-based solver for it which runs in time $O(n\log^2(n))$, and solves the calibration testing problem information-theoretically optimally in the same time. This improves upon state-of-the-art black-box linear program solvers requiring $\Omega(n^\omega)$ time, where $\omega > 2$ is the exponent of matrix multiplication. We also develop algorithms for tolerant variants of our testing problem improving upon black-box linear program solvers, and give sample complexity lower bounds for alternative calibration measures to the one considered in this work. Finally, we present experiments showing the testing problem we define faithfully captures standard notions of calibration, and that our algorithms scale efficiently to accommodate large sample sizes.
https://openreview.net/pdf/57ec61f923ff97b2f5fe22a78242ae6391c4902d.pdf
[ { "confidence": 4, "rating": 7, "review_id": "Hs4Cxw1WjS", "review_text": "This paper considers the property testing of calibration, under the lower distance to calibration metric. By property testing, the authors adapt the standard definition in the TCS literature, where if the distance is at least $\\epsilon$ then the algorithm will reject, and accept if the distance is 0. The main contributions are algorithms for solving the aforementioned testing problem in nearly linear time, in particular, authors show that\n\n* The $\\epsilon$-calibration testing problem could be solved in time $O(n\\log^2 n)$;\n\n* For a tolerant version of calibration testing, where the algorithm accepts if the distance $\\leq \\epsilon_1$ and rejects if the distance $\\geq \\epsilon_2$, there is an algorithm that solves it in $O((\\epsilon_1-\\epsilon_2)^{-2} n\\log n)$ time. If one chooses $\\epsilon_1-\\epsilon_2=1/\\sqrt n$, which is the information-theoretically smallest possible choice, then the runtime is nearly quadratic: $O(n^2 \\log n)$.\n\nThe first result relies on solving the LP associated to the smooth calibration error, formulating it as a min-cost flow on a path graph adjoint by a single vertex. They show this structured LP could be solved with simple dynamic programming plus segment trees, without resorting the heavy machinery of IPMs and they empirically verify their algorithm outperforms commercial solvers (which is not very surprising, since the graph is so simple).\n\nThe second result solves a harder LP, using the rounding framework for combinatorial optimization, particularly solvers that could utilize area convexity well due to Sherman and extended by Jambulapati, Sidford and Tian. This improves upon using the state-of-the-art LP solver in a black-box way.\n\nThe theoretical results in this paper are quite interesting. This seems to be the first paper to formalize calibration as a property testing problem, and they give fast algorithms under the lower distance to calibration error. Techniques are not complicated but neat. In particular, authors show that the smooth calibration error LP could be cast into a min-cost flow problem on a simple, planar graph. Utilizing the graph structure, they develop combinatorial, dynamic programming-based approach that runs in nearly linear time. This is in drastic difference from most endeavors in algorithmic optimization community, where most works focus on developing complicated data structures and machinery to solve an IPM. This also enables authors to implement their algorithm and compares with CVXPY / commercial solvers in terms of efficiency, which is not known for most theoretically-efficient LP solvers since 2019.\n\nThe paper is overall well-written with solid results, however the paper format of NeurIPS is not particularly suitable for this paper. Theoretically, it also contains an algorithm for tolerant calibration testing, and a sample complexity lower bound. However, these results are only roughly conveyed in the main body of the paper. I wonder whether journals (JMLR, TMLR) or conferences such as COLT might be a better fit.\n\nA few questions:\n\n* For smooth calibration, the runtime seems not depending on $\\epsilon$. Is that because $\\epsilon$ is chosen as $1/\\sqrt n$, or smooth calibration is a constant factor approximation to lower dCE hence $\\epsilon$ is a constant? 
I think it's important to clarify what role does $\\epsilon$ play in the smooth calibration program.\n\n* Do you think the method of transforming smooth calibration to computing a min-cost flow on a path adjoint with a vertex could be generalized to other programs where the constraints are encoding Lipschitzness among the data points? Or is the definition of vector $b$ crucial here?\n\nA few comments:\n\n* For LP solvers, consider citing the state-of-the-art result by Jiang, Song, Weinstein and Zhang, STOC'21." }, { "confidence": 3, "rating": 7, "review_id": "mpS4EqSedt", "review_text": "This paper introduces a property testing formulation of verifying calibration, along with efficient algorithms for solving this problem as well as a relaxed/tolerant version. The empirical results support the theory and justify the efficacy of the proposed algorithms.\n\n**Originality:** The calibration testing formulation is new (although a natural extension of prior work [1]), and consequently the algorithm for solving the problem is also new.\n\n**Quality:** The paper is technically sound. I did not check the main proof details in the appendix, but I did check the in-lined proofs in the main body and I saw no issues.\n\n**Clarity:** Overall, the paper is straightforward to read and the necessary background is properly introduced.\n\n**Significance:** There has been a significant amount of recent theoretical work on how to properly measure calibration, and this paper makes a nontrivial contribution both in terms of perspective and methodology.\n\n[1] https://arxiv.org/abs/2203.01850\n\nOverall, I find the paper strong in what it sets out to achieve (and thus recommend accept); the main weakness in my view is proper contextualization and how to apply the ideas to larger scale calibration settings.\n\nIn particular, while the authors emphasize how different calibration metrics such as smoothECE perform worse for property testing, it would be interesting to have more concrete applications to model comparison in practice. For example, are there instances where comparing models using directly computed calibration metric values (i.e. ranking by smoothECE) leads to spurious rankings that would have been detected by testing with dCE? The results in Table 1 suggest this should be the case, but I think more clearly demonstrating how the proposed framework can be used as a better comparison method in practical settings would be very useful. Relatedly, I have some questions regarding the DenseNet experimental setup that I outline below under Questions.\n\nMinor Comments:\n- There seems to be some mixing up of directed/undirected throughout Lemma 2.\n\n- The DenseNet experiments rely on heavily subsampling the test data; this should not be necessary for the proposed approach correct? It seems to me that even the compared-to approaches should be fine on CIFAR-10-scale data without subsampling? Additionally, the subsampling setup seems somewhat arbitrary - would it be possible to provide more justification on the chosen sample size, the number of samples, etc.?\n\n- What is referred to as ConvECE in the paper was actually introduced as SmoothECE, correct? It would be useful to clarify this." 
}, { "confidence": 4, "rating": 6, "review_id": "hHERxw02oR", "review_text": "The paper studies the problem of *calibration testing*.\nHere, we are given a distribution $D$ over outcomes and the goal\nis to decide if the distribution is calibrated; specifically,\nthe property testing problem they formulate distinguishes\nbetween perfectly calibrated distributions and those that\nare $\\epsilon$-far from calibrated.\nA distribution $D$ over prediction-outcomes\n$[0, 1] \\times \\{0, 1\\}$ is said to be perfectly calibrated\nif $E_{(v, y) \\sim D}[y \\mid v] = v$. The \"distance to calibration\" defined here\nis based on [BGHN23a] which is intuitively an optimal transport\ndefinition;\nthey set $dCE(D)$ to be minimum of $E_{(u, v, y) \\sim \\Pi}[|u - v|]$\nacross all $\\pi$ such that $(v, y)$ has marginal distribution $D$\nand $(u, y)$ is a perfectly calibrated distribution.\n\nThe authors design a dynamic-algorithm based solver for the problem.\nA key insight here is a novel algorithm\nfor calculating the smooth calibration error\nby reformulating it as a min-cost flow linear program on specific graph.\nUtilizing the properties of the graph (which is a union of a star and a path),\nthe authors obtain a dynamic programming based algorithm which has an update time of $O(n \\log^2(n))$.\nThis improves on the existing bounds that are $O(n^{\\omega})$ where $\\omega$ is the matrix multiplication exponent.\n\nStrengths:\n- The paper is studies an interesting problem, is generally well-written, and makes important contributions.\n- The techniques are interesting, in particular the algorithm for calculating\n the Smooth calibration error is of independent interest.\n\nWeakness:\n- Framing: it appears to me that the main contribution of the paper is a new algorithm for calculating smCE.\n Indeed, the main \"property testing\" part of the paper (excluding the appendix) seems to be Lemma 3 which is from\n prior work. While the implications for property testing are interesting, it seems to me that they are\n mostly known. Lemma 5 also seems standard for property testing and seems like it holds for a much more general\n class of testing problems, rather than being specific to calibration.\n- Proof of Lemma 2 should be better explained (especially since it is the central insight in the paper).\n For instance, what does each variable of the program correspond to in the flow?\n How does optimization problem (7) intuitively correspond to the min-flow problem of the graph specified\n in Lemma 2? What does the constraint $B^Tf=d$ mean here?\n\nSee weaknesses above. \nIn addition, I think adding further motivation for the problem of testing calibration could improve the paper.\n\n- The linear program (2) seems to be already provided in [BGHN23a]; see Theorem 7.14. If this is the case, it should be noted more clearly.\n- The theorem statements imply that $\\epsilon_n=\\Theta(n^{-1/2})$ is the range in which\n the problem can be information theoretically solved. Lemma 5 provides a lower bound;\n the upper bounds is Lemma 3; this should be noted in the intro." }, { "confidence": 2, "rating": 5, "review_id": "s0UH7d4tic", "review_text": "This paper studies testing the calibration of predictors through joint distributions and contributes efficient methods using appropriate measures.\n\nOriginality: \nThere are new methods.\nThe work can be considered a novel combination of well-known techniques. \nIt is clarified how this work differs from previous contributions. 
\nRelated work appears to be adequately cited.\n\nQuality: \nThe submission appears to be technically sound. \nThe claims are rather well supported by experimental results. \nThe methods used are appropriate.\nThis is a rather complete piece of work.\n\nClarity: \nThe submission is somewhat clearly written. \n\nSignificance: \nThe results seem important. \nOther researchers and practitioners will possibly use the ideas or build on them, owing to the code availability.\nThe submission seems to address a difficult task in a better way than previous works. \nThe work advances the state of the art, as demonstrated via run-times. \nThe work provides unique theoretical and experimental approaches with demonstrated run-time gains.\n\nQuality:\nThe authors are not careful (and possibly not honest) about evaluating their work, specifically its weaknesses and limitations.\n\nClarity:\nThe work is not well organized; many parts (deferred to the appendix) need to be in the main paper. \nThe work informs the reader at a rather superficial level with regard to exact implementations.\n\nMajor Questions:\n- Definition 1: clarify the empirical forms of dCE and ECE and the benefits dCE brings.\n- \"Page 3 Line 83\" and Theorem 1 disagree about the information-theoretic possibility.\n- Page 8 Line 288: it is unclear how Lemma 4 is achieved; more explanation is needed.\n\nMinor Questions:\n- Page 5 Line 168: why $2\binom{n}{2}$ constraints?\n- Page 5 Line 175: the exact use of the triangle inequality needs to be spelled out here.\n- In (5), why does it switch from max to min?\n\nSuggestions:\n- Clarify whether Theorem 1 is solved with smCE and Theorem 2 with LDTC. It is not clear why Theorem 2 suffers additional complexity.\n- Deferred definitions (4 and 5) seem central and need to be included in the paper." } ]
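Several reviewers above discuss the empirical smooth calibration linear program (including the question about the $2\binom{n}{2}$ pairwise constraints and the role of the triangle inequality). The SciPy sketch below sets up that LP as I read it from the abstract and reviews: maximize the average of $w_i (y_i - v_i)$ over weights in $[-1, 1]$ that are 1-Lipschitz in the predictions, where sorting the predictions lets adjacent-pair constraints stand in for all pairs. It uses a generic LP solver purely for illustration; it is not the paper's near-linear-time min-cost-flow/dynamic-programming algorithm, and the exact normalization may differ from the paper's definition.

```python
import numpy as np
from scipy.optimize import linprog

def empirical_smooth_calibration(preds, outcomes):
    """LP estimate of the empirical smooth calibration error.

    Variables w_i in [-1, 1], with |w_i - w_j| <= |v_i - v_j| enforced only on
    adjacent sorted predictions (sufficient by the triangle inequality).
    """
    order = np.argsort(preds)
    v, y = np.asarray(preds)[order], np.asarray(outcomes)[order]
    n = len(v)
    c = -(y - v)                      # linprog minimizes, so negate the objective
    rows, b_ub = [], []
    for i in range(n - 1):            # w_{i+1}-w_i <= v_{i+1}-v_i and the reverse
        r = np.zeros(n)
        r[i + 1], r[i] = 1.0, -1.0
        rows += [r, -r]
        b_ub += [v[i + 1] - v[i]] * 2
    A = np.array(rows) if rows else None
    b = np.array(b_ub) if b_ub else None
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(-1, 1)] * n, method="highs")
    return -res.fun / n

rng = np.random.default_rng(0)
v = rng.random(200)
y_calibrated = (rng.random(200) < v).astype(float)              # P(y=1 | v) = v
y_miscal = (rng.random(200) < np.clip(v + 0.3, 0, 1)).astype(float)
print("roughly calibrated:", empirical_smooth_calibration(v, y_calibrated))
print("miscalibrated:     ", empirical_smooth_calibration(v, y_miscal))
```

A calibration test in the paper's sense would compare such an estimate against a threshold on the order of $1/\sqrt{n}$; the exact constants are not reproduced here.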
00uVk06eVK
On the Noise Robustness of In-Context Learning for Text Generation
Large language models (LLMs) have shown impressive performance on downstream tasks by in-context learning (ICL), which heavily relies on the quality of demonstrations selected from a large set of annotated examples. Recent works claim that in-context learning is robust to noisy demonstrations in text classification. In this work, we show that, on text generation tasks, noisy annotations significantly hurt the performance of in-context learning. To circumvent the issue, we propose a simple and effective approach called Local Perplexity Ranking (LPR), which replaces the "noisy" candidates with their nearest neighbors that are more likely to be clean. Our method is motivated by analyzing the perplexity deviation caused by noisy labels and decomposing perplexity into inherent perplexity and matching perplexity. Our key idea behind LPR is thus to decouple the matching perplexity by performing the ranking among the neighbors in semantic space. Our approach can prevent the selected demonstrations from including mismatched input-label pairs while preserving the effectiveness of the original selection methods. Extensive experiments demonstrate the effectiveness of LPR, improving the EM score by up to 18.75 on common benchmarks with noisy annotations.
https://openreview.net/pdf/dec1e5a067348a5ee9f561e390315390b4b1398e.pdf
[ { "confidence": 3, "rating": 7, "review_id": "vOG7rlkg3O", "review_text": "The paper introduces a new method to deal with noisy annotations for in-context learning. The authors suppose that the examples that cause higher perplexity than their neighbors are more likely corrupted than their neighbors. So, the authors suggest replacing the examples, causing suspiciously high perplexity by their neighbors with lower perplexity (the process is formalized by the \"Local Perplexity Ranking\" formula). They show experimentally that this method significantly improves the quality of in-context learning for several tasks for several models of size 13B or less.\n\n- The paper approaches an important problem with a novel method that significantly improves the quality of LLMs in in-context learning scenarios;\n- The main idea of the paper is easy to understand.\n\n- In Table 2, the authors compare their method with several demonstration selection methods. However, it is still unclear whether **any** demonstration selection method, including LPR, is really the best way to improve the model quality in this situation. What about simpler methods to improve the answer quality, such as a chain of thought? What if it could work better even when coupled with the demonstration of noisy examples? Adding more types of baselines, such as some simple variations of a chain of thoughts and chain of thoughts + noisy examples would make the paper's claims much more solid, but there is no such comparison.\n- See the \"Questions\" section and \"Limitations\" section.\n\nEssential questions:\n- Why Perplexity = Inherent Perplexity + Matching Perplexity? I didn't find any experimental or theoretical confirmation.\n- In Figure 2, there is the perplexity distribution of Llama2-7B on clean and noisy annotations. Does this perplexity distribution change for bigger models, such as Llama-13B? Does it change for smaller models, such as OPT?\n- How to choose a good gamma? You wrote in lines 233-234 that \"LPR shows robustness to the choice of threshold γ\", but I didn't find any experimental confirmation of this point.\n\nQuestions about the presentation:\n- What model is used for Table 2 results? Is it LLAMA-2 7B?\n- What metric is shown in Table 3?\n- You wrote that figures 3 (a)-(d) use different gamma values, but which exactly? There is no information in the figures caption." }, { "confidence": 3, "rating": 7, "review_id": "Cr7BOjjHYb", "review_text": "The paper \"On the Noise Robustness of In-Context Learning for Text Generation\" investigates how LLMs handle noisy annotations during in-context learning (ICL). The authors propose a method called Local Perplexity Ranking (LPR) that replaces noisy candidates with nearby neighbors that are less noisy. They also explore the impact of noisy labels on ICL performance and compare different selection methods for these examples, such as TopK and DPP. Their proposed method can effectively mitigate the negative effects of noise, offering insights into enhancing the noise robustness of ICL in practical applications.\n\n- The paper introduces a novel approach (LPR) to address the issue of noisy annotations in in-context learning for text generation tasks. This is particularly original as it challenges previous assumptions about the robustness of in-context learning to noisy demonstrations in classification tasks.\n- The paper is generally well-structured and easy to follow. The definitions and metrics are defined properly. 
The methodology is described in clear detail.\n- The paper shows that LPR does improve the performance of various selection methods, including Random, TopK, and DPP, especially in the presence of noisy annotations. The proposed method addresses an important and practical issue in the field.\n\n- The absence of code makes it difficult for the broader research community to reproduce the results claimed in the paper or verify the method's effectiveness on the tasks (the authors claim that the code is released in section 4, but I'm not sure where it is).\n- The paper did not provide an analysis of the types of errors that LPR makes versus the baseline methods. Such an analysis could provide insights into the strengths and weaknesses of the approach.\n- While LPR is compared to baseline selection methods like TopK and DPP, it is not compared to other techniques specifically designed to handle noisy annotations during in-context learning.\n\n- What are the specific cases where LPR may tend to fail? Does each model's inherent capabilities affect these results?\n- What is the specific model version for GPT-4 and the approximate cost for generating relevant noise for each task? What was the specific method (or prompt) used for generating relevant/irrelevant noise?" }, { "confidence": 3, "rating": 5, "review_id": "FHaoe7vb7Y", "review_text": "This paper proposes Local Perplexity Ranking (LPR), a method to improve the robustness of in-context learning for text generation tasks when dealing with noisy annotations. The key contributions are:\n\n- Empirically demonstrating that noisy annotations hurts performance of in-context learning for text generation, unlike for classification tasks.\n\n- Proposing LPR, which ranks candidate demonstrations based on perplexity within local semantic neighborhoods to identify and replace likely noisy examples.\n\n- Experiments showing LPR improves noise robustness in most scenarios across multiple text generation datasets, 2 noise types, and 3 baseline selection methods.\n\nThe method is motivated by analyzing perplexity distributions of clean vs noisy examples and decomposing perplexity into inherent and matching components. Overall, LPR provides a simple but effective approach to mitigate issues with noisy demonstrations in in-context learning for text generation.\n\n1. Provides clear analysis on the effect of noisy labels to text generation tasks.\n\n2. The explanations in the disentanglement of perplexity justifies the method of LPR.\n \n3. Comprehensive empirical analysis and ablation studies in the appendix.\n\n1. The author states that the paper's motivation stems from the occurrence of noisy annotations in in-context demonstrations:\n> For those candidates, input-label mappings solicited from humans [58, 68] or LLMs [55] can often be noisy, especially in **complex tasks**. This gives rise to the importance of noise-robust ICL, which aims to construct effective demonstrations in the presence of noisy and erroneous labels.\n\nHowever, the current evaluation of the proposed LPR approach is limited to short-form, closed-domain question-answering tasks using traditional NLP datasets. These datasets typically don't suffer from noisy annotations, as the tasks are relatively simple, and ensuring the correctness of a few (fewer than 10) in-context samples shouldn't be challenging. 
Including experiments on long-form, open-domain question-answering tasks would better justify the paper's motivation and demonstrate the broader applicability of LPR to more novel tasks.\n\n2. The evaluation experiments justifying the benefits of LPR are conducted using noisy annotations with noise ratios exceeding 20%, which is likely unrealistic for short-form QA tasks in real-world scenarios. This experimental setting appears overly synthetic.\n\n3. It would be helpful if the authors could provide a clear illustration demonstrating how LPR is conducted.\n\n1. Have the authors conducted experiments on long-form, open-domain question-answering tasks (e.g., MT-Bench)?\n\n2. I observe that the benefits of LPR decrease as the noise level diminishes. In fact, in Table 2, when the labels are clean, the effect of LPR appears negligible, as it doesn't improve upon the baseline 50% of the time. While I understand that LPR is designed to handle noisy labels, have the authors conducted experiments with noise ratios between 0% and 20%? At what threshold does LPR start to show a noticeable effect compared to the baseline?" } ]
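The reviewers above ask how Local Perplexity Ranking operates and how the threshold γ enters. The NumPy sketch below is a schematic reading of the selection step described in the abstract: for each candidate demonstration chosen by a base selection method, rank its perplexity among its k nearest neighbors in a sentence-embedding space, and if it falls in the top γ fraction of that local ranking, swap it for the lowest-perplexity neighbor. Perplexities and embeddings are assumed to be precomputed by the LLM and a sentence encoder, and both the names and the exact replacement rule are illustrative rather than the paper's code.

```python
import numpy as np

def local_perplexity_ranking(embeddings, perplexities, selected_idx, k=8, gamma=0.2):
    """Replace likely-noisy demonstrations with low-perplexity semantic neighbors.

    embeddings   : (N, d) sentence embeddings of all candidate demonstrations
    perplexities : (N,) perplexity of each candidate under the LLM
    selected_idx : indices chosen by any base selection method (Random, TopK, DPP, ...)
    Returns possibly revised indices of the same length.
    """
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    revised = []
    for i in selected_idx:
        sims = emb @ emb[i]
        sims[i] = -np.inf                                   # exclude the candidate itself
        neighbors = np.argsort(-sims)[:k]                   # k nearest in semantic space
        local = np.append(neighbors, i)
        ranks = np.argsort(np.argsort(perplexities[local])) # 0 = lowest perplexity
        if ranks[-1] >= (1 - gamma) * len(local):           # candidate is locally "too perplexing"
            revised.append(int(local[np.argmin(perplexities[local])]))
        else:
            revised.append(int(i))
    return revised

rng = np.random.default_rng(0)
E = rng.normal(size=(100, 32))                 # stand-in candidate-pool embeddings
ppl = rng.gamma(shape=2.0, scale=2.0, size=100)  # stand-in perplexities
print(local_perplexity_ranking(E, ppl, selected_idx=[3, 17, 42]))
```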