title | paper_url | authors | type | abstract | keywords | TL;DR | submission_number | arxiv_id |
---|---|---|---|---|---|---|---|---|
On Disentangled Training for Nonlinear Transform in Learned Image Compression | https://openreview.net/forum?id=U67J0QNtzo | [
"Han Li",
"Shaohui Li",
"Wenrui Dai",
"Maida Cao",
"Nuowen Kan",
"Chenglin Li",
"Junni Zou",
"Hongkai Xiong"
] | Spotlight | Learned image compression (LIC) has demonstrated superior rate-distortion (R-D) performance compared to traditional codecs, but is challenged by training inefficiency: training a state-of-the-art model from scratch can take more than two weeks. Existing LIC methods overlook the slow convergence caused by compacting energy in learning nonlinear transforms. In this paper, we first reveal that such energy compaction consists of two components, \emph{i.e.}, feature decorrelation and uneven energy modulation. On this basis, we propose a linear auxiliary transform (AuxT) to disentangle energy compaction in training nonlinear transforms. The proposed AuxT obtains a coarse approximation to achieve efficient energy compaction, so that distribution fitting with the nonlinear transforms can be simplified to fine details. We then develop wavelet-based linear shortcuts (WLSs) for AuxT that leverage wavelet-based downsampling and orthogonal linear projection for feature decorrelation and subband-aware scaling for uneven energy modulation. AuxT is lightweight and plug-and-play, and can be integrated into diverse LIC models to address the slow convergence issue. Experimental results demonstrate that the proposed approach can accelerate the training of LIC models by a factor of two while achieving an average 1\% BD-rate reduction. To the best of our knowledge, this is one of the first successful attempts to significantly improve the convergence of LIC with comparable or superior rate-distortion performance. | learned image compression, training efficiency, auxiliary transform | null | 3,760 | 2501.13751 |
Decomposition Polyhedra of Piecewise Linear Functions | https://openreview.net/forum?id=vVCHWVBsLH | [
"Marie-Charlotte Brandenburg",
"Moritz Leo Grillo",
"Christoph Hertrich"
] | Spotlight | In this paper, we contribute to the frequently studied question of how to decompose a continuous piecewise linear (CPWL) function into a difference of two convex CPWL functions. Every CPWL function has infinitely many such decompositions, but for applications in optimization and neural network theory, it is crucial to find decompositions with as few linear pieces as possible. This is a highly challenging problem, as we further demonstrate by disproving a recently proposed approach by Tran and Wang [Minimal representations of tropical rational functions. Algebraic Statistics, 15(1):27–59, 2024]. To make the problem more tractable, we propose to fix an underlying polyhedral complex determining the possible locus of nonlinearity. Under this assumption, we prove that the set of decompositions forms a polyhedron that arises as the intersection of two translated cones. We prove that irreducible decompositions correspond to the bounded faces of this polyhedron and that minimal solutions must be vertices. We then identify cases with a unique minimal decomposition, and illustrate how our insights have consequences in the theory of submodular functions. Finally, we improve upon previous constructions of neural networks for a given convex CPWL function and apply our framework to obtain results in the nonconvex case. | Piecewise Linear Functions, Polyhedral Geometry, Minimal Convex Decompositions, Submodular Functions, Neural Networks | We describe the set of convex decompositions of a piecewise linear function as a polyhedron and apply this to submodular functions and neural networks. | 3,729 | 2410.04907 |
Provably Accurate Shapley Value Estimation via Leverage Score Sampling | https://openreview.net/forum?id=wg3rBImn3O | [
"Christopher Musco",
"R. Teal Witter"
] | Spotlight | Originally introduced in game theory, Shapley values have emerged as a central tool in explainable machine learning, where they are used to attribute model predictions to specific input features. However, computing Shapley values exactly is expensive: for a model with $n$ features, $O(2^n)$ model evaluations are necessary. To address this issue, approximation algorithms are widely used. One of the most popular is the Kernel SHAP algorithm, which is model agnostic and remarkably effective in practice. However, to the best of our knowledge, Kernel SHAP has no strong non-asymptotic complexity guarantees. We address this issue by introducing *Leverage SHAP*, a light-weight modification of Kernel SHAP that provides provably accurate Shapley value estimates with just $O(n\log n)$ model evaluations. Our approach takes advantage of a connection between Shapley value estimation and agnostic active learning by employing *leverage score sampling*, a powerful regression tool. Beyond theoretical guarantees, we show that Leverage SHAP consistently outperforms even the highly optimized implementation of Kernel SHAP available in the ubiquitous SHAP library [Lundberg \& Lee, 2017]. | Explainable AI, Active Regression, Shapley Values, Leverage Scores | We propose a theoretically motivated method for estimating Shapley values that outperforms Kernel SHAP. | 3,726 | 2410.01917 |
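The regression primitive behind Leverage SHAP is leverage score sampling. The sketch below is an editor's illustration in NumPy, not code from the paper: it shows the primitive on a generic least-squares problem, while the reduction from Shapley value estimation to such a regression is the paper's contribution and is omitted here.

```python
import numpy as np

def leverage_scores(A):
    """Leverage score of row i is the squared norm of row i of U, where A = U S V^T."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return np.sum(U**2, axis=1)  # scores sum to rank(A)

def sampled_least_squares(A, b, m, seed=0):
    """Approximate argmin_x ||Ax - b|| from only m rows, sampled by leverage scores."""
    rng = np.random.default_rng(seed)
    tau = leverage_scores(A)
    p = tau / tau.sum()                        # sampling distribution
    idx = rng.choice(len(b), size=m, p=p)      # sample rows with replacement
    w = 1.0 / np.sqrt(m * p[idx])              # reweight rows for unbiasedness
    x, *_ = np.linalg.lstsq(w[:, None] * A[idx], w * b[idx], rcond=None)
    return x

# Toy check: the sampled solution approaches the exact one as m grows.
rng = np.random.default_rng(1)
A = rng.normal(size=(4096, 20))
b = A @ np.ones(20) + 0.1 * rng.normal(size=4096)
x_full = np.linalg.lstsq(A, b, rcond=None)[0]
x_samp = sampled_least_squares(A, b, m=400)
print(np.linalg.norm(x_full - x_samp))
```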
Rethinking Reward Model Evaluation: Are We Barking up the Wrong Tree? | https://openreview.net/forum?id=Cnwz9jONi5 | [
"Xueru Wen",
"Jie Lou",
"Yaojie Lu",
"Hongyu Lin",
"XingYu",
"Xinyu Lu",
"Ben He",
"Xianpei Han",
"Debing Zhang",
"Le Sun"
] | Spotlight | Reward Models (RMs) are crucial for aligning language models with human preferences.
Currently, the evaluation of RMs depends on measuring accuracy against a validation set of manually annotated preference data.
Although this method is straightforward and widely adopted, the relationship between RM accuracy and downstream policy performance remains under-explored.
In this work, we conduct experiments in a synthetic setting to investigate how differences in RM measured by accuracy translate into gaps in optimized policy performance.
Our findings reveal that while there is a weak positive correlation between accuracy and downstream performance, policies optimized towards RMs with similar accuracy can exhibit quite different performance.
Moreover, we discover that the way of measuring accuracy significantly impacts its ability to predict the final policy performance.
Through the lens of the Regressional Goodhart effect, we recognize that accuracy, when used for measuring RM quality, can fail to fully capture potential RM overoptimization.
This underscores the inadequacy of relying solely on accuracy to reflect RMs' impact on policy optimization. | Reinforcement Learning from Human Feedback, Reward Model | null | 3,703 | 2410.05584 |
Time-MoE: Billion-Scale Time Series Foundation Models with Mixture of Experts | https://openreview.net/forum?id=e1wDDFmlVu | [
"Xiaoming Shi",
"Shiyu Wang",
"Yuqi Nie",
"Dianqi Li",
"Zhou Ye",
"Qingsong Wen",
"Ming Jin"
] | Spotlight | Deep learning for time series forecasting has seen significant advancements over the past decades. However, despite the success of large-scale pre-training in language and vision domains, pre-trained time series models remain limited in scale and operate at a high cost, hindering the development of larger, more capable forecasting models in real-world applications. In response, we introduce Time-MoE, a scalable and unified architecture designed to pre-train larger, more capable forecasting foundation models while reducing inference costs. By leveraging a sparse mixture-of-experts (MoE) design, Time-MoE enhances computational efficiency by activating only a subset of networks for each prediction, reducing computational load while maintaining high model capacity. This allows Time-MoE to scale effectively without a corresponding increase in inference costs. Time-MoE comprises a family of decoder-only transformer models that operate in an auto-regressive manner and support flexible forecasting horizons with varying input context lengths. We pre-trained these models on our newly introduced large-scale dataset Time-300B, which spans 9 domains and encompasses over 300 billion time points. For the first time, we scaled a time series foundation model up to 2.4 billion parameters, achieving significantly improved forecasting precision. Our results validate the applicability of scaling laws for training tokens and model size in the context of time series forecasting. Our models consistently outperform dense models with the same number of activated parameters or equivalent computation budgets by a large margin. These advancements position Time-MoE as a state-of-the-art solution for tackling real-world time series forecasting challenges with superior capability, efficiency, and flexibility. Code is available at https://github.com/Time-MoE/Time-MoE. | time series, foundation model, forecasting | Time-MoE is a family of time-series foundation models with the mixture-of-experts architecture. For the first time, Time-MoE has scaled up to 2.4 billion parameters, resulting in substantially improved zero-shot/full-shot forecasting performance. | 3,661 | null |
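The sparse-MoE routing described above is easy to make concrete. A minimal PyTorch sketch follows (the layer sizes and softmax-over-top-k gating are illustrative assumptions, not details from the Time-MoE paper): only k of E experts run per token, which is where the inference savings come from.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Token-level top-k routing: each token is processed by k of n_experts experts."""
    def __init__(self, d_model=256, d_ff=1024, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                        # x: (tokens, d_model)
        scores = self.gate(x)                    # (tokens, n_experts)
        topv, topi = scores.topk(self.k, dim=-1)
        weights = F.softmax(topv, dim=-1)        # renormalize over the chosen k
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            token_idx, slot_idx = (topi == e).nonzero(as_tuple=True)
            if token_idx.numel() == 0:
                continue                         # inactive expert: no compute spent
            w = weights[token_idx, slot_idx].unsqueeze(1)
            out[token_idx] += w * expert(x[token_idx])
        return out

moe = SparseMoE()
y = moe(torch.randn(16, 256))   # only 2 of 8 experts run per token
print(y.shape)
```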
Generalization Guarantees for Representation Learning via Data-Dependent Gaussian Mixture Priors | https://openreview.net/forum?id=fGdF8Bq1FV | [
"Milad Sefidgaran",
"Abdellatif Zaidi",
"Piotr Krasnowski"
] | Spotlight | We establish in-expectation and tail bounds on the generalization error of representation learning type algorithms. The bounds are in terms of the relative entropy between the distribution of the representations extracted from the training and "test" datasets and a data-dependent symmetric prior, i.e., the Minimum Description Length (MDL) of the latent variables for the training and test datasets. Our bounds are shown to reflect the "structure" and "simplicity" of the encoder and significantly improve upon the few existing ones for the studied model. We then use our in-expectation bound to devise a suitable data-dependent regularizer, and we investigate thoroughly the important question of the selection of the prior. We propose a systematic approach to simultaneously learning a data-dependent Gaussian mixture prior and using it as a regularizer. Interestingly, we show that a weighted attention mechanism emerges naturally in this procedure. Our experiments show that our approach outperforms the now popular Variational Information Bottleneck (VIB) method as well as the recent Category-Dependent VIB (CDVIB). | Representation learning algorithms, Gaussian mixture, regularizer, rate-distortion | We derive generalization bounds for representation learning algorithms and, inspired by the bounds, propose a regularizer with data-dependent Gaussian mixture priors. | 3,622 | 2502.15540 |
OSDA Agent: Leveraging Large Language Models for De Novo Design of Organic Structure Directing Agents | https://openreview.net/forum?id=9YNyiCJE3k | [
"Zhaolin Hu",
"Yixiao Zhou",
"Zhongan Wang",
"Xin Li",
"Weimin Yang",
"Hehe Fan",
"Yi Yang"
] | Spotlight | Zeolites are crystalline porous materials that have been widely utilized in petrochemical industries as well as sustainable chemistry areas. Synthesis of zeolites often requires small molecules termed Organic Structure Directing Agents (OSDAs), which are critical in forming the porous structure. Molecule generation models can aid the design of OSDAs, but they are limited by their single functionality and lack of interactivity. Meanwhile, large language models (LLMs) such as GPT-4, as general-purpose artificial intelligence systems, excel in instruction comprehension, logical reasoning, and interactive communication. However, LLMs lack in-depth chemistry knowledge and first-principle computation capabilities, resulting in uncontrollable outcomes even after fine-tuning. In this paper, we propose OSDA Agent, an interactive OSDA design framework that leverages LLMs as the brain, coupled with computational chemistry tools. The OSDA Agent consists of three main components: the Actor, responsible for generating potential OSDA structures; the Evaluator, which assesses and scores the generated OSDAs using computational chemistry tools; and the Self-reflector, which produces reflective summaries based on the Evaluator's feedback to refine the Actor's subsequent outputs. Experiments on representative zeolite frameworks show that the generation-evaluation-reflection-refinement workflow can perform de novo design of OSDAs with higher generation quality than the pure LLM model, generating candidates consistent with experimentally validated OSDAs and optimizing known OSDAs. | Large Language Model, OSDA, Zeolite, Molecular Design | null | 3,544 | null |
ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability | https://openreview.net/forum?id=ztzZDzgfrh | [
"ZhongXiang Sun",
"Xiaoxue Zang",
"Kai Zheng",
"Jun Xu",
"Xiao Zhang",
"Weijie Yu",
"Yang Song",
"Han Li"
] | Spotlight | Retrieval-Augmented Generation (RAG) models are designed to incorporate external knowledge, reducing hallucinations caused by insufficient parametric (internal) knowledge. However, even with accurate and relevant retrieved content, RAG models can still produce hallucinations by generating outputs that conflict with the retrieved information. Detecting such hallucinations requires disentangling how Large Language Models (LLMs) balance external and parametric knowledge. Current detection methods often focus on only one of these mechanisms or fail to decouple their intertwined effects, making accurate detection difficult. In this paper, we investigate the internal mechanisms behind hallucinations in RAG scenarios. We discover that hallucinations occur when the **Knowledge FFNs** in LLMs overemphasize parametric knowledge in the residual stream, while **Copying Heads** fail to effectively retain or integrate external knowledge from retrieved content. Based on these findings, we propose **ReDeEP**, a novel method that detects hallucinations by decoupling an LLM’s utilization of external context and parametric knowledge. Our experiments show that ReDeEP significantly improves RAG hallucination detection accuracy. Additionally, we introduce AARF, which mitigates hallucinations by modulating the contributions of Knowledge FFNs and Copying Heads. | Retrieval-Augmented Generation Hallucination, Hallucination Detection, Mechanistic Interpretability | We propose ReDeEP for detecting hallucinations in RAG models by decoupling external context and parametric knowledge, and AARF to reduce hallucinations by modulating the contributions of Knowledge FFNs and Copying Heads. | 3,536 | 2410.11414 |
Accelerating Goal-Conditioned Reinforcement Learning Algorithms and Research | https://openreview.net/forum?id=4gaySj8kvX | [
"Michał Bortkiewicz",
"Władysław Pałucki",
"Vivek Myers",
"Tadeusz Dziarmaga",
"Tomasz Arczewski",
"Łukasz Kuciński",
"Benjamin Eysenbach"
] | Spotlight | Self-supervision has the potential to transform reinforcement learning (RL), paralleling the breakthroughs it has enabled in other areas of machine learning. While self-supervised learning in other domains aims to find patterns in a fixed dataset, self-supervised goal-conditioned reinforcement learning (GCRL) agents discover *new* behaviors by learning from the goals achieved during unstructured interaction with the environment. However, these methods have failed to see similar success, due both to a lack of data from slow environment simulations and to a lack of stable algorithms. We take a step toward addressing both of these issues by releasing a high-performance codebase and benchmark (`JaxGCRL`) for self-supervised GCRL, enabling researchers to train agents for millions of environment steps in minutes on a single GPU. By utilizing GPU-accelerated replay buffers, environments, and a stable contrastive RL algorithm, we reduce training time by up to $22\times$. Additionally, we assess key design choices in contrastive RL, identifying those that most effectively stabilize and enhance training performance. With this approach, we provide a foundation for future research in self-supervised GCRL, enabling researchers to quickly iterate on new ideas and evaluate them in diverse and challenging environments. Code: [https://anonymous.4open.science/r/JaxGCRL-2316/README.md](https://anonymous.4open.science/r/JaxGCRL-2316/README.md) | Deep Reinforcement Learning, GPU-accelerated Physics Simulators, Contrastive Learning, Unsupervised Reinforcement Learning | This paper presents JaxGCRL, a high-performance codebase and benchmark designed for self-supervised goal-conditioned reinforcement learning, offering faster training and promoting more efficient RL research. | 3,502 | null |
Transformers Learn to Implement Multi-step Gradient Descent with Chain of Thought | https://openreview.net/forum?id=r3DF5sOo5B | [
"Jianhao Huang",
"Zixuan Wang",
"Jason D. Lee"
] | Spotlight | Chain of Thought (CoT) prompting has been shown to significantly improve the performance of large language models (LLMs), particularly in arithmetic and reasoning tasks, by instructing the model to produce intermediate reasoning steps. Despite the remarkable empirical success of CoT and its theoretical advantages in enhancing expressivity, the mechanisms underlying CoT training remain largely unexplored. In this paper, we study the training dynamics of transformers over a CoT objective on an in-context weight prediction task for linear regression. We prove that while a one-layer linear transformer without CoT can only implement a single step of gradient descent (GD) and fails to recover the ground-truth weight vector, a transformer with CoT prompting can learn to perform multi-step GD autoregressively, achieving near-exact recovery. Furthermore, we show that the trained transformer effectively generalizes on unseen data. Empirically, we demonstrate that CoT prompting yields substantial performance improvements. | Chain of Thought, Transformer optimization, Training dynamics | null | 3,480 | 2502.21212 |
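The dynamics the abstract refers to can be stated concretely. The NumPy toy below is an editor's illustration, not the paper's construction: it contrasts multi-step GD on the in-context least-squares objective, which the CoT-trained transformer emulates one chain-of-thought step at a time, with the single GD step a one-layer linear transformer can implement.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eta, steps = 40, 8, 0.1, 200
X = rng.normal(size=(n, d))          # in-context examples
w_star = rng.normal(size=d)          # ground-truth weights
y = X @ w_star

w = np.zeros(d)
for _ in range(steps):               # multi-step GD: near-exact recovery
    w -= eta / n * X.T @ (X @ w - y)
w_one = eta / n * X.T @ y            # a single GD step from zero, for contrast

print(np.linalg.norm(w - w_star))    # small
print(np.linalg.norm(w_one - w_star))  # large
```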
SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning | https://openreview.net/forum?id=jXLiDKsuDo | [
"Hojoon Lee",
"Dongyoon Hwang",
"Donghu Kim",
"Hyunseung Kim",
"Jun Jet Tai",
"Kaushik Subramanian",
"Peter R. Wurman",
"Jaegul Choo",
"Peter Stone",
"Takuma Seno"
] | Spotlight | Recent advances in CV and NLP have been largely driven by scaling up the number of network parameters, despite traditional theories suggesting that larger networks are prone to overfitting.
These large networks avoid overfitting by integrating components that induce a simplicity bias, guiding models toward simple and generalizable solutions.
However, in deep RL, designing and scaling up networks have been less explored.
Motivated by this opportunity, we present SimBa, an architecture designed to scale up parameters in deep RL by injecting a simplicity bias. SimBa consists of three components: (i) an observation normalization layer that standardizes inputs with running statistics, (ii) a residual feedforward block to provide a linear pathway from the input to output, and (iii) a layer normalization to control feature magnitudes.
By scaling up parameters with SimBa, the sample efficiency of various deep RL algorithms—including off-policy, on-policy, and unsupervised methods—is consistently improved.
Moreover, solely by integrating the SimBa architecture into SAC, the resulting agent matches or surpasses state-of-the-art deep RL methods with high computational efficiency across DMC, MyoSuite, and HumanoidBench.
These results demonstrate SimBa's broad applicability and effectiveness across diverse RL algorithms and environments. | reinforcement learning | In NLP and CV, scaling up model parameters improves performance, but in RL, scaling up often degrades it. This paper proposes a network architecture that allows RL algorithms to improve performance as model size increases. | 3,446 | 2410.09754 |
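The three components listed above map directly to code. A minimal PyTorch sketch follows (the sizes and the batched running-statistics update are illustrative assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

class RunningNorm(nn.Module):
    """(i) Observation normalization with running mean/variance statistics."""
    def __init__(self, dim, eps=1e-5):
        super().__init__()
        self.register_buffer("mean", torch.zeros(dim))
        self.register_buffer("var", torch.ones(dim))
        self.register_buffer("count", torch.tensor(1e-4))
        self.eps = eps

    @torch.no_grad()
    def update(self, x):             # parallel (Welford-style) batch update
        n = x.shape[0]
        delta = x.mean(0) - self.mean
        tot = self.count + n
        self.mean += delta * n / tot
        self.var = (self.count * self.var + n * x.var(dim=0, unbiased=False)
                    + delta**2 * self.count * n / tot) / tot
        self.count = tot

    def forward(self, x):
        if self.training:
            self.update(x)
        return (x - self.mean) / torch.sqrt(self.var + self.eps)

class ResidualFFBlock(nn.Module):
    """(ii) Residual feedforward block: a linear pathway from input to output."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.net = nn.Sequential(
            nn.LayerNorm(dim), nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, x):
        return x + self.net(x)       # identity skip keeps simple solutions reachable

class SimBaEncoder(nn.Module):
    def __init__(self, obs_dim, dim=256, hidden=1024, n_blocks=2):
        super().__init__()
        self.norm_in = RunningNorm(obs_dim)
        self.proj = nn.Linear(obs_dim, dim)
        self.blocks = nn.Sequential(*[ResidualFFBlock(dim, hidden) for _ in range(n_blocks)])
        self.norm_out = nn.LayerNorm(dim)  # (iii) control feature magnitudes

    def forward(self, obs):
        return self.norm_out(self.blocks(self.proj(self.norm_in(obs))))

enc = SimBaEncoder(obs_dim=24)
print(enc(torch.randn(32, 24)).shape)
```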
Tuning Frequency Bias of State Space Models | https://openreview.net/forum?id=wkHcXDv7cv | [
"Annan Yu",
"Dongwei Lyu",
"Soon Hoe Lim",
"Michael W. Mahoney",
"N. Benjamin Erichson"
] | Spotlight | State space models (SSMs) leverage linear, time-invariant (LTI) systems to effectively learn sequences with long-range dependencies. By analyzing the transfer functions of LTI systems, we find that SSMs exhibit an implicit bias toward capturing low-frequency components more effectively than high-frequency ones. This behavior aligns with the broader notion of frequency bias in deep learning model training. We show that the initialization of an SSM assigns it an innate frequency bias and that training the model in a conventional way does not alter this bias. Based on our theory, we propose two mechanisms to tune frequency bias: either by scaling the initialization to tune the inborn frequency bias; or by applying a Sobolev-norm-based filter to adjust the sensitivity of the gradients to high-frequency inputs, which allows us to change the frequency bias via training. Using an image-denoising task, we empirically show that we can strengthen, weaken, or even reverse the frequency bias using both mechanisms. By tuning the frequency bias, we can also improve SSMs' performance on learning long-range sequences, averaging an $88.26\%$ accuracy on the Long-Range Arena (LRA) benchmark tasks. | state-space models, sequence models, Long-Range Arena, frequency bias | We propose two mechanisms to diminish or increase the learning rate of high-frequency components relative to low-frequency ones in a state space model (SSM). | 3,382 | 2410.02035 |
Planning in Natural Language Improves LLM Search for Code Generation | https://openreview.net/forum?id=48WAZhwHHw | [
"Evan Z Wang",
"Federico Cassano",
"Catherine Wu",
"Yunfeng Bai",
"William Song",
"Vaskar Nath",
"Ziwen Han",
"Sean M. Hendryx",
"Summer Yue",
"Hugh Zhang"
] | Spotlight | While scaling training compute has led to remarkable improvements in large language models (LLMs), scaling inference compute only recently began to yield analogous gains. We hypothesize that a core missing component is diversity in LLM outputs: search is inefficient because models repeatedly sample highly similar, yet incorrect generations. We empirically demonstrate that this lack of diversity can be mitigated by searching over candidate plans for solving a problem in natural language. Based on this insight, we propose PlanSearch, a novel search algorithm which shows strong results across HumanEval+, MBPP+, and LiveCodeBench (a contamination-free benchmark for competitive coding). PlanSearch generates a diverse set of observations about the problem and uses these observations to construct plans for solving the problem. By searching over plans in natural language rather than directly over code solutions, PlanSearch explores a significantly more diverse range of potential solutions compared to baseline search methods. Using PlanSearch on top of Claude 3.5 Sonnet achieves a pass@200 of 77.0% on LiveCodeBench, outperforming both the best pass-rate achieved without any search (pass@1 = 41.4%) and using standard repeated sampling on top of existing non-search models (pass@200 = 60.6%). Finally, we show that, across all models, search algorithms, and benchmarks analyzed, we can accurately predict performance gains from search as a function of the diversity over generated ideas. | LLM, search, inference-time compute, competitive programming, reasoning, code generation, pass@k, diversity | Searching over high level plans in natural language rather than directly over code induces diversity in generated outputs, which drastically increases effectiveness of inference-time compute. | 3,335 | 2409.03733 |
Recovering Manifold Structure Using Ollivier Ricci Curvature | https://openreview.net/forum?id=aX7X9z3vQS | [
"Tristan Luca Saidi",
"Abigail Hickok",
"Andrew J. Blumberg"
] | Spotlight | We introduce ORC-ManL, a new algorithm to prune spurious edges from nearest neighbor graphs using a criterion based on Ollivier-Ricci curvature and estimated metric distortion. Our motivation comes from manifold learning: we show that when the data generating the nearest-neighbor graph consists of noisy samples from a low-dimensional manifold, edges that shortcut through the ambient space have more negative Ollivier-Ricci curvature than edges that lie along the data manifold. We demonstrate that our method outperforms alternative pruning methods and that it significantly improves performance on many downstream geometric data analysis tasks that use nearest neighbor graphs as input. Specifically, we evaluate on manifold learning, persistent homology, dimension estimation, and others. We also show that ORC-ManL can be used to improve clustering and manifold learning of single-cell RNA sequencing data. Finally, we provide empirical convergence experiments that support our theoretical findings. | Manifold Learning, Persistent Homology, Ollivier-Ricci Curvature, Pruning, Nearest-Neighbor Graphs | We present and test a theoretically grounded method that uses discrete graph curvature to prune nearest-neighbor graphs. | 3,315 | null |
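A small sketch of the curvature-based pruning idea (an editor's illustration using the POT library; the lazy random-walk measure with alpha = 0.5 and the pruning threshold are illustrative choices, and the paper's actual criterion also incorporates estimated metric distortion):

```python
import numpy as np
import networkx as nx
import ot  # POT: Python Optimal Transport

def edge_curvature(G, dist, u, v, alpha=0.5):
    """Ollivier-Ricci curvature kappa(u,v) = 1 - W1(m_u, m_v) / d(u,v),
    where m_x is a lazy random-walk measure around node x."""
    def measure(x):
        nbrs = list(G.neighbors(x))
        return [x] + nbrs, np.array([alpha] + [(1 - alpha) / len(nbrs)] * len(nbrs))
    su, mu = measure(u)
    sv, mv = measure(v)
    M = np.array([[dist[a][b] for b in sv] for a in su], dtype=float)
    return 1.0 - ot.emd2(mu, mv, M) / dist[u][v]

def prune_shortcuts(G, threshold=-0.5):
    """Drop edges with strongly negative curvature (candidate ambient shortcuts)."""
    dist = dict(nx.all_pairs_shortest_path_length(G))  # fine for small graphs
    bad = [(u, v) for u, v in G.edges if edge_curvature(G, dist, u, v) < threshold]
    H = G.copy()
    H.remove_edges_from(bad)
    return H

# Toy check on a ring lattice with a few random long-range shortcuts.
G = nx.connected_watts_strogatz_graph(60, k=4, p=0.05, seed=0)
H = prune_shortcuts(G)
print(G.number_of_edges(), "->", H.number_of_edges())
```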
Improved Approximation Algorithms for k-Submodular Maximization via Multilinear Extension | https://openreview.net/forum?id=EPHsIa0Ytg | [
"Huanjian Zhou",
"Lingxiao Huang",
"Baoxiang Wang"
] | Spotlight | We investigate a generalized form of submodular maximization, referred to as $k$-submodular maximization, with applications across the domains of social networks and machine learning. In this work, we propose the multilinear extension of $k$-submodular functions and unified Frank-Wolfe-type frameworks built upon it. This continuous framework accommodates 1) monotone or non-monotone functions, and 2) various constraint types including matroid constraints, knapsack constraints, and their combinations. Notably, we attain an asymptotically optimal $1/2$-approximation for monotone $k$-submodular maximization problems with knapsack constraints, surpassing previous $1/3$-approximation results, and a factor-$1/3$ approximation for non-monotone $k$-submodular maximization problems with knapsack constraints and matroid constraints, which outperforms previous $0.245$-approximation results. The foundation of our analysis is a set of new insights into specific linear and monotone properties pertaining to the multilinear extension. | $k$-submodular maximization, approximation algorithm, $k$-multilinear extension | We provide improved or optimal approximation algorithms for $k$-submodular maximization with various constraints, via a novel framework of multilinear extension. | 3,295 | null |
Nonlinear multiregion neural dynamics with parametric impulse response communication channels | https://openreview.net/forum?id=LbgIZpSUCe | [
"Matthew Dowling",
"Cristina Savin"
] | Spotlight | Cognition arises from the coordinated interaction of brain regions with distinct computational roles. Despite improvements in our ability to extract the dynamics underlying circuit computation from population activity recorded in individual areas, understanding how multiple areas jointly support distributed computation remains a challenge. As part of this effort, we propose a multi-region neural dynamics model composed of two building blocks: _i)_ within-region (potentially driven) nonlinear dynamics and _ii)_ communication channels between regions, parameterized through their impulse response. Together, these choices make it possible to learn nonlinear neural population dynamics and understand the flow of information between regions by drawing from the rich literature of linear systems theory. We develop a state noise inversion free variational filtering and learning algorithm for our model and show, through neuroscientifically inspired numerical experiments, how the proposed model can reveal interpretable characterizations of the local computations within and the flow of information between neural populations. We further validate the efficacy of our approach using simultaneous population recordings from areas V1 and V2. | neural dynamics, multiregion, variational inference | a multiregion neural population dynamics model capable of extracting nonlinear neural population dynamics with communication channels between regions parameterized by their impulse response. | 3,261 | null |
Diffusion On Syntax Trees For Program Synthesis | https://openreview.net/forum?id=wN3KaUXA5X | [
"Shreyas Kapur",
"Erik Jenner",
"Stuart Russell"
] | Spotlight | Large language models generate code one token at a time. Their autoregressive generation process lacks the feedback of observing the program's output. Training LLMs to suggest edits directly can be challenging due to the scarcity of rich edit data. To address these problems, we propose neural diffusion models that operate on syntax trees of any context-free grammar. Similar to image diffusion models, our method also inverts "noise" applied to syntax trees. Rather than generating code sequentially, we iteratively edit it while preserving syntactic validity, which makes it easy to combine this neural model with search. We apply our approach to inverse graphics tasks, where our model learns to convert images into programs that produce those images. Combined with search, our model is able to write graphics programs, see the execution result, and debug them to meet the required specifications. We additionally show how our system can write graphics programs for hand-drawn sketches. Video results can be found at https://tree-diffusion.github.io. | neurosymbolic, search, programming languages, inverse graphics | We propose a diffusion based approach on syntax trees to do program synthesis for inverse graphics tasks | 3,244 | 2405.20519 |
Scaling up the Banded Matrix Factorization Mechanism for Large Scale Differentially Private ML | https://openreview.net/forum?id=69Fp4dcmJN | [
"Ryan McKenna"
] | Spotlight | Correlated noise mechanisms such as DP Matrix Factorization (DP-MF) have proven to be effective alternatives to DP-SGD in large-epsilon few-epoch training regimes. Significant work has been done to find the best correlated noise strategies, and the current state-of-the-art approach is DP-BandMF, which optimally balances the benefits of privacy amplification and noise correlation. Despite its utility advantages, severe scalability limitations prevent this mechanism from handling large-scale training scenarios where the number of training iterations may be more than $10^4$ and the number of model parameters may exceed $10^7$. In this work, we present techniques to scale up DP-BandMF along these two dimensions, significantly extending its reach and enabling it to effectively handle settings with over $10^6$ training iterations and $10^9$ model parameters, with no utility degradation at smaller scales. | differential privacy, large models, DP-SGD, matrix factorization | We propose new techniques to improve the scalability of the banded matrix factorization mechanism, which is the current state-of-the-art mechanism in the DP-MF family. | 3,194 | null |
Self-play with Execution Feedback: Improving Instruction-following Capabilities of Large Language Models | https://openreview.net/forum?id=cRR0oDFEBC | [
"Guanting Dong",
"Keming Lu",
"Chengpeng Li",
"Tingyu Xia",
"Bowen Yu",
"Chang Zhou",
"Jingren Zhou"
] | Spotlight | One core capability of large language models (LLMs) is to follow natural language instructions. However, the issue of automatically constructing high-quality training data to enhance the complex instruction-following abilities of LLMs without manual annotation remains unresolved. In this paper, we introduce AutoIF, the first scalable and reliable method for automatically generating instruction-following training data. AutoIF transforms the validation of instruction-following data quality into code verification, requiring LLMs to generate instructions, the corresponding code to verify the correctness of the instruction responses, and unit test samples to cross-validate the code's correctness. Then, execution feedback-based rejection sampling can generate data for Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF) training. AutoIF achieves significant improvements across three training algorithms, SFT, Offline DPO, and Online DPO, when applied to the advanced open-source LLMs, Qwen2 and LLaMA3, in self-alignment and strong-to-weak distillation settings. Using two widely-used and three challenging general instruction-following benchmarks, we demonstrate that AutoIF significantly improves LLM performance across a wide range of natural instruction constraints. Notably, AutoIF is the first to surpass 90\% accuracy in IFEval’s loose instruction accuracy, without compromising general, math and coding capabilities. Further analysis of quality, scaling, combination, and data efficiency highlights AutoIF's strong generalization and alignment potential. Our code is available at https://github.com/QwenLM/AutoIF | Instruction Following, Large Language Models, Execution Feedback, On-policy Learning, Strong-to-Weak Distillation, Self-Alignment | null | 3,173 | 2406.13542 |
How Feature Learning Can Improve Neural Scaling Laws | https://openreview.net/forum?id=dEypApI1MZ | [
"Blake Bordelon",
"Alexander Atanasov",
"Cengiz Pehlevan"
] | Spotlight | We develop a simple solvable model of neural scaling laws beyond the kernel limit. Theoretical analysis of this model yields performance scaling predictions with model size, training time, and total amount of available data. From the scaling analysis we identify three relevant regimes: hard tasks, easy tasks, and super easy tasks. For easy and super-easy target functions, which are in the Hilbert space (RKHS) of the initial infinite-width neural tangent kernel (NTK), there is no change in the scaling exponents between feature learning models and models in the kernel regime. For hard tasks, which we define as tasks outside of the RKHS of the initial NTK, we show analytically and empirically that feature learning can improve the scaling with training time and compute, approximately doubling the exponent for very hard tasks. This leads to a new compute-optimal scaling law for hard tasks in the feature learning regime. We support our finding that feature learning improves the scaling law for hard tasks with experiments of nonlinear MLPs fitting functions with power-law Fourier spectra on the circle and CNNs learning vision tasks. | neural scaling laws, feature learning, kernel methods, linear networks, mean field theory | A solvable model of neural scaling laws beyond the lazy training regime. | 3,157 | 2409.17858 |
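Stated as a formula, the abstract's central claim reads as follows (a paraphrase, with $\alpha$ denoting the task-dependent exponent of the loss-versus-time power law in the kernel regime):

```latex
\underbrace{\mathcal{L}(t) \sim t^{-\alpha}}_{\text{kernel regime}}
\qquad \longrightarrow \qquad
\underbrace{\mathcal{L}(t) \sim t^{-2\alpha}}_{\text{feature learning, very hard tasks}}
```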
A CLIP-Powered Framework for Robust and Generalizable Data Selection | https://openreview.net/forum?id=9bMZ29SPVx | [
"Suorong Yang",
"Peng Ye",
"Wanli Ouyang",
"Dongzhan Zhou",
"Furao Shen"
] | Spotlight | Large-scale datasets have been pivotal to the advancements of deep learning models in recent years, but training on such large datasets inevitably incurs substantial storage and computational overhead.
Meanwhile, real-world datasets often contain redundant and noisy data, imposing a negative impact on training efficiency and model performance.
Data selection has shown promise in identifying the most representative samples from the entire dataset, which aims to minimize the performance gap with reduced training costs.
Existing works typically rely on single-modality information to assign importance scores for individual samples, which may lead to inaccurate assessments, especially when dealing with noisy or corrupted samples.
To address this limitation, we propose a novel CLIP-powered data selection framework that leverages multimodal information for more robust and generalizable sample selection.
Specifically, our framework consists of three key modules—dataset adaptation, sample scoring, and selection optimization—that together harness extensive pre-trained multimodal knowledge to comprehensively assess sample influence and optimize the selection results through multi-objective optimization.
Extensive experiments demonstrate that our approach consistently outperforms existing state-of-the-art baselines on various benchmark datasets. Notably, our method effectively removes noisy or damaged samples from the dataset, enabling it to achieve even higher performance with less data. This indicates that it is not only a way to accelerate training but can also improve overall data quality.
The implementation is available at https://github.com/Jackbrocp/clip-powered-data-selection. | Data selection, generalization, multimodal | null | 3,088 | 2410.11215 |
PABBO: Preferential Amortized Black-Box Optimization | https://openreview.net/forum?id=YhfrKB3Ah7 | [
"Xinyu Zhang",
"Daolang Huang",
"Samuel Kaski",
"Julien Martinelli"
] | Spotlight | Preferential Bayesian Optimization (PBO) is a sample-efficient method to learn latent user utilities from preferential feedback over a pair of designs. It relies on a statistical surrogate model for the latent function, usually a Gaussian process, and an acquisition strategy to select the next candidate pair to get user feedback on. Due to the non-conjugacy of the associated likelihood, every PBO step requires a significant amount of computation with various approximate inference techniques. This computational overhead is incompatible with the way humans interact with computers, hindering the use of PBO in real-world cases. Building on recent advances in amortized BO, we propose to circumvent this issue by fully amortizing PBO, meta-learning both the surrogate and the acquisition function. Our method comprises a novel transformer neural process architecture, trained using reinforcement learning and tailored auxiliary losses.
On a benchmark composed of synthetic and real-world datasets, our method is several orders of magnitude faster than the usual Gaussian process-based strategies and often outperforms them in accuracy. | Bayesian optimization, preference learning, amortized inference, neural processes | We present the first fully amortized preferential black-box optimization framework, featuring a novel transformer-based neural process architecture. | 3,005 | 2503.00924 |
Test-time Alignment of Diffusion Models without Reward Over-optimization | https://openreview.net/forum?id=vi3DjUhFVm | [
"Sunwoo Kim",
"Minkyu Kim",
"Dongmin Park"
] | Spotlight | Diffusion models excel in generative tasks, but aligning them with specific objectives while maintaining their versatility remains challenging. Existing fine-tuning methods often suffer from reward over-optimization, while approximate guidance approaches fail to optimize target rewards effectively. Addressing these limitations, we propose a training-free, test-time method based on Sequential Monte Carlo (SMC) to sample from the reward-aligned target distribution. Our approach, tailored for diffusion sampling and incorporating tempering techniques, achieves comparable or superior target rewards to fine-tuning methods while preserving diversity and cross-reward generalization. We demonstrate its effectiveness in single-reward optimization, multi-objective scenarios, and online black-box optimization. This work offers a robust solution for aligning diffusion models with diverse downstream objectives without compromising their general capabilities. Code is available at https://github.com/krafton-ai/DAS. | diffusion models, alignment, reward over-optimization, sequential monte carlo samplers | null | 2,940 | 2501.05803 |
Vision-RWKV: Efficient and Scalable Visual Perception with RWKV-Like Architectures | https://openreview.net/forum?id=nGiGXLnKhl | [
"Yuchen Duan",
"Weiyun Wang",
"Zhe Chen",
"Xizhou Zhu",
"Lewei Lu",
"Tong Lu",
"Yu Qiao",
"Hongsheng Li",
"Jifeng Dai",
"Wenhai Wang"
] | Spotlight | Transformers have revolutionized computer vision and natural language processing, but their high computational complexity limits their application in high-resolution image processing and long-context analysis. This paper introduces Vision-RWKV (VRWKV), a model that builds upon the RWKV architecture from the NLP field with key modifications tailored specifically for vision tasks. Similar to the Vision Transformer (ViT), our model demonstrates robust global processing capabilities, efficiently handles sparse inputs like masked images, and can scale up to accommodate both large-scale parameters and extensive datasets. Its distinctive advantage is its reduced spatial aggregation complexity, enabling seamless processing of high-resolution images without the need for window operations. Our evaluations demonstrate that VRWKV surpasses ViT's performance in image classification and has significantly faster speeds and lower memory usage when processing high-resolution inputs. In dense prediction tasks, it outperforms window-based models, maintaining comparable speeds. These results highlight VRWKV's potential as a more efficient alternative for visual perception tasks. Code and models are available at \url{https://github.com/OpenGVLab/Vision-RWKV}. | RWKV, Visual Perception, Linear Attention | null | 2,928 | null |
GOLD: Graph Out-of-Distribution Detection via Implicit Adversarial Latent Generation | https://openreview.net/forum?id=y5einmJ0Yx | [
"Danny Wang",
"Ruihong Qiu",
"Guangdong Bai",
"Zi Huang"
] | Spotlight | Despite graph neural networks' (GNNs) great success in modelling graph-structured data, out-of-distribution (OOD) test instances still pose a great challenge for current GNNs. One of the most effective techniques to detect OOD nodes is to expose the detector model to an additional OOD node set, yet the extra OOD instances are often difficult to obtain in practice. Recent methods for image data address this problem using OOD data synthesis, typically relying on pre-trained generative models like Stable Diffusion. However, these approaches require vast amounts of additional data, as well as one-for-all pre-trained generative models, which are not available for graph data. Therefore, we propose the GOLD framework for graph OOD detection, an implicit adversarial learning pipeline with synthetic OOD exposure without pre-trained models. The implicit adversarial training process employs a novel alternating optimisation framework by training: (1) a latent generative model to regularly imitate the in-distribution (ID) embeddings from an evolving GNN, and (2) a GNN encoder and an OOD detector to accurately classify ID data while increasing the energy divergence between the ID embeddings and the generative model's synthetic embeddings. This novel approach implicitly transforms the synthetic embeddings into pseudo-OOD instances relative to the ID data, effectively simulating exposure to OOD scenarios without auxiliary data. Extensive OOD detection experiments are conducted on five benchmark graph datasets, verifying the superior performance of GOLD without using real OOD data compared with the state-of-the-art OOD exposure and non-exposure baselines. | Graph Neural Network, Out-of-Distribution Detection | null | 2,927 | 2502.05780 |
Fine-tuning with Reserved Majority for Noise Reduction | https://openreview.net/forum?id=ZV7CLf0RHK | [
"Shuyang Jiang",
"Yusheng Liao",
"Ya Zhang",
"Yanfeng Wang",
"Yu Wang"
] | Spotlight | Parameter-efficient fine-tuning (PEFT) has revolutionized supervised fine-tuning, where LoRA and its variants gain the most popularity due to their low training costs and zero inference latency.
However, LoRA tuning injects not only knowledgeable features but also noisy hallucinations during fine-tuning, which hinders the utilization of tunable parameters as the LoRA rank increases.
In this work, we first investigate in depth the redundancies among LoRA parameters through substantial empirical studies.
Aiming to match the learning capacity of high ranks based on these findings, we set up a new fine-tuning framework, \textbf{P}arameter-\textbf{Re}dundant \textbf{F}ine-\textbf{T}uning (PReFT), which follows the vanilla LoRA tuning process but reduces redundancies before merging LoRA parameters back into the pre-trained model.
Based on this framework, we propose \textbf{No}ise reduction with \textbf{R}eserved \textbf{M}ajority (NoRM), which decomposes the LoRA parameters into majority parts and redundant parts with random singular value decomposition.
The major components are determined by the proposed search method, which employs subspace similarity to identify the parameter groups that share the highest similarity with the base weight.
By employing NoRM, we enhance both the learning capacity and the benefits of larger ranks; NoRM consistently outperforms both LoRA and other PReFT-based methods on various downstream tasks, such as general instruction tuning, math reasoning, and code generation.
Code is available at \url{https://github.com/pixas/NoRM}. | large language models, parameter redundancy fine-tuning, noise reduction | A novel fine-tuning method for low-rank adaptation noise reduction | 2,918 | null |
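A compact sketch of the merge step described above (an editor's illustration; NoRM selects components by subspace similarity with the base weight, whereas this stand-in simply keeps the top singular directions):

```python
import torch

def norm_merge(W0, B, A, keep):
    """Sketch of the redundancy-reducing merge: split the LoRA update B @ A into a
    'majority' part and a 'redundant' part via truncated SVD, and merge only the
    majority part into the base weight. Keeping the top singular directions is a
    simplified stand-in for NoRM's subspace-similarity selection."""
    dW = B @ A                                  # full LoRA update, rank <= r
    U, S, V = torch.svd_lowrank(dW, q=min(keep, *dW.shape))
    return W0 + U @ torch.diag(S) @ V.T         # merge majority components only

W0 = torch.randn(64, 32)
B, A = 0.05 * torch.randn(64, 8), 0.05 * torch.randn(8, 32)
W = norm_merge(W0, B, A, keep=4)                # keep 4 of up to 8 components
print(W.shape)
```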
LLaVA-Interleave: Tackling Multi-image, Video, and 3D in Large Multimodal Models | https://openreview.net/forum?id=oSQiao9GqB | [
"Feng Li",
"Renrui Zhang",
"Hao Zhang",
"Yuanhan Zhang",
"Bo Li",
"Wei Li",
"Zejun MA",
"Chunyuan Li"
] | Spotlight | Visual instruction tuning has made considerable strides in enhancing the capabilities of Large Multimodal Models (LMMs). However, existing open LMMs largely focus on single-image tasks, and their applications to multi-image scenarios remain less explored. Additionally, prior LMM research tackles different scenarios separately, making it impossible to generalize across scenarios with newly emerging capabilities. To this end, we introduce LLaVA-Interleave, which simultaneously tackles Multi-image, Multi-frame (video), Multi-view (3D), and Multi-patch (single-image) scenarios in LMMs. To enable these capabilities, we regard the interleaved data format as a general template and compile the M4-Instruct dataset with 1,177.6k samples, spanning 4 primary domains with 14 tasks and 41 datasets. We also curate the LLaVA-Interleave Bench to comprehensively evaluate the multi-image performance of LMMs. Through extensive experiments, LLaVA-Interleave achieves leading results in multi-image, video, and 3D benchmarks, while maintaining the performance of single-image tasks. Besides, our model also exhibits several emerging capabilities, e.g., transferring tasks across different settings and modalities. | large language model, multimodal learning, interleaved image-text | null | 2,791 | null |
Formation of Representations in Neural Networks | https://openreview.net/forum?id=Njx1NjHIx4 | [
"Liu Ziyin",
"Isaac L. Chuang",
"Tomer Galanti",
"Tomaso A Poggio"
] | Spotlight | Understanding neural representations will help open the black box of neural networks and advance our scientific understanding of modern AI systems. However, how complex, structured, and transferable representations emerge in modern neural networks has remained a mystery. Building on previous results, we propose the Canonical Representation Hypothesis (CRH), which posits a set of six alignment relations to universally govern the formation of representations in most hidden layers of a neural network. Under the CRH, the latent representations (R), weights (W), and neuron gradients (G) become mutually aligned during training. This alignment implies that neural networks naturally learn compact representations, where neurons and weights are invariant to task-irrelevant transformations. We then show that the breaking of CRH leads to the emergence of reciprocal power-law relations between R, W, and G, which we refer to as the Polynomial Alignment Hypothesis (PAH). We present a minimal-assumption theory proving that the balance between gradient noise and regularization is crucial for the emergence of the canonical representation. The CRH and PAH lead to an exciting possibility of unifying major key deep learning phenomena, including neural collapse and the neural feature ansatz, in a single framework. | representation learning, neural collapse, neural feature ansatz | null | 2,480 | 2410.03006 |
ImpScore: A Learnable Metric For Quantifying The Implicitness Level of Sentences | https://openreview.net/forum?id=gYWqxXE5RJ | [
"Yuxin Wang",
"Xiaomeng Zhu",
"Weimin Lyu",
"Saeed Hassanpour",
"Soroush Vosoughi"
] | Spotlight | Handling implicit language is essential for natural language processing systems to achieve precise text understanding and facilitate natural interactions with users. Despite its importance, the absence of a metric for accurately measuring the implicitness of language significantly constrains the depth of analysis possible in evaluating models' comprehension capabilities. This paper addresses this gap by developing a scalar metric that quantifies the implicitness level of language without relying on external references. Drawing on principles from traditional linguistics, we define "implicitness" as the divergence between semantic meaning and pragmatic interpretation. To operationalize this definition, we introduce ImpScore, a reference-free metric formulated through an interpretable regression model. This model is trained using pairwise contrastive learning on a specially curated dataset consisting of (*implicit sentence*, *explicit sentence*) pairs. We validate ImpScore through a user study that compares its assessments with human evaluations on out-of-distribution data, demonstrating its accuracy and strong correlation with human judgments. Additionally, we apply ImpScore to hate speech detection datasets, illustrating its utility and highlighting significant limitations in current large language models' ability to understand highly implicit content. Our metric is publicly available at https://github.com/audreycs/ImpScore. | implicit language, pragmatics, learnable metric, text evaluation, automatic evaluation, explicit language | null | 2,475 | null |
ADIFF: Explaining audio difference using natural language | https://openreview.net/forum?id=l4fMj4Vnly | [
"Soham Deshmukh",
"Shuo Han",
"Rita Singh",
"Bhiksha Raj"
] | Spotlight | Understanding and explaining differences between audio recordings is crucial for fields like audio forensics, quality assessment, and audio generation. This involves identifying and describing audio events, acoustic scenes, signal characteristics, and their emotional impact on listeners. This paper stands out as the first work to comprehensively study the task of explaining audio differences and to propose a benchmark and baselines for it. First, we present two new datasets for audio difference explanation derived from the AudioCaps and Clotho audio captioning datasets. Using Large Language Models (LLMs), we generate three levels of difference explanations: (1) concise descriptions of audio events and objects, (2) brief sentences about audio events, acoustic scenes, and signal properties, and (3) comprehensive explanations that include semantics and listener emotions. For the baseline, we use prefix tuning where audio embeddings from two audio files are used to prompt a frozen language model. Our empirical analysis and ablation studies reveal that the naive baseline struggles to distinguish perceptually similar sounds and generate detailed tier 3 explanations. To address these limitations, we propose ADIFF, which introduces a cross-projection module, position captioning, and a three-step training process to enhance the model’s ability to produce detailed explanations. We evaluate our model using objective metrics and human evaluation and show that our model enhancements lead to significant improvements in performance over the naive baseline and the SoTA Audio-Language Model (ALM) Qwen Audio. Lastly, we conduct multiple ablation studies to study the effects of cross-projection, language model parameters, position captioning, and third-stage fine-tuning, and present our findings. Our benchmarks, findings, and strong baseline pave the way for nuanced and human-like explanations of audio differences. | audio processing; audio-language; multimodal learning | A new task, dataset and model for explaining audio differences using natural language | 2,468 | 2502.04476 |
Enhancing Compositional Text-to-Image Generation with Reliable Random Seeds | https://openreview.net/forum?id=5BSlakturs | [
"Shuangqi Li",
"Hieu Le",
"Jingyi Xu",
"Mathieu Salzmann"
] | Spotlight | Text-to-image diffusion models have demonstrated remarkable capability in generating realistic images from arbitrary text prompts. However, they often produce inconsistent results for compositional prompts such as "two dogs" or "a penguin on the right of a bowl". Understanding these inconsistencies is crucial for reliable image generation. In this paper, we highlight the significant role of initial noise in these inconsistencies, where certain noise patterns are more reliable for compositional prompts than others. Our analyses reveal that different initial random seeds tend to guide the model to place objects in distinct image areas, potentially adhering to specific patterns of camera angles and image composition associated with the seed. To improve the model's compositional ability, we propose a method for mining these reliable cases, resulting in a curated training set of generated images without requiring any manual annotation.
By fine-tuning text-to-image models on these generated images, we significantly enhance their compositional capabilities. For numerical composition, we observe relative increases of 29.3\% and 19.5\% for Stable Diffusion and PixArt-$\alpha$, respectively. Spatial composition sees even larger gains, with 60.7\% for Stable Diffusion and 21.1\% for PixArt-$\alpha$. | Diffusion models, text-to-image generation | All Seeds are Not Equal: Some random seeds are more reliable than others. We propose a seed mining strategy to identify and leverage these seeds, significantly improving compositional consistency. | 2,445 | null |
LoRA-Pro: Are Low-Rank Adapters Properly Optimized? | https://openreview.net/forum?id=gTwRMU3lJ5 | [
"Zhengbo Wang",
"Jian Liang",
"Ran He",
"Zilei Wang",
"Tieniu Tan"
] | Spotlight | Low-rank adaptation, also known as LoRA, has emerged as a prominent method for parameter-efficient fine-tuning of foundation models.
Despite its computational efficiency, LoRA still yields inferior performance compared to full fine-tuning.
In this paper, we first uncover a fundamental connection between the optimization processes of LoRA and full fine-tuning: using LoRA for optimization is mathematically equivalent to full fine-tuning using a low-rank gradient for parameter updates.
And this low-rank gradient can be expressed in terms of the gradients of the two low-rank matrices in LoRA.
Leveraging this insight, we introduce LoRA-Pro, a method that enhances LoRA's performance by strategically adjusting the gradients of these low-rank matrices.
This adjustment allows the low-rank gradient to more accurately approximate the full fine-tuning gradient, thereby narrowing the performance gap between LoRA and full fine-tuning.
Furthermore, we theoretically derive the optimal solutions for adjusting the gradients of the low-rank matrices, applying them during fine-tuning in LoRA-Pro.
We conduct extensive experiments across natural language understanding, dialogue generation, mathematical reasoning, code generation, and image classification tasks, demonstrating that LoRA-Pro substantially improves LoRA's performance, effectively narrowing the gap with full fine-tuning.
Our code is publicly available at https://github.com/mrflogs/LoRA-Pro. | Parameter Efficient Fine-Tuning, Large Language Models, Low-Rank Adaptation | null | 2,390 | null |
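The equivalence stated in the abstract can be checked numerically. Below is an editor's illustration (not the paper's algorithm; the LoRA scaling factor is omitted): for $W = W_0 + BA$, autograd's factor gradients satisfy $g_A = B^\top g$ and $g_B = g A^\top$ with $g = \partial L/\partial W$, so one LoRA step moves $W$ by $-\eta(g A^\top A + B B^\top g)$ to first order, i.e., full fine-tuning with a particular low-rank gradient.

```python
import torch

torch.manual_seed(0)
m, n, r, eta = 12, 10, 3, 1e-3
W0 = torch.randn(m, n)
B = (0.1 * torch.randn(m, r)).requires_grad_()
A = (0.1 * torch.randn(r, n)).requires_grad_()
x, y = torch.randn(n), torch.randn(m)

W = W0 + B @ A
loss = ((W @ x - y) ** 2).sum()
loss.backward()

g = 2 * torch.outer(W.detach() @ x - y, x)       # dL/dW at the same point
assert torch.allclose(A.grad, B.detach().T @ g, atol=1e-4)   # g_A = B^T g
assert torch.allclose(B.grad, g @ A.detach().T, atol=1e-4)   # g_B = g A^T

dW_lora = (B - eta * B.grad) @ (A - eta * A.grad) - B @ A    # actual LoRA step
dW_equiv = -eta * (g @ A.detach().T @ A.detach() + B.detach() @ B.detach().T @ g)
gap = (dW_lora.detach() - dW_equiv).norm() / dW_equiv.norm()
print(f"relative gap to the low-rank surrogate step: {gap:.1e}")  # shrinks with eta
```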
Large-scale and Fine-grained Vision-language Pre-training for Enhanced CT Image Understanding | https://openreview.net/forum?id=nYpPAT4L3D | [
"Zhongyi Shui",
"Jianpeng Zhang",
"Weiwei Cao",
"Sinuo Wang",
"Ruizhe Guo",
"Le Lu",
"Lin Yang",
"Xianghua Ye",
"Tingbo Liang",
"Qi Zhang",
"Ling Zhang"
] | Spotlight | Artificial intelligence (AI) shows great potential in assisting radiologists to improve the efficiency and accuracy of medical image interpretation and diagnosis. However, a versatile AI model requires large-scale data and comprehensive annotations, which are often impractical in medical settings. Recent studies leverage radiology reports as a naturally high-quality supervision for medical images, using contrastive language-image pre-training (CLIP) to develop language-informed models for radiological image interpretation. Nonetheless, these approaches typically contrast entire images with reports, neglecting the local associations between imaging regions and report sentences, which may undermine model performance and interpretability. In this paper, we propose a fine-grained vision-language model (fVLM) for anatomy-level CT image interpretation. Specifically, we explicitly match anatomical regions of CT images with corresponding descriptions in radiology reports and perform contrastive pre-training for each anatomy individually. Fine-grained alignment, however, faces considerable false-negative challenges, mainly from the abundance of anatomy-level healthy samples and similarly diseased abnormalities, leading to ambiguous patient-level pairings. To tackle this issue, we propose identifying false negatives of both normal and abnormal samples and calibrating contrastive learning from patient-level to disease-aware pairing. We curated the largest CT dataset to date, comprising imaging and report data from 69,086 patients, and conducted a comprehensive evaluation on diagnosis tasks covering 54 major and important diseases (including several of the most deadly cancers) across 15 main anatomies. Experimental results demonstrate the substantial potential of fVLM in versatile medical image interpretation. In the zero-shot classification task, we achieved an average AUC of 81.3% on 54 diagnosis tasks, surpassing CLIP and supervised methods by 12.9% and 8.0%, respectively. Additionally, on the publicly available CT-RATE and Rad-ChestCT benchmarks, our fVLM outperformed the current state-of-the-art methods with absolute AUC gains of 7.4% and 4.8%, respectively. | Vision-language model, fine-grained alignment, large-scale pre-training, CT image | null | 2,336 | 2501.14548 |
AutoCGP: Closed-Loop Concept-Guided Policies from Unlabeled Demonstrations | https://openreview.net/forum?id=9ehJCZz4aM | [
"Pei Zhou",
"Ruizhe Liu",
"Qian Luo",
"Fan Wang",
"Yibing Song",
"Yanchao Yang"
] | Spotlight | Training embodied agents to perform complex robotic tasks presents significant challenges due to the entangled factors of task compositionality, environmental diversity, and dynamic changes. In this work, we introduce a novel imitation learning framework to train closed-loop concept-guided policies that enhance long-horizon task performance by leveraging discovered manipulation concepts. Unlike methods that rely on predefined skills and human-annotated labels, our approach allows agents to autonomously abstract manipulation concepts from their proprioceptive states, thereby alleviating misalignment due to ambiguities in human semantics and environmental complexity. Our framework comprises two primary components: an *Automatic Concept Discovery* module that identifies meaningful and consistent manipulation concepts, and a *Concept-Guided Policy Learning* module that effectively utilizes these manipulation concepts for adaptive task execution, including a *Concept Selection Transformer* for concept-based guidance and a *Concept-Guided Policy* for action prediction with the selected concepts. Experiments demonstrate that our approach significantly outperforms baseline methods across a range of tasks and environments, while showcasing emergent consistency in motion patterns associated with the discovered manipulation concepts. Codes are available at: https://github.com/PeiZhou26/AutoCGP. | Self-Supervised Manipulation Concept Discovery, Concept-Guided Policy for Robotic Tasks | null | 2,315 | null |
Perm: A Parametric Representation for Multi-Style 3D Hair Modeling | https://openreview.net/forum?id=WKfb1xGXGx | [
"Chengan He",
"Xin Sun",
"Zhixin Shu",
"Fujun Luan",
"Soren Pirk",
"Jorge Alejandro Amador Herrera",
"Dominik Michels",
"Tuanfeng Yang Wang",
"Meng Zhang",
"Holly Rushmeier",
"Yi Zhou"
] | Spotlight | We present Perm, a learned parametric representation of human 3D hair designed to facilitate various hair-related applications. Unlike previous work that jointly models the global hair structure and local curl patterns, we propose to disentangle them using a PCA-based strand representation in the frequency domain, thereby allowing more precise editing and output control. Specifically, we leverage our strand representation to fit and decompose hair geometry textures into low- to high-frequency hair structures, termed guide textures and residual textures, respectively. These decomposed textures are later parameterized with different generative models, emulating common stages in the hair grooming process. We conduct extensive experiments to validate the architecture design of Perm, and finally deploy the trained model as a generic prior to solve task-agnostic problems, further showcasing its flexibility and superiority in tasks such as single-view hair reconstruction, hairstyle editing, and hair-conditioned image generation. More details can be found on our project page: https://cs.yale.edu/homes/che/projects/perm/. | Hair Modeling, Parametric Models, Generative Models | We present a parametric model of 3D human hair. | 2,303 | 2407.19451 |
Easing Training Process of Rectified Flow Models Via Lengthening Inter-Path Distance | https://openreview.net/forum?id=RaR3ETzyKp | [
"Xu Shifeng",
"Yanzhu Liu",
"Adams Wai-Kin Kong"
] | Spotlight | Recent research shows that different diffusion methods and architectures
trained on the same dataset produce similar results for the same input noise.
This property suggests that they have some preferable noises for a given sample.
By visualizing the noise-sample pairs of rectified flow models and stable diffusion models in two-dimensional spaces,
we observe that the preferable paths, connecting preferable noises to the corresponding samples,
are better organized, with significantly fewer crossings, compared with
the random paths, connecting random noises to training samples.
In high-dimensional spaces, paths rarely intersect exactly; the crossings observed in two-dimensional visualizations instead indicate shorter inter-path distances in the corresponding high-dimensional spaces.
Inspired by this observation, we propose the Distance-Aware Noise-Sample Matching (DANSM) method
to lengthen the inter-path distance and thereby speed up model training.
DANSM is derived from rectified flow models, which allow using a closed-form formula to calculate the inter-path distance.
To further simplify the optimization, we derive the relationship between inter-path distance and path length,
and use the latter in the optimization surrogate.
DANSM is evaluated on both image and latent spaces by rectified flow models and diffusion models.
The experimental results show that DANSM can significantly improve the training speed by 30\% $\sim$ 40\%
without sacrificing the generation quality. | Rectified Flow Models, Training, Easing, Distance-Aware Noise-Sample Matching, DANSM | easing the training process of rectified flow models by better noise-sample pairs in each training epoch | 2,291 | null |
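One plausible reading of the matching step, as a sketch: re-pair noises and samples within a minibatch so that the total straight-path length (the surrogate the abstract relates to inter-path distance) is minimized. The assignment-based pairing below is our illustration, not necessarily the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_noise_to_samples(noise, samples):
    """Re-pair noises and samples within a minibatch so that the total
    length of the straight rectified-flow paths x_t = (1 - t) * z + t * x
    is minimized (the path-length surrogate named in the abstract)."""
    cost = np.linalg.norm(noise[:, None, :] - samples[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # optimal bipartite matching
    return noise[rows], samples[cols]

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 16))  # a batch of noises
x = rng.standard_normal((8, 16))  # the corresponding training samples
z_m, x_m = match_noise_to_samples(z, x)
print(np.linalg.norm(z - x, axis=1).sum(),      # total path length before
      np.linalg.norm(z_m - x_m, axis=1).sum())  # total path length after
```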
GETS: Ensemble Temperature Scaling for Calibration in Graph Neural Networks | https://openreview.net/forum?id=qgsXsqahMq | [
"Dingyi Zhuang",
"Chonghe Jiang",
"Yunhan Zheng",
"Shenhao Wang",
"Jinhua Zhao"
] | Spotlight | Graph Neural Networks (GNNs) deliver strong classification results but often suffer from poor calibration performance, leading to overconfidence or underconfidence. This is particularly problematic in high-stakes applications where accurate uncertainty estimates are essential. Existing post-hoc methods, such as temperature scaling, fail to effectively utilize graph structures, while current GNN calibration methods often overlook the potential of leveraging diverse input information and model ensembles jointly. In this paper, we propose Graph Ensemble Temperature Scaling (GETS), a novel calibration framework that combines input and model ensemble strategies within a Graph Mixture-of-Experts (MoE) architecture. GETS integrates diverse inputs, including logits, node features, and degree embeddings, and adaptively selects the most relevant experts for each node’s calibration procedure. Our method outperforms state-of-the-art calibration techniques, reducing expected calibration error (ECE) by $\geq$ 25% across 10 GNN benchmark datasets. Additionally, GETS is computationally efficient, scalable, and capable of selecting effective input combinations for improved calibration performance. The implementation is available at https://github.com/ZhuangDingyi/GETS/. | Uncertainty Quantification; Graph Neural Networks; Ensemble Learning; Mixture of Experts | GETS is a novel framework for calibrating GNNs, combining input and model ensemble strategies to improve uncertainty calibration across 10 benchmark datasets, offering significant performance gains while remaining efficient and scalable. | 2,271 | 2410.09570
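A hedged sketch of what a GETS-style calibrator could look like: a gating network mixes per-node temperatures predicted by several experts from concatenated inputs (logits, node features, degree embeddings). In the actual method the experts consume different input combinations and the graph structure; this minimal version only illustrates the ensemble-temperature idea, and all names are ours.

```python
import torch
import torch.nn as nn

class GraphEnsembleTempScaling(nn.Module):
    """Gating network selects, per node, a mixture over expert temperature
    scalers; the calibrated logits are the raw logits divided by the
    resulting node-specific temperature."""
    def __init__(self, n_logits, n_feats, n_deg, n_experts=3):
        super().__init__()
        d_in = n_logits + n_feats + n_deg
        self.gate = nn.Linear(d_in, n_experts)
        # each expert predicts one positive temperature per node
        self.experts = nn.ModuleList([nn.Linear(d_in, 1) for _ in range(n_experts)])

    def forward(self, logits, feats, deg_emb):
        x = torch.cat([logits, feats, deg_emb], dim=-1)
        w = torch.softmax(self.gate(x), dim=-1)                     # (N, E)
        temps = torch.stack([torch.nn.functional.softplus(e(x)) + 1e-3
                             for e in self.experts], dim=-1)        # (N, 1, E)
        T = (w.unsqueeze(1) * temps).sum(-1)                        # (N, 1)
        return logits / T                                           # calibrated logits

gets = GraphEnsembleTempScaling(n_logits=7, n_feats=32, n_deg=8)
cal = gets(torch.randn(100, 7), torch.randn(100, 32), torch.randn(100, 8))
print(cal.shape)  # (100, 7) calibrated logits
```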
Conditional Diffusion with Ordinal Regression: Longitudinal Data Generation for Neurodegenerative Disease Studies | https://openreview.net/forum?id=9UGfOJBuL8 | [
"Hyuna Cho",
"Ziquan Wei",
"Seungjoo Lee",
"Tingting Dan",
"Guorong Wu",
"Won Hwa Kim"
] | Spotlight | Modeling the progression of neurodegenerative diseases such as Alzheimer’s disease (AD) is crucial for early detection and prevention given their irreversible nature. However, the scarcity of longitudinal data and complex disease dynamics make the analysis highly challenging. Moreover, longitudinal samples often contain irregular and large intervals between subject visits, which underscore the necessity for advanced data generation techniques that can accurately simulate disease progression over time. In this regime, we propose a novel conditional generative model for synthesizing longitudinal sequences and present its application to neurodegenerative disease data generation conditioned on multiple time-dependent ordinal factors, such as age and disease severity. Our method sequentially generates continuous data by bridging gaps between sparse data points with a diffusion model, ensuring a realistic representation of disease progression. The synthetic data are curated to integrate both cohort-level and individual-specific characteristics, where the cohort-level representations are modeled with an ordinal regression to capture longitudinally monotonic behavior. Extensive experiments on four AD biomarkers validate the superiority of our method over nine baseline approaches, highlighting its potential to be applied to a variety of longitudinal data generation. | neurodegenerative disease, conditional diffusion model, longitudinal data analysis | null | 2,260 | null |
SoftCVI: Contrastive variational inference with self-generated soft labels | https://openreview.net/forum?id=PiZtlzMWUj | [
"Daniel Ward",
"Mark Beaumont",
"Matteo Fasiolo"
] | Spotlight | Estimating a distribution given access to its unnormalized density is pivotal in Bayesian inference, where the posterior is generally known only up to an unknown normalizing constant. Variational inference and Markov chain Monte Carlo methods are the predominant tools for this task; however, both are often challenging to apply reliably, particularly when the posterior has complex geometry. Here, we introduce Soft Contrastive Variational Inference (SoftCVI), which allows a family of variational objectives to be derived through a contrastive estimation framework. The approach parameterizes a classifier in terms of a variational distribution, reframing the inference task as a contrastive estimation problem aiming to identify a single true posterior sample among a set of samples. Despite this framing, we do not require positive or negative samples, but rather learn by sampling the variational distribution and computing ground truth soft classification labels from the unnormalized posterior itself. The objectives have zero variance gradient when the variational approximation is exact, without the need for specialized gradient estimators. We empirically investigate the performance on a variety of Bayesian inference tasks, using both simple (e.g. normal) and expressive (normalizing flow) variational distributions. We find that SoftCVI can be used to form objectives which are stable to train and mass-covering, frequently outperforming inference with other variational approaches. | contrastive learning, variational inference | We present SoftCVI, a reframing of variational inference as a contrastive estimation problem, and use it derive stable, mass-covering objectives. | 2,135 | 2407.15687 |
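A minimal sketch of the objective described above, under our own simplifying choices: K particles are drawn from the variational distribution q, soft labels are computed from the unnormalized posterior, and a classifier parameterized by q is trained by cross-entropy. Using a stop-gradient copy of q as the negative distribution is one simple choice; the paper derives a whole family of such objectives, so treat this as an illustration only.

```python
import torch

def softcvi_loss(log_q_fn, log_p_tilde_fn, sample_q, K=16):
    """Draw K particles from q, compute soft labels from the unnormalized
    posterior, and train the q-parameterized classifier by cross-entropy.
    The negative distribution is a stop-gradient copy of q (one choice)."""
    z = sample_q(K)                      # (K, dim), reparameterized samples
    log_q = log_q_fn(z)                  # differentiable in q's parameters
    log_pi = log_q.detach()              # negative distribution pi
    logits = log_q - log_pi              # classifier logits over particles
    labels = torch.softmax(log_p_tilde_fn(z).detach() - log_pi, dim=0)
    return -(labels * torch.log_softmax(logits, dim=0)).sum()

# Toy usage: fit a Gaussian mean to an unnormalized Normal(1.5, 1) target.
mu = torch.zeros(2, requires_grad=True)
sample_q = lambda k: mu + torch.randn(k, 2)
log_q_fn = lambda z: torch.distributions.Normal(mu, 1.0).log_prob(z).sum(-1)
log_p_fn = lambda z: torch.distributions.Normal(1.5, 1.0).log_prob(z).sum(-1)
softcvi_loss(log_q_fn, log_p_fn, sample_q, K=64).backward()
print(mu.grad)
```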
MonST3R: A Simple Approach for Estimating Geometry in the Presence of Motion | https://openreview.net/forum?id=lJpqxFgWCM | [
"Junyi Zhang",
"Charles Herrmann",
"Junhwa Hur",
"Varun Jampani",
"Trevor Darrell",
"Forrester Cole",
"Deqing Sun",
"Ming-Hsuan Yang"
] | Spotlight | Estimating geometry from dynamic scenes, where objects move and deform over time, remains a core challenge in computer vision. Current approaches often rely on multi-stage pipelines or global optimizations that decompose the problem into subtasks, like depth and flow, leading to complex systems prone to errors. In this paper, we present Motion DUSt3R (MonST3R), a novel geometry-first approach
that directly estimates per-timestep geometry from dynamic scenes. Our key insight is that by simply estimating a pointmap for each timestep, we can effectively adapt DUSt3R’s representation, previously only used for static scenes, to dynamic scenes. However, this approach presents a significant challenge: the scarcity of suitable training data, namely dynamic, posed videos with depth labels. Despite this, we show that by posing the problem as a fine-tuning task, identifying several suitable datasets, and strategically training the model on this limited data, we can surprisingly enable the model to handle dynamics, even without an explicit motion representation. Based on this, we introduce new optimizations for several downstream video-specific tasks and demonstrate strong performance on video depth and camera pose estimation, outperforming prior work in terms of robustness and efficiency. Moreover, MonST3R shows promising results for primarily feed-forward 4D reconstruction. Interactive 4D results, source code, and trained models are available at: https://monst3r-project.github.io/. | 3D computer vision, structure from motion, depth estimation | We adapt pointmaps for dynamic scenes and do a primarily feed forward estimation of geometry for videos, resulting in SOTA performance for several downstream tasks including estimating camera pose and video depth. | 2,122 | 2410.03825 |
ND-SDF: Learning Normal Deflection Fields for High-Fidelity Indoor Reconstruction | https://openreview.net/forum?id=4HRRcqE9SU | [
"Ziyu Tang",
"Weicai Ye",
"Yifan Wang",
"Di Huang",
"Hujun Bao",
"Tong He",
"Guofeng Zhang"
] | Spotlight | Neural implicit reconstruction via volume rendering has demonstrated its effectiveness in recovering dense 3D surfaces. However, it is non-trivial to simultaneously recover meticulous geometry and preserve smoothness across regions with differing characteristics. To address this issue, previous methods typically employ geometric priors, which are often constrained by the performance of the prior models. In this paper, we propose ND-SDF, which learns a Normal Deflection field to represent the angular deviation between the scene normal and the prior normal. Unlike previous methods that uniformly apply geometric priors on all samples, introducing significant bias in accuracy, our proposed normal deflection field dynamically learns and adapts the utilization of samples based on their specific characteristics, thereby improving both the accuracy and effectiveness of the model. Our method not only obtains smooth weakly textured regions such as walls and floors but also preserves the geometric details of complex structures. In addition, we introduce a novel ray sampling strategy based on the deflection angle to facilitate the unbiased rendering process, which significantly improves the quality and accuracy of intricate surfaces, especially on thin structures. Consistent improvements on various challenging datasets demonstrate the superiority of our method. | Normal Deflection Fields, High-Fidelity Indoor Reconstruction | Learning Normal Deflection Fields for High-Fidelity Indoor Reconstruction | 2,063 | null |
SlowFast-VGen: Slow-Fast Learning for Action-Driven Long Video Generation | https://openreview.net/forum?id=UL8b54P96G | [
"Yining Hong",
"Beide Liu",
"Maxine Wu",
"Yuanhao Zhai",
"Kai-Wei Chang",
"Linjie Li",
"Kevin Lin",
"Chung-Ching Lin",
"Jianfeng Wang",
"Zhengyuan Yang",
"Ying Nian Wu",
"Lijuan Wang"
] | Spotlight | Human beings are endowed with a complementary learning system, which bridges the slow learning of general world dynamics with fast storage of episodic memory from a new experience. Previous video generation models, however, primarily focus on slow learning by pre-training on vast amounts of data, overlooking the fast learning phase crucial for episodic memory storage. This oversight leads to inconsistencies across temporally distant frames when generating longer videos, as these frames fall beyond the model's context window. To this end, we introduce SlowFast-VGen, a novel dual-speed learning system for action-driven long video generation. Our approach incorporates a masked conditional video diffusion model for the slow learning of world dynamics, alongside an inference-time fast learning strategy based on a temporal LoRA module. Specifically, the fast learning process updates its temporal LoRA parameters based on local inputs and outputs, thereby efficiently storing episodic memory in its parameters. We further propose a slow-fast learning loop algorithm that seamlessly integrates the inner fast learning loop into the outer slow learning loop, enabling the recall of prior multi-episode experiences for context-aware skill learning. To facilitate the slow learning of an approximate world model, we collect a large-scale dataset of 200k videos with language action annotations, covering a wide range of scenarios. Extensive experiments show that SlowFast-VGen outperforms baselines across various metrics for action-driven video generation, achieving an FVD score of 514 compared to 782, and maintaining consistency in longer videos, with an average of 0.37 scene cuts versus 0.89. The slow-fast learning loop algorithm significantly enhances performances on long-horizon planning tasks as well. | video generation, complimentary learning system, slow-fast learning, diffusion | null | 1,988 | null |
Atlas Gaussians Diffusion for 3D Generation | https://openreview.net/forum?id=H2Gxil855b | [
"Haitao Yang",
"Yuan Dong",
"Hanwen Jiang",
"Dejia Xu",
"Georgios Pavlakos",
"Qixing Huang"
] | Spotlight | Using the latent diffusion model has proven effective in developing novel 3D generation techniques. To harness the latent diffusion model, a key challenge is designing a high-fidelity and efficient representation that links the latent space and the 3D space. In this paper, we introduce Atlas Gaussians, a novel representation for feed-forward native 3D generation. Atlas Gaussians represent a shape as the union of local patches, and each patch can decode 3D Gaussians. We parameterize a patch as a sequence of feature vectors and design a learnable function to decode 3D Gaussians from the feature vectors. In this process, we incorporate UV-based sampling, enabling the generation of a sufficiently large, and theoretically infinite, number of 3D Gaussian points. The large amount of 3D Gaussians enables the generation of high-quality details. Moreover, due to local awareness of the representation, the transformer-based decoding procedure operates on a patch level, ensuring efficiency. We train a variational autoencoder to learn the Atlas Gaussians representation, and then apply a latent diffusion model on its latent space for learning 3D Generation. Experiments show that our approach outperforms the prior arts of feed-forward native 3D generation. Project page: https://yanghtr.github.io/projects/atlas_gaussians. | 3D generation, diffusion, 3D Gaussian Splatting | null | 1,983 | 2408.13055 |
Min-K%++: Improved Baseline for Pre-Training Data Detection from Large Language Models | https://openreview.net/forum?id=ZGkfoufDaU | [
"Jingyang Zhang",
"Jingwei Sun",
"Eric Yeats",
"Yang Ouyang",
"Martin Kuo",
"Jianyi Zhang",
"Hao Frank Yang",
"Hai Li"
] | Spotlight | The problem of pre-training data detection for large language models (LLMs) has received growing attention due to its implications in critical issues like copyright violation and test data contamination. Despite improved performance, existing methods (including the state-of-the-art, Min-K%) are mostly developed upon simple heuristics and lack solid, reasonable foundations. In this work, we propose a novel and theoretically motivated methodology for pre-training data detection, named Min-K%++. Specifically, we present a key insight that training samples tend to be local maxima of the modeled distribution along each input dimension through maximum likelihood training, which in turn allow us to insightfully translate the problem into identification of local maxima. Then, we design our method accordingly that works under the discrete distribution modeled by LLMs, whose core idea is to determine whether the input forms a mode or has relatively high probability under the conditional categorical distribution. Empirically, the proposed method achieves new SOTA performance across multiple settings (evaluated with 5 families of 10 models and 2 benchmarks). On the WikiMIA benchmark, Min-K%++ outperforms the runner-up by 6.2% to 10.5% in detection AUROC averaged over five models. On the more challenging MIMIR benchmark, it consistently improves upon reference-free methods while performing on par with reference-based method that requires an extra reference model. | pre-training data detection, large language model | null | 1,957 | null |
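The resulting score is easy to state in code. The sketch below follows the description above: each token's log-probability is standardized by the mean and standard deviation of log-probabilities under the model's own next-token distribution, and the lowest k% of token scores are averaged (variable names are ours).

```python
import torch

def min_k_pp_score(logits, input_ids, k=0.2):
    """Average the lowest k% of standardized token log-probs:
    score_t = (log p(x_t | x_<t) - mu_t) / sigma_t, with mu_t / sigma_t the
    mean / std of log-probs under the model's next-token distribution.
    Higher scores suggest the text was seen in pre-training."""
    logp = torch.log_softmax(logits[:-1].float(), dim=-1)   # (T-1, V)
    probs = logp.exp()
    mu = (probs * logp).sum(-1)                             # E_z[log p(z)]
    var = (probs * logp.pow(2)).sum(-1) - mu.pow(2)
    sigma = var.clamp_min(1e-8).sqrt()
    token_logp = logp.gather(-1, input_ids[1:, None]).squeeze(-1)
    scores = (token_logp - mu) / sigma
    n = max(1, int(k * scores.numel()))
    return scores.topk(n, largest=False).values.mean()

logits = torch.randn(12, 50)            # (T, V) from a causal LM forward pass
input_ids = torch.randint(0, 50, (12,))
print(min_k_pp_score(logits, input_ids))
```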
4K4DGen: Panoramic 4D Generation at 4K Resolution | https://openreview.net/forum?id=qxRoo7ULCo | [
"Renjie Li",
"Panwang Pan",
"Bangbang Yang",
"Dejia Xu",
"Shijie Zhou",
"Xuanyang Zhang",
"Zeming Li",
"Achuta Kadambi",
"Zhangyang Wang",
"Zhengzhong Tu",
"Zhiwen Fan"
] | Spotlight | The blooming of virtual reality and augmented reality (VR/AR) technologies has driven an increasing demand for the creation of high-quality, immersive, and dynamic environments. However, existing generative techniques either focus solely on dynamic objects or perform outpainting from a single perspective image, failing to meet the requirements of VR/AR applications that need free-viewpoint, 360$^{\circ}$ virtual views where users can move in all directions. In this work, we tackle the challenging task of elevating a single panorama to an immersive 4D experience. For the first time, we demonstrate the capability to generate omnidirectional dynamic scenes with 360$^{\circ}$ views at 4K (4096 $\times$ 2048) resolution, thereby providing an immersive user experience. Our method introduces a pipeline that facilitates natural scene animations and optimizes a set of 3D Gaussians using efficient splatting techniques for real-time exploration. To overcome the lack of scene-scale annotated 4D data and models, especially in panoramic formats, we propose a novel Panoramic Denoiser that adapts generic 2D diffusion priors to animate consistently in 360$^{\circ}$ images, transforming them into panoramic videos with dynamic scenes at targeted regions. Subsequently, we propose Dynamic Panoramic Lifting to elevate the panoramic video into a 4D immersive environment while preserving spatial and temporal consistency. By transferring prior knowledge from 2D models in the perspective domain to the panoramic domain and the 4D lifting with spatial appearance and geometry regularization, we achieve high-quality Panorama-to-4D generation at a resolution of 4K for the first time. Project page: https://4k4dgen.github.io/. | 4D Generation, Panoramic Video, Panoramic Gaussian Splatting | null | 1,955 | 2406.13527 |
Samba: Synchronized Set-of-Sequences Modeling for Multiple Object Tracking | https://openreview.net/forum?id=OeBY9XqiTz | [
"Mattia Segu",
"Luigi Piccinelli",
"Siyuan Li",
"Yung-Hsu Yang",
"Luc Van Gool",
"Bernt Schiele"
] | Spotlight | Multiple object tracking in complex scenarios - such as coordinated dance performances, team sports, or dynamic animal groups - presents unique challenges. In these settings, objects frequently move in coordinated patterns, occlude each other, and exhibit long-term dependencies in their trajectories. However, it remains a key open research question on how to model long-range dependencies within tracklets, interdependencies among tracklets, and the associated temporal occlusions. To this end, we introduce Samba, a novel linear-time set-of-sequences model designed to jointly process multiple tracklets by synchronizing the multiple selective state-spaces used to model each tracklet. Samba autoregressively predicts the future track query for each sequence while maintaining synchronized long-term memory representations across tracklets. By integrating Samba into a tracking-by-propagation framework, we propose SambaMOTR, the first tracker effectively addressing the aforementioned issues, including long-range dependencies, tracklet interdependencies, and temporal occlusions. Additionally, we introduce an effective technique for dealing with uncertain observations (MaskObs) and an efficient training recipe to scale SambaMOTR to longer sequences. By modeling long-range dependencies and interactions among tracked objects, SambaMOTR implicitly learns to track objects accurately through occlusions without any hand-crafted heuristics. Our approach significantly surpasses prior state-of-the-art on the DanceTrack, BFT, and SportsMOT datasets. | Synchronized sequence modeling; multiple object tracking | SambaMOTR is a tracking-by-propagation method that (i) leverages long-range dependencies and (ii) synchronizes memory across object trajectories to better estimate future track queries and handle occlusions. | 1,948 | 2410.01806 |
Programming Refusal with Conditional Activation Steering | https://openreview.net/forum?id=Oi47wc10sm | [
"Bruce W. Lee",
"Inkit Padhi",
"Karthikeyan Natesan Ramamurthy",
"Erik Miehling",
"Pierre Dognin",
"Manish Nagireddy",
"Amit Dhurandhar"
] | Spotlight | LLMs have shown remarkable capabilities, but precisely controlling their response behavior remains challenging.
Existing activation steering methods alter LLM behavior indiscriminately, limiting their practical applicability in settings where selective responses are essential, such as content moderation or domain-specific assistants.
In this paper, we propose Conditional Activation Steering (CAST), which analyzes LLM activation patterns during inference to selectively apply or withhold activation steering based on the input context.
Our method is based on the observation that different categories of prompts activate distinct patterns in the model's hidden states.
Using CAST, one can systematically control LLM behavior with rules like "if input is about hate speech or adult content, then refuse" or "if input is not about legal advice, then refuse."
This allows for selective modification of responses to specific content while maintaining normal responses to other content, all without requiring weight optimization.
We release an open-source implementation of our framework. | Activation Engineering, Safety, Alignment, Steering Vector | Conditional Activation Steering enables context-dependent control over LLM behaviors by conditionally applying activation steering based on input context during inference without weight optimization. | 1,937 | 2409.05907 |
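A minimal sketch of the conditional-steering idea as a forward hook: steering is applied only when the hidden state projects onto a "condition" direction above a threshold. The condition/behavior vectors and threshold are assumed to be extracted offline from contrastive prompts, as the abstract describes; the paper's exact per-layer recipe differs.

```python
import torch

def cast_hook(condition_vec, behavior_vec, threshold=0.0, alpha=8.0):
    """Forward hook that adds a refusal 'behavior' vector to the residual
    stream only when the last hidden state projects onto the 'condition'
    direction above a threshold; otherwise the model runs unmodified."""
    cond = condition_vec / condition_vec.norm()
    def hook(module, inputs, output):
        h = output[0] if isinstance(output, tuple) else output  # (B, T, D)
        gate = (h[:, -1] @ cond > threshold).float()[:, None, None]
        steered = h + gate * alpha * behavior_vec
        return (steered,) + output[1:] if isinstance(output, tuple) else steered
    return hook

# Hypothetical usage on one transformer block of a HuggingFace-style model:
# handle = model.model.layers[12].register_forward_hook(
#     cast_hook(condition_vec, refusal_vec, threshold=0.1))
```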
Eagle: Exploring The Design Space for Multimodal LLMs with Mixture of Encoders | https://openreview.net/forum?id=Y2RW9EVwhT | [
"Min Shi",
"Fuxiao Liu",
"Shihao Wang",
"Shijia Liao",
"Subhashree Radhakrishnan",
"Yilin Zhao",
"De-An Huang",
"Hongxu Yin",
"Karan Sapra",
"Yaser Yacoob",
"Humphrey Shi",
"Bryan Catanzaro",
"Andrew Tao",
"Jan Kautz",
"Zhiding Yu",
"Guilin Liu"
] | Spotlight | The ability to accurately interpret complex visual information is a crucial topic of multimodal large language models (MLLMs). Recent work indicates that enhanced visual perception significantly reduces hallucinations and improves performance on resolution-sensitive tasks, such as optical character recognition and document analysis. A number of recent MLLMs achieve this goal using a mixture of vision encoders. Despite their success, there is a lack of systematic comparisons and detailed ablation studies addressing critical aspects, such as expert selection and the integration of multiple vision experts. This study provides an extensive exploration of the design space for MLLMs using a mixture of vision encoders and resolutions. Our findings reveal several underlying principles common to various existing strategies, leading to a streamlined yet effective design approach. We discover that simply concatenating visual tokens from a set of complementary vision encoders is as effective as more complex mixing architectures or strategies. We additionally introduce Pre-Alignment to bridge the gap between vision-focused encoders and language tokens, enhancing model coherence. The resulting family of MLLMs, Eagle, surpasses other leading open-source models on major MLLM benchmarks. | LLM, Vision-Language Model, Vision-Centric Multimodal LLM | This work presents a thorough design space exploration to strengthen multimodal LLM perception with mixture of vision encoders. | 1,935 | 2408.15998 |
PerturboLLaVA: Reducing Multimodal Hallucinations with Perturbative Visual Training | https://openreview.net/forum?id=j4LITBSUjs | [
"Cong Chen",
"Mingyu Liu",
"Chenchen Jing",
"Yizhou Zhou",
"Fengyun Rao",
"Hao Chen",
"Bo Zhang",
"Chunhua Shen"
] | Spotlight | This paper aims to address the challenge of hallucinations in Multimodal Large Language Models (MLLMs), particularly for dense image captioning tasks. To tackle the challenge, we identify the current lack of a metric that finely measures caption quality at the concept level. We hereby introduce HalFscore, a novel metric built upon the language graph and designed to evaluate both the accuracy and completeness of dense captions at a granular level. Additionally, we identify the root cause of hallucination as the model's over-reliance on its language prior. To address this, we propose PerturboLLaVA, which reduces the model's reliance on the language prior by incorporating adversarially perturbed text during training. This method enhances the model's focus on visual inputs, effectively reducing hallucinations and producing accurate, image-grounded descriptions without incurring additional computational overhead. PerturboLLaVA significantly improves the fidelity of generated captions, outperforming existing approaches in handling multimodal hallucinations and achieving improved performance across general multimodal benchmarks. | Multi-Modal Large Language Models, Hallucinations Mitigation, Hallucinations Evaluation, Language Model Priors | null | 1,911 | 2503.06486
CLoSD: Closing the Loop between Simulation and Diffusion for multi-task character control | https://openreview.net/forum?id=pZISppZSTv | [
"Guy Tevet",
"Sigal Raab",
"Setareh Cohan",
"Daniele Reda",
"Zhengyi Luo",
"Xue Bin Peng",
"Amit Haim Bermano",
"Michiel van de Panne"
] | Spotlight | Motion diffusion models and Reinforcement Learning (RL) based control for physics-based simulations have complementary strengths for human motion generation. The former is capable of generating a wide variety of motions, adhering to intuitive control such as text, while the latter offers physically plausible motion and direct interaction with the environment. In this work, we present a method that combines their respective strengths. CLoSD is a text-driven RL physics-based controller, guided by diffusion generation for various tasks. Our key insight is that motion diffusion can serve as an on-the-fly universal planner for a robust RL controller. To this end, CLoSD maintains a closed-loop interaction between two modules — a Diffusion Planner (DiP), and a tracking controller. DiP is a fast-responding autoregressive diffusion model, controlled by textual prompts and target locations, and the controller is a simple and robust motion imitator that continuously receives motion plans from DiP and provides feedback from the environment. CLoSD is capable of seamlessly performing a sequence of different tasks, including navigation to a goal location, striking an object with a hand or foot as specified in a text prompt, sitting down, and getting up. | RL, PPO, motion, motion generation, motion synthesis, synthesis, generative models, diffusion, animation | null | 1,902 | 2410.03441 |
Linear SCM Identification in the Presence of Confounders and Gaussian Noise | https://openreview.net/forum?id=bjxuqI4KwU | [
"Vahideh Sanjaroonpouri",
"Pouria Ramazi"
] | Spotlight | Noisy linear structural causal models (SCMs) in the presence of confounding variables are known to be identifiable if all confounding and noise variables are non-Gaussian and unidentifiable if all are Gaussian.
The identifiability when only some are Gaussian has remained an open question.
We show that, in the presence of Gaussian noise, a linear SCM is uniquely identifiable provided that \emph{(i)} the number of confounders is at most the number of observed variables, \emph{(ii)} the confounders do not have a Gaussian component, and \emph{(iii)} the causal structure of the SCM is known.
If the third condition is relaxed, the SCM becomes finitely identifiable; more specifically, it belongs to a set of at most $n!$ linear SCMs, where $n$ is the number of observed variables.
The confounders in all of these $n!$ SCMs share the same joint probability distribution function (PDF), which we obtain analytically.
For the case where both the noise and confounders are Gaussian, we provide further insight into the existing counter-example-based unidentifiability result and demonstrate that every SCM with confounders can be represented as an SCM without confounders but with the same joint PDF. | identifiability, SCM, causal discovery; linear SCM; confounder | Identifiability of Linear SCMs with Gaussian noise is investigated in both cases of Gaussian and non-Gaussian confounders. | 1,892 | null |
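For concreteness, the model class in question can be written as follows (notation ours, inferred from the abstract):

```latex
\[
  x = Bx + \Gamma z + \varepsilon, \qquad
  \varepsilon \sim \mathcal{N}(0, \Sigma),
\]
% x in R^n: observed variables; B: causal structure with I - B invertible;
% z: mutually independent confounders with no Gaussian component.
% Conditions (i)-(iii) above then yield unique identifiability of (B, \Gamma);
% dropping (iii) leaves at most n! candidate SCMs sharing the joint PDF of z.
```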
Geometric Inductive Biases of Deep Networks: The Role of Data and Architecture | https://openreview.net/forum?id=cmXWYolrlo | [
"Sajad Movahedi",
"Antonio Orvieto",
"Seyed-Mohsen Moosavi-Dezfooli"
] | Spotlight | In this paper, we propose the *geometric invariance hypothesis (GIH)*, which argues that the input space curvature of a neural network remains invariant under transformation in certain architecture-dependent directions during training. We investigate a simple, non-linear binary classification problem residing on a plane in a high-dimensional space and observe that—unlike MLPs—ResNets fail to generalize depending on the orientation of the plane. Motivated by this example, we define a neural network's **average geometry** and **average geometry evolution** as compact *architecture-dependent* summaries of the model's input-output geometry and its evolution during training. By investigating the average geometry evolution at initialization, we discover that the geometry of a neural network evolves according to the data covariance projected onto its average geometry. This means that the geometry only changes in a subset of the input space when the average geometry is low-rank, such as in ResNets. This causes an architecture-dependent invariance property in the input space curvature, which we dub GIH. Finally, we present extensive experimental results to observe the consequences of GIH and how it relates to generalization in neural networks. | Optimization, Deep Neural Networks, Deep Learning, Deep Learning Theory, Machine Learning Theory | We try to understand the role of architecture and data in the geometry of deep neural networks in the input space and how it evolves during training. | 1,808 | 2410.12025
MotionAura: Generating High-Quality and Motion Consistent Videos using Discrete Diffusion | https://openreview.net/forum?id=bW9fGYo44s | [
"Onkar Kishor Susladkar",
"Jishu Sen Gupta",
"Chirag Sehgal",
"Sparsh Mittal",
"Rekha Singhal"
] | Spotlight | The spatio-temporal complexity of video data presents significant challenges in tasks such as compression, generation, and inpainting. We present four key contributions to address the challenges of spatiotemporal video processing. First, we introduce the 3D Mobile Inverted Vector-Quantization Variational Autoencoder (3D-MBQ-VAE), which combines Variational Autoencoders (VAEs) with masked modeling to enhance spatiotemporal video compression. The model achieves superior temporal consistency and state-of-the-art (SOTA) reconstruction quality by employing a novel training strategy with full frame masking. Second, we present MotionAura, a text-to-video generation framework that utilizes vector-quantized diffusion models to discretize the latent space and capture complex motion dynamics, producing temporally coherent videos aligned with text prompts. Third, we propose a spectral transformer-based denoising network that processes video data in the frequency domain using the Fourier Transform. This method effectively captures global context and long-range dependencies for high-quality video generation and denoising. Lastly, we introduce a downstream task of Sketch Guided Video Inpainting. This task leverages Low-Rank Adaptation (LoRA) for parameter-efficient fine-tuning. Our models achieve SOTA performance on a range of benchmarks. Our work offers robust frameworks for spatiotemporal modeling and user-driven video content manipulation. | text2video, VQ-Diffusion, video Inpainting, Large scale pretraining | High Quality text to video generation with discrete diffusion | 1,775 | 2410.07659 |
Rare-to-Frequent: Unlocking Compositional Generation Power of Diffusion Models on Rare Concepts with LLM Guidance | https://openreview.net/forum?id=BgxsmpVoOX | [
"Dongmin Park",
"Sebin Kim",
"Taehong Moon",
"Minkyu Kim",
"Kangwook Lee",
"Jaewoong Cho"
] | Spotlight | State-of-the-art text-to-image (T2I) diffusion models often struggle to generate rare compositions of concepts, e.g., objects with unusual attributes. In this paper, we show that the compositional generation power of diffusion models on such rare concepts can be significantly enhanced by Large Language Model (LLM) guidance. We start with empirical and theoretical analysis, demonstrating that exposing frequent concepts relevant to the target rare concepts during the diffusion sampling process yields more accurate concept composition. Based on this, we propose a training-free approach, R2F, that plans and executes the overall rare-to-frequent concept guidance throughout the diffusion inference by leveraging the abundant semantic knowledge in LLMs. Our framework is flexible across any pre-trained diffusion models and LLMs, and can be seamlessly integrated with region-guided diffusion approaches. In extensive experiments on three datasets, including our newly proposed benchmark RareBench, which contains various prompts with rare compositions of concepts, R2F significantly surpasses existing models, including SD3.0 and FLUX, by up to 28.1%p in T2I alignment. Code is available at https://github.com/krafton-ai/Rare-to-Frequent. | Text-to-image, Diffusion, Large Language Models | State-of-the-art text-to-image diffusion models often struggle to accurately generate images from prompts with rare concepts. Our framework enables this by converting rare concepts to frequent ones with LLMs and using them in diffusion inference. | 1,711 | null
POTEC: Off-Policy Contextual Bandits for Large Action Spaces via Policy Decomposition | https://openreview.net/forum?id=LXftdR11io | [
"Yuta Saito",
"Jihan Yao",
"Thorsten Joachims"
] | Spotlight | We study off-policy learning (OPL) of contextual bandit policies in large discrete action spaces where existing methods -- most of which rely crucially on reward-regression models or importance-weighted policy gradients -- fail due to excessive bias or variance. To overcome these issues in OPL, we propose a novel two-stage algorithm, called Policy Optimization via Two-Stage Policy Decomposition (POTEC). It leverages clustering in the action space and learns two different policies via policy- and regression-based approaches, respectively. In particular, we derive a novel low-variance gradient estimator that enables efficient learning of a first-stage policy for cluster selection via a policy-based approach. To select a specific action within the cluster sampled by the first-stage policy, POTEC uses a second-stage policy derived from a regression-based approach within each cluster. We show that a local correctness condition, which only requires that the regression model preserves the relative expected reward differences of the actions within each cluster, ensures that our policy-gradient estimator is unbiased and the second-stage policy is optimal. We also show that POTEC provides a strict generalization of policy- and regression-based approaches and their associated assumptions. Comprehensive experiments demonstrate that POTEC provides substantial improvements in OPL effectiveness, particularly in large and structured action spaces. | Off-Policy Learning, Contextual Bandits, Large Action Space, Importance Weighting, Clustering, Policy Gradient | We propose a new two-stage off-policy learning algorithm called POTEC, which unifies regression- and policy-based approaches via a policy decomposition and is particularly effective in large action spaces. | 1,670 | null
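At decision time, the two-stage decomposition reads as a few lines of code: sample a cluster from the policy-learned first stage, then act greedily inside the cluster with the regression-learned model. All callables below are hypothetical stand-ins.

```python
import numpy as np

def potec_decision(x, cluster_policy, q_hat, actions_in_cluster):
    """First stage: sample a cluster; second stage: act greedily within the
    cluster using the regression model q_hat (local correctness suffices)."""
    c = cluster_policy(x)                       # policy-based first stage
    cand = actions_in_cluster(c)                # candidate actions in cluster c
    scores = np.array([q_hat(x, a) for a in cand])
    return cand[int(scores.argmax())]           # regression-based second stage

rng = np.random.default_rng(0)
action = potec_decision(
    x=np.ones(3),
    cluster_policy=lambda x: int(rng.integers(4)),   # learned via policy gradient
    q_hat=lambda x, a: -abs(a - 10),                 # learned via regression
    actions_in_cluster=lambda c: list(range(c * 5, c * 5 + 5)),
)
print(action)
```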
MetaUrban: An Embodied AI Simulation Platform for Urban Micromobility | https://openreview.net/forum?id=kFsWpSxkFz | [
"Wayne Wu",
"Honglin He",
"Jack He",
"Yiran Wang",
"Chenda Duan",
"Zhizheng Liu",
"Quanyi Li",
"Bolei Zhou"
] | Spotlight | Public urban spaces such as streetscapes and plazas serve residents and accommodate social life in all its vibrant variations. Recent advances in robotics and embodied AI make public urban spaces no longer exclusive to humans. Food delivery bots and electric wheelchairs have started sharing sidewalks with pedestrians, while robot dogs and humanoids have recently emerged in the street. **Micromobility** enabled by AI for short-distance travel in public urban spaces plays a crucial component in future transportation systems. It is essential to ensure the generalizability and safety of AI models used for maneuvering mobile machines. In this work, we present **MetaUrban**, a *compositional* simulation platform for the AI-driven urban micromobility research. MetaUrban can construct an *infinite* number of interactive urban scenes from compositional elements, covering a vast array of ground plans, object placements, pedestrians, vulnerable road users, and other mobile agents' appearances and dynamics. We design point navigation and social navigation tasks as the pilot study using MetaUrban for urban micromobility research and establish various baselines of Reinforcement Learning and Imitation Learning. We conduct extensive evaluation across mobile machines, demonstrating that heterogeneous mechanical structures significantly influence the learning and execution of AI policies. We perform a thorough ablation study, showing that the compositional nature of the simulated environments can substantially improve the generalizability and safety of the trained mobile agents. MetaUrban will be made publicly available to provide research opportunities and foster safe and trustworthy embodied AI and micromobility in cities. The code and data have been released. | Embodied AI, Simulation, Micromobility | MetaUrban is a compositional simulation platform for AI-driven urban micromobility research. It will be open-source to enable more research opportunities for the community, and foster generalizable and safe embodied AI and micromobility in cities. | 1,653 | 2407.08725 |
D-FINE: Redefine Regression Task of DETRs as Fine-grained Distribution Refinement | https://openreview.net/forum?id=MFZjrTFE7h | [
"Yansong Peng",
"Hebei Li",
"Peixi Wu",
"Yueyi Zhang",
"Xiaoyan Sun",
"Feng Wu"
] | Spotlight | We introduce D-FINE, a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding box regression task in DETR models. D-FINE comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD). FDR transforms the regression process from predicting fixed coordinates to iteratively refining probability distributions, providing a fine-grained intermediate representation that significantly enhances localization accuracy. GO-LSD is a bidirectional optimization strategy that transfers localization knowledge from refined distributions to shallower layers through self-distillation, while also simplifying the residual prediction tasks for deeper layers. Additionally, D-FINE incorporates lightweight optimizations in computationally intensive modules and operations, achieving a better balance between speed and accuracy. Specifically, D-FINE-L / X achieves 54.0% / 55.8% AP on the COCO dataset at 124 / 78 FPS on an NVIDIA T4 GPU. When pretrained on Objects365, D-FINE-L / X attains 57.1% / 59.3% AP, surpassing all existing real-time detectors. Furthermore, our method significantly enhances the performance of a wide range of DETR models by up to 5.3% AP with negligible extra parameters and training costs. Our code and models: https://github.com/Peterande/D-FINE. | Object Detection, Real-Time, Detection Transformer, Knowledge Distillation | null | 1,611 | null |
DartControl: A Diffusion-Based Autoregressive Motion Model for Real-Time Text-Driven Motion Control | https://openreview.net/forum?id=XNA3Mnnbvb | [
"Kaifeng Zhao",
"Gen Li",
"Siyu Tang"
] | Spotlight | Text-conditioned human motion generation, which allows for user interaction through natural language, has become increasingly popular. Existing methods typically generate short, isolated motions based on a single input sentence. However, human motions are continuous and can extend over long periods, carrying rich semantics. Creating long, complex motions that precisely respond to streams of text descriptions, particularly in an online and real-time setting, remains a significant challenge. Furthermore, incorporating spatial constraints into text-conditioned motion generation presents additional challenges, as it requires aligning the motion semantics specified by text descriptions with geometric information, such as goal locations and 3D scene geometry. To address these limitations, we propose **DartC**ontrol, in short **DART**, a **D**iffusion-based **A**utoregressive motion primitive model for **R**eal-time **T**ext-driven motion **C**ontrol. Our model, DART, effectively learns a compact motion primitive space jointly conditioned on motion history and text inputs using latent diffusion models. By autoregressively generating motion primitives based on the preceding history and current text input, DART enables real-time, sequential motion generation driven by natural language descriptions. Additionally, the learned motion primitive space allows for precise spatial motion control, which we formulate either as a latent noise optimization problem or as a Markov decision process addressed through reinforcement learning. We present effective algorithms for both approaches, demonstrating our model’s versatility and superior performance in various motion synthesis tasks. Experiments show our method outperforms existing baselines in motion realism, efficiency, and controllability. Video results and code are available at https://zkf1997.github.io/DART/. | Human Motion Generation | We present DartControl, a method for high-quality, real-time motion generation from streaming text inputs. By incorporating latent space control, our method further enables diverse motion generation tasks requiring spatial control. | 1,582 | 2410.05260 |
Mitigating Information Loss in Tree-Based Reinforcement Learning via Direct Optimization | https://openreview.net/forum?id=qpXctF2aLZ | [
"Sascha Marton",
"Tim Grams",
"Florian Vogt",
"Stefan Lüdtke",
"Christian Bartelt",
"Heiner Stuckenschmidt"
] | Spotlight | Reinforcement learning (RL) has seen significant success across various domains, but its adoption is often limited by the black-box nature of neural network policies, making them difficult to interpret. In contrast, symbolic policies allow representing decision-making strategies in a compact and interpretable way. However, learning symbolic policies directly within on-policy methods remains challenging.
In this paper, we introduce SYMPOL, a novel method for SYMbolic tree-based on-POLicy RL. SYMPOL employs a tree-based model integrated with a policy gradient method, enabling the agent to learn and adapt its actions while maintaining a high level of interpretability.
We evaluate SYMPOL on a set of benchmark RL tasks, demonstrating its superiority over alternative tree-based RL approaches in terms of performance and interpretability. Unlike existing methods, it enables gradient-based, end-to-end learning of interpretable, axis-aligned decision trees within standard on-policy RL algorithms. Therefore, SYMPOL can become the foundation for a new class of interpretable RL based on decision trees. Our implementation is available at: https://github.com/s-marton/sympol | Symbolic Reinforcement Learning, Interpretable Reinforcement Learning, Reinforcement Learning, Decision Trees, Policy Gradient, Proximal Policy Optimization | SYMPOL is a novel method for symbolic RL that enables end-to-end gradient-based learning of interpretable, axis-aligned decision trees, combining policy gradient optimization with symbolic decision-making | 1,507 | 2408.08761
Sharpness-Aware Minimization Efficiently Selects Flatter Minima Late In Training | https://openreview.net/forum?id=aD2uwhLbnA | [
"Zhanpeng Zhou",
"Mingze Wang",
"Yuchen Mao",
"Bingrui Li",
"Junchi Yan"
] | Spotlight | Sharpness-Aware Minimization (SAM) has substantially improved the generalization of neural networks under various settings. Despite the success, its effectiveness remains poorly understood. In this work, we discover an intriguing phenomenon in the training dynamics of SAM, shedding light on understanding its implicit bias towards flatter minima over Stochastic Gradient Descent (SGD). Specifically, we find that *SAM efficiently selects flatter minima late in training*. Remarkably, even a few epochs of SAM applied at the end of training yield nearly the same generalization and solution sharpness as full SAM training. Subsequently, we delve deeper into the underlying mechanism behind this phenomenon. Theoretically, we identify two phases in the learning dynamics after applying SAM late in training: i) SAM first escapes the minimum found by SGD exponentially fast; and ii) then rapidly converges to a flatter minimum within the same valley. Furthermore, we empirically investigate the role of SAM during the early training phase. We conjecture that the optimization method chosen in the late phase is more crucial in shaping the final solution's properties. Based on this viewpoint, we extend our findings from SAM to Adversarial Training. | Sharpness-Aware Minimization, Implicit Bias, Training Dynamics | We study the implicit bias of SAM during the late phase of training, revealing that SAM efficiently selects flatter minima over SGD even when applied in the last few epochs. | 1,483 | 2410.10373 |
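For reference, a single SAM update looks as follows in PyTorch (a standard sketch, not code from the paper); per the finding above, running such steps only for the last few epochs of an SGD run already recovers most of the flatness and generalization benefit.

```python
import torch

def sam_step(model, loss_fn, batch, base_opt, rho=0.05):
    """One SAM update: ascend to the worst case within an L2 ball of radius
    rho, take the gradient there, restore the weights, then step the base
    optimizer with that gradient."""
    loss_fn(model, batch).backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grads = [p.grad.clone() for p in params]
    scale = rho / (torch.sqrt(sum(g.pow(2).sum() for g in grads)) + 1e-12)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(scale * g)                 # ascent to the perturbed point
    model.zero_grad()
    loss_fn(model, batch).backward()          # gradient at the perturbed point
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.sub_(scale * g)                 # restore the original weights
    base_opt.step()
    model.zero_grad()
```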
What Makes a Good Diffusion Planner for Decision Making? | https://openreview.net/forum?id=7BQkXXM8Fy | [
"Haofei Lu",
"Dongqi Han",
"Yifei Shen",
"Dongsheng Li"
] | Spotlight | Diffusion models have recently shown significant potential in solving decision-making problems, particularly in generating behavior plans -- also known as diffusion planning. While numerous studies have demonstrated the impressive performance of diffusion planning, the mechanisms behind the key components of a good diffusion planner remain unclear and the design choices are highly inconsistent in existing studies. In this work, we address this issue through systematic empirical experiments on diffusion planning in an offline reinforcement learning (RL) setting, providing practical insights into the essential components of diffusion planning. We trained and evaluated over 6,000 diffusion models, identifying the critical components such as guided sampling, network architecture, action generation and planning strategy. We revealed that some design choices opposite to the common practice in previous work in diffusion planning actually lead to better performance, e.g., unconditional sampling with selection can be better than guided sampling and Transformer outperforms U-Net as denoising network. Based on these insights, we suggest a simple yet strong diffusion planning baseline that achieves state-of-the-art results on standard offline RL benchmarks. Code: https://github.com/Josh00-Lu/DiffusionVeteran. | Diffusion Models, Offline Reinforcement Learning, Decision Making, Planning | A comprehensive empirical study about key elements underlying a good diffusion planner for deicsion making. | 1,456 | 2503.00535 |
ThinK: Thinner Key Cache by Query-Driven Pruning | https://openreview.net/forum?id=n0OtGl6VGb | [
"Yuhui Xu",
"Zhanming Jie",
"Hanze Dong",
"Lei Wang",
"Xudong Lu",
"Aojun Zhou",
"Amrita Saha",
"Caiming Xiong",
"Doyen Sahoo"
] | Spotlight | Large Language Models (LLMs) have revolutionized the field of natural language processing, achieving unprecedented performance across a variety of applications.
However, their increased computational and memory demands present significant challenges, especially when handling long sequences.
This paper focuses on the long-context scenario, addressing the inefficiencies in KV cache memory consumption during inference.
Unlike existing approaches that optimize the memory based on the sequence length, we identify substantial redundancy in the channel dimension of the KV cache, as indicated by an uneven magnitude distribution and a low-rank structure in the attention weights.
In response, we propose ThinK, a novel query-dependent KV cache pruning method designed to minimize attention weight loss while selectively pruning the least significant channels. Our approach not only maintains or enhances model accuracy but also achieves a reduction in KV cache memory costs by over 20\% compared with vanilla KV cache eviction and quantization methods. For instance, ThinK integrated with KIVI can achieve $2.8\times$ peak memory reduction while maintaining nearly the same quality, enabling a batch size increase from 4$\times$ (with KIVI alone) to 5$\times$ when using a single GPU. Extensive evaluations on the LLaMA and Mistral models across various long-sequence datasets verified the efficiency of ThinK. Our code has been made available at https://github.com/SalesforceAIResearch/ThinK. | Large Language Models; KV Cache Compression; KV Cache Pruning | We propose a novel query-dependent KV cache channel pruning method to reduce the memory usage of LLMs during inference. | 1,419 | 2407.21018 |
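A hedged sketch of query-driven channel pruning: score each key-cache channel by its contribution to attention logits for recent queries, and keep only the top channels per head. The magnitude-based score below is our simple proxy for the paper's attention-weight-loss criterion; names are ours.

```python
import torch

def think_prune_key_cache(K, Q_obs, keep_ratio=0.6):
    """Prune key-cache channels per head, keeping those with the largest
    query-weighted magnitude (an illustrative proxy criterion).

    K: (heads, seq, d) key cache; Q_obs: (heads, q_len, d) recent queries.
    Returns the pruned cache and the kept channel indices per head."""
    score = Q_obs.abs().sum(1) * K.abs().sum(1)          # (heads, d)
    d_keep = max(1, int(keep_ratio * K.shape[-1]))
    idx = score.topk(d_keep, dim=-1).indices             # per-head channels
    idx_exp = idx.unsqueeze(1).expand(-1, K.shape[1], -1)
    return torch.gather(K, -1, idx_exp), idx

K = torch.randn(8, 1024, 128)   # 8 heads, 1024 cached tokens, 128 channels
Q = torch.randn(8, 32, 128)     # 32 recent query vectors per head
K_pruned, kept = think_prune_key_cache(K, Q, keep_ratio=0.6)
print(K_pruned.shape)           # (8, 1024, 76)
```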
Linear Spherical Sliced Optimal Transport: A Fast Metric for Comparing Spherical Data | https://openreview.net/forum?id=fgUFZAxywx | [
"Xinran Liu",
"Yikun Bai",
"Rocio Diaz Martin",
"Kaiwen Shi",
"Ashkan Shahbazi",
"Bennett Allan Landman",
"Catie Chang",
"Soheil Kolouri"
] | Spotlight | Efficient comparison of spherical probability distributions becomes important in fields such as computer vision, geosciences, and medicine. Sliced optimal transport distances, such as spherical and stereographic spherical sliced Wasserstein distances, have recently been developed to address this need. These methods reduce the computational burden of optimal transport by slicing hyperspheres into one-dimensional projections, i.e., lines or circles. Concurrently, linear optimal transport has been proposed to embed distributions into $L^2$ spaces, where the $L^2$ distance approximates the optimal transport distance, thereby simplifying comparisons across multiple distributions. In this work, we introduce the Linear Spherical Sliced Optimal Transport (LSSOT) framework, which utilizes slicing to embed spherical distributions into $L^2$ spaces while preserving their intrinsic geometry, offering a computationally efficient metric for spherical probability measures. We establish the metricity of LSSOT and demonstrate its superior computational efficiency in applications such as cortical surface registration, 3D point cloud interpolation via gradient flow, and shape embedding. Our results demonstrate the significant computational benefits and high accuracy of LSSOT in these applications. | Optimal Transport, Spherical Data Analysis | We introduce a fast Linear Spherical Sliced Optimal Transport metric for comparing spherical probability measures in various applications. | 1,399 | 2411.06055 |
RelitLRM: Generative Relightable Radiance for Large Reconstruction Models | https://openreview.net/forum?id=3Oli4u6q3p | [
"Tianyuan Zhang",
"Zhengfei Kuang",
"Haian Jin",
"Zexiang Xu",
"Sai Bi",
"Hao Tan",
"He Zhang",
"Yiwei Hu",
"Milos Hasan",
"William T. Freeman",
"Kai Zhang",
"Fujun Luan"
] | Spotlight | We propose RelitLRM, a Large Reconstruction Model (LRM) for generating high-quality Gaussian splatting representations of 3D objects under novel illuminations from sparse (4-8) posed images captured under unknown static lighting. Unlike prior inverse rendering methods requiring dense captures and slow optimization, often causing artifacts like incorrect highlights or shadow baking, RelitLRM adopts a feed-forward transformer-based model with a novel combination of a geometry reconstructor and a relightable appearance generator based on diffusion. The model is trained end-to-end on synthetic multi-view renderings of objects under varying known illuminations. This architecture design enables to effectively decompose geometry and appearance, resolve the ambiguity between material and lighting, and capture the multi-modal distribution of shadows and specularity in the relit appearance. We show our sparse-view feed-forward RelitLRM offers competitive relighting results to state-of-the-art dense-view optimization-based baselines while being significantly faster. Our project page is available at: https://relit-lrm.github.io/. | Relightable reconstruction, Inverse Rendering, Generative Relighting | We propose \emph{\method}, a Large Reconstruction Model (LRM) for generating high-quality Gaussian splatting representations of 3D objects under novel illuminations from sparse (4-8) posed images captured under unknown static lighting. | 1,381 | 2410.06231 |
A Geometric Framework for Understanding Memorization in Generative Models | https://openreview.net/forum?id=aZ1gNJu8wO | [
"Brendan Leigh Ross",
"Hamidreza Kamkari",
"Tongzi Wu",
"Rasa Hosseinzadeh",
"Zhaoyan Liu",
"George Stein",
"Jesse C. Cresswell",
"Gabriel Loaiza-Ganem"
] | Spotlight | As deep generative models have progressed, recent work has shown them to be capable of memorizing and reproducing training datapoints when deployed. These findings call into question the usability of generative models, especially in light of the legal and privacy risks brought about by memorization. To better understand this phenomenon, we propose the *manifold memorization hypothesis* (MMH), a geometric framework that leverages the manifold hypothesis to provide a clear language in which to reason about memorization. We propose to analyze memorization in terms of the relationship between the dimensionalities of $(i)$ the ground truth data manifold and $(ii)$ the manifold learned by the model. This framework provides a formal standard for "how memorized" a datapoint is and systematically categorizes memorized data into two types: memorization driven by overfitting and memorization driven by the underlying data distribution. By analyzing prior work in the context of the MMH, we explain and unify assorted observations in the literature. We empirically validate the MMH using synthetic data and image datasets up to the scale of Stable Diffusion, developing new tools for detecting and preventing generation of memorized samples in the process. | deep generative modelling, generative models, memorization, data copying, privacy, diffusion, diffusion models, GANs, manifold hypothesis, local intrinsic dimension, lid, lid estimation, geometry | We show that the manifold hypothesis is a useful way to understand memorization in generative models | 1,375 | 2411.00113 |
Adjoint Matching: Fine-tuning Flow and Diffusion Generative Models with Memoryless Stochastic Optimal Control | https://openreview.net/forum?id=xQBRrtQM8u | [
"Carles Domingo-Enrich",
"Michal Drozdzal",
"Brian Karrer",
"Ricky T. Q. Chen"
] | Spotlight | Dynamical generative models that produce samples through an iterative process, such as Flow Matching and denoising diffusion models, have seen widespread use, but there have not been many theoretically-sound methods for improving these models with reward fine-tuning. In this work, we cast reward fine-tuning as stochastic optimal control (SOC). Critically, we prove that a very specific *memoryless* noise schedule must be enforced during fine-tuning, in order to account for the dependency between the noise variable and the generated samples. We also propose a new algorithm named *Adjoint Matching* which outperforms existing SOC algorithms, by casting SOC problems as a regression problem. We find that our approach significantly improves over existing methods for reward fine-tuning, achieving better consistency, realism, and generalization to unseen human preference reward models, while retaining sample diversity. | Reward fine-tuning, stochastic optimal control, flow matching, diffusion models, RLHF, adjoint method | We introduce a reward fine-tuning framework for diffusion and flow matching models, based on stochastic optimal control (SOC), and Adjoint Matching, a new SOC algorithm. | 1,370 | 2409.08861 |
ConFIG: Towards Conflict-free Training of Physics Informed Neural Networks | https://openreview.net/forum?id=APojAzJQiq | [
"Qiang Liu",
"Mengyu Chu",
"Nils Thuerey"
] | Spotlight | The loss functions of many learning problems contain multiple additive terms that can disagree and yield conflicting update directions. For Physics-Informed Neural Networks (PINNs), the loss terms for initial/boundary conditions and physics equations are of particular interest, as jointly satisfying them is a well-established, highly difficult task. To improve learning of the challenging multi-objective task posed by PINNs, we propose the ConFIG method, which provides conflict-free updates by ensuring a positive dot product between the final update and each loss-specific gradient. It also maintains consistent optimization rates for all loss terms and dynamically adjusts gradient magnitudes based on conflict levels. We additionally leverage momentum to accelerate optimization by alternating the back-propagation of different loss terms. We provide a mathematical proof showing the convergence of the ConFIG method, and it is evaluated across a range of challenging PINN scenarios. ConFIG consistently shows superior performance and runtime compared to baseline methods. We also test the proposed method in a classic multi-task benchmark, where the ConFIG method likewise exhibits highly promising performance. Source code is available at https://tum-pbs.github.io/ConFIG | Physics Informed Neural Networks, Multi-task learning, Conflicting gradients | Our method (1) mitigates the conflicts of sub-gradients during PINN training and (2) substantially reduces the required computation. | 1,287 | 2408.11104 |
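The conflict-free update described in the ConFIG abstract above can be illustrated with a short sketch: find an update whose dot product with every normalized loss-specific gradient is positive and equal. This is an illustrative reading under stated assumptions (the pseudoinverse construction and the mean-projection rescaling are ours), not the authors' reference implementation:

```python
import torch

def conflict_free_update(grads: list[torch.Tensor]) -> torch.Tensor:
    """Sketch: a direction with equal, positive dot products with every
    normalized loss-specific gradient, rescaled by the mean projection
    of the raw gradients (a simplified choice)."""
    G = torch.stack([g.flatten() for g in grads])  # (num_losses, dim)
    G_unit = G / G.norm(dim=1, keepdim=True)       # normalize each gradient
    # Solve G_unit @ d = 1 via the pseudoinverse: uniform unit projections.
    d = torch.linalg.pinv(G_unit) @ torch.ones(G.shape[0])
    d_unit = d / d.norm()
    return (G @ d_unit).mean() * d_unit            # positive overall step

g1, g2 = torch.randn(100), torch.randn(100)
u = conflict_free_update([g1, g2])
print(torch.dot(u, g1) > 0, torch.dot(u, g2) > 0)  # both True: no conflict
```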
DLEFT-MKC: Dynamic Late Fusion Multiple Kernel Clustering with Robust Tensor Learning via Min-Max Optimization | https://openreview.net/forum?id=HE5JmwniHm | [
"Yi Zhang",
"Siwei Wang",
"Jiyuan Liu",
"Shengju Yu",
"Zhibin Dong",
"Suyuan Liu",
"Xinwang Liu",
"En Zhu"
] | Spotlight | Recent advancements in multiple kernel clustering (MKC) have highlighted the effectiveness of late fusion strategies, particularly in enhancing computational efficiency to near-linear complexity while achieving promising clustering performance. However, existing methods encounter three significant limitations: (1) reliance on fixed base partition matrices that do not adaptively optimize during the clustering process, thereby constraining their performance to the inherent representational capabilities of these matrices; (2) a focus on adjusting kernel weights to explore inter-view consistency and complementarity, which often neglects the intrinsic high-order correlations among views, thereby limiting the extraction of comprehensive multiple kernel information; (3) a lack of adaptive mechanisms to accommodate varying distributions within the data, which limits robustness and generalization. To address these challenges, this paper proposes a novel algorithm termed Dynamic Late Fusion Multiple Kernel Clustering with Robust Tensor Learning via min-max optimization (DLEFT-MKC), which effectively overcomes the representational bottleneck of base partition matrices and facilitates the learning of meaningful high-order cross-view information. Specifically, it is the first to incorporate a min-max optimization paradigm into tensor-based MKC, enhancing algorithm robustness and generalization. Additionally, it dynamically reconstructs decision layers to enhance representation capabilities and subsequently stacks the reconstructed representations for tensor learning that promotes the capture of high-order associations and cluster structures across views, ultimately yielding consensus clustering partitions. To solve the resultant optimization problem, we innovatively design a strategy that combines reduced gradient descent with the alternating direction method of multipliers, ensuring convergence to local optima while maintaining high computational efficiency. Extensive experimental results across various benchmark datasets validate the superior effectiveness and efficiency of the proposed DLEFT-MKC. | multiple kernel clustering; multi-view clustering; late fusion MVC | First to incorporate the min-max paradigm into tensor-based MKC; first to dynamically reconstruct base partitions in late fusion MKC. | 1,283 | null |
Hymba: A Hybrid-head Architecture for Small Language Models | https://openreview.net/forum?id=A1ztozypga | [
"Xin Dong",
"Yonggan Fu",
"Shizhe Diao",
"Wonmin Byeon",
"ZIJIA CHEN",
"Ameya Sunil Mahabaleshwarkar",
"Shih-Yang Liu",
"Matthijs Van keirsbilck",
"Min-Hung Chen",
"Yoshi Suhara",
"Yingyan Celine Lin",
"Jan Kautz",
"Pavlo Molchanov"
] | Spotlight | We propose Hymba, a family of small language models featuring a hybrid-head parallel architecture that integrates attention mechanisms and state space models (SSMs) within the same layer, offering parallel and complementary processing of the same inputs. In this hybrid-head module, attention heads provide high-resolution recall, while SSM heads facilitate efficient context summarization. Additionally, we introduce learnable meta tokens, which are prepended to prompts to store critical meta information, guiding subsequent tokens and alleviating the “forced-to-attend” burden associated with attention mechanisms. Thanks to the global context summarized by SSMs, the attention heads in our model can be further optimized through cross-layer key-value (KV) sharing and a mix of global and local attention, resulting in a compact cache size without compromising accuracy. Notably, Hymba achieves state-of-the-art performance among small LMs: Our Hymba-1.5B-Base model surpasses all sub-2B public models and even outperforms Llama-3.2-3B, achieving 1.32\% higher average accuracy, an 11.67$\times$ reduction in cache size, and 3.49$\times$ higher throughput. | hybrid model, language model | We propose Hymba, a family of small language models featuring a hybrid-head parallel architecture. | 1,217 | 2411.13676 |
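A heavily simplified sketch of the hybrid-head idea from the Hymba abstract above: an attention head and an SSM-like head process the same input in parallel, and their outputs are fused. The toy exponential-moving-average "state" is only a crude stand-in for a real SSM head, and the learnable meta tokens and KV-sharing optimizations are omitted; all names and constants here are our assumptions:

```python
import torch
import torch.nn as nn

class HybridHead(nn.Module):
    """Toy hybrid head: attention (high-resolution recall) fused in parallel
    with a running EMA state (a crude proxy for an SSM's context summary)."""
    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        self.decay = 0.9                           # fixed EMA decay, illustration only
        self.mix = nn.Parameter(torch.tensor(0.5)) # learnable fusion weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, dim)
        a, _ = self.attn(x, x, x)                  # attention branch
        s, summary = torch.zeros_like(x[:, 0]), []
        for t in range(x.shape[1]):                # sequential state update
            s = self.decay * s + (1 - self.decay) * x[:, t]
            summary.append(s)
        m = torch.stack(summary, dim=1)            # SSM-like summarization branch
        return self.mix * a + (1 - self.mix) * m   # parallel fusion of both heads

out = HybridHead(64)(torch.randn(2, 10, 64))       # output shape matches the input
```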
Severing Spurious Correlations with Data Pruning | https://openreview.net/forum?id=Bk13Qfu8Ru | [
"Varun Mulchandani",
"Jung-Eun Kim"
] | Spotlight | Deep neural networks have been shown to learn and rely on spurious correlations present in the data that they are trained on. Reliance on such correlations can cause these networks to malfunction when deployed in the real world, where these correlations may no longer hold. To overcome the learning of and reliance on such correlations, recent studies propose approaches that yield promising results. These works, however, study settings where the strength of the spurious signal is significantly greater than that of the core, invariant signal, making it easier to detect the presence of spurious features in individual training samples and allowing for further processing. In this paper, we identify new settings where the spurious signal is relatively weak, making it difficult to detect any spurious information while continuing to have catastrophic consequences. We also discover that spurious correlations are learned primarily due to only a handful of the samples containing the spurious feature, and develop a novel data pruning technique that identifies and prunes small subsets of the training data that contain these samples. Our proposed technique does not require inferred domain knowledge, information regarding the sample-wise presence or nature of spurious information, or human intervention. Finally, we show that such data pruning attains state-of-the-art performance on previously studied settings where spurious information is identifiable. | Spurious Correlations, Data Pruning | null | 1,207 | 2503.18258 |
CubeDiff: Repurposing Diffusion-Based Image Models for Panorama Generation | https://openreview.net/forum?id=M2SsqpxGtc | [
"Nikolai Kalischek",
"Michael Oechsle",
"Fabian Manhardt",
"Philipp Henzler",
"Konrad Schindler",
"Federico Tombari"
] | Spotlight | We introduce a novel method for generating 360° panoramas from text prompts or images. Our approach leverages recent advances in 3D generation by employing multi-view diffusion models to jointly synthesize the six faces of a cubemap. Unlike previous methods that rely on processing equirectangular projections or autoregressive generation, our method treats each face as a standard perspective image, simplifying the generation process and enabling the use of existing multi-view diffusion models. We demonstrate that these models can be adapted to produce high-quality cubemaps without requiring correspondence-aware attention layers. Our model allows for fine-grained text control, generates high resolution panorama images and generalizes well beyond its training set, whilst achieving state-of-the-art results, both qualitatively and quantitatively. | panorama generation, diffusion, multi-view | null | 1,203 | 2501.17162 |
Language Model Alignment in Multilingual Trolley Problems | https://openreview.net/forum?id=VEqPDZIDAh | [
"Zhijing Jin",
"Max Kleiman-Weiner",
"Giorgio Piatti",
"Sydney Levine",
"Jiarui Liu",
"Fernando Gonzalez Adauto",
"Francesco Ortu",
"András Strausz",
"Mrinmaya Sachan",
"Rada Mihalcea",
"Yejin Choi",
"Bernhard Schölkopf"
] | Spotlight | We evaluate the moral alignment of large language models (LLMs) with human preferences in multilingual trolley problems. Building on the Moral Machine experiment, which captures over 40 million human judgments across 200+ countries, we develop a cross-lingual corpus of moral dilemma vignettes in over 100 languages called MultiTP. This dataset enables the assessment of LLMs' decision-making processes in diverse linguistic contexts. Our analysis explores the alignment of 19 different LLMs with human judgments, capturing preferences across six moral dimensions: species, gender, fitness, status, age, and the number of lives involved. By correlating these preferences with the demographic distribution of language speakers and examining the consistency of LLM responses to various prompt paraphrasings, our findings provide insights into cross-lingual and ethical biases of LLMs and their intersection. We discover significant variance in alignment across languages, challenging the assumption of uniform moral reasoning in AI systems and highlighting the importance of incorporating diverse perspectives in AI ethics. The results underscore the need for further research on the integration of multilingual dimensions in responsible AI research to ensure fair and equitable AI interactions worldwide. | LLM alignment, moral evaluation, trolley problems, language model evaluation, AI alignment | We test the alignment of 19 LLMs with human preferences on Trolley Problems in 100+ different languages. | 1,191 | 2407.02273 |
Century: A Framework and Dataset for Evaluating Historical Contextualisation of Sensitive Images | https://openreview.net/forum?id=1KLBvrYz3V | [
"Canfer Akbulut",
"Kevin Robinson",
"Maribeth Rauh",
"Isabela Albuquerque",
"Olivia Wiles",
"Laura Weidinger",
"Verena Rieser",
"Yana Hasson",
"Nahema Marchal",
"Iason Gabriel",
"William Isaac",
"Lisa Anne Hendricks"
] | Spotlight | How do multi-modal generative models describe images of recent historical events and figures, whose legacies may be nuanced, multifaceted, or contested? This task necessitates not only accurate visual recognition, but also socio-cultural knowledge and cross-modal reasoning. To address this evaluation challenge, we introduce Century -- a novel dataset of sensitive historical images. This dataset consists of 1,500 images from recent history, created through an automated method combining knowledge graphs and language models with quality and diversity criteria created from the practices of museums and digital archives. We demonstrate through automated and human evaluation that this method produces a set of images that depict events and figures that are diverse across topics and represents all regions of the world.
We additionally propose a framework for evaluating historical contextualisation capabilities along the dimensions of accuracy, thoroughness, and objectivity. We demonstrate this approach by using Century to evaluate four foundation models, scoring performance using both automated and human evaluation. We find that historical contextualisation of sensitive images poses a significant challenge for modern multi-modal foundation models, and offer practical recommendations for how developers can use Century to evaluate improvements to models and applications. | historical, contextualisation, image, dataset, multimodal, VLM, evaluation | A dataset of sensitive historical images is curated and used to demonstrate historical contextualisation capabilities of SOTA multi-modal models. | 1,110 | null |
Determine-Then-Ensemble: Necessity of Top-k Union for Large Language Model Ensembling | https://openreview.net/forum?id=FDnZFpHmU4 | [
"Yuxuan Yao",
"Han Wu",
"Mingyang LIU",
"Sichun Luo",
"Xiongwei Han",
"Jie Liu",
"Zhijiang Guo",
"Linqi Song"
] | Spotlight | Large language models (LLMs) exhibit varying strengths and weaknesses across different tasks, prompting recent studies to explore the benefits of ensembling models to leverage their complementary advantages. However, existing LLM ensembling methods often overlook model compatibility and struggle with inefficient alignment of probabilities across the entire vocabulary. In this study, we empirically investigate the factors influencing ensemble performance, identifying model performance, vocabulary size, and response style as key determinants, revealing that compatibility among models is essential for effective ensembling. This analysis leads to the development of a simple yet effective model selection strategy that identifies compatible models. Additionally, we introduce the \textsc{Uni}on \textsc{T}op-$k$ \textsc{E}nsembling (\textsc{UniTE}), a novel approach that efficiently combines models by focusing on the union of the top-k tokens from each model, thereby avoiding the need for full vocabulary alignment and reducing computational overhead. Extensive evaluations across multiple benchmarks demonstrate that \textsc{UniTE} significantly enhances performance compared to existing methods, offering a more efficient framework for LLM ensembling. | Model ensembling, LLM | This study introduces a general model selection strategy for ensembling and proposes an efficient ensemble method that operates on the top-k candidate tokens. | 1,088 | null |
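A minimal sketch of one decoding step of the union-top-k idea described in the UniTE abstract above: score tokens over the union of each model's top-k set instead of aligning full vocabularies. The simple probability averaging is an assumption we make for illustration; the paper's exact aggregation rule may differ:

```python
from collections import defaultdict

def unite_step(model_topk: list[dict[str, float]]) -> str:
    """One ensembling step over the union of per-model top-k tokens."""
    union = set().union(*model_topk)               # union of the top-k token sets
    scores = defaultdict(float)
    for token in union:
        for dist in model_topk:
            scores[token] += dist.get(token, 0.0)  # absent token => zero prob
        scores[token] /= len(model_topk)           # average across models
    return max(scores, key=scores.get)

m1 = {"Paris": 0.6, "London": 0.2, "Rome": 0.1}    # model A's top-3
m2 = {"Paris": 0.5, "Lyon": 0.3, "London": 0.1}    # model B's top-3
print(unite_step([m1, m2]))  # -> "Paris"
```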
MathCoder2: Better Math Reasoning from Continued Pretraining on Model-translated Mathematical Code | https://openreview.net/forum?id=1Iuw1jcIrf | [
"Zimu Lu",
"Aojun Zhou",
"Ke Wang",
"Houxing Ren",
"Weikang Shi",
"Junting Pan",
"Mingjie Zhan",
"Hongsheng Li"
] | Spotlight | Code has been shown to be effective in enhancing the mathematical reasoning abilities of large language models due to its precision and accuracy. Previous works involving continued mathematical pretraining often include code that utilizes math-related packages, which are primarily designed for fields such as engineering, machine learning, signal processing, or module testing, rather than being directly focused on mathematical reasoning. In this paper, we introduce a novel method for generating mathematical code accompanied by corresponding reasoning steps for continued pretraining. Our approach begins with the construction of a high-quality mathematical continued pretraining dataset by incorporating math-related web data, code using mathematical packages, math textbooks, and synthetic data. Next, we construct reasoning steps by extracting LaTeX expressions, the conditions needed for the expressions, and the results of the expressions from the previously collected dataset. Based on this extracted information, we generate corresponding code to accurately capture the mathematical reasoning process. Appending the generated code to each reasoning step results in data consisting of paired natural language reasoning steps and their corresponding code. Combining this data with the original dataset results in a 19.2B-token high-performing mathematical pretraining corpus, which we name MathCode-Pile. Training several popular base models with this corpus significantly improves their mathematical abilities, leading to the creation of the MathCoder2 family of models. All of our data processing and training code is open-sourced, ensuring full transparency and easy reproducibility of the entire data collection and training pipeline. | large language model, mathematical reasoning, continued pretraining | null | 1,035 | 2410.08196 |
Training-Free Activation Sparsity in Large Language Models | https://openreview.net/forum?id=dGVZwyq5tV | [
"James Liu",
"Pragaash Ponnusamy",
"Tianle Cai",
"Han Guo",
"Yoon Kim",
"Ben Athiwaratkun"
] | Spotlight | Activation sparsity can enable practical inference speedups in large language models (LLMs) by reducing the compute and memory movement required for matrix multiplications during the forward pass.
However, existing methods face limitations that inhibit widespread adoption. Some approaches are tailored towards older models with ReLU-based sparsity, while others require extensive continued pre-training on up to hundreds of billions of tokens.
This paper describes TEAL (**T**raining-Fre**e** **A**ctivation Sparsity in **L**LMs), a simple training-free method that applies magnitude-based activation sparsity to hidden states throughout the entire model. TEAL achieves 40-50\% model-wide sparsity with minimal performance degradation across Llama-2, Llama-3, and Mistral families, with sizes varying from 7B to 70B. We improve existing sparse kernels and demonstrate wall-clock decoding speed-ups of up to 1.53× and 1.8× at 40\% and 50\% model-wide sparsity. TEAL is compatible with weight quantization, enabling further efficiency gains. | Large Language Models, Activation Sparsity, Efficiency | Training-free activation sparsity in LLMs, with a hardware-aware kernel to achieve speedup. | 1,021 | 2408.14690 |
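The magnitude-based activation sparsity described above can be sketched in a few lines: zero out the lowest-magnitude fraction of a hidden state before the matrix multiply. TEAL calibrates thresholds offline per activation distribution; computing the quantile on the fly here is our simplification:

```python
import torch

def magnitude_sparsify(hidden: torch.Tensor, sparsity: float = 0.5) -> torch.Tensor:
    """Zero entries whose magnitude falls below the `sparsity` quantile."""
    threshold = hidden.abs().float().quantile(sparsity)
    return hidden * (hidden.abs() >= threshold)

x = torch.randn(4, 4096)               # hidden states entering a linear layer
x_sparse = magnitude_sparsify(x, 0.5)
print((x_sparse == 0).float().mean())  # ~0.5: half the multiplies can be skipped
```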
Poison-splat: Computation Cost Attack on 3D Gaussian Splatting | https://openreview.net/forum?id=ExrEw8cVlU | [
"Jiahao Lu",
"Yifan Zhang",
"Qiuhong Shen",
"Xinchao Wang",
"Shuicheng YAN"
] | Spotlight | 3D Gaussian splatting (3DGS), known for its groundbreaking performance and efficiency, has become a dominant 3D representation and brought progress to many 3D vision tasks. However, in this work, we reveal a significant security vulnerability that has been largely overlooked in 3DGS: the computation cost of training 3DGS could be maliciously tampered with by poisoning the input data. By developing an attack named Poison-splat, we reveal a novel attack surface where the adversary can poison the input images to drastically increase the memory and computation time needed for 3DGS training, pushing the algorithm towards its worst-case computational complexity. In extreme cases, the attack can even consume all allocable memory, leading to a Denial-of-Service (DoS) that disrupts servers, resulting in practical damages to real-world 3DGS service vendors. Such a computation cost attack is achieved by addressing a bi-level optimization problem through three tailored strategies: attack objective approximation, proxy model rendering, and optional constrained optimization. These strategies not only ensure the effectiveness of our attack but also make it difficult to defend against with simple defensive measures. We hope the revelation of this novel attack surface can spark attention to this crucial yet overlooked vulnerability of 3DGS systems. Our code is available at https://github.com/jiahaolu97/poison-splat. | gaussian splatting, computation cost attack, energy-latency attack, data poisoning attack, 3D security, AI security | We reveal a severe security vulnerability of 3D Gaussian Splatting: the computation cost of training 3DGS (GPU consumption, training time) could be significantly manipulated by poisoning input data. | 968 | null |
Gap-Dependent Bounds for Q-Learning using Reference-Advantage Decomposition | https://openreview.net/forum?id=6tyPSkshtF | [
"Zhong Zheng",
"Haochen Zhang",
"Lingzhou Xue"
] | Spotlight | We study the gap-dependent bounds of two important algorithms for on-policy $Q$-learning for finite-horizon episodic tabular Markov Decision Processes (MDPs): UCB-Advantage (Zhang et al. 2020) and Q-EarlySettled-Advantage (Li et al. 2021). UCB-Advantage and Q-EarlySettled-Advantage improve upon the results based on Hoeffding-type bonuses and achieve the almost-optimal $\sqrt{T}$-type regret bound in the worst-case scenario, where $T$ is the total number of steps. However, the benign structures of the MDPs such as a strictly positive suboptimality gap can significantly improve the regret. While gap-dependent regret bounds have been obtained for $Q$-learning with Hoeffding-type bonuses, it remains an open question to establish gap-dependent regret bounds for $Q$-learning using variance estimators in their bonuses and reference-advantage decomposition for variance reduction. We develop a novel error decomposition framework to prove gap-dependent regret bounds of UCB-Advantage and Q-EarlySettled-Advantage that are logarithmic in $T$ and improve upon existing ones for $Q$-learning algorithms. Moreover, we establish the gap-dependent bound for the policy switching cost of UCB-Advantage and improve upon the corresponding bound for worst-case MDPs. To our knowledge, this paper presents the first gap-dependent regret analysis for $Q$-learning using variance estimators and reference-advantage decomposition and also provides the first gap-dependent analysis on policy switching cost for $Q$-learning. | Reinforcement Learning, Q-Learning, Regret | This paper analyzes the gap-dependent regrets and policy switching costs of two Q-Learning algorithms with variance reduction. | 953 | null |
Effective Interplay between Sparsity and Quantization: From Theory to Practice | https://openreview.net/forum?id=wJv4AIt4sK | [
"Simla Burcu Harma",
"Ayan Chakraborty",
"Elizaveta Kostenok",
"Danila Mishin",
"Dongho Ha",
"Babak Falsafi",
"Martin Jaggi",
"Ming Liu",
"Yunho Oh",
"Suvinay Subramanian",
"Amir Yazdanbakhsh"
] | Spotlight | The increasing size of deep neural networks (DNNs) necessitates effective model compression to reduce their computational and memory footprints. Sparsity and quantization are two prominent compression methods that have been shown to reduce DNNs' computational and memory footprints significantly while preserving model accuracy. However, how these two methods interact when combined together remains a key question for developers, as many tacitly assume that they are orthogonal, meaning that their combined use does not introduce additional errors beyond those introduced by each method independently. In this paper, we provide the first mathematical proof that sparsity and quantization are non-orthogonal. We corroborate these results with experiments spanning a range of large language models, including the OPT and LLaMA model families (with 125M to 8B parameters), and vision models like ViT and ResNet. We show that the order in which we apply these methods matters because applying quantization before sparsity may disrupt the relative importance of tensor elements, which may inadvertently remove significant elements from a tensor. More importantly, we show that even if applied in the correct order, the compounded errors from sparsity and quantization can significantly harm accuracy. Our findings extend to the efficient deployment of large models in resource-constrained compute platforms to reduce serving cost, offering insights into best practices for applying these compression methods to maximize hardware resource efficiency without compromising accuracy. | theory of compression, model compression, quantization, max-scaled numerical encoding, sparsity, unstructured sparsity, structured sparsity, N:M sparsity, large language models, magnitude pruning, post-training quantization, efficient inference | We mathematically analyze the interplay of sparsity and quantization, proving they are not orthogonal operations. This means their combined error is greater than the sum of their parts, especially due to quantization. | 942 | 2405.20935 |
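The order-dependence claim above is easy to see in a toy experiment with a max-scaled uniform quantizer and magnitude pruning (both toy stand-ins we assume for illustration, not the paper's exact operators): the two application orders yield different errors, and the compounded error exceeds either error alone:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal(10_000)

def quantize(w: np.ndarray, bits: int = 4) -> np.ndarray:
    """Toy max-scaled symmetric uniform quantizer."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def prune(w: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Magnitude pruning: zero the smallest-magnitude fraction of entries."""
    thr = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= thr, w, 0.0)

err = lambda approx: np.linalg.norm(W - approx)
print("quantize only:     ", err(quantize(W)))
print("prune only:        ", err(prune(W)))
print("prune -> quantize: ", err(quantize(prune(W))))  # recommended order
print("quantize -> prune: ", err(prune(quantize(W))))  # quantizing first distorts
                                                       # magnitudes before pruning
```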
One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation Using a Single Prompt | https://openreview.net/forum?id=cD1kl2QKv1 | [
"Tao Liu",
"Kai Wang",
"Senmao Li",
"Joost van de Weijer",
"Fahad Shahbaz Khan",
"Shiqi Yang",
"Yaxing Wang",
"Jian Yang",
"Ming-Ming Cheng"
] | Spotlight | Text-to-image generation models can create high-quality images from input prompts. However, they struggle to consistently preserve character identities across generations, a requirement for storytelling. Existing approaches to this problem typically require extensive training on large datasets or additional modifications to the original model architectures. This limits their applicability across different domains and diverse diffusion model configurations. In this paper, we first observe the inherent capability of language models, coined $\textit{context consistency}$, to comprehend identity through context with a single prompt. Drawing inspiration from the inherent $\textit{context consistency}$, we propose a novel $\textit{training-free}$ method for consistent text-to-image (T2I) generation, termed "One-Prompt-One-Story" ($\textit{1Prompt1Story}$). Our approach $\textit{1Prompt1Story}$ concatenates all prompts into a single input for T2I diffusion models, initially preserving character identities. We then refine the generation process using two novel techniques: $\textit{Singular-Value Reweighting}$ and $\textit{Identity-Preserving Cross-Attention}$, ensuring better alignment with the input description for each frame. In our experiments, we compare our method against various existing consistent T2I generation approaches to demonstrate its effectiveness, through quantitative metrics and qualitative assessments. Code is available at https://github.com/byliutao/1Prompt1Story. | diffusion model; consistent T2I image generation; storytelling | We propose a training-free approach named 1Prompt1Story for consistent text-to-image generation with a single concatenated prompt. Our method is built on the inherent context consistency property of language models. | 903 | null |
GrabS: Generative Embodied Agent for 3D Object Segmentation without Scene Supervision | https://openreview.net/forum?id=wXSshrxlP4 | [
"Zihui Zhang",
"Yafei YANG",
"Hongtao Wen",
"Bo Yang"
] | Spotlight | We study the hard problem of 3D object segmentation in complex point clouds without requiring human labels of 3D scenes for supervision. By relying on the similarity of pretrained 2D features or external signals such as motion to group 3D points as objects, existing unsupervised methods are usually limited to identifying simple objects like cars, or their segmented objects are often inferior due to the lack of objectness in pretrained features. In this paper, we propose a new two-stage pipeline called GrabS. The core concept of our method is to learn generative and discriminative object-centric priors as a foundation from object datasets in the first stage, and then design an embodied agent to learn to discover multiple objects by querying against the pretrained generative priors in the second stage. We extensively evaluate our method on two real-world datasets and a newly created synthetic dataset, demonstrating remarkable segmentation performance, clearly surpassing all existing unsupervised methods. | 3D scene object segmentation, unsupervised learning | null | 883 | null |
Scalable and Certifiable Graph Unlearning: Overcoming the Approximation Error Barrier | https://openreview.net/forum?id=pPyJyeLriR | [
"Lu Yi",
"Zhewei Wei"
] | Spotlight | Graph unlearning has emerged as a pivotal research area for ensuring privacy protection, given the widespread adoption of Graph Neural Networks (GNNs) in applications involving sensitive user data. Among existing studies, certified graph unlearning is distinguished by providing robust privacy guarantees. However, current certified graph unlearning methods are impractical for large-scale graphs because they necessitate the costly re-computation of graph propagation for each unlearning request. Although numerous scalable techniques have been developed to accelerate graph propagation for GNNs, their integration into certified graph unlearning remains uncertain as these scalable approaches introduce approximation errors into node embeddings. In contrast, certified graph unlearning demands bounded model error on exact node embeddings to maintain its certified guarantee.
To address this challenge, we present ScaleGUN, the first approach to scale certified graph unlearning to billion-edge graphs. ScaleGUN integrates the approximate graph propagation technique into certified graph unlearning, offering certified guarantees for three unlearning scenarios: node feature, edge and node unlearning.
Extensive experiments on real-world datasets demonstrate the efficiency and unlearning efficacy of ScaleGUN. Remarkably, ScaleGUN accomplishes $(\epsilon,\delta)=(1,10^{-4})$ certified unlearning on the billion-edge graph ogbn-papers100M in 20 seconds for a 5,000 random edge removal request -- of which only 5 seconds are required for updating the node embeddings -- compared to 1.91 hours for retraining and 1.89 hours for re-propagation. Our code is available at https://github.com/luyi256/ScaleGUN. | Machine Unlearning, Graph Neural Networks, Scalability | We propose a certifiable graph unlearning model that scales to billion-edge graphs by non-trivial theoretical analysis. | 881 | 2408.09212 |
Enhancing Pre-trained Representation Classifiability can Boost its Interpretability | https://openreview.net/forum?id=GjfIZan5jN | [
"Shufan Shen",
"Zhaobo Qi",
"Junshu Sun",
"Qingming Huang",
"Qi Tian",
"Shuhui Wang"
] | Spotlight | The visual representation of a pre-trained model prioritizes the classifiability on downstream tasks, while the widespread applications for pre-trained visual models have posed new requirements for representation interpretability. However, it remains unclear whether the pre-trained representations can achieve high interpretability and classifiability simultaneously. To answer this question, we quantify the representation interpretability by leveraging its correlation with the ratio of interpretable semantics within the representations. Given the pre-trained representations, only the interpretable semantics can be captured by interpretations, whereas the uninterpretable part leads to information loss. Based on this fact, we propose the Inherent Interpretability Score (IIS) that evaluates the information loss, measures the ratio of interpretable semantics, and quantifies the representation interpretability. In the evaluation of the representation interpretability with different classifiability, we surprisingly discover that the interpretability and classifiability are positively correlated, i.e., representations with higher classifiability provide more interpretable semantics that can be captured in the interpretations. This observation further supports two benefits to the pre-trained representations. First, the classifiability of representations can be further improved by fine-tuning with interpretability maximization. Second, with the classifiability improvement for the representations, we obtain predictions based on their interpretations with less accuracy degradation. The discovered positive correlation and corresponding applications show that practitioners can unify the improvements in interpretability and classifiability for pre-trained vision models. Codes are available at https://github.com/ssfgunner/IIS. | Representation interpretability, vision representations, image understanding | We find a positive correlation between representation interpretability and classifiability. | 850 | null |
TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters | https://openreview.net/forum?id=oQ4igHyh3N | [
"Haiyang Wang",
"Yue Fan",
"Muhammad Ferjad Naeem",
"Yongqin Xian",
"Jan Eric Lenssen",
"Liwei Wang",
"Federico Tombari",
"Bernt Schiele"
] | Spotlight | Transformers have become the predominant architecture in foundation models due to their excellent performance across various domains. However, the substantial cost of scaling these models remains a significant concern. This problem arises primarily from their dependence on a fixed number of parameters within linear projections. When architectural modifications (e.g., channel dimensions) are introduced, the entire model typically requires retraining from scratch. As model sizes continue growing, this strategy results in increasingly high computational costs and becomes unsustainable. To overcome this problem, we introduce Tokenformer, a natively scalable architecture that leverages the attention mechanism not only for computations among input tokens but also for interactions between tokens and model parameters, thereby enhancing architectural flexibility. By treating model parameters as tokens, we replace all the linear projections in Transformers with our token-parameter attention layer, where input tokens act as queries and model parameters as keys and values. This reformulation allows for progressive and efficient scaling without necessitating retraining from scratch. Our model scales from 124M to 1.4B parameters by incrementally adding new key-value parameter pairs, achieving performance comparable to Transformers trained from scratch while greatly reducing training costs. Code and models are available at {\color{red}\url{https://github.com/Haiyang-W/TokenFormer.git}} | Fully Attention-based Neural Network, Large Language Model, Model Scaling, Tokenized Model Parameters | Designing a fully attention-based neural network for efficient model scaling through treating model parameters as tokens. | 795 | 2410.23168 |
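A minimal sketch of the token-parameter attention described in the Tokenformer abstract above: input tokens act as queries against learnable key/value parameter tokens, so a linear projection becomes an attention layer whose capacity grows by appending key/value rows. The paper uses a modified normalization; the plain scaled softmax here is our simplification:

```python
import torch
import torch.nn as nn

class TokenParamAttention(nn.Module):
    """Input tokens attend to learnable parameter tokens (keys/values).
    Scaling the model = appending more key/value parameter rows, without
    retraining the rest from scratch."""
    def __init__(self, dim: int, num_param_tokens: int):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_param_tokens, dim) / dim ** 0.5)
        self.values = nn.Parameter(torch.randn(num_param_tokens, dim) / dim ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, dim)
        attn = torch.softmax(x @ self.keys.T / x.shape[-1] ** 0.5, dim=-1)
        return attn @ self.values

layer = TokenParamAttention(dim=64, num_param_tokens=256)
print(layer(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```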
Simple yet Effective Incomplete Multi-view Clustering: Similarity-level Imputation and Intra-view Hybrid-group Prototype Construction | https://openreview.net/forum?id=KijslFbfOL | [
"Shengju Yu",
"Zhibin Dong",
"Siwei Wang",
"Pei Zhang",
"Yi Zhang",
"Xinwang Liu",
"Naiyang Guan",
"Tiejun Li",
"Yiu-ming Cheung"
] | Spotlight | Most incomplete multi-view clustering (IMVC) methods typically choose to ignore the missing samples and only utilize observed unpaired samples to construct bipartite similarity. Moreover, they employ a single quantity of prototypes to extract the information of $\textbf{all}$ views. To eliminate these drawbacks, we present a simple yet effective IMVC approach, SIIHPC, in this work. It first transforms partial bipartition learning into the original sample form by virtue of the reconstruction concept to split out the observed similarity, and then loosens traditional non-negative constraints by regularizing samples to more freely characterize the similarity. Subsequently, it learns to recover the incomplete parts by utilizing the connection built between the similarity exclusive to each respective view and the consensus graph shared by all views. On this foundation, it further introduces a group of hybrid prototype quantities for each individual view to flexibly extract the data features belonging to each view itself. Accordingly, the resulting graphs have various scales and describe the overall similarity more comprehensively. It is worth mentioning that all of these are optimized in one unified learning framework, which makes it possible for them to reciprocally promote one another. Then, to effectively solve the formulated optimization problem, we design an auxiliary function with theoretically proven monotonically increasing properties. Finally, the clustering results are obtained by applying spectral grouping to the eigenvectors of the stacked multi-scale consensus similarity. Experimental results confirm the effectiveness of SIIHPC. | incomplete multi-view clustering, multi-view clustering, clustering | null | 778 | null |
Stem-OB: Generalizable Visual Imitation Learning with Stem-Like Convergent Observation through Diffusion Inversion | https://openreview.net/forum?id=xaYlO03tIk | [
"Kaizhe Hu",
"Zihang Rui",
"Yao He",
"Yuyao Liu",
"Pu Hua",
"Huazhe Xu"
] | Spotlight | Visual imitation learning methods demonstrate strong performance, yet they lack generalization when faced with visual input perturbations like variations in lighting and textures. This limitation hampers their practical application in real-world settings. To address this, we propose ***Stem-OB*** that leverages the inversion process of pretrained image diffusion models to suppress low-level visual differences while maintaining high-level scene structures. This image inversion process is akin to transforming the observation into a shared representation, from which other observations also stem. *Stem-OB* offers a simple yet effective plug-and-play solution that stands in contrast to data augmentation approaches. It demonstrates robustness to various unspecified appearance changes without the need for additional training. We provide theoretical insights and empirical results that validate the efficacy of our approach in simulated and real settings. *Stem-OB* shows an exceptionally significant improvement in real-world robotic tasks, where challenging light and appearance changes are present, with an average increase of **22.2%** in success rates compared to the best baseline. Please refer to [this link](https://stem-ob.github.io/) for more videos and details. | Robotics, Imitation Learning, Visual Imitation Learning, Robustness, Diffusion Model, Diffusion Inversion | null | 668 | null |
LiveBench: A Challenging, Contamination-Limited LLM Benchmark | https://openreview.net/forum?id=sKYHBTAxVa | [
"Colin White",
"Samuel Dooley",
"Manley Roberts",
"Arka Pal",
"Benjamin Feuer",
"Siddhartha Jain",
"Ravid Shwartz-Ziv",
"Neel Jain",
"Khalid Saifullah",
"Sreemanti Dey",
"Shubh-Agrawal",
"Sandeep Singh Sandha",
"Siddartha Venkat Naidu",
"Chinmay Hegde",
"Yann LeCun",
"Tom Goldstein",
"Willie Neiswanger",
"Micah Goldblum"
] | Spotlight | Test set contamination, wherein test data from a benchmark ends up in a newer model's training set, is a well-documented obstacle for fair LLM evaluation and can quickly render benchmarks obsolete. To mitigate this, many recent benchmarks crowdsource new prompts and evaluations from human or LLM judges; however, these can introduce significant biases, and break down when scoring hard questions. In this work, we introduce a new benchmark for LLMs designed to be resistant to both test set contamination and the pitfalls of LLM judging and human crowdsourcing. We release LiveBench, the first benchmark that (1) contains frequently-updated questions from recent information sources, (2) scores answers automatically according to objective ground-truth values, and (3) contains a wide variety of challenging tasks, spanning math, coding, reasoning, language, instruction following, and data analysis. To achieve this, LiveBench contains questions that are based on recently-released math competitions, arXiv papers, news articles, and datasets, and it contains harder, contamination-limited versions of tasks from previous benchmarks such as Big-Bench Hard, AMPS, and IFEval. We evaluate many prominent closed-source models, as well as dozens of open-source models ranging from 0.5B to 405B in size. LiveBench is difficult, with top models achieving below 70% accuracy. We release all questions, code, and model answers. Questions are added and updated on a monthly basis, and we release new tasks and harder versions of tasks over time so that LiveBench can distinguish between the capabilities of LLMs as they improve in the future. We welcome community engagement and collaboration for expanding the benchmark tasks and models. | large language models, benchmark | LiveBench is a difficult LLM benchmark consisting of contamination-limited tasks that employ verifiable ground truth answers on frequently-updated questions from recent information sources and procedural question generation techniques. | 646 | null |
Anti-Exposure Bias in Diffusion Models | https://openreview.net/forum?id=MtDd7rWok1 | [
"Junyu Zhang",
"Daochang Liu",
"Eunbyung Park",
"Shichao Zhang",
"Chang Xu"
] | Spotlight | Diffusion models (DMs) have achieved record-breaking performance in image generation tasks.
Nevertheless, in practice, the training-sampling discrepancy, caused by score estimation error and discretization error, limits the modeling ability of DMs, a phenomenon known as exposure bias.
To alleviate such exposure bias and further improve the generative performance, we put forward a prompt learning framework built upon a lightweight prompt prediction model.
Concretely, our model learns an anti-bias prompt for the generated sample at each sampling step, aiming to compensate for the exposure bias that arises.
Following this design philosophy, our framework rectifies the sampling trajectory to match the training trajectory, thereby reducing the divergence between the target data distribution and the modeling distribution.
To train the prompt prediction model, we simulate exposure bias by constructing training data and introduce a time-dependent weighting function for optimization.
Empirical results on various DMs demonstrate the superiority of our prompt learning framework across three benchmark datasets.
Importantly, the optimized prompt prediction model effectively improves image quality with only a 5\% increase in sampling overhead, which remains negligible. | Diffusion Models, Exposure Bias, Prompt Learning, Sampling Trajectory | null | 629 | null |
DynamicCity: Large-Scale 4D Occupancy Generation from Dynamic Scenes | https://openreview.net/forum?id=M7KyLjuN0A | [
"Hengwei Bian",
"Lingdong Kong",
"Haozhe Xie",
"Liang Pan",
"Yu Qiao",
"Ziwei Liu"
] | Spotlight | Urban scene generation has developed rapidly in recent years. However, existing methods primarily focus on generating static and single-frame scenes, overlooking the inherently dynamic nature of real-world driving environments. In this work, we introduce DynamicCity, a novel 4D occupancy generation framework capable of generating large-scale, high-quality dynamic 4D scenes with semantics. DynamicCity mainly consists of two key models. **1)** A VAE model for learning HexPlane as the compact 4D representation. Instead of using naive averaging operations, DynamicCity employs a novel **Projection Module** to effectively compress 4D features into six 2D feature maps for HexPlane construction, which significantly enhances HexPlane fitting quality (up to **12.56** mIoU gain). Furthermore, we utilize an **Expansion & Squeeze Strategy** to reconstruct 3D feature volumes in parallel, which improves both network training efficiency and reconstruction accuracy compared to naively querying each 3D point (up to **7.05** mIoU gain, **2.06x** training speedup, and **70.84\%** memory reduction). **2)** A DiT-based diffusion model for HexPlane generation. To make HexPlane feasible for DiT generation, a **Padded Rollout Operation** is proposed to reorganize all six feature planes of the HexPlane as a square 2D feature map. In particular, various conditions can be introduced in the diffusion or sampling process, supporting **versatile 4D generation applications**, such as trajectory- and command-driven generation, inpainting, and layout-conditioned generation. Extensive experiments on the CarlaSC and Waymo datasets demonstrate that DynamicCity significantly outperforms existing state-of-the-art 4D occupancy generation methods across multiple metrics. The code and models have been released to facilitate future research. | LiDAR Generation, Dynamic Scenes, 4D Generation | DynamicCity is a versatile 4D scene generation model that generates high-quality occupancy scenes from sensory driving data. | 552 | 2410.18084 |
Computational Explorations of Total Variation Distance | https://openreview.net/forum?id=xak8c9l1nu | [
"Arnab Bhattacharyya",
"Sutanu Gayen",
"Kuldeep S. Meel",
"Dimitrios Myrisiotis",
"A. Pavan",
"N. V. Vinodchandran"
] | Spotlight | We investigate some previously unexplored (or underexplored) computational aspects of total variation (TV) distance.
First, we give a simple deterministic polynomial-time algorithm for checking equivalence between mixtures of product distributions, over arbitrary alphabets.
This corresponds to a special case, whereby the TV distance between the two distributions is zero.
Second, we prove that unless $\mathsf{NP} \subseteq \mathsf{RP}$ it is impossible to efficiently estimate the TV distance between arbitrary Ising models, even in a bounded-error randomized setting. | total variation distance, TV distance, mixtures of products, equivalence checking, Ising models, computational complexity, FPRAS | null | 525 | 2412.10370 |
Reti-Diff: Illumination Degradation Image Restoration with Retinex-based Latent Diffusion Model | https://openreview.net/forum?id=kxFtMHItrf | [
"Chunming He",
"Chengyu Fang",
"Yulun Zhang",
"Longxiang Tang",
"Jinfa Huang",
"Kai Li",
"Zhenhua Guo",
"Xiu Li",
"Sina Farsiu"
] | Spotlight | Illumination degradation image restoration (IDIR) techniques aim to improve the visibility of degraded images and mitigate the adverse effects of deteriorated illumination. Among these algorithms, diffusion-based models (DM) have shown promising performance but are often burdened by heavy computational demands and pixel misalignment issues when predicting the image-level distribution. To tackle these problems, we propose to leverage DM within a compact latent space to generate concise guidance priors and introduce a novel solution called Reti-Diff for the IDIR task. Specifically, Reti-Diff comprises two significant components: the Retinex-based latent DM (RLDM) and the Retinex-guided transformer (RGformer). RLDM is designed to acquire Retinex knowledge, extracting reflectance and illumination priors to facilitate detailed reconstruction and illumination correction. RGformer subsequently utilizes these compact priors to guide the decomposition of image features into their respective reflectance and illumination components. Following this, RGformer further enhances and consolidates these decomposed features, resulting in the production of refined images with consistent content and robustness to handle complex degradation scenarios. Extensive experiments demonstrate that Reti-Diff outperforms existing methods on three IDIR tasks, as well as downstream applications. | Illumination degradation image restoration, Latent diffusion model, Retinex theory | The first latent diffusion model-based methods with strong generalizability in illumination degradation image restoration problems and promising performance in downstream tasks | 517 | null |
MaRS: A Fast Sampler for Mean Reverting Diffusion based on ODE and SDE Solvers | https://openreview.net/forum?id=yVeNBxwL5W | [
"Ao Li",
"Wei Fang",
"Hongbo Zhao",
"Le Lu",
"Ge Yang",
"Minfeng Xu"
] | Spotlight | In applications of diffusion models, controllable generation is of practical significance, but is also challenging. Current methods for controllable generation primarily focus on modifying the score function of diffusion models, while Mean Reverting (MR) Diffusion directly modifies the structure of the stochastic differential equation (SDE), making the incorporation of image conditions simpler and more natural. However, current training-free fast samplers are not directly applicable to MR Diffusion. And thus MR Diffusion requires hundreds of NFEs (number of function evaluations) to obtain high-quality samples. In this paper, we propose a new algorithm named MaRS (MR Sampler) to reduce the sampling NFEs of MR Diffusion. We solve the reverse-time SDE and the probability flow ordinary differential equation (PF-ODE) associated with MR Diffusion, and derive semi-analytical solutions. The solutions consist of an analytical function and an integral parameterized by a neural network. Based on this solution, we can generate high-quality samples in fewer steps. Our approach does not require training and supports all mainstream parameterizations, including noise prediction, data prediction and velocity prediction. Extensive experiments demonstrate that MR Sampler maintains high sampling quality with a speedup of 10 to 20 times across ten different image restoration tasks. Our algorithm accelerates the sampling procedure of MR Diffusion, making it more practical in controllable generation. | Fast Sampler, Mean Reverting Diffusion | We propose a fast sampler for Mean Reverting Diffusion based on both ODE and SDE solvers. | 514 | 2502.07856 |
SRSA: Skill Retrieval and Adaptation for Robotic Assembly Tasks | https://openreview.net/forum?id=RInisw1yin | [
"Yijie Guo",
"Bingjie Tang",
"Iretiayo Akinola",
"Dieter Fox",
"Abhishek Gupta",
"Yashraj Narang"
] | Spotlight | Enabling robots to learn novel tasks in a data-efficient manner is a long-standing challenge. Common strategies involve carefully leveraging prior experiences, especially transition data collected on related tasks. Although much progress has been made for general pick-and-place manipulation, far fewer studies have investigated contact-rich assembly tasks, where precise control is essential. We introduce SRSA (Skill Retrieval and Skill Adaptation), a novel framework designed to address this problem by utilizing a pre-existing skill library containing policies for diverse assembly tasks. The challenge lies in identifying which skill from the library is most relevant for fine-tuning on a new task. Our key hypothesis is that skills showing higher zero-shot success rates on a new task are better suited for rapid and effective fine-tuning on that task. To this end, we propose to predict the transfer success for all skills in the skill library on a novel task, and then use this prediction to guide the skill retrieval process. We establish a framework that jointly captures features of object geometry, physical dynamics, and expert actions to represent the tasks, allowing us to efficiently learn the transfer success predictor. Extensive experiments demonstrate that SRSA significantly outperforms the leading baseline. When retrieving and fine-tuning skills on unseen tasks, SRSA achieves a 19% relative improvement in success rate, exhibits 2.6x lower standard deviation across random seeds, and requires 2.4x fewer transition samples to reach a satisfactory success rate, compared to the baseline. In a continual learning setup, SRSA efficiently learns policies for new tasks and incorporates them into the skill library, enhancing future policy learning. Furthermore, policies trained with SRSA in simulation achieve a 90% mean success rate when deployed in the real world. Please visit our project webpage https://srsa2024.github.io/. | Robotic Assembly Tasks; Skill Retrieval; Skill Adaptation; Sim-to-real Transfer; Reinforcement Learning Fine-tuning | We introduce SRSA, a novel pipeline that retrieves relevant skills from a pre-existing skill library and adapts them to efficiently solve new robotic assembly tasks. | 510 | 2503.04538 |
SVDQuant: Absorbing Outliers by Low-Rank Component for 4-Bit Diffusion Models | https://openreview.net/forum?id=vWR3KuiQur | [
"Muyang Li",
"Yujun Lin",
"Zhekai Zhang",
"Tianle Cai",
"Xiuyu Li",
"Junxian Guo",
"Enze Xie",
"Chenlin Meng",
"Jun-Yan Zhu",
"Song Han"
] | Spotlight | Diffusion models can effectively generate high-quality images. However, as they scale, rising memory demands and higher latency pose substantial deployment challenges. In this work, we aim to accelerate diffusion models by quantizing their weights and activations to 4 bits. At such an aggressive level, both weights and activations are highly sensitive, where existing post-training quantization methods like smoothing become insufficient. To overcome this limitation, we propose *SVDQuant*, a new 4-bit quantization paradigm. Different from smoothing, which redistributes outliers between weights and activations, our approach *absorbs* these outliers using a low-rank branch. We first consolidate the outliers by shifting them from activations to weights. Then, we use a high-precision, low-rank branch to take in the weight outliers with Singular Value Decomposition (SVD), while a low-bit quantized branch handles the residuals. This process eases the quantization on both sides. However, naively running the low-rank branch independently incurs significant overhead due to extra data movement of activations, negating the quantization speedup. To address this, we co-design an inference engine *Nunchaku* that fuses the kernels of the low-rank branch into those of the low-bit branch to cut off redundant memory access. It can also seamlessly support off-the-shelf low-rank adapters (LoRAs) without re-quantization. Extensive experiments on SDXL, PixArt-$\Sigma$, and FLUX.1 validate the effectiveness of SVDQuant in preserving image quality. We reduce the memory usage for the 12B FLUX.1 models by 3.5×, achieving 3.0× speedup over the 4-bit weight-only quantization (W4A16) baseline on the 16GB laptop 4090 GPU with INT4 precision. On the latest RTX 5090 desktop with Blackwell architecture, we achieve a 3.1× speedup compared to the W4A16 model using NVFP4 precision. Our quantization library and inference engine are available at https://github.com/mit-han-lab/deepcompressor/ and https://github.com/mit-han-lab/nunchaku/, correspondingly. | Quantization, Diffusion Models, Efficiency, Acceleration | 4-bit Post-Training Quantization for Diffusion Models | 474 | null |
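The low-rank-plus-quantized decomposition described in the SVDQuant abstract above can be sketched as follows: the top singular components of the weight are kept in high precision (absorbing outliers), and only the residual is quantized to 4 bits. The per-tensor symmetric quantizer is a toy stand-in, and the activation-to-weight outlier shifting and fused kernels of the real system are omitted:

```python
import torch

def svdquant_decompose(W: torch.Tensor, rank: int = 32, bits: int = 4):
    """Split W into a high-precision low-rank branch plus a low-bit residual."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    L = U[:, :rank] * S[:rank]   # (out, rank): high precision, absorbs outliers
    R = Vh[:rank]                # (rank, in)
    residual = W - L @ R         # smaller dynamic range -> easier to quantize
    scale = residual.abs().max() / (2 ** (bits - 1) - 1)
    q = torch.clamp(torch.round(residual / scale),
                    -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return L, R, q, scale

def svdquant_forward(x, L, R, q, scale):
    return (x @ R.T) @ L.T + x @ (q * scale).T  # low-rank + quantized branch

W, x = torch.randn(512, 512), torch.randn(8, 512)
L, R, q, scale = svdquant_decompose(W)
ref = x @ W.T
print(((svdquant_forward(x, L, R, q, scale) - ref).norm() / ref.norm()).item())
```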
DenseMatcher: Learning 3D Semantic Correspondence for Category-Level Manipulation from a Single Demo | https://openreview.net/forum?id=8oFvUBvF1u | [
"Junzhe Zhu",
"Yuanchen Ju",
"Junyi Zhang",
"Muhan Wang",
"Zhecheng Yuan",
"Kaizhe Hu",
"Huazhe Xu"
] | Spotlight | Dense 3D correspondence can enhance robotic manipulation by enabling the generalization of spatial, functional, and dynamic information from one object to an unseen counterpart. Compared to shape correspondence, semantic correspondence is more effective in generalizing across different object categories. To this end, we present DenseMatcher, a method capable of computing 3D correspondences between in-the-wild objects that share similar structures. DenseMatcher first computes vertex features by projecting multiview 2D features onto meshes and refining them with a 3D network, and subsequently finds dense correspondences with the obtained features using functional maps. In addition, we craft the first 3D matching dataset that contains colored object meshes across diverse categories. We demonstrate the downstream effectiveness of DenseMatcher in (i) robotic manipulation, where it achieves cross-instance and cross-category generalization on long-horizon complex manipulation tasks from observing only one demonstration; (ii) zero-shot color mapping between digital assets, where appearance can be transferred between different objects with relatable geometry. More details and demonstrations can be found at https://tea-lab.github.io/DenseMatcher/. | robotics, correspondence, computer vision, 3D vision | We develop a dataset and a model for dense 3D correspondence on colored meshes, and perform robotic manipulation and color transfer experiments. | 392 | 2412.05268 |
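The first stage the DenseMatcher abstract describes, projecting multiview 2D features onto mesh vertices, can be sketched as a grid-sample-and-average over the views in which each vertex is visible. All shapes and the mean-pooling rule below are assumptions for illustration; the paper's 3D refinement network and functional-map matching are not shown:

```python
import torch
import torch.nn.functional as F

def project_features_to_vertices(feats_2d: torch.Tensor,
                                 uv: torch.Tensor,
                                 visibility: torch.Tensor) -> torch.Tensor:
    """Average multiview 2D features onto mesh vertices.

    feats_2d:   (views, C, H, W) per-view feature maps
    uv:         (views, N_verts, 2) vertex projections in [-1, 1] image coordinates
    visibility: (views, N_verts) bool mask, True where the vertex is visible
    """
    grid = uv.unsqueeze(1)                          # (views, 1, N, 2) for grid_sample
    sampled = F.grid_sample(feats_2d, grid,
                            align_corners=False)    # (views, C, 1, N)
    sampled = sampled.squeeze(2).permute(0, 2, 1)   # (views, N, C)
    mask = visibility.unsqueeze(-1).float()
    summed = (sampled * mask).sum(dim=0)            # (N, C), only visible views count
    counts = mask.sum(dim=0).clamp(min=1.0)
    return summed / counts                          # per-vertex mean over visible views
```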
LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression | https://openreview.net/forum?id=PpYy0dR3Qw | [
"Laurent Condat",
"Arto Maranjyan",
"Peter Richtárik"
] | Spotlight | In $D$istributed optimization and $L$earning, and even more in the modern framework of federated learning, communication, which is slow and costly, is critical. We introduce LoCoDL, a communication-efficient algorithm that leverages the two popular and effective techniques of $Lo$cal training, which reduces the communication frequency, and $Co$mpression, in which short bitstreams are sent instead of full-dimensional vectors of floats. LoCoDL works with a large class of unbiased compressors that includes widely-used sparsification and quantization methods. LoCoDL provably benefits from local training and compression and enjoys a doubly-accelerated communication complexity, with respect to the condition number of the functions and the model dimension, in the general heterogeneous regime with strongly convex functions. This is confirmed in practice, with LoCoDL outperforming existing algorithms. | distributed optimization, local training, compression, communication-efficient algorithm, federated learning | null | 363 | 2403.04348 |
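The two ingredients named in the LoCoDL abstract, local training and unbiased compression, can be illustrated generically. Below is a minimal sketch of one client round using rand-k sparsification, one of the unbiased compressors the abstract mentions; it shows the generic mechanics only, not LoCoDL's actual variance-reduced update:

```python
import numpy as np

def rand_k(v: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """Unbiased rand-k sparsification: keep k random coordinates, rescale by d/k."""
    d = v.size
    out = np.zeros_like(v)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = v[idx] * (d / k)   # rescaling makes E[rand_k(v)] = v
    return out

def local_round(x: np.ndarray, grad_fn, lr=0.1, local_steps=5, k=10, rng=None):
    """One client round: several local gradient steps, then a compressed delta."""
    if rng is None:
        rng = np.random.default_rng()
    x_local = x.copy()
    for _ in range(local_steps):      # local training: no communication during these steps
        x_local -= lr * grad_fn(x_local)
    delta = x_local - x
    return rand_k(delta, k, rng)      # only a short sparse message goes over the network
```

Local steps cut the communication frequency and the compressor shortens each message, which together yield the communication savings the abstract reports.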
LeFusion: Controllable Pathology Synthesis via Lesion-Focused Diffusion Models | https://openreview.net/forum?id=3b9SKkRAKw | [
"Hantao Zhang",
"Yuhe Liu",
"Jiancheng Yang",
"Shouhong Wan",
"Xinyuan Wang",
"Wei Peng",
"Pascal Fua"
] | Spotlight | Patient data from real-world clinical practice often suffers from data scarcity and long-tail imbalances, leading to biased outcomes or algorithmic unfairness. This study addresses these challenges by generating lesion-containing image-segmentation pairs from lesion-free images. Previous efforts in medical imaging synthesis have struggled with separating lesion information from background, resulting in low-quality backgrounds and limited control over the synthetic output. Inspired by diffusion-based image inpainting, we propose LeFusion, a lesion-focused diffusion model. By redesigning the diffusion learning objectives to focus on lesion areas, we simplify the learning process and improve control over the output while preserving high-fidelity backgrounds by integrating forward-diffused background contexts into the reverse diffusion process. Additionally, we tackle two major challenges in lesion texture synthesis: 1) multi-peak and 2) multi-class lesions. We introduce two effective strategies: histogram-based texture control and multi-channel decomposition, enabling the controlled generation of high-quality lesions in difficult scenarios. Furthermore, we incorporate lesion mask diffusion, allowing control over lesion size, location, and boundary, thus increasing lesion diversity. Validated on 3D cardiac lesion MRI and lung nodule CT datasets, LeFusion-generated data significantly improves the performance of state-of-the-art segmentation models, including nnUNet and SwinUNETR. | data synthesis, diffusion models, cardiac MRI, lung nodule CT, segmentation | We propose LeFusion, a lesion-focused diffusion model that synthesizes diverse lesion image-mask pairs from lesion-free images, enabling controllable multi-peak and multi-class lesion generation, significantly improving segmentation models. | 361 | 2403.14066 |
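The background-preservation mechanism in the LeFusion abstract, integrating forward-diffused background contexts into the reverse diffusion process, follows the familiar inpainting pattern of blending a generated lesion region with a forward-noised background. Below is a minimal sketch under DDPM-style assumptions; `denoise_step` and the schedule indexing are hypothetical stand-ins, and the paper's lesion-focused training objective, texture controls, and mask diffusion are not shown:

```python
import torch

def forward_diffuse(x0: torch.Tensor, alpha_bar_t: torch.Tensor) -> torch.Tensor:
    """DDPM forward process q(x_t | x_0): scale the clean image and add Gaussian noise."""
    noise = torch.randn_like(x0)
    return alpha_bar_t.sqrt() * x0 + (1 - alpha_bar_t).sqrt() * noise

def lesion_focused_step(x_t, x0_background, mask, denoise_step, t, alpha_bars):
    """One reverse step that generates content only inside the lesion mask.

    denoise_step(x_t, t) -> x_{t-1}: the learned reverse process (assumed callable).
    alpha_bars: cumulative noise schedule indexed by timestep; assumes t >= 1.
    mask: 1 inside the lesion region, 0 in the background.
    """
    x_prev_gen = denoise_step(x_t, t)                              # model fills the lesion region
    x_prev_bg = forward_diffuse(x0_background, alpha_bars[t - 1])  # background stays on the forward path
    return mask * x_prev_gen + (1 - mask) * x_prev_bg              # generated lesion, faithful background
```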
Beyond Next Token Prediction: Patch-Level Training for Large Language Models | https://openreview.net/forum?id=dDpB23VbVa | [
"Chenze Shao",
"Fandong Meng",
"Jie Zhou"
] | Spotlight | The prohibitive training costs of Large Language Models (LLMs) have emerged as a significant bottleneck in the development of next-generation LLMs. In this paper, we show that it is possible to significantly reduce the training costs of LLMs without sacrificing their performance. Specifically, we introduce patch-level training for LLMs, in which multiple tokens are aggregated into a unit of higher information density, referred to as a `patch', to serve as the fundamental text unit for training LLMs. During patch-level training, we feed the language model shorter sequences of patches and train it to predict the next patch, thereby processing the majority of the training data at a significantly reduced cost. Following this, the model continues token-level training on the remaining training data to align with the inference mode. Experiments on a diverse range of models (370M-2.7B parameters) demonstrate that patch-level training can reduce the overall training costs to 0.5$\times$, without compromising the model performance compared to token-level training. Source code: \url{https://github.com/shaochenze/PatchTrain}. | large language models, patch-level training | This paper introduces patch-level training to reduce the number of text units for training LLMs, where every consecutive K tokens are aggregated into a patch unit. | 276 | null |
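The mechanics in the patch-level training abstract, aggregating every K consecutive tokens into a patch and training on next-patch prediction over the resulting shorter sequence, can be sketched as follows. Mean pooling and the MSE objective below are simplified stand-ins for the paper's actual aggregation and next-patch loss, and `model` is assumed to map a sequence of patch embeddings to same-shaped predictions:

```python
import torch
import torch.nn as nn

def tokens_to_patches(token_embeds: torch.Tensor, K: int) -> torch.Tensor:
    """Aggregate every K consecutive token embeddings into one patch embedding.

    token_embeds: (batch, seq_len, dim), with seq_len assumed divisible by K.
    """
    b, t, d = token_embeds.shape
    return token_embeds.view(b, t // K, K, d).mean(dim=2)   # (batch, seq_len // K, dim)

def patch_level_loss(model: nn.Module, token_embeds: torch.Tensor, K: int) -> torch.Tensor:
    """Next-patch prediction on a K-times shorter sequence, hence the reduced cost."""
    patches = tokens_to_patches(token_embeds, K)
    pred = model(patches[:, :-1])                # predict patch i+1 from patches <= i
    return nn.functional.mse_loss(pred, patches[:, 1:])
```

After this cheap phase, the abstract's recipe switches to ordinary token-level training on the remaining data so the model matches the token-by-token inference mode.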
LOKI: A Comprehensive Synthetic Data Detection Benchmark using Large Multimodal Models | https://openreview.net/forum?id=z8sxoCYgmd | [
"Junyan Ye",
"Baichuan Zhou",
"Zilong Huang",
"Junan Zhang",
"Tianyi Bai",
"Hengrui Kang",
"Jun He",
"Honglin Lin",
"Zihao Wang",
"Tong Wu",
"Zhizheng Wu",
"Yiping Chen",
"Dahua Lin",
"Conghui He",
"Weijia Li"
] | Spotlight | With the rapid development of AI-generated content, the future internet may be inundated with synthetic data, making the discrimination of authentic and credible multimodal data increasingly challenging. Synthetic data detection has thus garnered widespread attention, and the performance of large multimodal models (LMMs) in this task has attracted significant interest. LMMs can provide natural language explanations for their authenticity judgments, enhancing the explainability of synthetic content detection. Simultaneously, the task of distinguishing between real and synthetic data effectively tests the perception, knowledge, and reasoning capabilities of LMMs. In response, we introduce LOKI, a novel benchmark designed to evaluate the ability of LMMs to detect synthetic data across multiple modalities. LOKI encompasses video, image, 3D, text, and audio modalities, comprising 18K carefully curated questions across 26 subcategories with clear difficulty levels. The benchmark includes coarse-grained judgment and multiple-choice questions, as well as fine-grained anomaly selection and explanation tasks, allowing for a comprehensive analysis of LMMs. We evaluated 22 open-source LMMs and 6 closed-source models on LOKI, highlighting their potential as synthetic data detectors and also revealing some limitations in the development of LMM capabilities. More information about LOKI can be found at https://opendatalab.github.io/LOKI/. | LMMs;Deepfake;Multimodality | A Comprehensive Synthetic Data Detection Benchmark using Large Multimodal Models | 264 | 2410.09732 |
Recognize Any Surgical Object: Unleashing the Power of Weakly-Supervised Data | https://openreview.net/forum?id=iuxaCU3DI7 | [
"Jiajie Li",
"Brian R Quaranto",
"Chenhui Xu",
"Ishan Mishra",
"Ruiyang Qin",
"Dancheng Liu",
"Peter C W Kim",
"Jinjun Xiong"
] | Spotlight | We present RASO, a foundation model designed to Recognize Any Surgical Object, offering robust open-set recognition capabilities across a broad range of surgical procedures and object classes, in both surgical images and videos. RASO leverages a novel weakly-supervised learning framework that generates tag-image-text pairs automatically from large-scale unannotated surgical lecture videos, significantly reducing the need for manual annotations. Our scalable data generation pipeline gathers 2,200 surgical procedures and produces 3.6 million tag annotations across 2,066 unique surgical tags. Our experiments show that RASO achieves improvements of 2.9 mAP, 4.5 mAP, 10.6 mAP, and 7.2 mAP on four standard surgical benchmarks, respectively, in zero-shot settings, and surpasses state-of-the-art models in supervised surgical action recognition tasks. We will open-source our code, model, and dataset to facilitate further research. | Image Recognition, Vision-Language Pretraining, Image Tagging, Medical Imaging, Surgery | null | 119 | 2501.15326 |