Datasets:

Modalities: Tabular, Text
Formats: parquet
Libraries: Datasets, pandas
Dataset columns (from the dataset viewer):

| Column | Type | Range / distinct values |
|---|---|---|
| bibtex_url | null | – |
| proceedings | string | lengths 42–42 |
| bibtext | string | lengths 197–792 |
| abstract | string | lengths 303–3.45k |
| title | string | lengths 10–159 |
| authors | sequence | lengths 1–28 |
| id | string | 44 distinct values |
| type | string | 16 distinct values |
| arxiv_id | string | lengths 0–10 |
| GitHub | sequence | lengths 1–1 |
| paper_page | string | 444 distinct values |
| n_linked_authors | int64 | -1 to 9 |
| upvotes | int64 | -1 to 42 |
| num_comments | int64 | -1 to 13 |
| n_authors | int64 | -1 to 92 |
| paper_page_exists_pre_conf | int64 | 0 to 1 |
| Models | sequence | lengths 0–100 |
| Datasets | sequence | lengths 0–11 |
| Spaces | sequence | lengths 0–100 |
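Given the parquet format and the Datasets/pandas libraries listed above, a minimal, hedged loading sketch follows; the repository id and the parquet file path are placeholders, not this dataset's actual identifiers.

```python
# Hedged sketch: loading a parquet-backed Hugging Face dataset with either
# `datasets` or pandas. The repo id and file path below are placeholders.
from datasets import load_dataset
import pandas as pd

ds = load_dataset("org-name/neurips-2023-metadata", split="train")  # placeholder repo id
print(ds.column_names)  # e.g. ["proceedings", "bibtext", "abstract", "title", ...]

# Equivalent pandas route, reading a parquet shard directly (path is illustrative).
df = pd.read_parquet("data/train-00000-of-00001.parquet")
print(df[["title", "type", "upvotes"]].head())
```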
null
https://openreview.net/forum?id=zyhxRc9bew
@inproceedings{ sun2023what, title={What is Flagged in Uncertainty Quantification? Latent Density Models for Uncertainty Categorization}, author={Hao Sun and Boris van Breugel and Jonathan Crabb{\'e} and Nabeel Seedat and Mihaela van der Schaar}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zyhxRc9bew} }
Uncertainty quantification (UQ) is essential for creating trustworthy machine learning models. Recent years have seen a steep rise in UQ methods that can flag suspicious examples; however, it is often unclear what exactly these methods identify. In this work, we propose a framework for categorizing uncertain examples flagged by UQ methods. We introduce the confusion density matrix---a kernel-based approximation of the misclassification density---and use this to categorize suspicious examples identified by a given uncertainty method into three classes: out-of-distribution (OOD) examples, boundary (Bnd) examples, and examples in regions of high in-distribution misclassification (IDM). Through extensive experiments, we show that our framework provides a new and distinct perspective for assessing differences between uncertainty quantification methods, thereby forming a valuable assessment benchmark.
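A minimal, hedged sketch of the latent-density idea: sort flagged points into OOD/Bnd/IDM using kernel density estimates. The thresholds, bandwidth, and the exact confusion-density construction are simplifications, not the paper's definitions.

```python
# Hedged sketch: kernel-density-based categorization of flagged examples into
# OOD / IDM / boundary regions. Inputs are NumPy arrays of latent features.
from sklearn.neighbors import KernelDensity

def categorize(z_flagged, z_train, y_train, y_train_pred, bandwidth=0.5,
               ood_thresh=-10.0, idm_thresh=-5.0):
    kde_all = KernelDensity(bandwidth=bandwidth).fit(z_train)
    mis = z_train[y_train != y_train_pred]                  # misclassified training points
    kde_mis = KernelDensity(bandwidth=bandwidth).fit(mis)

    labels = []
    for z in z_flagged:
        z = z.reshape(1, -1)
        if kde_all.score_samples(z)[0] < ood_thresh:         # far from all training data
            labels.append("OOD")
        elif kde_mis.score_samples(z)[0] > idm_thresh:       # dense in misclassified region
            labels.append("IDM")
        else:
            labels.append("Bnd")                             # otherwise: near a decision boundary
    return labels
```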
What is Flagged in Uncertainty Quantification? Latent Density Models for Uncertainty Categorization
[ "Hao Sun", "Boris van Breugel", "Jonathan Crabbé", "Nabeel Seedat", "Mihaela van der Schaar" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zyZkaqNnpa
@inproceedings{ puli2023dont, title={Don{\textquoteright}t blame Dataset Shift! Shortcut Learning due to Gradients and Cross Entropy}, author={Aahlad Manas Puli and Lily H Zhang and Yoav Wald and Rajesh Ranganath}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zyZkaqNnpa} }
Common explanations for shortcut learning assume that the shortcut improves prediction only under the training distribution. Thus, models trained in the typical way by minimizing log-loss using gradient descent, which we call default-ERM, should utilize the shortcut. However, even when the stable feature determines the label in the training distribution and the shortcut does not provide any additional information, like in perception tasks, default-ERM exhibits shortcut learning. Why are such solutions preferred when the loss can be driven to zero using the stable feature alone? By studying a linear perception task, we show that default-ERM’s preference for maximizing the margin, even without overparameterization, leads to models that depend more on the shortcut than the stable feature. This insight suggests that default-ERM’s implicit inductive bias towards max-margin may be unsuitable for perception tasks. Instead, we consider inductive biases toward uniform margins. We show that uniform margins guarantee sole dependence on the perfect stable feature in the linear perception task and suggest alternative loss functions, termed margin control (MARG-CTRL), that encourage uniform-margin solutions. MARG-CTRL techniques mitigate shortcut learning on a variety of vision and language tasks, showing that changing inductive biases can remove the need for complicated shortcut-mitigating methods in perception tasks.
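A hedged sketch of a loss that encourages uniform margins: cross-entropy plus a penalty pulling every example's logit margin toward a common target. This is one illustrative variant, not necessarily any of the paper's MARG-CTRL losses.

```python
# Hedged sketch: cross-entropy plus a uniform-margin penalty that discourages
# the max-margin drift described above. Illustrative, not the paper's exact loss.
import torch
import torch.nn.functional as F

def margin_control_loss(logits, targets, target_margin=1.0, lam=0.1):
    ce = F.cross_entropy(logits, targets)
    true_logit = logits.gather(1, targets[:, None]).squeeze(1)
    mask = F.one_hot(targets, num_classes=logits.size(1)).bool()
    other_max = logits.masked_fill(mask, float("-inf")).max(dim=1).values
    margin = true_logit - other_max                       # per-example logit margin
    uniform_penalty = ((margin - target_margin) ** 2).mean()
    return ce + lam * uniform_penalty
```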
Don’t blame Dataset Shift! Shortcut Learning due to Gradients and Cross Entropy
[ "Aahlad Manas Puli", "Lily H Zhang", "Yoav Wald", "Rajesh Ranganath" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zuXyQsXVLF
@inproceedings{ xu2023enhancing, title={Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization}, author={Xilie Xu and Jingfeng Zhang and Feng Liu and Masashi Sugiyama and Mohan Kankanhalli}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zuXyQsXVLF} }
Adversarial contrastive learning (ACL) is a technique that enhances standard contrastive learning (SCL) by incorporating adversarial data to learn a robust representation that can withstand adversarial attacks and common corruptions without requiring costly annotations. To improve transferability, the existing work introduced the standard invariant regularization (SIR) to impose the style-independence property on SCL, which can exempt the impact of nuisance style factors in the standard representation. However, it is unclear how the style-independence property benefits ACL-learned robust representations. In this paper, we leverage the technique of causal reasoning to interpret ACL and propose adversarial invariant regularization (AIR) to enforce independence from style factors. We regulate ACL using both SIR and AIR to output the robust representation. Theoretically, we show that AIR implicitly encourages the representational distance between different views of natural data and their adversarial variants to be independent of style factors. Empirically, our experimental results show that invariant regularization significantly improves the performance of state-of-the-art ACL methods in terms of both standard generalization and robustness on downstream tasks. To the best of our knowledge, we are the first to apply causal reasoning to interpret ACL and develop AIR for enhancing ACL-learned robust representations. Our source code is at https://github.com/GodXuxilie/Enhancing_ACL_via_AIR.
Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization
[ "Xilie Xu", "Jingfeng Zhang", "Feng Liu", "Masashi Sugiyama", "Mohan Kankanhalli" ]
Conference
poster
2305.00374
[ "https://github.com/godxuxilie/enhancing_acl_via_air" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ztDxO15N7f
@inproceedings{ scholkemper2023an, title={An Optimization-based Approach To Node Role Discovery in Networks: Approximating Equitable Partitions}, author={Michael Scholkemper and Michael T Schaub}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=ztDxO15N7f} }
Similar to community detection, partitioning the nodes of a complex network according to their structural roles aims to identify fundamental building blocks of a network, which can be used, e.g., to find simplified descriptions of the network connectivity, to derive reduced order models for dynamical processes unfolding on networks, or as ingredients for various network analysis and graph mining tasks. In this work, we offer a fresh look at the problem of role extraction and its differences from community detection and present a definition of node roles and two associated optimization problems (cost functions) grounded in ideas related to graph-isomorphism tests, the Weisfeiler-Leman algorithm and equitable partitions. We present theoretical guarantees and validate our approach via a novel “role-infused partition benchmark”, a network model from which we can sample networks in which nodes are endowed with different roles in a stochastic way.
An Optimization-based Approach To Node Role Discovery in Networks: Approximating Equitable Partitions
[ "Michael Scholkemper", "Michael T Schaub" ]
Conference
poster
2305.19087
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zsOOqjaj2z
@inproceedings{ wang2023generator, title={Generator Identification for Linear {SDE}s with Additive and Multiplicative Noise}, author={Yuanyuan Wang and Xi Geng and Wei Huang and Biwei Huang and Mingming Gong}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zsOOqjaj2z} }
In this paper, we present conditions for identifying the generator of a linear stochastic differential equation (SDE) from the distribution of its solution process with a given fixed initial state. These identifiability conditions are crucial in causal inference using linear SDEs as they enable the identification of the post-intervention distributions from its observational distribution. Specifically, we derive a sufficient and necessary condition for identifying the generator of linear SDEs with additive noise, as well as a sufficient condition for identifying the generator of linear SDEs with multiplicative noise. We show that the conditions derived for both types of SDEs are generic. Moreover, we offer geometric interpretations of the derived identifiability conditions to enhance their understanding. To validate our theoretical results, we perform a series of simulations, which support and substantiate the established findings.
Generator Identification for Linear SDEs with Additive and Multiplicative Noise
[ "Yuanyuan Wang", "Xi Geng", "Wei Huang", "Biwei Huang", "Mingming Gong" ]
Conference
poster
2310.19491
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zrUEHZ6s9C
@inproceedings{ zhang2023algorithm, title={Algorithm Selection for Deep Active Learning with Imbalanced Datasets}, author={Jifan Zhang and Shuai Shao and saurabh verma and Robert D Nowak}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zrUEHZ6s9C} }
Label efficiency has become an increasingly important objective in deep learning applications. Active learning aims to reduce the number of labeled examples needed to train deep networks, but the empirical performance of active learning algorithms can vary dramatically across datasets and applications. It is difficult to know in advance which active learning strategy will perform well or best in a given application. To address this, we propose the first adaptive algorithm selection strategy for deep active learning. For any unlabeled dataset, our (meta) algorithm TAILOR (Thompson ActIve Learning algORithm selection) iteratively and adaptively chooses among a set of candidate active learning algorithms. TAILOR uses novel reward functions aimed at gathering class-balanced examples. Extensive experiments in multi-class and multi-label applications demonstrate TAILOR's effectiveness in achieving accuracy comparable or better than that of the best of the candidate algorithms. Our implementation of TAILOR is open-sourced at https://github.com/jifanz/TAILOR.
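Since TAILOR is described as a bandit-style (meta) algorithm, here is a small, hedged sketch of Thompson sampling over candidate active-learning strategies. The simple "rare-class" reward is a stand-in for the paper's class-balancing reward functions, and all names are illustrative.

```python
# Hedged sketch: adaptively choose among candidate active-learning strategies
# with Thompson sampling, rewarding queries that land on under-represented classes.
import numpy as np

def thompson_algorithm_selection(candidates, query_oracle, class_counts, rounds=100, rng=None):
    """candidates: list of callables returning the index of the next point to label;
    query_oracle: maps index -> class label;
    class_counts: dict label -> current labeled count (all classes present)."""
    rng = rng or np.random.default_rng(0)
    wins = np.ones(len(candidates))    # Beta(1, 1) priors per candidate strategy
    losses = np.ones(len(candidates))
    for _ in range(rounds):
        samples = rng.beta(wins, losses)
        k = int(np.argmax(samples))                     # arm with the best sampled success rate
        idx = candidates[k]()                           # let that strategy choose a point
        label = query_oracle(idx)
        rare = class_counts[label] <= np.median(list(class_counts.values()))
        class_counts[label] += 1
        wins[k] += float(rare)                          # reward: labeled an under-represented class
        losses[k] += float(not rare)
    return class_counts
```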
Algorithm Selection for Deep Active Learning with Imbalanced Datasets
[ "Jifan Zhang", "Shuai Shao", "saurabh verma", "Robert D Nowak" ]
Conference
poster
2302.07317
[ "https://github.com/jifanz/tailor" ]
https://huggingface.co/papers/2302.07317
1
0
0
4
1
[]
[]
[]
null
https://openreview.net/forum?id=zrLxHYvIFL
@inproceedings{ wang2023discover, title={Discover and Align Taxonomic Context Priors for Open-world Semi-Supervised Learning}, author={Yu Wang and Zhun Zhong and Pengchong Qiao and Xuxin Cheng and Xiawu Zheng and Chang Liu and Nicu Sebe and Rongrong Ji and Jie Chen}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zrLxHYvIFL} }
Open-world Semi-Supervised Learning (OSSL) is a realistic and challenging task, aiming to classify unlabeled samples from both seen and novel classes using partially labeled samples from the seen classes. Previous works typically explore the relationship of samples as priors on the pre-defined single-granularity labels to help novel class recognition. In fact, classes follow a taxonomy and samples can be classified at multiple levels of granularity, which contains more underlying relationships for supervision. We thus argue that learning with single-granularity labels results in sub-optimal representation learning and inaccurate pseudo labels, especially with unknown classes. In this paper, we take the initiative to explore and propose a unified framework, called Taxonomic context prIors Discovering and Aligning (TIDA), which exploits the relationship of samples under various granularities. It allows us to discover multi-granularity semantic concepts as taxonomic context priors (i.e., sub-class, target-class, and super-class), and then collaboratively leverage them to enhance representation learning and improve the quality of pseudo labels. Specifically, TIDA comprises two components: i) A taxonomic context discovery module that constructs a set of hierarchical prototypes in the latent space to discover the underlying taxonomic context priors; ii) A taxonomic context-based prediction alignment module that enforces consistency across hierarchical predictions to build a reliable relationship between classes across various granularities and provide additional supervision. We demonstrate that these two components are mutually beneficial for an effective OSSL framework, which is theoretically explained from the perspective of the EM algorithm. Extensive experiments on seven commonly used datasets show that TIDA can significantly improve the performance and achieve a new state of the art. The source codes are publicly available at https://github.com/rain305f/TIDA.
Discover and Align Taxonomic Context Priors for Open-world Semi-Supervised Learning
[ "Yu Wang", "Zhun Zhong", "Pengchong Qiao", "Xuxin Cheng", "Xiawu Zheng", "Chang Liu", "Nicu Sebe", "Rongrong Ji", "Jie Chen" ]
Conference
poster
[ "https://github.com/rain305f/tida" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zrCmeqV3Sz
@inproceedings{ xia2023learning, title={Learning Invariant Representations of Graph Neural Networks via Cluster Generalization}, author={Donglin Xia and Xiao Wang and Nian Liu and Chuan Shi}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zrCmeqV3Sz} }
Graph neural networks (GNNs) have become increasingly popular in modeling graph-structured data due to their ability to learn node representations by aggregating local structure information. However, it is widely acknowledged that the test graph structure may differ from the training graph structure, resulting in a structure shift. In this paper, we experimentally find that the performance of GNNs drops significantly when the structure shift happens, suggesting that the learned models may be biased towards specific structure patterns. To address this challenge, we propose the Cluster Information Transfer (\textbf{CIT}) mechanism, which can learn invariant representations for GNNs, thereby improving their generalization ability to various and unknown test graphs with structure shift. The CIT mechanism achieves this by combining different cluster information with the nodes while preserving their cluster-independent information. By generating nodes across different clusters, the mechanism significantly enhances the diversity of the nodes and helps GNNs learn the invariant representations. We provide a theoretical analysis of the CIT mechanism, showing that the impact of changing clusters during structure shift can be mitigated after transfer. Additionally, the proposed mechanism is a plug-in that can be easily used to improve existing GNNs. We comprehensively evaluate our proposed method on three typical structure shift scenarios, demonstrating its effectiveness in enhancing GNNs' performance.
Learning Invariant Representations of Graph Neural Networks via Cluster Generalization
[ "Donglin Xia", "Xiao Wang", "Nian Liu", "Chuan Shi" ]
Conference
poster
2403.03599
[ "https://github.com/bupt-gamma/citgnn" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zqyVjCjhYD
@inproceedings{ bianchi2023the, title={The expressive power of pooling in Graph Neural Networks}, author={Filippo Maria Bianchi and Veronica Lachi}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zqyVjCjhYD} }
In Graph Neural Networks (GNNs), hierarchical pooling operators generate local summaries of the data by coarsening the graph structure and the vertex features. Considerable attention has been devoted to analyzing the expressive power of message-passing (MP) layers in GNNs, while a study on how graph pooling affects the expressiveness of a GNN is still lacking. Additionally, despite the recent advances in the design of pooling operators, there is not a principled criterion to compare them. In this work, we derive sufficient conditions for a pooling operator to fully preserve the expressive power of the MP layers before it. These conditions serve as a universal and theoretically-grounded criterion for choosing among existing pooling operators or designing new ones. Based on our theoretical findings, we analyze several existing pooling operators and identify those that fail to satisfy the expressiveness conditions. Finally, we introduce an experimental setup to verify empirically the expressive power of a GNN equipped with pooling layers, in terms of its capability to perform a graph isomorphism test.
The expressive power of pooling in Graph Neural Networks
[ "Filippo Maria Bianchi", "Veronica Lachi" ]
Conference
poster
2304.01575
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zqOcW3R9rd
@inproceedings{ wei2023shared, title={Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples}, author={Shaokui Wei and Mingda Zhang and Hongyuan Zha and Baoyuan Wu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zqOcW3R9rd} }
Backdoor attacks are serious security threats to machine learning models where an adversary can inject poisoned samples into the training set, causing a backdoored model which predicts poisoned samples with particular triggers to particular target classes, while behaving normally on benign samples. In this paper, we explore the task of purifying a backdoored model using a small clean dataset. By establishing the connection between backdoor risk and adversarial risk, we derive a novel upper bound for backdoor risk, which mainly captures the risk on the shared adversarial examples (SAEs) between the backdoored model and the purified model. This upper bound further suggests a novel bi-level optimization problem for mitigating backdoor using adversarial training techniques. To solve it, we propose Shared Adversarial Unlearning (SAU). Specifically, SAU first generates SAEs, and then unlearns the generated SAEs such that they are correctly classified by the purified model and/or differently classified by the two models, so that the backdoor effect in the backdoored model will be mitigated in the purified model. Experiments on various benchmark datasets and network architectures show that our proposed method achieves state-of-the-art performance for backdoor defense. The code is available at https://github.com/SCLBD/BackdoorBench (PyTorch) and https://github.com/shawkui/MindTrojan (MindSpore).
Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples
[ "Shaokui Wei", "Mingda Zhang", "Hongyuan Zha", "Baoyuan Wu" ]
Conference
poster
2307.10562
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zq4vFneRiA
@inproceedings{ dai2023the, title={The Crucial Role of Normalization in Sharpness-Aware Minimization}, author={Yan Dai and Kwangjun Ahn and Suvrit Sra}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zq4vFneRiA} }
Sharpness-Aware Minimization (SAM) is a recently proposed gradient-based optimizer (Foret et al., ICLR 2021) that greatly improves the prediction performance of deep neural networks. Consequently, there has been a surge of interest in explaining its empirical success. We focus, in particular, on understanding ***the role played by normalization***, a key component of the SAM updates. We theoretically and empirically study the effect of normalization in SAM for both convex and non-convex functions, revealing two key roles played by normalization: i) it helps in stabilizing the algorithm; and ii) it enables the algorithm to drift along a continuum (manifold) of minima -- a property identified by recent theoretical works that is the key to better performance. We further argue that these two properties of normalization make SAM robust against the choice of hyper-parameters, supporting the practicality of SAM. Our conclusions are backed by various experiments.
The Crucial Role of Normalization in Sharpness-Aware Minimization
[ "Yan Dai", "Kwangjun Ahn", "Suvrit Sra" ]
Conference
poster
2305.15287
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zpVCITHknd
@inproceedings{ wang2023towards, title={Towards Personalized Federated Learning via Heterogeneous Model Reassembly}, author={Jiaqi Wang and Xingyi Yang and Suhan Cui and Liwei Che and Lingjuan Lyu and Dongkuan Xu and Fenglong Ma}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zpVCITHknd} }
This paper focuses on addressing the practical yet challenging problem of model heterogeneity in federated learning, where clients possess models with different network structures. To tackle this problem, we propose a novel framework called pFedHR, which leverages heterogeneous model reassembly to achieve personalized federated learning. In particular, we approach the problem of heterogeneous model personalization as a model-matching optimization task on the server side. Moreover, pFedHR automatically and dynamically generates informative and diverse personalized candidates with minimal human intervention. Furthermore, our proposed heterogeneous model reassembly technique mitigates the adverse impact introduced by using public data with different distributions from the client data to a certain extent. Experimental results demonstrate that pFedHR outperforms baselines on three datasets under both IID and Non-IID settings. Additionally, pFedHR effectively reduces the adverse impact of using different public data and dynamically generates diverse personalized models in an automated manner.
Towards Personalized Federated Learning via Heterogeneous Model Reassembly
[ "Jiaqi Wang", "Xingyi Yang", "Suhan Cui", "Liwei Che", "Lingjuan Lyu", "Dongkuan Xu", "Fenglong Ma" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=znudaK78u8
@inproceedings{ hwang2023active, title={Active Learning for Semantic Segmentation with Multi-class Label Query}, author={Sehyun Hwang and Sohyun Lee and Hoyoung Kim and Minhyeon Oh and Jungseul Ok and Suha Kwak}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=znudaK78u8} }
This paper proposes a new active learning method for semantic segmentation. The core of our method lies in a new annotation query design. It samples informative local image regions ($\textit{e.g.}$, superpixels), and for each of such regions, asks an oracle for a multi-hot vector indicating all classes existing in the region. This multi-class labeling strategy is substantially more efficient than existing ones like segmentation, polygon, and even dominant class labeling in terms of annotation time per click. However, it introduces the class ambiguity issue in training as it assigns partial labels ($\textit{i.e.}$, a set of candidate classes) to individual pixels. We thus propose a new algorithm for learning semantic segmentation while disambiguating the partial labels in two stages. In the first stage, it trains a segmentation model directly with the partial labels through two new loss functions motivated by partial label learning and multiple instance learning. In the second stage, it disambiguates the partial labels by generating pixel-wise pseudo labels, which are used for supervised learning of the model. Equipped with a new acquisition function dedicated to the multi-class labeling, our method outperforms previous work on Cityscapes and PASCAL VOC 2012 while spending less annotation cost. Our code and results are available at [https://github.com/sehyun03/MulActSeg](https://github.com/sehyun03/MulActSeg).
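The two-stage training above hinges on a partial-label loss over each pixel's candidate class set. A minimal, hedged sketch of one such loss (a candidate-set cross-entropy, a simplification rather than the paper's exact merged-positive/MIL losses) follows.

```python
# Hedged sketch: partial-label loss for pixels carrying a multi-hot candidate set.
# Maximizes the total probability mass placed on the candidate classes.
import torch
import torch.nn.functional as F

def candidate_set_loss(logits, candidate_mask):
    """logits: (N, C) per-pixel class scores; candidate_mask: (N, C) multi-hot 0/1 tensor."""
    log_probs = F.log_softmax(logits, dim=1)
    # log of the summed probability over candidate classes, computed stably
    cand_logp = torch.logsumexp(log_probs.masked_fill(candidate_mask == 0, float("-inf")), dim=1)
    return -cand_logp.mean()
```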
Active Learning for Semantic Segmentation with Multi-class Label Query
[ "Sehyun Hwang", "Sohyun Lee", "Hoyoung Kim", "Minhyeon Oh", "Jungseul Ok", "Suha Kwak" ]
Conference
poster
2309.09319
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=znY173SCxu
@inproceedings{ kim2023timereversed, title={Time-Reversed Dissipation Induces Duality Between Minimizing Gradient Norm and Function Value}, author={Jaeyeon Kim and Asuman E. Ozdaglar and Chanwoo Park and Ernest K. Ryu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=znY173SCxu} }
In convex optimization, first-order optimization methods efficiently minimizing function values have been a central subject of study since Nesterov's seminal work of 1983. Recently, however, Kim and Fessler's OGM-G and Lee et al.'s FISTA-G have been presented as alternatives that efficiently minimize the gradient magnitude instead. In this paper, we present H-duality, which represents a surprising one-to-one correspondence between methods efficiently minimizing function values and methods efficiently minimizing gradient magnitude. In continuous-time formulations, H-duality corresponds to reversing the time dependence of the dissipation/friction term. To the best of our knowledge, H-duality is different from Lagrange/Fenchel duality and is distinct from any previously known duality or symmetry relations. Using H-duality, we obtain a clearer understanding of the symmetry between Nesterov's method and OGM-G, derive a new class of methods efficiently reducing gradient magnitudes of smooth convex functions, and find a new composite minimization method that is simpler and faster than FISTA-G.
Time-Reversed Dissipation Induces Duality Between Minimizing Gradient Norm and Function Value
[ "Jaeyeon Kim", "Asuman E. Ozdaglar", "Chanwoo Park", "Ernest K. Ryu" ]
Conference
poster
2305.06628
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=znW5jNIOED
@inproceedings{ zhang2023optimizing, title={Optimizing over trained {GNN}s via symmetry breaking}, author={Shiqiang Zhang and Juan S Campos and Christian Wolfgang Feldmann and David Walz and Frederik Sandfort and Miriam Mathea and Calvin Tsay and Ruth Misener}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=znW5jNIOED} }
Optimization over trained machine learning models has applications including: verification, minimizing neural acquisition functions, and integrating a trained surrogate into a larger decision-making problem. This paper formulates and solves optimization problems constrained by trained graph neural networks (GNNs). To circumvent the symmetry issue caused by graph isomorphism, we propose two types of symmetry-breaking constraints: one indexing a node 0 and one indexing the remaining nodes by lexicographically ordering their neighbor sets. To guarantee that adding these constraints will not remove all symmetric solutions, we construct a graph indexing algorithm and prove that the resulting graph indexing satisfies the proposed symmetry-breaking constraints. For the classical GNN architectures considered in this paper, optimizing over a GNN with a fixed graph is equivalent to optimizing over a dense neural network. Thus, we study the case where the input graph is not fixed, implying that each edge is a decision variable, and develop two mixed-integer optimization formulations. To test our symmetry-breaking strategies and optimization formulations, we consider an application in molecular design.
Optimizing over trained GNNs via symmetry breaking
[ "Shiqiang Zhang", "Juan S Campos", "Christian Wolfgang Feldmann", "David Walz", "Frederik Sandfort", "Miriam Mathea", "Calvin Tsay", "Ruth Misener" ]
Conference
poster
2305.09420
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zn5ihqknGj
@inproceedings{ xiao2023an, title={An Alternating Optimization Method for Bilevel Problems under the Polyak-{\L}ojasiewicz Condition}, author={Quan Xiao and Songtao Lu and Tianyi Chen}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zn5ihqknGj} }
Bilevel optimization has recently regained interest owing to its applications in emerging machine learning fields such as hyperparameter optimization, meta-learning, and reinforcement learning. Recent results have shown that simple alternating (implicit) gradient-based algorithms can match the convergence rate of single-level gradient descent (GD) when addressing bilevel problems with a strongly convex lower-level objective. However, it remains unclear whether this result can be generalized to bilevel problems beyond this basic setting. In this paper, we first introduce a stationary metric for the considered bilevel problems, which generalizes the existing metric, for a nonconvex lower-level objective that satisfies the Polyak-Łojasiewicz (PL) condition. We then propose a Generalized ALternating mEthod for bilevel opTimization (GALET) tailored to bilevel optimization (BLO) with a convex PL lower-level (LL) problem and establish that GALET achieves an $\epsilon$-stationary point for the considered problem within $\tilde{\cal O}(\epsilon^{-1})$ iterations, which matches the iteration complexity of GD for single-level smooth nonconvex problems.
An Alternating Optimization Method for Bilevel Problems under the Polyak-Łojasiewicz Condition
[ "Quan Xiao", "Songtao Lu", "Tianyi Chen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zmWNe1V6jg
@inproceedings{ rui2023scalable, title={Scalable Fair Influence Maximization}, author={Xiaobin Rui and Zhixiao Wang and Jiayu Zhao and Lichao Sun and Wei Chen}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zmWNe1V6jg} }
Given a graph $G$, a community structure $\mathcal{C}$, and a budget $k$, the fair influence maximization problem aims to select a seed set $S$ ($|S|\leq k$) that maximizes the influence spread while narrowing the influence gap between different communities. While various fairness notions exist, the welfare fairness notion, which balances fairness level and influence spread, has shown promising effectiveness. However, the lack of efficient algorithms for optimizing the welfare fairness objective function restricts its application to small-scale networks with only a few hundred nodes. In this paper, we adopt the objective function of welfare fairness to maximize the exponentially weighted summation over the influenced fraction of all communities. We first introduce an unbiased estimator for the fractional power of the arithmetic mean. Then, by adapting the reverse influence sampling (RIS) approach, we convert the optimization problem to a weighted maximum coverage problem. We also analyze the number of reverse reachable sets needed to approximate the fair influence with high probability. Further, we present an efficient algorithm that guarantees $1-1/e - \varepsilon$ approximation.
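A hedged sketch of the weighted-maximum-coverage step: given sampled reverse-reachable (RR) sets with weights, greedily pick k seeds covering the most weight. The RR-set sampling and fairness weighting themselves are not reproduced here.

```python
# Hedged sketch: greedy weighted maximum coverage over reverse-reachable sets.
def greedy_weighted_coverage(rr_sets, weights, k):
    """rr_sets: list of sets of node ids; weights: parallel list of floats."""
    covered = [False] * len(rr_sets)
    seeds = []
    for _ in range(k):
        gain = {}
        for i, (s, w) in enumerate(zip(rr_sets, weights)):
            if not covered[i]:
                for v in s:
                    gain[v] = gain.get(v, 0.0) + w
        if not gain:
            break
        best = max(gain, key=gain.get)        # node with the largest marginal weighted gain
        seeds.append(best)
        for i, s in enumerate(rr_sets):
            if best in s:
                covered[i] = True
    return seeds
```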
Scalable Fair Influence Maximization
[ "Xiaobin Rui", "Zhixiao Wang", "Jiayu Zhao", "Lichao Sun", "Wei Chen" ]
Conference
poster
2306.06820
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zkfyOkBVpz
@inproceedings{ sheybani2023curriculum, title={Curriculum Learning With Infant Egocentric Videos}, author={Saber Sheybani and Himanshu Hansaria and Justin Newell Wood and Linda B. Smith and Zoran Tiganj}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zkfyOkBVpz} }
Infants possess a remarkable ability to rapidly learn and process visual inputs. As an infant's mobility increases, so does the variety and dynamics of their visual inputs. Is this change in the properties of the visual inputs beneficial or even critical for the proper development of the visual system? To address this question, we used video recordings from infants wearing head-mounted cameras to train a variety of self-supervised learning models. Critically, we separated the infant data by age group and evaluated the importance of training with a curriculum aligned with developmental order. We found that initiating learning with the data from the youngest age group provided the strongest learning signal and led to the best learning outcomes in terms of downstream task performance. We then showed that the benefits of the data from the youngest age group are due to the slowness and simplicity of the visual experience. The results provide strong empirical evidence for the importance of the properties of the early infant experience and developmental progression in training. More broadly, our approach and findings take a noteworthy step towards reverse engineering the learning mechanisms in newborn brains using image-computable models from artificial intelligence.
Curriculum Learning With Infant Egocentric Videos
[ "Saber Sheybani", "Himanshu Hansaria", "Justin Newell Wood", "Linda B. Smith", "Zoran Tiganj" ]
Conference
spotlight
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zjpjsJeVJZ
@inproceedings{ tsepenekas2023comparing, title={Comparing Apples to Oranges: Learning Similarity Functions for Data Produced by Different Distributions}, author={Leonidas Tsepenekas and Ivan Brugere and Freddy Lecue and Daniele Magazzeni}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zjpjsJeVJZ} }
Similarity functions measure how comparable pairs of elements are, and play a key role in a wide variety of applications, e.g., notions of Individual Fairness abiding by the seminal paradigm of Dwork et al., as well as Clustering problems. However, access to an accurate similarity function should not always be considered guaranteed, and this point was even raised by Dwork et al. For instance, it is reasonable to assume that when the elements to be compared are produced by different distributions, or in other words belong to different ``demographic'' groups, knowledge of their true similarity might be very difficult to obtain. In this work, we present an efficient sampling framework that learns these across-groups similarity functions, using only a limited amount of experts' feedback. We show analytical results with rigorous theoretical bounds, and empirically validate our algorithms via a large suite of experiments.
Comparing Apples to Oranges: Learning Similarity Functions for Data Produced by Different Distributions
[ "Leonidas Tsepenekas", "Ivan Brugere", "Freddy Lecue", "Daniele Magazzeni" ]
Conference
poster
2208.12731
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zfHCKDzzC8
@inproceedings{ h{\i}zl{\i}2023temporal, title={Temporal Causal Mediation through a Point Process: Direct and Indirect Effects of Healthcare Interventions}, author={{\c{C}}a{\u{g}}lar H{\i}zl{\i} and S. T. John and Anne Tuulikki Juuti and Tuure Tapani Saarinen and Kirsi Hannele Pietil{\"a}inen and Pekka Marttinen}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zfHCKDzzC8} }
Deciding on an appropriate intervention requires a causal model of a treatment, the outcome, and potential mediators. Causal mediation analysis lets us distinguish between direct and indirect effects of the intervention, but has mostly been studied in a static setting. In healthcare, data come in the form of complex, irregularly sampled time-series, with dynamic interdependencies between a treatment, outcomes, and mediators across time. Existing approaches to dynamic causal mediation analysis are limited to regular measurement intervals, simple parametric models, and disregard long-range mediator--outcome interactions. To address these limitations, we propose a non-parametric mediator--outcome model where the mediator is assumed to be a temporal point process that interacts with the outcome process. With this model, we estimate the direct and indirect effects of an external intervention on the outcome, showing how each of these affects the whole future trajectory. We demonstrate on semi-synthetic data that our method can accurately estimate direct and indirect effects. On real-world healthcare data, our model infers clinically meaningful direct and indirect effect trajectories for blood glucose after a surgery.
Temporal Causal Mediation through a Point Process: Direct and Indirect Effects of Healthcare Interventions
[ "Çağlar Hızlı", "S. T. John", "Anne Tuulikki Juuti", "Tuure Tapani Saarinen", "Kirsi Hannele Pietiläinen", "Pekka Marttinen" ]
Conference
poster
2306.09656
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zfCNwRQ569
@inproceedings{ li2023interpreting, title={Interpreting Unsupervised Anomaly Detection in Security via Rule Extraction}, author={Ruoyu Li and Qing Li and Yu Zhang and Dan Zhao and Yong Jiang and Yong Yang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zfCNwRQ569} }
Many security applications require unsupervised anomaly detection, as malicious data are extremely rare and often only unlabeled normal data are available for training (i.e., zero-positive). However, security operators are concerned about the high stakes of trusting black-box models due to their lack of interpretability. In this paper, we propose a post-hoc method to globally explain a black-box unsupervised anomaly detection model via rule extraction. First, we propose the concept of distribution decomposition rules that decompose the complex distribution of normal data into multiple compositional distributions. To find such rules, we design an unsupervised Interior Clustering Tree that incorporates the model prediction into the splitting criteria. Then, we propose the Compositional Boundary Exploration (CBE) algorithm to obtain the boundary inference rules that estimate the decision boundary of the original model on each compositional distribution. By merging these two types of rules into a rule set, we can present the inferential process of the unsupervised black-box model in a human-understandable way, and build a surrogate rule-based model for online deployment at the same time. We conduct comprehensive experiments on the explanation of four distinct unsupervised anomaly detection models on various real-world datasets. The evaluation shows that our method outperforms existing methods in terms of diverse metrics including fidelity, correctness and robustness.
Interpreting Unsupervised Anomaly Detection in Security via Rule Extraction
[ "Ruoyu Li", "Qing Li", "Yu Zhang", "Dan Zhao", "Yong Jiang", "Yong Yang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zdli6OxpWd
@inproceedings{ steinke2023counting, title={Counting Distinct Elements Under Person-Level Differential Privacy}, author={Thomas Steinke and Alexander Knop}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zdli6OxpWd} }
We study the problem of counting the number of distinct elements in a dataset subject to the constraint of differential privacy. We consider the challenging setting of person-level DP (a.k.a. user-level DP) where each person may contribute an unbounded number of items and hence the sensitivity is unbounded. Our approach is to compute a bounded-sensitivity version of this query, which reduces to solving a max-flow problem. The sensitivity bound is optimized to balance the noise we must add to privatize the answer against the error of the approximation of the bounded-sensitivity query to the true number of unique elements.
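A hedged sketch of the bounded-sensitivity idea: cap each person's contribution and add Laplace noise scaled to the cap. This is a simplified baseline, not the paper's max-flow query or its optimized sensitivity bound.

```python
# Hedged sketch: person-level DP distinct count via contribution capping.
import numpy as np

def dp_distinct_count(items_per_person, cap, epsilon, rng=None):
    """items_per_person: dict person -> iterable of items; cap: max items kept per person."""
    rng = rng or np.random.default_rng(0)
    kept = set()
    for items in items_per_person.values():
        kept.update(list(dict.fromkeys(items))[:cap])   # each person adds at most `cap` distinct items
    true_count = len(kept)
    noise = rng.laplace(loc=0.0, scale=cap / epsilon)   # sensitivity of the capped count is `cap`
    return true_count + noise
```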
Counting Distinct Elements Under Person-Level Differential Privacy
[ "Thomas Steinke", "Alexander Knop" ]
Conference
poster
2308.12947
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zaQ7wV9NOg
@inproceedings{ liu2023optimistic, title={Optimistic Natural Policy Gradient: a Simple Efficient Policy Optimization Framework for Online {RL}}, author={Qinghua Liu and Gell{\'e}rt Weisz and Andr{\'a}s Gy{\"o}rgy and Chi Jin and Csaba Szepesvari}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zaQ7wV9NOg} }
While policy optimization algorithms have played an important role in recent empirical success of Reinforcement Learning (RL), the existing theoretical understanding of policy optimization remains rather limited---they are either restricted to tabular MDPs or suffer from highly suboptimal sample complexity, especially in online RL where exploration is necessary. This paper proposes a simple efficient policy optimization framework---Optimistic NPG for online RL. Optimistic NPG can be viewed as simply combining the classic natural policy gradient (NPG) algorithm [Kakade, 2001] with optimistic policy evaluation subroutines to encourage exploration. For $d$-dimensional linear MDPs, Optimistic NPG is computationally efficient, and learns an $\epsilon$-optimal policy within $\tilde{\mathcal{O}}(d^2/\epsilon^3)$ samples, which is the first computationally efficient algorithm whose sample complexity has the optimal dimension dependence $\tilde{\Theta}(d^2)$. It also improves over state-of-the-art results of policy optimization algorithms [Zanette et al., 2021] by a factor of $d$. For general function approximation that subsumes linear MDPs, Optimistic NPG, to our best knowledge, is also the first policy optimization algorithm that achieves the polynomial sample complexity for learning near-optimal policies.
Optimistic Natural Policy Gradient: a Simple Efficient Policy Optimization Framework for Online RL
[ "Qinghua Liu", "Gellért Weisz", "András György", "Chi Jin", "Csaba Szepesvari" ]
Conference
spotlight
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zXckveawHa
@inproceedings{ lin2023statistical, title={Statistical Limits of Adaptive Linear Models: Low-Dimensional Estimation and Inference}, author={Licong Lin and Mufang Ying and Suvrojit Ghosh and Koulik Khamaru and Cun-Hui Zhang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zXckveawHa} }
Estimation and inference in statistics pose significant challenges when data are collected adaptively. Even in linear models, the Ordinary Least Squares (OLS) estimator may fail to exhibit asymptotic normality for single coordinate estimation and have inflated error. This issue is highlighted by a recent minimax lower bound, which shows that the error of estimating a single coordinate can be enlarged by a multiple of $\sqrt{d}$ when data are allowed to be arbitrarily adaptive, compared with the case when they are i.i.d. Our work explores this striking difference in estimation performance between utilizing i.i.d. and adaptive data. We investigate how the degree of adaptivity in data collection impacts the performance of estimating a low-dimensional parameter component in high-dimensional linear models. We identify conditions on the data collection mechanism under which the estimation error for a low-dimensional parameter component matches its counterpart in the i.i.d. setting, up to a factor that depends on the degree of adaptivity. We show that OLS or OLS on centered data can achieve this matching error. In addition, we propose a novel estimator for single coordinate inference via solving a Two-stage Adaptive Linear Estimating equation (TALE). Under a weaker form of adaptivity in data collection, we establish an asymptotic normality property of the proposed estimator.
Statistical Limits of Adaptive Linear Models: Low-Dimensional Estimation and Inference
[ "Licong Lin", "Mufang Ying", "Suvrojit Ghosh", "Koulik Khamaru", "Cun-Hui Zhang" ]
Conference
poster
2310.00532
[ "https://github.com/licong-lin/low-dim-debias" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zWxKYyW9ik
@inproceedings{ wang2023universality, title={Universality and Limitations of Prompt Tuning}, author={Yihan Wang and Jatin Chauhan and Wei Wang and Cho-Jui Hsieh}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zWxKYyW9ik} }
Despite the demonstrated empirical efficacy of prompt tuning to adapt a pretrained language model for a new task, the theoretical underpinnings of the difference between "tuning parameters before the input" and "the tuning of model weights" are limited. We thus take one of the first steps to understand the role of soft-prompt tuning for transformer-based architectures. By considering a general purpose architecture, we analyze prompt tuning from the lens of both: universal approximation and limitations with finite-depth fixed-weight pretrained transformers for continuous-valued functions. Our universality result guarantees the existence of a strong transformer with a prompt to approximate any sequence-to-sequence function in the set of Lipschitz functions. The limitations of prompt tuning for limited-depth transformers are first proved by constructing a set of datasets that cannot be memorized by a prompt of any length for a given single encoder layer. We also provide a lower bound on the required number of tunable prompt parameters and compare the result with the number of parameters required for a low-rank update (based on LoRA) for a single-layer setting. We finally extend our analysis to multi-layer settings by providing sufficient conditions under which the transformer can at best learn datasets from invertible functions only. Our theoretical claims are also corroborated by empirical results.
Universality and Limitations of Prompt Tuning
[ "Yihan Wang", "Jatin Chauhan", "Wei Wang", "Cho-Jui Hsieh" ]
Conference
poster
2305.18787
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zW1uVN6Mbv
@inproceedings{ sturma2023unpaired, title={Unpaired Multi-Domain Causal Representation Learning}, author={Nils Sturma and Chandler Squires and Mathias Drton and Caroline Uhler}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zW1uVN6Mbv} }
The goal of causal representation learning is to find a representation of data that consists of causally related latent variables. We consider a setup where one has access to data from multiple domains that potentially share a causal representation. Crucially, observations in different domains are assumed to be unpaired, that is, we only observe the marginal distribution in each domain but not their joint distribution. In this paper, we give sufficient conditions for identifiability of the joint distribution and the shared causal graph in a linear setup. Identifiability holds if we can uniquely recover the joint distribution and the shared causal representation from the marginal distributions in each domain. We transform our results into a practical method to recover the shared latent causal graph.
Unpaired Multi-Domain Causal Representation Learning
[ "Nils Sturma", "Chandler Squires", "Mathias Drton", "Caroline Uhler" ]
Conference
spotlight
2302.00993
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zUYfbdNl1m
@inproceedings{ jin2023s, title={\$S{\textasciicircum}3\$: Increasing {GPU} Utilization during Generative Inference for Higher Throughput}, author={Yunho Jin and Chun-Feng Wu and David Brooks and Gu-Yeon Wei}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zUYfbdNl1m} }
Generating texts with a large language model (LLM) consumes massive amounts of memory. Apart from the already-large model parameters, the key/value (KV) cache that holds information about previous tokens in a sequence can grow to be even larger than the model itself. This problem is exacerbated in one of the current LLM serving frameworks, which reserves the maximum sequence length of memory for the KV cache to guarantee generating a complete sequence, as it does not know the output sequence length. This restricts us to a smaller batch size, leading to lower GPU utilization and, above all, lower throughput. We argue that designing a system with a priori knowledge of the output sequence can mitigate this problem. To this end, we propose $S^3$, which predicts the output sequence length, schedules generation queries based on the prediction to increase device resource utilization and throughput, and handles mispredictions. Our proposed method achieves 6.49× the throughput of systems that assume the worst case for the output sequence length.
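A hedged sketch of the scheduling idea: use a predicted output length to pack requests into a batch under a KV-cache memory budget instead of reserving the maximum sequence length for every request. The predictor and the numbers are placeholders, not the paper's system.

```python
# Hedged sketch: length-aware batch packing under a KV-cache memory budget.
def pack_batch(requests, predict_len, mem_per_token, mem_budget):
    """requests: list of (request_id, prompt_len); predict_len: request_id -> predicted output tokens."""
    batch, used = [], 0.0
    # schedule shorter predicted outputs first so more requests fit per batch
    for req_id, prompt_len in sorted(requests, key=lambda r: predict_len(r[0])):
        need = (prompt_len + predict_len(req_id)) * mem_per_token
        if used + need <= mem_budget:
            batch.append(req_id)
            used += need
    return batch
```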
S^3: Increasing GPU Utilization during Generative Inference for Higher Throughput
[ "Yunho Jin", "Chun-Feng Wu", "David Brooks", "Gu-Yeon Wei" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zTSlm4nmlH
@inproceedings{ zhou2023beta, title={Beta Diffusion}, author={Mingyuan Zhou and Tianqi Chen and Zhendong Wang and Huangjie Zheng}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zTSlm4nmlH} }
We introduce beta diffusion, a novel generative modeling method that integrates demasking and denoising to generate data within bounded ranges. Using scaled and shifted beta distributions, beta diffusion utilizes multiplicative transitions over time to create both forward and reverse diffusion processes, maintaining beta distributions in both the forward marginals and the reverse conditionals, given the data at any point in time. Unlike traditional diffusion-based generative models relying on additive Gaussian noise and reweighted evidence lower bounds (ELBOs), beta diffusion is multiplicative and optimized with KL-divergence upper bounds (KLUBs) derived from the convexity of the KL divergence. We demonstrate that the proposed KLUBs are more effective for optimizing beta diffusion compared to negative ELBOs, which can also be derived as the KLUBs of the same KL divergence with its two arguments swapped. The loss function of beta diffusion, expressed in terms of Bregman divergence, further supports the efficacy of KLUBs for optimization. Experimental results on both synthetic data and natural images demonstrate the unique capabilities of beta diffusion in generative modeling of range-bounded data and validate the effectiveness of KLUBs in optimizing diffusion models, thereby making them valuable additions to the family of diffusion-based generative models and the optimization techniques used to train them.
Beta Diffusion
[ "Mingyuan Zhou", "Tianqi Chen", "Zhendong Wang", "Huangjie Zheng" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zQTi3pziFp
@inproceedings{ xu2023sounding, title={Sounding Bodies: Modeling 3D Spatial Sound of Humans Using Body Pose and Audio}, author={Xudong XU and Dejan Markovic and Jacob Sandakly and Todd Keebler and Steven Krenn and Alexander Richard}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zQTi3pziFp} }
While 3D human body modeling has received much attention in computer vision, modeling the acoustic equivalent, i.e. modeling 3D spatial audio produced by body motion and speech, has fallen short in the community. To close this gap, we present a model that can generate accurate 3D spatial audio for full human bodies. The system consumes, as input, audio signals from headset microphones and body pose, and produces, as output, a 3D sound field surrounding the transmitter's body, from which spatial audio can be rendered at any arbitrary position in the 3D space. We collect a first-of-its-kind multimodal dataset of human bodies, recorded with multiple cameras and a spherical array of 345 microphones. In an empirical evaluation, we demonstrate that our model can produce accurate body-induced sound fields when trained with a suitable loss. Dataset and code are available online.
Sounding Bodies: Modeling 3D Spatial Sound of Humans Using Body Pose and Audio
[ "Xudong XU", "Dejan Markovic", "Jacob Sandakly", "Todd Keebler", "Steven Krenn", "Alexander Richard" ]
Conference
spotlight
2311.06285
[ "https://github.com/facebookresearch/soundingbodies" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zQOYGDc9pu
@inproceedings{ chen2023optimized, title={Optimized Covariance Design for {AB} Test on Social Network under Interference}, author={Qianyi Chen and Bo Li and LU DENG and Yong Wang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zQOYGDc9pu} }
Online A/B tests have become increasingly popular and important for social platforms. However, accurately estimating the global average treatment effect (GATE) has proven to be challenging due to network interference, which violates the Stable Unit Treatment Value Assumption (SUTVA) and poses a great challenge to experimental design. Existing network experimental design research was mostly based on the unbiased Horvitz-Thompson (HT) estimator with substantial data trimming to ensure unbiasedness at the price of high resultant estimation variance. In this paper, we strive to balance the bias and variance in designing randomized network experiments. Under a potential outcome model with 1-hop interference, we derive the bias and variance of the standard HT estimator and reveal their relation to the network topological structure and the covariance of the treatment assignment vector. We then propose to formulate the experimental design problem as optimizing the covariance matrix of the treatment assignment vector to achieve the bias and variance balance by minimizing the mean squared error (MSE) of the estimator. An efficient projected gradient descent algorithm is presented to implement the desired randomization scheme. Finally, we carry out extensive simulation studies to demonstrate the advantages of our proposed method over other existing methods in many settings, with different levels of model misspecification.
Optimized Covariance Design for AB Test on Social Network under Interference
[ "Qianyi Chen", "Bo Li", "LU DENG", "Yong Wang" ]
Conference
poster
2311.14042
[ "https://github.com/cqyiiii/optimized_covariance_design-nips2023" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zQ4yraDiRe
@inproceedings{ jeong2023multiscale, title={Multi-scale Diffusion Denoised Smoothing}, author={Jongheon Jeong and Jinwoo Shin}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zQ4yraDiRe} }
Along with recent diffusion models, randomized smoothing has become one of a few tangible approaches that offers adversarial robustness to models at scale, e.g., those of large pre-trained models. Specifically, one can perform randomized smoothing on any classifier via a simple "denoise-and-classify" pipeline, so-called denoised smoothing, given that an accurate denoiser is available, such as a diffusion model. In this paper, we present scalable methods to address the current trade-off between certified robustness and accuracy in denoised smoothing. Our key idea is to "selectively" apply smoothing among multiple noise scales, coined multi-scale smoothing, which can be efficiently implemented with a single diffusion model. This approach also suggests a new objective to compare the collective robustness of multi-scale smoothed classifiers, and questions which representation of the diffusion model would maximize the objective. To address this, we propose to further fine-tune the diffusion model (a) to perform consistent denoising whenever the original image is recoverable, but (b) to generate rather diverse outputs otherwise. Our experiments show that the proposed multi-scale smoothing scheme, combined with diffusion fine-tuning, not only allows strong certified robustness at high noise scales but also maintains accuracy close to non-smoothed classifiers. Code is available at https://github.com/jh-jeong/smoothing-multiscale.
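A hedged sketch of the standard "denoise-and-classify" randomized-smoothing prediction that the abstract builds on (single noise scale only; the paper's multi-scale selection and fine-tuning are not shown). `denoiser` and `classifier` are assumed callables.

```python
# Hedged sketch: denoised-smoothing prediction by majority vote over noisy samples.
import torch

@torch.no_grad()
def smoothed_predict(x, denoiser, classifier, sigma=0.25, n_samples=100, n_classes=10):
    counts = torch.zeros(n_classes)
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)     # randomized-smoothing Gaussian noise
        logits = classifier(denoiser(noisy))        # denoise, then classify
        counts[logits.argmax(dim=-1)] += 1
    return int(counts.argmax())                     # majority vote
```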
Multi-scale Diffusion Denoised Smoothing
[ "Jongheon Jeong", "Jinwoo Shin" ]
Conference
poster
2310.16779
[ "https://github.com/jh-jeong/smoothing-multiscale" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zPYeYv6YYs
@inproceedings{ angelopoulos2023conformal, title={Conformal {PID} Control for Time Series Prediction}, author={Anastasios Nikolas Angelopoulos and Emmanuel Candes and Ryan Tibshirani}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zPYeYv6YYs} }
We study the problem of uncertainty quantification for time series prediction, with the goal of providing easy-to-use algorithms with formal guarantees. The algorithms we present build upon ideas from conformal prediction and control theory, are able to prospectively model conformal scores in an online setting, and adapt to the presence of systematic errors due to seasonality, trends, and general distribution shifts. Our theory both simplifies and strengthens existing analyses in online conformal prediction. Experiments on 4-week-ahead forecasting of statewide COVID-19 death counts in the U.S. show an improvement in coverage over the ensemble forecaster used in official CDC communications. We also run experiments on predicting electricity demand, market returns, and temperature using autoregressive, Theta, Prophet, and Transformer models. We provide an extendable codebase for testing our methods and for the integration of new algorithms, data sets, and forecasting rules at [this link](http://github.com/aangelopoulos/conformal-time-series).
Conformal PID Control for Time Series Prediction
[ "Anastasios Nikolas Angelopoulos", "Emmanuel Candes", "Ryan Tibshirani" ]
Conference
poster
2307.16895
[ "https://github.com/aangelopoulos/conformal-time-series" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zOCIKYVaF5
@inproceedings{ li2023residual, title={Residual Alignment: Uncovering the Mechanisms of Residual Networks}, author={Jianing Li and Vardan Papyan}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zOCIKYVaF5} }
The ResNet architecture has been widely adopted in deep learning due to its significant boost to performance through the use of simple skip connections, yet the underlying mechanisms leading to its success remain largely unknown. In this paper, we conduct a thorough empirical study of the ResNet architecture in classification tasks by linearizing its constituent residual blocks using Residual Jacobians and measuring their singular value decompositions. Our measurements ([code](https://colab.research.google.com/drive/1yKjEg2yF616tnZFAfuN0aQ-E9v3JmyjN?usp=sharing)) reveal a process called Residual Alignment (RA) characterized by four properties: - **(RA1):** intermediate representations of a given input are *equispaced* on a *line*, embedded in high dimensional space, as observed by Gai and Zhang [2021]; - **(RA2):** top left and right singular vectors of Residual Jacobians align with each other and across different depths; - **(RA3):** Residual Jacobians are at most rank $C$ for fully-connected ResNets, where $C$ is the number of classes; and - **(RA4):** top singular values of Residual Jacobians scale inversely with depth. RA consistently occurs in models that generalize well, in both fully-connected and convolutional architectures, across various depths and widths, for varying numbers of classes, on all tested benchmark datasets, but ceases to occur once the skip connections are removed. It also provably occurs in a novel mathematical model we propose. This phenomenon reveals a strong alignment between residual branches of a ResNet (RA2+4), imparting a highly rigid geometric structure to the intermediate representations as they progress *linearly* through the network (RA1) up to the final layer, where they undergo Neural Collapse.
Residual Alignment: Uncovering the Mechanisms of Residual Networks
[ "Jianing Li", "Vardan Papyan" ]
Conference
poster
2401.09018
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zO2dAQfvHf
@inproceedings{ white2023stabilized, title={Stabilized Neural Differential Equations for Learning Dynamics with Explicit Constraints}, author={Alistair White and Niki Kilbertus and Maximilian Gelbrecht and Niklas Boers}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zO2dAQfvHf} }
Many successful methods to learn dynamical systems from data have recently been introduced. However, ensuring that the inferred dynamics preserve known constraints, such as conservation laws or restrictions on the allowed system states, remains challenging. We propose stabilized neural differential equations (SNDEs), a method to enforce arbitrary manifold constraints for neural differential equations. Our approach is based on a stabilization term that, when added to the original dynamics, renders the constraint manifold provably asymptotically stable. Due to its simplicity, our method is compatible with all common neural differential equation (NDE) models and broadly applicable. In extensive empirical evaluations, we demonstrate that SNDEs outperform existing methods while broadening the types of constraints that can be incorporated into NDE training.
Stabilized Neural Differential Equations for Learning Dynamics with Explicit Constraints
[ "Alistair White", "Niki Kilbertus", "Maximilian Gelbrecht", "Niklas Boers" ]
Conference
poster
2306.09739
[ "https://github.com/white-alistair/Stabilized-Neural-Differential-Equations" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zNA7u7wtIN
@inproceedings{ kim2023pflow, title={P-Flow: A Fast and Data-Efficient Zero-Shot {TTS} through Speech Prompting}, author={Sungwon Kim and Kevin J. Shih and Rohan Badlani and Joao Felipe Santos and Evelina Bakhturina and Mikyas T. Desta and Rafael Valle and Sungroh Yoon and Bryan Catanzaro}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zNA7u7wtIN} }
While recent large-scale neural codec language models have shown significant improvement in zero-shot TTS by training on thousands of hours of data, they suffer from drawbacks such as a lack of robustness, slow sampling speed similar to previous autoregressive TTS methods, and reliance on pre-trained neural codec representations. Our work proposes P-Flow, a fast and data-efficient zero-shot TTS model that uses speech prompts for speaker adaptation. P-Flow comprises a speech-prompted text encoder for speaker adaptation and a flow matching generative decoder for high-quality and fast speech synthesis. Our speech-prompted text encoder uses speech prompts and text input to generate speaker-conditional text representation. The flow matching generative decoder uses the speaker-conditional output to synthesize high-quality personalized speech significantly faster than in real-time. Unlike the neural codec language models, we specifically train P-Flow on LibriTTS dataset using a continuous mel-representation. Through our training method using continuous speech prompts, P-Flow matches the speaker similarity performance of the large-scale zero-shot TTS models with two orders of magnitude less training data and has more than 20$\times$ faster sampling speed. Our results show that P-Flow has better pronunciation and is preferred in human likeness and speaker similarity to its recent state-of-the-art counterparts, thus defining P-Flow as an attractive and desirable alternative. We provide audio samples on our demo page: [https://research.nvidia.com/labs/adlr/projects/pflow](https://research.nvidia.com/labs/adlr/projects/pflow)
P-Flow: A Fast and Data-Efficient Zero-Shot TTS through Speech Prompting
[ "Sungwon Kim", "Kevin J. Shih", "Rohan Badlani", "Joao Felipe Santos", "Evelina Bakhturina", "Mikyas T. Desta", "Rafael Valle", "Sungroh Yoon", "Bryan Catanzaro" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zMeemcUeXL
@inproceedings{ liu2023famo, title={{FAMO}: Fast Adaptive Multitask Optimization}, author={Bo Liu and Yihao Feng and Peter Stone and qiang liu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zMeemcUeXL} }
One of the grand enduring goals of AI is to create generalist agents that can learn multiple different tasks from diverse data via multitask learning (MTL). However, in practice, applying gradient descent (GD) on the average loss across all tasks may yield poor multitask performance due to severe under-optimization of certain tasks. Previous approaches that manipulate task gradients for a more balanced loss decrease require storing and computing all task gradients ($\mathcal{O}(k)$ space and time where $k$ is the number of tasks), limiting their use in large-scale scenarios. In this work, we introduce Fast Adaptive Multitask Optimization (FAMO), a dynamic weighting method that decreases task losses in a balanced way using $\mathcal{O}(1)$ space and time. We conduct an extensive set of experiments covering multi-task supervised and reinforcement learning problems. Our results indicate that FAMO achieves comparable or superior performance to state-of-the-art gradient manipulation techniques while offering significant improvements in space and computational efficiency. Code is available at \url{https://github.com/Cranial-XIX/FAMO}.
FAMO: Fast Adaptive Multitask Optimization
[ "Bo Liu", "Yihao Feng", "Peter Stone", "qiang liu" ]
Conference
poster
2306.03792
[ "https://github.com/cranial-xix/famo" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zMNUNd9zs1
@inproceedings{ halvagal2023implicit, title={Implicit variance regularization in non-contrastive {SSL}}, author={Manu Srinath Halvagal and Axel Laborieux and Friedemann Zenke}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zMNUNd9zs1} }
Non-contrastive SSL methods like BYOL and SimSiam rely on asymmetric predictor networks to avoid representational collapse without negative samples. Yet, how predictor networks facilitate stable learning is not fully understood. While previous theoretical analyses assumed Euclidean losses, most practical implementations rely on cosine similarity. To gain further theoretical insight into non-contrastive SSL, we analytically study learning dynamics in conjunction with Euclidean and cosine similarity in the eigenspace of closed-form linear predictor networks. We show that both avoid collapse through implicit variance regularization albeit through different dynamical mechanisms. Moreover, we find that the eigenvalues act as effective learning rate multipliers and propose a family of isotropic loss functions (IsoLoss) that equalize convergence rates across eigenmodes. Empirically, IsoLoss speeds up the initial learning dynamics and increases robustness, thereby allowing us to dispense with the EMA target network typically used with non-contrastive methods. Our analysis sheds light on the variance regularization mechanisms of non-contrastive SSL and lays the theoretical grounds for crafting novel loss functions that shape the learning dynamics of the predictor's spectrum.
Implicit variance regularization in non-contrastive SSL
[ "Manu Srinath Halvagal", "Axel Laborieux", "Friedemann Zenke" ]
Conference
poster
2212.04858
[ "https://github.com/fmi-basel/implicit-var-reg" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zIEaOZ0saA
@inproceedings{ brand2023new, title={New Complexity-Theoretic Frontiers of Tractability for Neural Network Training}, author={Cornelius Brand and Robert Ganian and Mathis Rocton}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zIEaOZ0saA} }
In spite of the fundamental role of neural networks in contemporary machine learning research, our understanding of the computational complexity of optimally training neural networks remains limited even when dealing with the simplest kinds of activation functions. Indeed, while there have been a number of very recent results that establish ever-tighter lower bounds for the problem under linear and ReLU activation functions, little progress has been made towards the identification of novel polynomial-time tractable network architectures. In this article we obtain novel algorithmic upper bounds for training linear- and ReLU-activated neural networks to optimality which push the boundaries of tractability for these problems beyond the previous state of the art.
New Complexity-Theoretic Frontiers of Tractability for Neural Network Training
[ "Cornelius Brand", "Robert Ganian", "Mathis Rocton" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zGdH4tKtOW
@inproceedings{ shen2023optimal, title={Optimal Treatment Regimes for Proximal Causal Learning}, author={Tao Shen and Yifan Cui}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zGdH4tKtOW} }
A common concern when a policymaker draws causal inferences from and makes decisions based on observational data is that the measured covariates are insufficiently rich to account for all sources of confounding, i.e., the standard unconfoundedness assumption fails to hold. The recently proposed proximal causal inference framework shows that proxy variables that abound in real-life scenarios can be leveraged to identify causal effects and therefore facilitate decision-making. Building upon this line of work, we propose a novel optimal individualized treatment regime based on so-called outcome and treatment confounding bridges. We then show that the value function of this new optimal treatment regime is superior to that of existing ones in the literature. Theoretical guarantees, including identification, superiority, excess value bound, and consistency of the estimated regime, are established. Furthermore, we demonstrate the proposed optimal regime via numerical experiments and a real data application.
Optimal Treatment Regimes for Proximal Causal Learning
[ "Tao Shen", "Yifan Cui" ]
Conference
poster
2212.09494
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zGRWp7yRqd
@inproceedings{ beretta2023multiswap, title={Multi-Swap k-Means++}, author={Lorenzo Beretta and Vincent Cohen-Addad and Silvio Lattanzi and Nikos Parotsidis}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zGRWp7yRqd} }
The $k$-means++ algorithm of Arthur and Vassilvitskii (SODA 2007) is often the practitioners' choice algorithm for optimizing the popular $k$-means clustering objective and is known to give an $O(\log k)$-approximation in expectation. To obtain higher quality solutions, Lattanzi and Sohler (ICML 2019) proposed augmenting $k$-means++ with $O(k \log \log k)$ local-search steps obtained through the $k$-means++ sampling distribution to yield a $c$-approximation to the $k$-means clustering problem, where $c$ is a large absolute constant. Here we generalize and extend their local-search algorithm by considering larger and more sophisticated local-search neighborhoods hence allowing to swap multiple centers at the same time. Our algorithm achieves a $9 + \varepsilon$ approximation ratio, which is the best possible for local search. Importantly we show that our algorithm is practical, namely easy to implement and fast enough to run on a variety of classic datasets, and outputs solutions of better cost.
Multi-Swap k-Means++
[ "Lorenzo Beretta", "Vincent Cohen-Addad", "Silvio Lattanzi", "Nikos Parotsidis" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zEoP4vzFKy
@inproceedings{ peychev2023automated, title={Automated Classification of Model Errors on ImageNet}, author={Momchil Peychev and Mark Niklas Mueller and Marc Fischer and Martin Vechev}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zEoP4vzFKy} }
While the ImageNet dataset has been driving computer vision research over the past decade, significant label noise and ambiguity have made top-1 accuracy an insufficient measure of further progress. To address this, new label-sets and evaluation protocols have been proposed for ImageNet showing that state-of-the-art models already achieve over 95% accuracy and shifting the focus to investigating why the remaining errors persist. Recent work in this direction employed a panel of experts to manually categorize all remaining classification errors for two selected models. However, this process is time-consuming, prone to inconsistencies, and requires trained experts, making it unsuitable for regular model evaluation and thus limiting its utility. To overcome these limitations, we propose the first automated error classification framework, a valuable tool to study how modeling choices affect error distributions. We use our framework to comprehensively evaluate the error distribution of over 900 models. Perhaps surprisingly, we find that across model architectures, scales, and pre-training corpora, top-1 accuracy is a strong predictor for the *portion* of all error types. In particular, we observe that the portion of severe errors drops significantly with top-1 accuracy indicating that, while it underreports a model's true performance, it remains a valuable performance metric. We release all our code at https://github.com/eth-sri/automated-error-analysis.
Automated Classification of Model Errors on ImageNet
[ "Momchil Peychev", "Mark Niklas Mueller", "Marc Fischer", "Martin Vechev" ]
Conference
poster
2401.02430
[ "https://github.com/eth-sri/automated-error-analysis" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zEm6hF97Pz
@inproceedings{ choquette-choo2023amplified, title={(Amplified) Banded Matrix Factorization: A unified approach to private training}, author={Christopher A. Choquette-Choo and Arun Ganesh and Ryan McKenna and Hugh Brendan McMahan and J Keith Rush and Abhradeep Guha Thakurta and Zheng Xu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zEm6hF97Pz} }
Matrix factorization (MF) mechanisms for differential privacy (DP) have substantially improved the state-of-the-art in privacy-utility-computation tradeoffs for ML applications in a variety of scenarios, but in both the centralized and federated settings there remain instances where either MF cannot be easily applied, or other algorithms provide better tradeoffs (typically, as $\epsilon$ becomes small). In this work, we show how MF can subsume prior state-of-the-art algorithms in both federated and centralized training settings, across all privacy budgets. The key technique throughout is the construction of MF mechanisms with banded matrices (lower-triangular matrices with at most $\hat{b}$ nonzero bands including the main diagonal). For cross-device federated learning (FL), this enables multiple participations with a relaxed device participation schema compatible with practical FL infrastructure (as demonstrated by a production deployment). In the centralized setting, we prove that banded matrices enjoy the same privacy amplification results as the ubiquitous DP-SGD algorithm, but can provide strictly better performance in most scenarios---this lets us always at least match DP-SGD, and often outperform it.
(Amplified) Banded Matrix Factorization: A unified approach to private training
[ "Christopher A. Choquette-Choo", "Arun Ganesh", "Ryan McKenna", "Hugh Brendan McMahan", "J Keith Rush", "Abhradeep Guha Thakurta", "Zheng Xu" ]
Conference
poster
2306.08153
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zDbsSscmuj
@inproceedings{ guan2023leveraging, title={Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Planning}, author={Lin Guan and Karthik Valmeekam and Sarath Sreedharan and Subbarao Kambhampati}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zDbsSscmuj} }
There is a growing interest in applying pre-trained large language models (LLMs) to planning problems. However, methods that use LLMs directly as planners are currently impractical due to several factors, including limited correctness of plans, strong reliance on feedback from interactions with simulators or even the actual environment, and the inefficiency in utilizing human feedback. In this work, we introduce a novel alternative paradigm that constructs an explicit world (domain) model in planning domain definition language (PDDL) and then uses it to plan with sound domain-independent planners. To address the fact that LLMs may not generate a fully functional PDDL model initially, we employ LLMs as an interface between PDDL and sources of corrective feedback, such as PDDL validators and humans. For users who lack a background in PDDL, we show that LLMs can translate PDDL into natural language and effectively encode corrective feedback back to the underlying domain model. Our framework not only enjoys the correctness guarantee offered by the external planners but also reduces human involvement by allowing users to correct domain models at the beginning, rather than inspecting and correcting (through interactive prompting) every generated plan as in previous work. On two IPC domains and a Household domain that is more complicated than commonly used benchmarks such as ALFWorld, we demonstrate that GPT-4 can be leveraged to produce high-quality PDDL models for over 40 actions, and the corrected PDDL models are then used to successfully solve 48 challenging planning tasks. Resources, including the source code, are released at: https://guansuns.github.io/pages/llm-dm.
Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Planning
[ "Lin Guan", "Karthik Valmeekam", "Sarath Sreedharan", "Subbarao Kambhampati" ]
Conference
poster
2305.14909
[ "" ]
https://huggingface.co/papers/2305.14909
1
1
0
4
1
[]
[]
[]
null
https://openreview.net/forum?id=zD6lXmTPPh
@inproceedings{ alfano2023a, title={A Novel Framework for Policy Mirror Descent with General Parameterization and Linear Convergence}, author={Carlo Alfano and Rui Yuan and Patrick Rebeschini}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zD6lXmTPPh} }
Modern policy optimization methods in reinforcement learning, such as TRPO and PPO, owe their success to the use of parameterized policies. However, while theoretical guarantees have been established for this class of algorithms, especially in the tabular setting, the use of general parameterization schemes remains mostly unjustified. In this work, we introduce a novel framework for policy optimization based on mirror descent that naturally accommodates general parameterizations. The policy class induced by our scheme recovers known classes, e.g., softmax, and generates new ones depending on the choice of mirror map. Using our framework, we obtain the first result that guarantees linear convergence for a policy-gradient-based method involving general parameterization. To demonstrate the ability of our framework to accommodate general parameterization schemes, we provide its sample complexity when using shallow neural networks, show that it represents an improvement upon the previous best results, and empirically validate the effectiveness of our theoretical claims on classic control tasks.
A Novel Framework for Policy Mirror Descent with General Parameterization and Linear Convergence
[ "Carlo Alfano", "Rui Yuan", "Patrick Rebeschini" ]
Conference
poster
2301.13139
[ "https://github.com/c-alfano/approximate-mirror-policy-optimization" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zCFfv49MjE
@inproceedings{ reid2023quasimonte, title={Quasi-Monte Carlo Graph Random Features}, author={Isaac Reid and Adrian Weller and Krzysztof Marcin Choromanski}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zCFfv49MjE} }
We present a novel mechanism to improve the accuracy of the recently-introduced class of graph random features (GRFs). Our method induces negative correlations between the lengths of the algorithm's random walks by imposing antithetic termination: a procedure to sample more diverse random walks which may be of independent interest. It has a trivial drop-in implementation. We derive strong theoretical guarantees on the properties of these quasi-Monte Carlo GRFs (q-GRFs), proving that they yield lower-variance estimators of the $2$-regularised Laplacian kernel under mild conditions. Remarkably, our results hold for any graph topology. We demonstrate empirical accuracy improvements on a variety of tasks including a new practical application: time-efficient approximation of the graph diffusion process. To our knowledge, q-GRFs constitute the first rigorously studied quasi-Monte Carlo scheme for kernels defined on combinatorial objects, inviting new research on correlations between graph random walks.
Quasi-Monte Carlo Graph Random Features
[ "Isaac Reid", "Krzysztof Marcin Choromanski", "Adrian Weller" ]
Conference
spotlight
2305.12470
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zAXg8dW8ZO
@inproceedings{ tran2023onelineofcode, title={One-Line-of-Code Data Mollification Improves Optimization of Likelihood-based Generative Models}, author={Ba-Hien Tran and Giulio Franzese and Pietro Michiardi and Maurizio Filippone}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zAXg8dW8ZO} }
Generative Models (GMs) have attracted considerable attention due to their tremendous success in various domains, such as computer vision, where they are capable of generating impressive realistic-looking images. Likelihood-based GMs are attractive due to the possibility of generating new data with a single model evaluation. However, they typically achieve lower sample quality compared to state-of-the-art score-based Diffusion Models (DMs). This paper provides a significant step in the direction of addressing this limitation. The idea is to borrow one of the strengths of score-based DMs, which is the ability to perform accurate density estimation in low-density regions and to address manifold overfitting by means of data mollification. We propose a view of data mollification within likelihood-based GMs as a continuation method, whereby the optimization objective smoothly transitions from simple-to-optimize to the original target. Crucially, data mollification can be implemented by adding one line of code in the optimization loop, and we demonstrate that this provides a boost in generation quality of likelihood-based GMs, without computational overheads. We report results on real-world image data sets and UCI benchmarks with popular likelihood-based GMs, including variants of variational autoencoders and normalizing flows, showing large improvements in FID score and density estimation.
One-Line-of-Code Data Mollification Improves Optimization of Likelihood-based Generative Models
[ "Ba-Hien Tran", "Giulio Franzese", "Pietro Michiardi", "Maurizio Filippone" ]
Conference
poster
2305.18900
[ "https://github.com/tranbahien/data-mollification" ]
https://huggingface.co/papers/2305.18900
0
0
0
4
1
[]
[]
[]
null
https://openreview.net/forum?id=zAQK5r1enm
@inproceedings{ zhao2023decision, title={Decision Stacks: Flexible Reinforcement Learning via Modular Generative Models}, author={Siyan Zhao and Aditya Grover}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zAQK5r1enm} }
Reinforcement learning presents an attractive paradigm to reason about several distinct aspects of sequential decision making, such as specifying complex goals, planning future observations and actions, and critiquing their utilities. However, the combined integration of these capabilities poses competing algorithmic challenges in retaining maximal expressivity while allowing for flexibility in modeling choices for efficient learning and inference. We present Decision Stacks, a generative framework that decomposes goal-conditioned policy agents into 3 generative modules. These modules simulate the temporal evolution of observations, rewards, and actions via independent generative models that can be learned in parallel via teacher forcing. Our framework guarantees both expressivity and flexibility in designing individual modules to account for key factors such as architectural bias, optimization objective and dynamics, transferability across domains, and inference speed. Our empirical results demonstrate the effectiveness of Decision Stacks for offline policy optimization for several MDP and POMDP environments, outperforming existing methods and enabling flexible generative decision making.
Decision Stacks: Flexible Reinforcement Learning via Modular Generative Models
[ "Siyan Zhao", "Aditya Grover" ]
Conference
poster
2306.06253
[ "https://github.com/siyan-zhao/decision-stacks" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zANxvzflMl
@inproceedings{ subramanian2023towards, title={Towards Foundation Models for Scientific Machine Learning: Characterizing Scaling and Transfer Behavior}, author={Shashank Subramanian and Peter Harrington and Kurt Keutzer and Wahid Bhimji and Dmitriy Morozov and Michael W. Mahoney and Amir Gholami}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=zANxvzflMl} }
Pre-trained machine learning (ML) models have shown great performance for a wide range of applications, in particular in natural language processing (NLP) and computer vision (CV). Here, we study how pre-training could be used for scientific machine learning (SciML) applications, specifically in the context of transfer learning. We study the transfer behavior of these models as (i) the pretrained model size is scaled, (ii) the downstream training dataset size is scaled, (iii) the physics parameters are systematically pushed out of distribution, and (iv) how a single model pre-trained on a mixture of different physics problems can be adapted to various downstream applications. We find that—when fine-tuned appropriately—transfer learning can help reach desired accuracy levels with orders of magnitude fewer downstream examples (across different tasks that can even be out-of-distribution) than training from scratch, with consistent behaviour across a wide range of downstream examples. We also find that fine-tuning these models yields more performance gains as model size increases, compared to training from scratch on new downstream tasks. These results hold for a broad range of PDE learning tasks. All in all, our results demonstrate the potential of the “pre-train and fine-tune” paradigm for SciML problems, demonstrating a path towards building SciML foundation models. Our code is available as open-source.
Towards Foundation Models for Scientific Machine Learning: Characterizing Scaling and Transfer Behavior
[ "Shashank Subramanian", "Peter Harrington", "Kurt Keutzer", "Wahid Bhimji", "Dmitriy Morozov", "Michael W. Mahoney", "Amir Gholami" ]
Conference
poster
2306.00258
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=z9d9DsjAPH
@inproceedings{ xu2023cyclenet, title={CycleNet: Rethinking Cycle Consistency in Text-Guided Diffusion for Image Manipulation}, author={Sihan Xu and Ziqiao Ma and Yidong Huang and Honglak Lee and Joyce Chai}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=z9d9DsjAPH} }
Diffusion models (DMs) have enabled breakthroughs in image synthesis tasks but lack an intuitive interface for consistent image-to-image (I2I) translation. Various methods have been explored to address this issue, including mask-based methods, attention-based methods, and image-conditioning. However, it remains a critical challenge to enable unpaired I2I translation with pre-trained DMs while maintaining satisfying consistency. This paper introduces Cyclenet, a novel but simple method that incorporates cycle consistency into DMs to regularize image manipulation. We validate Cyclenet on unpaired I2I tasks of different granularities. Besides the scene and object level translation, we additionally contribute a multi-domain I2I translation dataset to study the physical state changes of objects. Our empirical studies show that Cyclenet is superior in translation consistency and quality, and can generate high-quality images for out-of-domain distributions with a simple change of the textual prompt. Cyclenet is a practical framework, which is robust even with very limited training data (around 2k) and requires minimal computational resources (1 GPU) to train. Project homepage: https://cyclenetweb.github.io/
CycleNet: Rethinking Cycle Consistency in Text-Guided Diffusion for Image Manipulation
[ "Sihan Xu", "Ziqiao Ma", "Yidong Huang", "Honglak Lee", "Joyce Chai" ]
Conference
poster
2310.13165
[ "https://github.com/sled-group/cyclenet" ]
https://huggingface.co/papers/2310.13165
2
0
0
5
1
[]
[]
[]
null
https://openreview.net/forum?id=z4vKRmq7UO
@inproceedings{ bendel2023a, title={A Regularized Conditional {GAN} for Posterior Sampling in Image Recovery Problems}, author={Matthew C Bendel and Rizwan Ahmad and Philip Schniter}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=z4vKRmq7UO} }
In image recovery problems, one seeks to infer an image from distorted, incomplete, and/or noise-corrupted measurements. Such problems arise in magnetic resonance imaging (MRI), computed tomography, deblurring, super-resolution, inpainting, phase retrieval, image-to-image translation, and other applications. Given a training set of signal/measurement pairs, we seek to do more than just produce one good image estimate. Rather, we aim to rapidly and accurately sample from the posterior distribution. To do this, we propose a regularized conditional Wasserstein GAN that generates dozens of high-quality posterior samples per second. Our regularization comprises an $\ell_1$ penalty and an adaptively weighted standard-deviation reward. Using quantitative evaluation metrics like conditional Fréchet inception distance, we demonstrate that our method produces state-of-the-art posterior samples in both multicoil MRI and large-scale inpainting applications. The code for our model can be found here: https://github.com/matt-bendel/rcGAN.
A Regularized Conditional GAN for Posterior Sampling in Image Recovery Problems
[ "Matthew C Bendel", "Rizwan Ahmad", "Philip Schniter" ]
Conference
poster
2210.13389
[ "https://github.com/matt-bendel/rcgan" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=z3HACY5CMa
@inproceedings{ gui2023joint, title={Joint Learning of Label and Environment Causal Independence for Graph Out-of-Distribution Generalization}, author={Shurui Gui and Meng Liu and Xiner Li and Youzhi Luo and Shuiwang Ji}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=z3HACY5CMa} }
We tackle the problem of graph out-of-distribution (OOD) generalization. Existing graph OOD algorithms either rely on restricted assumptions or fail to exploit environment information in training data. In this work, we propose to simultaneously incorporate label and environment causal independence (LECI) to fully make use of label and environment information, thereby addressing the challenges faced by prior methods on identifying causal and invariant subgraphs. We further develop an adversarial training strategy to jointly optimize these two properties for causal subgraph discovery with theoretical guarantees. Extensive experiments and analysis show that LECI significantly outperforms prior methods on both synthetic and real-world datasets, establishing LECI as a practical and effective solution for graph OOD generalization.
Joint Learning of Label and Environment Causal Independence for Graph Out-of-Distribution Generalization
[ "Shurui Gui", "Meng Liu", "Xiner Li", "Youzhi Luo", "Shuiwang Ji" ]
Conference
poster
2306.01103
[ "https://github.com/divelab/LECI" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=z37ki6nqAY
@inproceedings{ mccauley2023online, title={Online List Labeling with Predictions}, author={Samuel McCauley and Benjamin Moseley and Aidin Niaparast and Shikha Singh}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=z37ki6nqAY} }
A growing line of work shows how learned predictions can be used to break through worst-case barriers to improve the running time of an algorithm. However, incorporating predictions into data structures with strong theoretical guarantees remains underdeveloped. This paper takes a step in this direction by showing that predictions can be leveraged in the fundamental online list labeling problem. In the problem, $n$ items arrive over time and must be stored in sorted order in an array of size $\Theta(n)$. The array slot of an element is its label and the goal is to maintain sorted order while minimizing the total number of elements moved (i.e., relabeled). We design a new list labeling data structure and bound its performance in two models. In the worst-case learning-augmented model, we give guarantees in terms of the error in the predictions. Our data structure provides strong guarantees: it is optimal for any prediction error and guarantees the best-known worst-case bound even when the predictions are entirely erroneous. We also consider a stochastic error model and bound the performance in terms of the expectation and variance of the error. Finally, the theoretical results are demonstrated empirically. In particular, we show that our data structure has strong performance on real temporal data sets where predictions are constructed from elements that arrived in the past, as is typically done in a practical use case.
Online List Labeling with Predictions
[ "Samuel McCauley", "Benjamin Moseley", "Aidin Niaparast", "Shikha Singh" ]
Conference
spotlight
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=z2BHMLA8pM
@inproceedings{ souza2023thin, title={Thin and deep Gaussian processes}, author={Daniel Augusto de Souza and Alexander V Nikitin and S. T. John and Magnus Ross and Mauricio A {\'A}lvarez and Marc Peter Deisenroth and Jo{\~a}o Paulo Pordeus Gomes and Diego Mesquita and C{\'e}sar Lincoln Mattos}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=z2BHMLA8pM} }
Gaussian processes (GPs) can provide a principled approach to uncertainty quantification with easy-to-interpret kernel hyperparameters, such as the lengthscale, which controls the correlation distance of function values. However, selecting an appropriate kernel can be challenging. Deep GPs avoid manual kernel engineering by successively parameterizing kernels with GP layers, allowing them to learn low-dimensional embeddings of the inputs that explain the output data. Following the architecture of deep neural networks, the most common deep GPs warp the input space layer-by-layer but lose all the interpretability of shallow GPs. An alternative construction is to successively parameterize the lengthscale of a kernel, improving the interpretability but ultimately giving away the notion of learning lower-dimensional embeddings. Unfortunately, both methods are susceptible to particular pathologies which may hinder fitting and limit their interpretability. This work proposes a novel synthesis of both previous approaches: Thin and Deep GP (TDGP). Each TDGP layer defines locally linear transformations of the original input data maintaining the concept of latent embeddings while also retaining the interpretation of lengthscales of a kernel. Moreover, unlike the prior solutions, TDGP induces non-pathological manifolds that admit learning lower-dimensional representations. We show with theoretical and experimental results that i) TDGP is, unlike previous models, tailored to specifically discover lower-dimensional manifolds in the input data, ii) TDGP behaves well when increasing the number of layers, and iii) TDGP performs well in standard benchmark datasets.
Thin and deep Gaussian processes
[ "Daniel Augusto de Souza", "Alexander V Nikitin", "S. T. John", "Magnus Ross", "Mauricio A Álvarez", "Marc Peter Deisenroth", "João Paulo Pordeus Gomes", "Diego Mesquita", "César Lincoln Mattos" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=z06npyCwDq
@inproceedings{ jiang2023prermsnorm, title={Pre-{RMSN}orm and Pre-{CRMSN}orm Transformers: Equivalent and Efficient Pre-{LN} Transformers}, author={Zixuan Jiang and Jiaqi Gu and Hanqing Zhu and David Z. Pan}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=z06npyCwDq} }
Transformers have achieved great success in machine learning applications. Normalization techniques, such as Layer Normalization (LayerNorm, LN) and Root Mean Square Normalization (RMSNorm), play a critical role in accelerating and stabilizing the training of Transformers. While LayerNorm recenters and rescales input vectors, RMSNorm only rescales the vectors by their RMS value. Despite being more computationally efficient, RMSNorm may compromise the representation ability of Transformers. There is currently no consensus regarding the preferred normalization technique, as some models employ LayerNorm while others utilize RMSNorm, especially in recent large language models. It is challenging to convert Transformers with one normalization to the other type. While there is an ongoing disagreement between the two normalization types, we propose a solution to unify two mainstream Transformer architectures, Pre-LN and Pre-RMSNorm Transformers. By removing the inherent redundant mean information in the main branch of Pre-LN Transformers, we can reduce LayerNorm to RMSNorm, achieving higher efficiency. We further propose the Compressed RMSNorm (CRMSNorm) and Pre-CRMSNorm Transformer based on a lossless compression of the zero-mean vectors. We formally establish the equivalence of Pre-LN, Pre-RMSNorm, and Pre-CRMSNorm Transformer variants in both training and inference. It implies that Pre-LN Transformers can be substituted with Pre-(C)RMSNorm counterparts at almost no cost, offering the same arithmetic functionality along with free efficiency improvement. Experiments demonstrate that we can reduce the training and inference time of Pre-LN Transformers by 1% - 10%.
Pre-RMSNorm and Pre-CRMSNorm Transformers: Equivalent and Efficient Pre-LN Transformers
[ "Zixuan Jiang", "Jiaqi Gu", "Hanqing Zhu", "David Z. Pan" ]
Conference
spotlight
2305.14858
[ "https://github.com/zixuanjiang/pre-rmsnorm-transformer" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yzZbwQPkmP
@inproceedings{ engelken2023sparseprop, title={SparseProp: Efficient Event-Based Simulation and Training of Sparse Recurrent Spiking Neural Networks}, author={Rainer Engelken}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yzZbwQPkmP} }
Spiking Neural Networks (SNNs) are biologically-inspired models that are capable of processing information in streams of action potentials. However, simulating and training SNNs is computationally expensive due to the need to solve large systems of coupled differential equations. In this paper, we propose a novel event-based algorithm called SparseProp for simulating and training sparse SNNs. Our algorithm reduces the computational cost of both forward pass and backward pass operations from O(N) to O(log(N)) per network spike, enabling numerically exact simulations of large spiking networks and their efficient training using backpropagation through time. By exploiting the sparsity of the network, SparseProp avoids iterating through all neurons at every spike and uses efficient state updates. We demonstrate the effectiveness of SparseProp for several classical integrate-and-fire neuron models, including simulating a sparse SNN with one million LIF neurons, which is sped up by more than four orders of magnitude compared to previous implementations. Our work provides an efficient and exact solution for training large-scale spiking neural networks and opens up new possibilities for building more sophisticated brain-inspired models.
SparseProp: Efficient Event-Based Simulation and Training of Sparse Recurrent Spiking Neural Networks
[ "Rainer Engelken" ]
Conference
poster
2312.17216
[ "https://github.com/rainerengelken/sparseprop" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yyLFUPNEiT
@inproceedings{ suya2023what, title={What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners?}, author={Fnu Suya and Xiao Zhang and Yuan Tian and David Evans}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yyLFUPNEiT} }
We study indiscriminate poisoning for linear learners where an adversary injects a few crafted examples into the training data with the goal of forcing the induced model to incur higher test error. Inspired by the observation that linear learners on some datasets are able to resist the best known attacks even without any defenses, we further investigate whether datasets can be inherently robust to indiscriminate poisoning attacks for linear learners. For theoretical Gaussian distributions, we rigorously characterize the behavior of an optimal poisoning attack, defined as the poisoning strategy that attains the maximum risk of the induced model at a given poisoning budget. Our results prove that linear learners can indeed be robust to indiscriminate poisoning if the class-wise data distributions are well-separated with low variance and the size of the constraint set containing all permissible poisoning points is also small. These findings largely explain the drastic variation in empirical attack performance of the state-of-the-art poisoning attacks on linear learners across benchmark datasets, making an important initial step towards understanding the underlying reasons some learning tasks are vulnerable to data poisoning attacks.
What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners?
[ "Fnu Suya", "Xiao Zhang", "Yuan Tian", "David Evans" ]
Conference
poster
2307.01073
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ywrPcBEXdC
@inproceedings{ yue2023revisiting, title={Revisiting Adversarial Robustness Distillation from the Perspective of Robust Fairness}, author={Xinli Yue and Ningping Mou and Qian Wang and Lingchen Zhao}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=ywrPcBEXdC} }
Adversarial Robustness Distillation (ARD) aims to transfer the robustness of large teacher models to small student models, facilitating the attainment of robust performance on resource-limited devices. However, existing research on ARD primarily focuses on the overall robustness of student models, overlooking the crucial aspect of $\textit{robust fairness}$. Specifically, these models may demonstrate strong robustness on some classes of data while exhibiting high vulnerability on other classes. Unfortunately, the "buckets effect" implies that the robustness of the deployed model depends on the classes with the lowest level of robustness. In this paper, we first investigate the inheritance of robust fairness during ARD and reveal that student models only partially inherit robust fairness from teacher models. We further validate this issue through fine-grained experiments with various model capacities and find that it may arise due to the gap in capacity between teacher and student models, as well as the existing methods treating each class equally during distillation. Based on these observations, we propose $\textbf{Fair}$ $\textbf{A}$dversarial $\textbf{R}$obustness $\textbf{D}$istillation (Fair-ARD), a novel framework for enhancing the robust fairness of student models by increasing the weights of difficult classes, and design a geometric perspective-based method to quantify the difficulty of different classes for determining the weights. Extensive experiments show that Fair-ARD surpasses both state-of-the-art ARD methods and existing robust fairness algorithms in terms of robust fairness (e.g., the worst-class robustness under AutoAttack is improved by at most 12.3\% and 5.3\% using ResNet18 on CIFAR10, respectively), while also slightly improving overall robustness. Our code is available at: [https://github.com/NISP-official/Fair-ARD](https://github.com/NISP-official/Fair-ARD).
Revisiting Adversarial Robustness Distillation from the Perspective of Robust Fairness
[ "Xinli Yue", "Ningping Mou", "Qian Wang", "Lingchen Zhao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yw1v4RqvPk
@inproceedings{ zhang2023computing, title={Computing Optimal Equilibria and Mechanisms via Learning in Zero-Sum Extensive-Form Games}, author={Brian Hu Zhang and Gabriele Farina and Ioannis Anagnostides and Federico Cacciamani and Stephen Marcus McAleer and Andreas Alexander Haupt and Andrea Celli and Nicola Gatti and Vincent Conitzer and Tuomas Sandholm}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yw1v4RqvPk} }
We introduce a new approach for computing optimal equilibria via learning in games. It applies to extensive-form settings with any number of players, including mechanism design, information design, and solution concepts such as correlated, communication, and certification equilibria. We observe that optimal equilibria are minimax equilibrium strategies of a player in an extensive-form zero-sum game. This reformulation allows us to apply techniques for learning in zero-sum games, yielding the first learning dynamics that converge to optimal equilibria, not only in empirical averages, but also in iterates. We demonstrate the practical scalability and flexibility of our approach by attaining state-of-the-art performance in benchmark tabular games, and by computing an optimal mechanism for a sequential auction design problem using deep reinforcement learning.
Computing Optimal Equilibria and Mechanisms via Learning in Zero-Sum Extensive-Form Games
[ "Brian Hu Zhang", "Gabriele Farina", "Ioannis Anagnostides", "Federico Cacciamani", "Stephen Marcus McAleer", "Andreas Alexander Haupt", "Andrea Celli", "Nicola Gatti", "Vincent Conitzer", "Tuomas Sandholm" ]
Conference
poster
2306.05216
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yvqqkOn9Pi
@inproceedings{ meulemans2023would, title={Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis}, author={Alexander Meulemans and Simon Schug and Seijin Kobayashi and Nathaniel Daw and Greg Wayne}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yvqqkOn9Pi} }
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action’s influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: ‘Would the agent still have reached this reward if it had taken another action?’. We show that measuring contributions w.r.t. rewarding _states_, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
[ "Alexander Meulemans", "Simon Schug", "Seijin Kobayashi", "Nathaniel Daw", "Greg Wayne" ]
Conference
spotlight
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yubwSWol6K
@inproceedings{ flouris2023canonical, title={Canonical normalizing flows for manifold learning}, author={Kyriakos Flouris and Ender Konukoglu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yubwSWol6K} }
Manifold learning flows are a class of generative modelling techniques that assume a low-dimensional manifold description of the data. The embedding of such a manifold into the high-dimensional space of the data is achieved via learnable invertible transformations. Therefore, once the manifold is properly aligned via a reconstruction loss, the probability density is tractable on the manifold and maximum likelihood can be used to optimize the network parameters. Naturally, the lower-dimensional representation of the data requires an injective-mapping. Recent approaches were able to enforce that the density aligns with the modelled manifold, while efficiently calculating the density volume-change term when embedding to the higher-dimensional space. However, unless the injective-mapping is analytically predefined, the learned manifold is not necessarily an \emph{efficient representation} of the data. Namely, the latent dimensions of such models frequently learn an entangled intrinsic basis, with degenerate information being stored in each dimension. Alternatively, if a locally orthogonal and/or sparse basis is to be learned, here coined canonical intrinsic basis, it can serve in learning a more compact latent space representation. Toward this end, we propose a canonical manifold learning flow method, where a novel optimization objective enforces the transformation matrix to have few prominent and non-degenerate basis functions. We demonstrate that by minimizing the off-diagonal manifold metric elements $\ell_1$-norm, we can achieve such a basis, which is simultaneously sparse and/or orthogonal. Canonical manifold flow yields a more efficient use of the latent space, automatically generating fewer prominent and distinct dimensions to represent data, and consequently a better approximation of target distributions than other manifold flow methods in most experiments we conducted, resulting in lower FID scores.
Canonical normalizing flows for manifold learning
[ "Kyriakos Flouris", "Ender Konukoglu" ]
Conference
poster
2310.12743
[ "https://github.com/k-flouris/cmf" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ytrhsvGP0r
@inproceedings{ vos2023epidemic, title={Epidemic Learning: Boosting Decentralized Learning with Randomized Communication}, author={Martijn De Vos and Sadegh Farhadkhani and Rachid Guerraoui and Anne-marie Kermarrec and Rafael Pires and Rishi Sharma}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=ytrhsvGP0r} }
We present Epidemic Learning (EL), a simple yet powerful decentralized learning (DL) algorithm that leverages changing communication topologies to achieve faster model convergence compared to conventional DL approaches. At each round of EL, each node sends its model updates to a random sample of $s$ other nodes (in a system of $n$ nodes). We provide an extensive theoretical analysis of EL, demonstrating that its changing topology culminates in superior convergence properties compared to the state-of-the-art (static and dynamic) topologies. Considering smooth non-convex loss functions, the number of transient iterations for EL, i.e., the rounds required to achieve asymptotic linear speedup, is in $O(n^3/s^2)$ which outperforms the best-known bound $O(n^3)$ by a factor of $s^2$, indicating the benefit of randomized communication for DL. We empirically evaluate EL in a 96-node network and compare its performance with state-of-the-art DL approaches. Our results illustrate that EL converges up to $1.7\times$ quicker than baseline DL algorithms and attains $2.2\%$ higher accuracy for the same communication volume.
Epidemic Learning: Boosting Decentralized Learning with Randomized Communication
[ "Martijn De Vos", "Sadegh Farhadkhani", "Rachid Guerraoui", "Anne-marie Kermarrec", "Rafael Pires", "Rishi Sharma" ]
Conference
poster
2310.01972
[ "https://github.com/sacs-epfl/decentralizepy" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ytTfonl9Wd
@inproceedings{ jogl2023expressivitypreserving, title={Expressivity-Preserving {GNN} Simulation}, author={Fabian Jogl and Maximilian Thiessen and Thomas G{\"a}rtner}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=ytTfonl9Wd} }
We systematically investigate graph transformations that enable standard message passing to simulate state-of-the-art graph neural networks (GNNs) without loss of expressivity. Using these, many state-of-the-art GNNs can be implemented with message passing operations from standard libraries, eliminating many sources of implementation issues and allowing for better code optimization. We distinguish between weak and strong simulation: weak simulation achieves the same expressivity only after several message passing steps while strong simulation achieves this after every message passing step. Our contribution leads to a direct way to translate common operations of non-standard GNNs to graph transformations that allow for strong or weak simulation. Our empirical evaluation shows competitive predictive performance of message passing on transformed graphs for various molecular benchmark datasets, in several cases surpassing the original GNNs.
Expressivity-Preserving GNN Simulation
[ "Fabian Jogl", "Maximilian Thiessen", "Thomas Gärtner" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ysqlhW0v26
@inproceedings{ haghtalab2023a, title={A Unifying Perspective on Multi-Calibration: Game Dynamics for Multi-Objective Learning}, author={Nika Haghtalab and Michael Jordan and Eric Zhao}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=ysqlhW0v26} }
We provide a unifying framework for the design and analysis of multi-calibrated predictors. By placing the multi-calibration problem in the general setting of multi-objective learning---where learning guarantees must hold simultaneously over a set of distributions and loss functions---we exploit connections to game dynamics to achieve state-of-the-art guarantees for a diverse set of multi-calibration learning problems. In addition to shedding light on existing multi-calibration guarantees and greatly simplifying their analysis, our approach also yields improved guarantees, such as error tolerances that scale with the square-root of group size versus the constant tolerances guaranteed by prior works, and an improvement in the complexity of $k$-class multi-calibration by an exponential factor of $k$ versus Gopalan et al. Beyond multi-calibration, we use these game dynamics to address emerging considerations in the study of group fairness and multi-distribution learning.
A Unifying Perspective on Multi-Calibration: Game Dynamics for Multi-Objective Learning
[ "Nika Haghtalab", "Michael Jordan", "Eric Zhao" ]
Conference
poster
2302.10863
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ypOiXjdfnU
@inproceedings{ tang2023emergent, title={Emergent Correspondence from Image Diffusion}, author={Luming Tang and Menglin Jia and Qianqian Wang and Cheng Perng Phoo and Bharath Hariharan}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=ypOiXjdfnU} }
Finding correspondences between images is a fundamental problem in computer vision. In this paper, we show that correspondence emerges in image diffusion models without any explicit supervision. We propose a simple strategy to extract this implicit knowledge out of diffusion networks as image features, namely DIffusion FeaTures (DIFT), and use them to establish correspondences between real images. Without any additional fine-tuning or supervision on the task-specific data or annotations, DIFT is able to outperform both weakly-supervised methods and competitive off-the-shelf features in identifying semantic, geometric, and temporal correspondences. Particularly for semantic correspondence, DIFT from Stable Diffusion is able to outperform DINO and OpenCLIP by 19 and 14 accuracy points respectively on the challenging SPair-71k benchmark. It even outperforms the state-of-the-art supervised methods on 9 out of 18 categories while remaining on par for the overall performance. Project page: https://diffusionfeatures.github.io.
Emergent Correspondence from Image Diffusion
[ "Luming Tang", "Menglin Jia", "Qianqian Wang", "Cheng Perng Phoo", "Bharath Hariharan" ]
Conference
poster
2306.03881
[ "" ]
https://huggingface.co/papers/2306.03881
3
6
2
5
1
[]
[]
[]
null
https://openreview.net/forum?id=yoZTVn0T50
@inproceedings{ wang2023camp, title={Ca{MP}: Causal Multi-policy Planning for Interactive Navigation in Multi-room Scenes}, author={Xiaohan Wang and Yuehu Liu and Xinhang Song and Beibei Wang and Shuqiang Jiang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yoZTVn0T50} }
Visual navigation has been widely studied under the assumption that there may be several clear routes to reach the goal. However, in more practical scenarios such as a house with several messy rooms, there may not be. Interactive Navigation (InterNav) considers agents navigating to their goals more effectively with object interactions, posing new challenges of learning interaction dynamics and an extra action space. Previous works learn a single vision-to-action policy with the guidance of designed representations. However, the causality between actions and outcomes is prone to be confounded when the attributes of obstacles are diverse and hard to measure. Learning a policy for long-term action planning in complex scenes also leads to extensive inefficient exploration. In this paper, we introduce a causal diagram of InterNav clarifying the confounding bias caused by obstacles. To address the problem, we propose a multi-policy model that enables the exploration of counterfactual interactions as well as reduces unnecessary exploration. We develop a large-scale dataset containing 600k task episodes in 12k multi-room scenes based on the ProcTHOR simulator and showcase the effectiveness of our method with evaluations on our dataset.
CaMP: Causal Multi-policy Planning for Interactive Navigation in Multi-room Scenes
[ "Xiaohan Wang", "Yuehu Liu", "Xinhang Song", "Beibei Wang", "Shuqiang Jiang" ]
Conference
poster
[ "https://github.com/polkalian/internav" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yoAmURKDJi
@inproceedings{ xing2023toa, title={{TOA}: Task-oriented Active {VQA}}, author={Xiaoying Xing and Mingfu Liang and Ying Wu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yoAmURKDJi} }
Knowledge-based visual question answering (VQA) requires external knowledge to answer the question about an image. Early methods explicitly retrieve knowledge from external knowledge bases, which often introduce noisy information. Recently, large language models like GPT-3 have shown encouraging performance as an implicit knowledge source and revealed planning abilities. However, current large language models cannot effectively understand image inputs, and thus it remains an open problem to extract the image information and feed it to large language models. Prior works have used image captioning and object descriptions to represent the image. However, they may either drop visual information essential for answering the question correctly or involve objects irrelevant to the task of interest. To address this problem, we propose to let large language models make an initial hypothesis according to their knowledge, then actively collect the visual evidence required to verify the hypothesis. In this way, the model can attend to the essential visual information in a task-oriented manner. We leverage several vision modules from the perspectives of spatial attention (i.e., Where to look) and attribute attention (i.e., What to look at), which is similar to human cognition. The experiments show that our proposed method outperforms the baselines on open-ended knowledge-based VQA datasets and presents a clear reasoning procedure with better interpretability.
TOA: Task-oriented Active VQA
[ "Xiaoying Xing", "Mingfu Liang", "Ying Wu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ymHM1qRUeb
@inproceedings{ saad2023covarianceadaptive, title={Covariance-adaptive best arm identification}, author={El Mehdi Saad and Gilles Blanchard and Nicolas Verzelen}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=ymHM1qRUeb} }
We consider the problem of best arm identification in the multi-armed bandit model, under fixed confidence. Given a confidence input $\delta$, the goal is to identify the arm with the highest mean reward with a probability of at least $1 - \delta$, while minimizing the number of arm pulls. While the literature provides solutions to this problem under the assumption of independent arms distributions, we propose a more flexible scenario where arms can be dependent and rewards can be sampled simultaneously. This framework allows the learner to estimate the covariance among the arms distributions, enabling a more efficient identification of the best arm. The relaxed setting we propose is relevant in various applications, such as clinical trials, where similarities between patients or drugs suggest underlying correlations in the outcomes. We introduce new algorithms that adapt to the unknown covariance of the arms and demonstrate through theoretical guarantees that substantial improvement can be achieved over the standard setting. Additionally, we provide new lower bounds for the relaxed setting and present numerical simulations that support our theoretical findings.
Covariance-adaptive best arm identification
[ "El Mehdi Saad", "Gilles Blanchard", "Nicolas Verzelen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ymBG2xs9Zf
@inproceedings{ liu2023modelbased, title={Model-Based Control with Sparse Neural Dynamics}, author={Ziang Liu and Genggeng Zhou and Jeff He and Tobia Marcucci and Li Fei-Fei and Jiajun Wu and Yunzhu Li}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=ymBG2xs9Zf} }
Learning predictive models from observations using deep neural networks (DNNs) is a promising new approach to many real-world planning and control problems. However, common DNNs are too unstructured for effective planning, and current control methods typically rely on extensive sampling or local gradient descent. In this paper, we propose a new framework for integrated model learning and predictive control that is amenable to efficient optimization algorithms. Specifically, we start with a ReLU neural model of the system dynamics and, with minimal losses in prediction accuracy, we gradually sparsify it by removing redundant neurons. This discrete sparsification process is approximated as a continuous problem, enabling an end-to-end optimization of both the model architecture and the weight parameters. The sparsified model is subsequently used by a mixed-integer predictive controller, which represents the neuron activations as binary variables and employs efficient branch-and-bound algorithms. Our framework is applicable to a wide variety of DNNs, from simple multilayer perceptrons to complex graph neural dynamics. It can efficiently handle tasks involving complicated contact dynamics, such as object pushing, compositional object sorting, and manipulation of deformable objects. Numerical and hardware experiments show that, despite the aggressive sparsification, our framework can deliver better closed-loop performance than existing state-of-the-art methods.
Model-Based Control with Sparse Neural Dynamics
[ "Ziang Liu", "Genggeng Zhou", "Jeff He", "Tobia Marcucci", "Li Fei-Fei", "Jiajun Wu", "Yunzhu Li" ]
Conference
poster
2312.12791
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ylPX5D7It7
@inproceedings{ sun2023understanding, title={Understanding How Consistency Works in Federated Learning via Stage-wise Relaxed Initialization}, author={Yan Sun and Li Shen and Dacheng Tao}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=ylPX5D7It7} }
Federated learning (FL) is a distributed paradigm that coordinates massive local clients to collaboratively train a global model via stage-wise local training processes on heterogeneous datasets. Previous works have implicitly shown that FL suffers from the "client-drift" problem, which is caused by the inconsistent optimum across local clients. However, a solid theoretical analysis explaining the impact of this local inconsistency is still lacking. To alleviate the negative impact of the "client drift" and explore its substance in FL, in this paper, we first design an efficient FL algorithm, FedInit, which employs a personalized relaxed initialization state at the beginning of each local training stage. Specifically, FedInit initializes the local state by moving away from the current global state towards the reverse direction of the latest local state. This relaxed initialization helps to revise the local divergence and enhance the local consistency level. Moreover, to further understand how inconsistency disrupts performance in FL, we introduce the excess risk analysis and study the divergence term to investigate the test error of the proposed FedInit method. Our studies show that, on non-convex objectives, the optimization error is not sensitive to this local inconsistency, which instead mainly affects the generalization error bound of FedInit. Extensive experiments are conducted to validate this conclusion. Our proposed FedInit achieves state-of-the-art (SOTA) results compared to several advanced benchmarks without any additional cost. Meanwhile, stage-wise relaxed initialization can also be incorporated into current advanced algorithms to achieve higher performance in the FL paradigm.
Understanding How Consistency Works in Federated Learning via Stage-wise Relaxed Initialization
[ "Yan Sun", "Li Shen", "Dacheng Tao" ]
Conference
poster
2306.05706
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ykvvv0gc4R
@inproceedings{ chen2023deep, title={Deep Momentum Multi-Marginal Schr\"odinger Bridge}, author={Tianrong Chen and Guan-Horng Liu and Molei Tao and Evangelos Theodorou}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=ykvvv0gc4R} }
It is a crucial challenge to reconstruct population dynamics using unlabeled samples from distributions at coarse time intervals. Recent approaches such as flow-based models or Schrödinger Bridge (SB) models have demonstrated appealing performance, yet the inferred sample trajectories either fail to account for the underlying stochasticity or are unnecessarily rigid. In this article, we extend SB into phase space and propose $\underline{D}$eep $\underline{M}$omentum Multi-Marginal $\underline{S}$chrödinger $\underline{B}$ridge (DMSB), a novel computational framework that learns the smooth measure-valued spline for stochastic systems that satisfy position marginal constraints across time. By tailoring the celebrated Bregman Iteration and extending the Iteration Proportional Fitting to phase space, we manage to handle high-dimensional multi-marginal trajectory inference tasks efficiently. Our algorithm outperforms baselines significantly, as evidenced by experiments for synthetic datasets and a real-world single-cell RNA sequence dataset. Additionally, the proposed approach can reasonably reconstruct the evolution of velocity distribution, from position snapshots only, when there is a ground truth velocity that is nevertheless inaccessible.
Deep Momentum Multi-Marginal Schrödinger Bridge
[ "Tianrong Chen", "Guan-Horng Liu", "Molei Tao", "Evangelos Theodorou" ]
Conference
poster
2303.01751
[ "https://github.com/TianrongChen/DMSB" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ykMdzevPkJ
@inproceedings{ zhu2023difftraj, title={DiffTraj: Generating {GPS} Trajectory with Diffusion Probabilistic Model}, author={Yuanshao Zhu and Yongchao Ye and Shiyao Zhang and Xiangyu Zhao and James Yu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=ykMdzevPkJ} }
Pervasive integration of GPS-enabled devices and data acquisition technologies has led to an exponential increase in GPS trajectory data, fostering advancements in spatial-temporal data mining research. Nonetheless, GPS trajectories contain personal geolocation information, rendering serious privacy concerns when working with raw data. A promising approach to address this issue is trajectory generation, which involves replacing original data with generated, privacy-free alternatives. Despite the potential of trajectory generation, the complex nature of human behavior and its inherent stochastic characteristics pose challenges in generating high-quality trajectories. In this work, we propose a spatial-temporal diffusion probabilistic model for trajectory generation (DiffTraj). This model effectively combines the generative abilities of diffusion models with the spatial-temporal features derived from real trajectories. The core idea is to reconstruct and synthesize geographic trajectories from white noise through a reverse trajectory denoising process. Furthermore, we propose a Trajectory UNet (Traj-UNet) deep neural network to embed conditional information and accurately estimate noise levels during the reverse process. Experiments on two real-world datasets show that DiffTraj can be intuitively applied to generate high-fidelity trajectories while retaining the original distributions. Moreover, the generated results can support downstream trajectory analysis tasks and significantly outperform other methods in terms of geo-distribution evaluations.
DiffTraj: Generating GPS Trajectory with Diffusion Probabilistic Model
[ "Yuanshao Zhu", "Yongchao Ye", "Shiyao Zhang", "Xiangyu Zhao", "James Yu" ]
Conference
poster
2304.11582
[ "https://github.com/Yasoz/DiffTraj" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yjYwbZBJyl
@inproceedings{ haas2023mind, title={Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension}, author={Moritz Haas and David Holzm{\"u}ller and Ulrike von Luxburg and Ingo Steinwart}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yjYwbZBJyl} }
The success of over-parameterized neural networks trained to near-zero training error has caused great interest in the phenomenon of benign overfitting, where estimators are statistically consistent even though they interpolate noisy training data. While benign overfitting in fixed dimension has been established for some learning methods, current literature suggests that for regression with typical kernel methods and wide neural networks, benign overfitting requires a high-dimensional setting, where the dimension grows with the sample size. In this paper, we show that the smoothness of the estimators, and not the dimension, is the key: benign overfitting is possible if and only if the estimator's derivatives are large enough. We generalize existing inconsistency results to non-interpolating models and more kernels to show that benign overfitting with moderate derivatives is impossible in fixed dimension. Conversely, we show that benign overfitting is possible for regression with a sequence of spiky-smooth kernels with large derivatives. Using neural tangent kernels, we translate our results to wide neural networks. We prove that while infinite-width networks do not overfit benignly with the ReLU activation, this can be fixed by adding small high-frequency fluctuations to the activation function. Our experiments verify that such neural networks, while overfitting, can indeed generalize well even on low-dimensional data sets.
Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension
[ "Moritz Haas", "David Holzmüller", "Ulrike von Luxburg", "Ingo Steinwart" ]
Conference
poster
2305.14077
[ "https://github.com/moritzhaas/mind-the-spikes" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yjWVd8Fhqt
@inproceedings{ michel2023object, title={{OBJECT} 3{DIT}: Language-guided 3D-aware Image Editing}, author={Oscar Michel and Anand Bhattad and Eli VanderBilt and Ranjay Krishna and Aniruddha Kembhavi and Tanmay Gupta}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yjWVd8Fhqt} }
Existing image editing tools, while powerful, typically disregard the underlying 3D geometry from which the image is projected. As a result, edits made using these tools may become detached from the geometry and lighting conditions that are at the foundation of the image formation process; such edits break the portrayal of a coherent 3D world. 3D-aware generative models are a promising solution, but currently only succeed on small datasets or at the level of a single object. In this work, we formulate the new task of language-guided 3D-aware editing, where objects in an image should be edited according to a language instruction while remaining consistent with the underlying 3D scene. To promote progress towards this goal, we release OBJect: a benchmark dataset of 400K editing examples created from procedurally generated 3D scenes. Each example consists of an input image, an editing instruction in language, and the edited image. We also introduce 3DIT: single and multi-task models for four editing tasks. Our models show impressive abilities to understand the 3D composition of entire scenes, factoring in surrounding objects, surfaces, lighting conditions, shadows, and physically-plausible object configurations. Surprisingly, despite being trained only on synthetic scenes from OBJect, the editing capabilities of 3DIT generalize to real-world images.
OBJECT 3DIT: Language-guided 3D-aware Image Editing
[ "Oscar Michel", "Anand Bhattad", "Eli VanderBilt", "Ranjay Krishna", "Aniruddha Kembhavi", "Tanmay Gupta" ]
Conference
poster
2307.11073
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yiehppUCO2
@inproceedings{ lin2023epnet, title={E2{PN}et: Event to Point Cloud Registration with Spatio-Temporal Representation Learning}, author={Xiuhong Lin and Changjie Qiu and zhipeng cai and Siqi Shen and Yu Zang and Weiquan Liu and Xuesheng Bian and Matthias M{\"u}ller and Cheng Wang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yiehppUCO2} }
Event cameras have emerged as a promising vision sensor in recent years due to their unparalleled temporal resolution and dynamic range. While registration of 2D RGB images to 3D point clouds is a long-standing problem in computer vision, no prior work studies 2D-3D registration for event cameras. To this end, we propose E2PNet, the first learning-based method for event-to-point cloud registration. The core of E2PNet is a novel feature representation network called Event-Points-to-Tensor (EP2T), which encodes event data into a 2D grid-shaped feature tensor. This grid-shaped feature enables matured RGB-based frameworks to be easily used for event-to-point cloud registration, without changing hyper-parameters and the training procedure. EP2T treats the event input as spatio-temporal point clouds. Unlike standard 3D learning architectures that treat all dimensions of point clouds equally, the novel sampling and information aggregation modules in EP2T are designed to handle the inhomogeneity of the spatial and temporal dimensions. Experiments on the MVSEC and VECtor datasets demonstrate the superiority of E2PNet over hand-crafted and other learning-based methods. Compared to RGB-based registration, E2PNet is more robust to extreme illumination or fast motion due to the use of event data. Beyond 2D-3D registration, we also show the potential of EP2T for other vision tasks such as flow estimation, event-to-image reconstruction and object recognition. The source code can be found at: https://github.com/Xmu-qcj/E2PNet.
E2PNet: Event to Point Cloud Registration with Spatio-Temporal Representation Learning
[ "Xiuhong Lin", "Changjie Qiu", "zhipeng cai", "Siqi Shen", "Yu Zang", "Weiquan Liu", "Xuesheng Bian", "Matthias Müller", "Cheng Wang" ]
Conference
poster
2311.18433
[ "https://github.com/xmu-qcj/e2pnet" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yhNHpLWJDl
@inproceedings{ xiong2023finitetime, title={Finite-Time Analysis of Whittle Index based Q-Learning for Restless Multi-Armed Bandits with Neural Network Function Approximation}, author={GUOJUN XIONG and Jian Li}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yhNHpLWJDl} }
Whittle index policy is a heuristic to the intractable restless multi-armed bandits (RMAB) problem. Although it is provably asymptotically optimal, finding Whittle indices remains difficult. In this paper, we present Neural-Q-Whittle, a Whittle index based Q-learning algorithm for RMAB with neural network function approximation, which is an example of nonlinear two-timescale stochastic approximation with Q-function values updated on a faster timescale and Whittle indices on a slower timescale. Despite the empirical success of deep Q-learning, the non-asymptotic convergence rate of Neural-Q-Whittle, which couples neural networks with two-timescale Q-learning, largely remains unclear. This paper provides a finite-time analysis of Neural-Q-Whittle, where data are generated from a Markov chain, and the Q-function is approximated by a ReLU neural network. Our analysis leverages a Lyapunov drift approach to capture the evolution of two coupled parameters, and the nonlinearity in value function approximation further requires us to characterize the approximation error. Combining these provides Neural-Q-Whittle with an $\mathcal{O}(1/k^{2/3})$ convergence rate, where $k$ is the number of iterations.
Finite-Time Analysis of Whittle Index based Q-Learning for Restless Multi-Armed Bandits with Neural Network Function Approximation
[ "GUOJUN XIONG", "Jian Li" ]
Conference
poster
2310.02147
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yhBFG9Y85R
@inproceedings{ cho2023visual, title={Visual Programming for Step-by-Step Text-to-Image Generation and Evaluation}, author={Jaemin Cho and Abhay Zala and Mohit Bansal}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yhBFG9Y85R} }
As large language models have demonstrated impressive performance in many domains, recent works have adopted language models (LMs) as controllers of visual modules for vision-and-language tasks. While existing work focuses on equipping LMs with visual understanding, we propose two novel interpretable/explainable visual programming frameworks for text-to-image (T2I) generation and evaluation. First, we introduce VPGen, an interpretable step-by-step T2I generation framework that decomposes T2I generation into three steps: object/count generation, layout generation, and image generation. We employ an LM to handle the first two steps (object/count generation and layout generation), by finetuning it on text-layout pairs. Our step-by-step T2I generation framework provides stronger spatial control than end-to-end models, the dominant approach for this task. Furthermore, we leverage the world knowledge of pretrained LMs, overcoming the limitation of previous layout-guided T2I works that can only handle predefined object classes. We demonstrate that our VPGen has improved control in counts/spatial relations/scales of objects than state-of-the-art T2I generation models. Second, we introduce VPEval, an interpretable and explainable evaluation framework for T2I generation based on visual programming. Unlike previous T2I evaluations with a single scoring model that is accurate in some skills but unreliable in others, VPEval produces evaluation programs that invoke a set of visual modules that are experts in different skills, and also provides visual+textual explanations of the evaluation results. Our analysis shows that VPEval provides a more human-correlated evaluation for skill-specific and open-ended prompts than widely used single model-based evaluation. We hope that our work encourages future progress on interpretable/explainable generation and evaluation for T2I models.
Visual Programming for Step-by-Step Text-to-Image Generation and Evaluation
[ "Jaemin Cho", "Abhay Zala", "Mohit Bansal" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yh0OkiUk5h
@inproceedings{ ekbote2023figure, title={Fi{GUR}e: Simple and Efficient Unsupervised Node Representations with Filter Augmentations}, author={Chanakya Ekbote and Ajinkya Deshpande and Arun Iyer and SUNDARARAJAN SELLAMANICKAM and Ramakrishna B Bairi}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yh0OkiUk5h} }
Unsupervised node representations learnt using contrastive learning-based methods have shown good performance on downstream tasks. However, these methods rely on augmentations that mimic low-pass filters, limiting their performance on tasks requiring different eigen-spectrum parts. This paper presents a simple filter-based augmentation method to capture different parts of the eigen-spectrum. We show significant improvements using these augmentations. Further, we show that sharing the same weights across these different filter augmentations is possible, reducing the computational load. In addition, previous works have shown that good performance on downstream tasks requires high dimensional representations. Working with high dimensions increases the computations, especially when multiple augmentations are involved. We mitigate this problem and recover good performance through lower dimensional embeddings using simple random Fourier feature projections. Our method, FiGURe, achieves an average gain of up to 4.4\%, compared to the state-of-the-art unsupervised models, across all datasets in consideration, both homophilic and heterophilic. Our code can be found at: https://github.com/Microsoft/figure.
FiGURe: Simple and Efficient Unsupervised Node Representations with Filter Augmentations
[ "Chanakya Ekbote", "Ajinkya Deshpande", "Arun Iyer", "SUNDARARAJAN SELLAMANICKAM", "Ramakrishna B Bairi" ]
Conference
poster
2310.01892
[ "https://github.com/microsoft/figure" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ygjQCOyNfh
@inproceedings{ huang2023uncertainty, title={Uncertainty Quantification over Graph with Conformalized Graph Neural Networks}, author={Kexin Huang and Ying Jin and Emmanuel Candes and Jure Leskovec}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=ygjQCOyNfh} }
Graph Neural Networks (GNNs) are powerful machine learning prediction models on graph-structured data. However, GNNs lack rigorous uncertainty estimates, limiting their reliable deployment in settings where the cost of errors is significant. We propose conformalized GNN (CF-GNN), extending conformal prediction (CP) to graph-based models for guaranteed uncertainty estimates. Given an entity in the graph, CF-GNN produces a prediction set/interval that provably contains the true label with pre-defined coverage probability (e.g. 90%). We establish a permutation invariance condition that enables the validity of CP on graph data and provide an exact characterization of the test-time coverage. Moreover, besides valid coverage, it is crucial to reduce the prediction set size/interval length for practical use. We observe a key connection between non-conformity scores and network structures, which motivates us to develop a topology-aware output correction model that learns to update the prediction and produces more efficient prediction sets/intervals. Extensive experiments show that CF-GNN achieves any pre-defined target marginal coverage while significantly reducing the prediction set/interval size by up to 74% over the baselines. It also empirically achieves satisfactory conditional coverage over various raw and network features.
Uncertainty Quantification over Graph with Conformalized Graph Neural Networks
[ "Kexin Huang", "Ying Jin", "Emmanuel Candes", "Jure Leskovec" ]
Conference
spotlight
2305.14535
[ "https://github.com/snap-stanford/conformalized-gnn" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yft4JlxsRf
@inproceedings{ george2023a, title={A generative model of the hippocampal formation trained with theta driven local learning rules}, author={Tom George and Kim Stachenfeld and Caswell Barry and Claudia Clopath and Tomoki Fukai}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yft4JlxsRf} }
Advances in generative models have recently revolutionised machine learning. Meanwhile, in neuroscience, generative models have long been thought fundamental to animal intelligence. Understanding the biological mechanisms that support these processes promises to shed light on the relationship between biological and artificial intelligence. In animals, the hippocampal formation is thought to learn and use a generative model to support its role in spatial and non-spatial memory. Here we introduce a biologically plausible model of the hippocampal formation tantamount to a Helmholtz machine that we apply to a temporal stream of inputs. A novel component of our model is that fast theta-band oscillations (5-10 Hz) gate the direction of information flow throughout the network, training it akin to a high-frequency wake-sleep algorithm. Our model accurately infers the latent state of high-dimensional sensory environments and generates realistic sensory predictions. Furthermore, it can learn to path integrate by developing a ring attractor connectivity structure matching previous theoretical proposals and flexibly transfer this structure between environments. Whereas many models trade-off biological plausibility with generality, our model captures a variety of hippocampal cognitive functions under one biologically plausible local learning rule.
A generative model of the hippocampal formation trained with theta driven local learning rules
[ "Tom George", "Kim Stachenfeld", "Caswell Barry", "Claudia Clopath", "Tomoki Fukai" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ydKWoqWZ3t
@inproceedings{ xiao2023pacbayesian, title={{PAC}-Bayesian Spectrally-Normalized Bounds for Adversarially Robust Generalization}, author={Jiancong Xiao and Ruoyu Sun and Zhi-Quan Luo}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=ydKWoqWZ3t} }
Deep neural networks (DNNs) are vulnerable to adversarial attacks. It is found empirically that adversarially robust generalization is crucial in establishing defense algorithms against adversarial attacks. Therefore, it is interesting to study the theoretical guarantee of robust generalization. This paper focuses on norm-based complexity, based on a PAC-Bayes approach (Neyshabur et al., 2017). The main challenge lies in extending the key ingredient, which is a weight perturbation bound in standard settings, to the robust settings. Existing attempts heavily rely on additional strong assumptions, leading to loose bounds. In this paper, we address this issue and provide a spectrally-normalized robust generalization bound for DNNs. Compared to existing bounds, our bound offers two significant advantages: Firstly, it does not depend on additional assumptions. Secondly, it is considerably tighter, aligning with the bounds of standard generalization. Therefore, our result provides a different perspective on understanding robust generalization: The mismatch terms between standard and robust generalization bounds shown in previous studies do not contribute to the poor robust generalization. Instead, these disparities are solely due to mathematical issues. Finally, we extend the main result to adversarial robustness against general non-$\ell_p$ attacks and other neural network architectures.
PAC-Bayesian Spectrally-Normalized Bounds for Adversarially Robust Generalization
[ "Jiancong Xiao", "Ruoyu Sun", "Zhi-Quan Luo" ]
Conference
poster
2310.06182
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yageaKlk7S
@inproceedings{ kirichenko2023understanding, title={Understanding the detrimental class-level effects of data augmentation}, author={Polina Kirichenko and Mark Ibrahim and Randall Balestriero and Diane Bouchacourt and Shanmukha Ramakrishna Vedantam and Hamed Firooz and Andrew Gordon Wilson}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yageaKlk7S} }
Data augmentation (DA) encodes invariance and provides implicit regularization critical to a model's performance in image classification tasks. However, while DA improves average accuracy, recent studies have shown that its impact can be highly class dependent: achieving optimal average accuracy comes at the cost of significantly hurting individual class accuracy by as much as 20% on ImageNet. There has been little progress in resolving class-level accuracy drops due to a limited understanding of these effects. In this work, we present a framework for understanding how DA interacts with class-level learning dynamics. Using higher-quality multi-label annotations on ImageNet, we systematically categorize the affected classes and find that the majority are inherently ambiguous, co-occur, or involve fine-grained distinctions, while DA controls the model's bias towards one of the closely related classes. While many of the previously reported performance drops are explained by multi-label annotations, we identify other sources of accuracy degradations by analyzing class confusions. We show that simple class-conditional augmentation strategies informed by our framework improve performance on the negatively affected classes.
Understanding the detrimental class-level effects of data augmentation
[ "Polina Kirichenko", "Mark Ibrahim", "Randall Balestriero", "Diane Bouchacourt", "Shanmukha Ramakrishna Vedantam", "Hamed Firooz", "Andrew Gordon Wilson" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yaJ4vZPnHX
@inproceedings{ guo2023complexity, title={Complexity of Derivative-Free Policy Optimization for Structured $\mathcal{H}_\infty$ Control}, author={Xingang Guo and Darioush Keivan and Geir Dullerud and Peter Seiler and Bin Hu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yaJ4vZPnHX} }
The applications of direct policy search in reinforcement learning and continuous control have received increasing attention. In this work, we present novel theoretical results on the complexity of derivative-free policy optimization on an important class of robust control tasks, namely the structured $H_\infty$ synthesis with static output feedback. Optimal $H_\infty$ synthesis under structural constraints leads to a constrained nonconvex nonsmooth problem and is typically addressed using subgradient-based policy search techniques that are built upon the concept of Goldstein subdifferential or other notions of enlarged subdifferential. In this paper, we study the complexity of finding $(\delta,\epsilon)$-stationary points for such nonsmooth robust control design tasks using policy optimization methods which can only access the zeroth-order oracle (i.e. the $H_\infty$ norm of the closed-loop system). First, we study the exact oracle setting and identify the coerciveness of the cost function to prove high-probability feasibility/complexity bounds for derivative-free policy optimization on this problem. Next, we derive a sample complexity result for the multi-input multi-output (MIMO) $H_\infty$-norm estimation. We combine this with our analysis to obtain the first sample complexity of model-free, trajectory-based, zeroth-order policy optimization on finding $(\delta,\epsilon)$-stationary points for structured $H_\infty$ control. Numerical results are also provided to demonstrate our theory.
Complexity of Derivative-Free Policy Optimization for Structured ℋ_∞ Control
[ "Xingang Guo", "Darioush Keivan", "Geir Dullerud", "Peter Seiler", "Bin Hu" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yYUdgbmhh9
@inproceedings{ okhotin2023starshaped, title={Star-Shaped Denoising Diffusion Probabilistic Models}, author={Andrey Okhotin and Dmitry Molchanov and Arkhipkin Sergeevich Vladimir and Grigory Bartosh and Viktor Ohanesian and Aibek Alanov and Dmitry P. Vetrov}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yYUdgbmhh9} }
Denoising Diffusion Probabilistic Models (DDPMs) provide the foundation for the recent breakthroughs in generative modeling. Their Markovian structure makes it difficult to define DDPMs with distributions other than Gaussian or discrete. In this paper, we introduce Star-Shaped DDPM (SS-DDPM). Its *star-shaped diffusion process* allows us to bypass the need to define the transition probabilities or compute posteriors. We establish duality between star-shaped and specific Markovian diffusions for the exponential family of distributions and derive efficient algorithms for training and sampling from SS-DDPMs. In the case of Gaussian distributions, SS-DDPM is equivalent to DDPM. However, SS-DDPMs provide a simple recipe for designing diffusion models with distributions such as Beta, von Mises–Fisher, Dirichlet, Wishart and others, which can be especially useful when data lies on a constrained manifold. We evaluate the model in different settings and find it competitive even on image data, where Beta SS-DDPM achieves results comparable to a Gaussian DDPM. Our implementation is available at https://github.com/andrey-okhotin/star-shaped
Star-Shaped Denoising Diffusion Probabilistic Models
[ "Andrey Okhotin", "Dmitry Molchanov", "Arkhipkin Sergeevich Vladimir", "Grigory Bartosh", "Viktor Ohanesian", "Aibek Alanov", "Dmitry P. Vetrov" ]
Conference
poster
2302.05259
[ "https://github.com/andrey-okhotin/star-shaped" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yVMlYSL1Bp
@inproceedings{ khademi2023diverse, title={Diverse Shape Completion via Style Modulated Generative Adversarial Networks}, author={Wesley Khademi and Li Fuxin}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yVMlYSL1Bp} }
Shape completion aims to recover the full 3D geometry of an object from a partial observation. This problem is inherently multi-modal since there can be many ways to plausibly complete the missing regions of a shape. Such diversity would be indicative of the underlying uncertainty of the shape and could be preferable for downstream tasks such as planning. In this paper, we propose a novel conditional generative adversarial network that can produce many diverse plausible completions of a partially observed point cloud. To enable our network to produce multiple completions for the same partial input, we introduce stochasticity into our network via style modulation. By extracting style codes from complete shapes during training, and learning a distribution over them, our style codes can explicitly carry shape category information leading to better completions. We further introduce diversity penalties and discriminators at multiple scales to prevent conditional mode collapse and to train without the need for multiple ground truth completions for each partial input. Evaluations across several synthetic and real datasets demonstrate that our method achieves significant improvements in respecting the partial observations while obtaining greater diversity in completions.
Diverse Shape Completion via Style Modulated Generative Adversarial Networks
[ "Wesley Khademi", "Li Fuxin" ]
Conference
poster
2311.11184
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yThjbzhIUP
@inproceedings{ yang2023pgdiff, title={{PGD}iff: Guiding Diffusion Models for Versatile Face Restoration via Partial Guidance}, author={Peiqing Yang and Shangchen Zhou and Qingyi Tao and Chen Change Loy}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yThjbzhIUP} }
Exploiting pre-trained diffusion models for restoration has recently become a favored alternative to the traditional task-specific training approach. Previous works have achieved noteworthy success by limiting the solution space using explicit degradation models. However, these methods often fall short when faced with complex degradations as they generally cannot be precisely modeled. In this paper, we introduce $\textit{partial guidance}$, a fresh perspective that is more adaptable to real-world degradations compared to existing works. Rather than specifically defining the degradation process, our approach models the desired properties, such as image structure and color statistics of high-quality images, and applies this guidance during the reverse diffusion process. These properties are readily available and make no assumptions about the degradation process. When combined with a diffusion prior, this partial guidance can deliver appealing results across a range of restoration tasks. Additionally, our method can be extended to handle composite tasks by consolidating multiple high-quality image properties, achieved by integrating the guidance from respective tasks. Experimental results demonstrate that our method not only outperforms existing diffusion-prior-based approaches but also competes favorably with task-specific models.
PGDiff: Guiding Diffusion Models for Versatile Face Restoration via Partial Guidance
[ "Peiqing Yang", "Shangchen Zhou", "Qingyi Tao", "Chen Change Loy" ]
Conference
poster
2309.10810
[ "https://github.com/pq-yang/pgdiff" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yT0f93CeTw
@inproceedings{ cheng2023fast, title={Fast Conditional Mixing of {MCMC} Algorithms for Non-log-concave Distributions}, author={Xiang Cheng and Bohan Wang and Jingzhao Zhang and Yusong Zhu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yT0f93CeTw} }
MCMC algorithms offer empirically efficient tools for sampling from a target distribution $\pi(x) \propto \exp(-V(x))$. However, on the theory side, MCMC algorithms suffer from a slow mixing rate when $\pi(x)$ is non-log-concave. Our work examines this gap and shows that when a Poincaré-style inequality holds on a subset $\mathcal{X}$ of the state space, the conditional distribution of MCMC iterates over $\mathcal{X}$ mixes fast to the true conditional distribution. This fast mixing guarantee can hold in cases when global mixing is provably slow. We formalize the statement and quantify the conditional mixing rate. We further show that conditional mixing can have interesting implications for sampling from mixtures of Gaussians, parameter estimation for Gaussian mixture models, and Gibbs sampling with well-connected local minima.
Fast Conditional Mixing of MCMC Algorithms for Non-log-concave Distributions
[ "Xiang Cheng", "Bohan Wang", "Jingzhao Zhang", "Yusong Zhu" ]
Conference
poster
2306.10506
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yPkbdJxQ0o
@inproceedings{ chen2023threeway, title={Three-Way Trade-Off in Multi-Objective Learning: Optimization, Generalization and Conflict-Avoidance}, author={Lisha Chen and Heshan Devaka Fernando and Yiming Ying and Tianyi Chen}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yPkbdJxQ0o} }
Multi-objective learning (MOL) often arises in emerging machine learning problems when multiple learning criteria or tasks need to be addressed. Recent works have developed various _dynamic weighting_ algorithms for MOL, including MGDA and its variants, whose central idea is to find an update direction that _avoids conflicts_ among objectives. Despite its appealing intuition, empirical studies show that dynamic weighting methods may not always outperform static alternatives. To bridge this gap between theory and practice, we focus on a new variant of stochastic MGDA -- the Multi-objective gradient with Double sampling (MoDo) algorithm -- and study its generalization performance and the interplay with optimization through the lens of algorithm stability. We find that the rationale behind MGDA -- updating along the conflict-avoidant direction -- may \emph{impede} dynamic weighting algorithms from achieving the optimal ${\cal O}(1/\sqrt{n})$ population risk, where $n$ is the number of training samples. We further highlight the variability of dynamic weights and their impact on the three-way trade-off among optimization, generalization, and conflict avoidance that is unique in MOL. Code is available at https://github.com/heshandevaka/Trade-Off-MOL.
Three-Way Trade-Off in Multi-Objective Learning: Optimization, Generalization and Conflict-Avoidance
[ "Lisha Chen", "Heshan Devaka Fernando", "Yiming Ying", "Tianyi Chen" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yN6NHZOXkg
@inproceedings{ huang2023generalized, title={Generalized Information-theoretic Multi-view Clustering}, author={Weitian Huang and Sirui Yang and Hongmin Cai}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yN6NHZOXkg} }
In an era of more diverse data modalities, multi-view clustering has become a fundamental tool for comprehensive data analysis and exploration. However, existing multi-view unsupervised learning methods often rely on strict assumptions on semantic consistency among samples. In this paper, we reformulate the multi-view clustering problem from an information-theoretic perspective and propose a general theoretical model. In particular, we define three desiderata under multi-view unsupervised learning in terms of mutual information, namely, comprehensiveness, concentration, and cross-diversity. The multi-view variational lower bound is then obtained by approximating the samples' high-dimensional mutual information. The Kullback–Leibler divergence is utilized to deduce sample assignments. Ultimately the information-based multi-view clustering model leverages deep neural networks and Stochastic Gradient Variational Bayes to achieve representation learning and clustering simultaneously. Extensive experiments on both synthetic and real datasets with wide types demonstrate that the proposed method exhibits a more stable and superior clustering performance than state-of-the-art algorithms.
Generalized Information-theoretic Multi-view Clustering
[ "Weitian Huang", "Sirui Yang", "Hongmin Cai" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yKCLfOOIL7
@inproceedings{ chen2023mechanism, title={Mechanism Design for Collaborative Normal Mean Estimation}, author={Yiding Chen and Jerry Zhu and Kirthevasan Kandasamy}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yKCLfOOIL7} }
We study collaborative normal mean estimation, where $m$ strategic agents collect i.i.d samples from a normal distribution $\mathcal{N}(\mu, \sigma^2)$ at a cost. They all wish to estimate the mean $\mu$. By sharing data with each other, agents can obtain better estimates while keeping the cost of data collection small. To facilitate this collaboration, we wish to design mechanisms that encourage agents to collect a sufficient amount of data and share it truthfully, so that they are all better off than working alone. In naive mechanisms, such as simply pooling and sharing all the data, an individual agent might find it beneficial to under-collect and/or fabricate data, which can lead to poor social outcomes. We design a novel mechanism that overcomes these challenges via two key techniques: first, when sharing the others' data with an agent, the mechanism corrupts this dataset proportional to how much the data reported by the agent differs from the others; second, we design minimax optimal estimators for the corrupted dataset. Our mechanism, which is Nash incentive compatible and individually rational, achieves a social penalty (sum of all agents' estimation errors and data collection costs) that is at most a factor 2 of the global minimum. When applied to high dimensional (non-Gaussian) distributions with bounded variance, this mechanism retains these three properties, but with slightly weaker results. Finally, in two special cases where we restrict the strategy space of the agents, we design mechanisms that essentially achieve the global minimum.
Mechanism Design for Collaborative Normal Mean Estimation
[ "Yiding Chen", "Jerry Zhu", "Kirthevasan Kandasamy" ]
Conference
spotlight
2306.06351
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yIcCkMUCtL
@inproceedings{ feng2023towards, title={Towards a Unified Analysis of Kernel-based Methods Under Covariate Shift}, author={Xingdong Feng and Xin HE and Caixing Wang and Chao Wang and Jingnan Zhang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=yIcCkMUCtL} }
Covariate shift occurs prevalently in practice, where the input distributions of the source and target data are substantially different. Despite its practical importance in various learning problems, most of the existing methods only focus on some specific learning tasks and are not well validated theoretically and numerically. To tackle this problem, we propose a unified analysis of general nonparametric methods in a reproducing kernel Hilbert space (RKHS) under covariate shift. Our theoretical results are established for a general loss belonging to a rich loss function family, which includes many commonly used methods as special cases, such as mean regression, quantile regression, likelihood-based classification, and margin-based classification. Two types of covariate shift problems are the focus of this paper and the sharp convergence rates are established for a general loss function to provide a unified theoretical analysis, which concurs with the optimal results in literature where the squared loss is used. Extensive numerical studies on synthetic and real examples confirm our theoretical findings and further illustrate the effectiveness of our proposed method.
Towards a Unified Analysis of Kernel-based Methods Under Covariate Shift
[ "Xingdong Feng", "Xin HE", "Caixing Wang", "Chao Wang", "Jingnan Zhang" ]
Conference
poster
2310.08237
[ "https://github.com/WangCaixing-96/Kernel_CS" ]
-1
-1
-1
-1
0
[]
[]
[]