| column | dtype | range / classes |
| --- | --- | --- |
| bibtex_url | null | — |
| proceedings | string | 42 chars |
| bibtext | string | 197–792 chars |
| abstract | string | 303–3.45k chars |
| title | string | 10–159 chars |
| authors | sequence | 1–28 items |
| id | string | 44 classes |
| type | string | 16 classes |
| arxiv_id | string | 0–10 chars |
| GitHub | sequence | 1 item |
| paper_page | string | 444 classes |
| n_linked_authors | int64 | −1 to 9 |
| upvotes | int64 | −1 to 42 |
| num_comments | int64 | −1 to 13 |
| n_authors | int64 | −1 to 92 |
| paper_page_exists_pre_conf | int64 | 0 to 1 |
| Models | sequence | 0–100 items |
| Datasets | sequence | 0–11 items |
| Spaces | sequence | 0–100 items |
null
https://openreview.net/forum?id=JhTn1Lt04U
@inproceedings{ hofmann2023hopfield, title={Hopfield Boosting for Out-of-Distribution Detection}, author={Claus Hofmann and Simon Lucas Schmid and Bernhard Lehner and Daniel Klotz and Sepp Hochreiter}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=JhTn1Lt04U} }
Out-of-distribution (OOD) detection is crucial for real-world machine learning. Outlier exposure methods, which use auxiliary outlier data, can significantly enhance OOD detection. We present Hopfield Boosting, a boosting technique employing modern Hopfield energy (MHE) to refine the boundary between in-distribution (ID) and OOD data. Our method focuses on challenging outlier examples near the decision boundary, achieving a 40% improvement in FPR95 on CIFAR-10 and setting a new state-of-the-art for OOD detection with outlier exposure.
Hopfield Boosting for Out-of-Distribution Detection
[ "Claus Hofmann", "Simon Lucas Schmid", "Bernhard Lehner", "Daniel Klotz", "Sepp Hochreiter" ]
Workshop/AMHN
oral
[ "https://github.com/ml-jku/hopfield-boosting" ]
-1
-1
-1
-1
0
[]
[]
[]
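A minimal NumPy sketch of the modern Hopfield energy (MHE) scoring idea behind this abstract, assuming row-wise matrices of ID and auxiliary-outlier embeddings and an inverse temperature `beta`; the names and the exact score form are illustrative assumptions, and the boosting procedure itself lives in the linked repository:

```python
import numpy as np

def lse(z, beta):
    """Numerically stable (1/beta) * log sum_i exp(beta * z_i)."""
    m = np.max(z)
    return m + np.log(np.exp(beta * (z - m)).sum()) / beta

def mhe(query, patterns, beta=4.0):
    """Modern Hopfield energy of a query embedding w.r.t. stored patterns (rows).
    Lower energy means the query sits closer to the stored set."""
    return -lse(patterns @ query, beta)

def ood_score(query, id_patterns, outlier_patterns, beta=4.0):
    """Positive score: energetically closer to the outlier set than to ID data."""
    return mhe(query, id_patterns, beta) - mhe(query, outlier_patterns, beta)
```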
null
https://openreview.net/forum?id=Fhx7nVoCQW
@inproceedings{ bai2023saliencyguided, title={Saliency-Guided Hidden Associative Replay for Continual Learning}, author={Guangji Bai and Qilong Zhao and Xiaoyang Jiang and Liang Zhao}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=Fhx7nVoCQW} }
Continual Learning (CL) is a burgeoning domain in next-generation AI, focusing on training neural networks over a sequence of tasks akin to human learning. Amongst various strategies, replay-based methods have emerged as preeminent, echoing biological memory mechanisms. However, these methods are memory-intensive, often preserving entire data samples—an approach inconsistent with humans' selective memory retention of salient experiences. While some recent works have explored the storage of only significant portions of data in episodic memory, the inherent nature of partial data necessitates innovative retrieval mechanisms. Addressing these nuances, this paper presents the **S**aliency-Guided **H**idden **A**ssociative **R**eplay for **C**ontinual Learning (**SHARC**). This novel framework synergizes associative memory with replay-based strategies. SHARC primarily archives salient data segments via sparse memory encoding. Importantly, by harnessing associative memory paradigms, it introduces a content-focused memory retrieval mechanism, promising swift and near-perfect recall, bringing CL a step closer to authentic human memory processes. Extensive experimental results demonstrate the effectiveness of our proposed method for various continual learning tasks. Anonymous code can be found at: https://anonymous.4open.science/r/SHARC-6319.
Saliency-Guided Hidden Associative Replay for Continual Learning
[ "Guangji Bai", "Qilong Zhao", "Xiaoyang Jiang", "Liang Zhao" ]
Workshop/AMHN
poster
2310.04334
[ "https://github.com/baithebest/sharc" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=EJmgk8vXMQ
@inproceedings{ xie2023skip, title={Skip Connections Increase the Capacity of Associative Memories in Variable Binding Mechanisms}, author={Yi Xie and Yichen Li and Akshay Rangamani}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=EJmgk8vXMQ} }
The flexibility of intelligent behavior is fundamentally attributed to the ability to separate structural information from content in sensory inputs. Variable binding is the atomic computation that underlies this ability. In this work, we investigate the implementation of variable binding via pointers of assemblies of neurons, which are sets of excitatory neurons that fire together. The Assembly Calculus is a framework that describes a set of operations to create and modify assemblies of neurons. We focus on the $\texttt{project}$ (which creates assemblies) and $\texttt{reciprocal-project}$ (which performs variable binding) operations and study the capacity of networks in terms of the number of assemblies that can be reliably created and retrieved. We find that assembly calculus networks implemented through Hebbian plasticity resemble associative memories in their structure and behavior. However, for networks with $N$ neurons per brain area, the capacity of variable binding networks ($0.01N$) is an order of magnitude lower than the capacity of assembly creation networks ($0.22N$). To alleviate this drop in capacity, we propose a $\textit{skip connection}$ between the input and variable assembly, which boosts the capacity to a similar order of magnitude ($0.1N$) as the $\texttt{project}$ operation, while maintaining its biological plausibility.
Skip Connections Increase the Capacity of Associative Memories in Variable Binding Mechanisms
[ "Yi Xie", "Yichen Li", "Akshay Rangamani" ]
Workshop/AMHN
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=E6RCLm6mqr
@inproceedings{ abudy2023minimum, title={Minimum Description Length Hopfield Networks}, author={Matan Abudy and Nur Lan and Emmanuel Chemla and Roni Katzir}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=E6RCLm6mqr} }
Associative memory architectures are designed for memorization but also offer, through their retrieval method, a form of generalization to unseen inputs: stored memories can be seen as prototypes from this point of view. Focusing on Modern Hopfield Networks (MHN), we show that a large memorization capacity undermines the generalization opportunity. We offer a solution to better optimize this tradeoff. It relies on Minimum Description Length (MDL) to determine during training which memories to store, as well as how many of them.
Minimum Description Length Hopfield Networks
[ "Matan Abudy", "Nur Lan", "Emmanuel Chemla", "Roni Katzir" ]
Workshop/AMHN
poster
2311.06518
[ "https://github.com/matanabudy/mdl-hn" ]
-1
-1
-1
-1
0
[]
[]
[]
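A rough sketch of the two-part MDL objective the abstract invokes for deciding which memories to store and how many, under our own simplifying assumptions (a fixed bit cost per stored value, and nearest-memory matching as a stand-in for Hopfield retrieval):

```python
import numpy as np

def description_length(memories, data, bits_per_value=8):
    """Two-part MDL: cost of the stored memory matrix (model) plus the cost of
    encoding each datum given its retrieved prototype (data given model)."""
    model_cost = memories.size * bits_per_value
    sq_dists = ((data[:, None, :] - memories[None, :, :]) ** 2).sum(-1)
    data_cost = sq_dists.min(axis=1).sum()  # residual to the nearest memory
    return model_cost + data_cost

# Training then searches over candidate memory sets (and set sizes) for the one
# minimizing description_length, trading memorization against generalization.
```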
null
https://openreview.net/forum?id=B1BL9go65H
@inproceedings{ hoover2023memory, title={Memory in Plain Sight: A Survey of the Uncanny Resemblances between Diffusion Models and Associative Memories}, author={Benjamin Hoover and Hendrik Strobelt and Dmitry Krotov and Judy Hoffman and Zsolt Kira and Duen Horng Chau}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=B1BL9go65H} }
Diffusion Models (DMs) have recently set state-of-the-art on many generation benchmarks. However, there are myriad ways to describe them mathematically, which makes it difficult to develop a simple understanding of how they work. In this submission, we provide a concise overview of DMs from the perspective of dynamical systems and Ordinary Differential Equations (ODEs) which exposes a mathematical connection to the highly related yet often overlooked class of energy-based models, called Associative Memories (AMs). Energy-based AMs are a theoretical framework that behave much like denoising DMs, but they enable us to directly compute a Lyapunov energy function on which we can perform gradient descent to denoise data. We finally identify the similarities and differences between AMs and DMs, discussing new research directions revealed by the extent of their similarities.
Memory in Plain Sight: A Survey of the Uncanny Resemblances between Diffusion Models and Associative Memories
[ "Benjamin Hoover", "Hendrik Strobelt", "Dmitry Krotov", "Judy Hoffman", "Zsolt Kira", "Duen Horng Chau" ]
Workshop/AMHN
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=AXiMq2k4cb
@inproceedings{ koulischer2023exploring, title={Exploring the Temperature-Dependent Phase Transition in Modern Hopfield Networks}, author={Felix Koulischer and C{\'e}dric Goemaere and Tom Van Der Meersch and Johannes Deleu and Thomas Demeester}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=AXiMq2k4cb} }
The recent discovery of a connection between Transformers and Modern Hopfield Networks (MHNs) has reignited the study of neural networks from a physical energy-based perspective. This paper focuses on the pivotal effect of the inverse temperature hyperparameter $\beta$ on the distribution of energy minima of the MHN. To achieve this, the distribution of energy minima is tracked in a simplified MHN in which equidistant normalised patterns are stored. This network demonstrates a phase transition at a critical temperature $\beta_{\text{c}}$, from a single global attractor towards highly pattern-specific minima as $\beta$ is increased. Importantly, the dynamics are not solely governed by the hyperparameter $\beta$ but are instead determined by an effective inverse temperature $\beta_{\text{eff}}$ which also depends on the distribution and size of the stored patterns. Recognizing the role of hyperparameters in the MHN could, in the future, aid researchers in the domain of Transformers to optimise their initial choices, potentially reducing the need for time- and energy-expensive hyperparameter fine-tuning.
Exploring the Temperature-Dependent Phase Transition in Modern Hopfield Networks
[ "Felix Koulischer", "Cédric Goemaere", "Tom Van Der Meersch", "Johannes Deleu", "Thomas Demeester" ]
Workshop/AMHN
poster
2311.18434
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
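The phase transition in this abstract can be seen directly from the MHN retrieval dynamics $\xi \leftarrow X\,\mathrm{softmax}(\beta X^{\top}\xi)$. A small self-contained demo of our own construction (random normalised patterns rather than the paper's equidistant ones):

```python
import numpy as np

def mhn_update(xi, X, beta):
    """One retrieval step of a modern Hopfield network; patterns are columns of X."""
    a = beta * (X.T @ xi)
    a = np.exp(a - a.max())
    p = a / a.sum()                    # softmax over stored patterns
    return X @ p

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))
X /= np.linalg.norm(X, axis=0)         # normalised patterns
xi = X[:, 0] + 0.1 * rng.normal(size=64)
for beta in (0.5, 50.0):
    z = xi.copy()
    for _ in range(50):
        z = mhn_update(z, X, beta)
    # small beta -> z collapses toward the mean of all patterns (one global attractor);
    # large beta -> z converges to the nearest stored pattern (pattern-specific minima)
    print(beta, np.round(X.T @ z, 2))
```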
null
https://openreview.net/forum?id=463RlISt9t
@inproceedings{ rasul2023probabilistic, title={Probabilistic Forecasting via Modern Hopfield Networks}, author={Kashif Rasul and Pablo Vicente and Anderson Schneider and Alexander M{\"a}rz}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=463RlISt9t} }
Hopfield networks, originally introduced as associative memory models, have shown promise in pattern recognition, optimization problems, and tabular datasets. However, their application to time series data has been limited. We introduce a temporal version that leverages the associative memory properties of the Hopfield architecture while accounting for temporal dependencies present in time series data. Our results suggest that the proposed model demonstrates competitive performance compared to state-of-the-art probabilistic forecasting models.
Probabilistic Forecasting via Modern Hopfield Networks
[ "Kashif Rasul", "Pablo Vicente", "Anderson Schneider", "Alexander März" ]
Workshop/AMHN
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=2DS1BDhRz3
@inproceedings{ shan2023errorcorrecting, title={Error-correcting columnar networks: high-capacity memory under sparse connectivity}, author={Haozhe Shan and Ludovica Bachschmid-Romano and Haim Sompolinsky}, booktitle={Associative Memory {\&} Hopfield Networks in 2023}, year={2023}, url={https://openreview.net/forum?id=2DS1BDhRz3} }
Neurons with recurrent connectivity can store memory patterns as attractor states in their dynamics, forming a plausible basis for associative memory in the brain. Classic theoretical results on fully connected recurrent neural networks (RNNs) with binary neurons and Hebbian learning rules state that they can store at most $O\left(N\right)$ memories, where $N$ is the number of neurons. However, under the physiological constraint that neurons are sparsely connected, this capacity is dramatically reduced to $O(K)$, where $K$ is the average degree of connectivity (estimated to be $O(10^{3}\sim10^{4})$ in the mammalian neocortex). This reduced capacity is orders of magnitude smaller than experimental estimates of human memory capacity. In this work, we propose the error-correcting columnar network (ECCN) as a plausible model of how the brain realizes high-capacity memory storage despite sparse connectivity. In the ECCN, neurons are organized into ``columns'': in each memory, neurons from the same column encode the same feature(s), similar to columns in primary sensory areas. A column-synchronizing mechanism utilizes the redundancy of columnar codes to perform error correction. We analytically computed the memory capacity of the ECCN via a dynamical mean-field theory. The results show that for a fixed column size $M$, the capacity grows linearly with network size $N$ until it saturates at $\propto MK$. For optimal choice of $M$ for each $N$, the capacity is $\propto \sqrt{NK}$.
Error-correcting columnar networks: high-capacity memory under sparse connectivity
[ "Haozhe Shan", "Ludovica Bachschmid-Romano", "Haim Sompolinsky" ]
Workshop/AMHN
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
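For quick comparison, the capacity scalings quoted in this abstract ($N$ neurons, average connectivity degree $K$, column size $M$):

```latex
\begin{aligned}
\text{fully connected Hebbian RNN:}\quad & P_{\max} = O(N)\\
\text{sparsely connected RNN:}\quad & P_{\max} = O(K)\\
\text{ECCN, fixed column size } M\text{:}\quad & P_{\max} \propto MK\\
\text{ECCN, } M \text{ optimised per } N\text{:}\quad & P_{\max} \propto \sqrt{NK}
\end{aligned}
```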
null
https://openreview.net/forum?id=yvWlYTAkl3
@inproceedings{ martin2023modelfree, title={Model-Free Preference Elicitation}, author={Carlos Martin and Craig Boutilier and Ofer Meshi}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=yvWlYTAkl3} }
Elicitation of user preferences is an effective way to improve the quality of recommendations, especially when there is little or no user history. In this setting, a recommendation system interacts with the user by asking questions and recording the responses. Various criteria have been proposed for optimizing the sequence of queries to improve understanding of user preferences, and thereby the quality of downstream recommendations. A compelling approach is \emph{expected value of information (EVOI)}, a Bayesian approach which computes the expected gain in user utility for possible queries. Previous work on EVOI has focused on probabilistic models of user preferences and responses to compute posterior utilities. By contrast, in this work, we explore model-free variants of EVOI which rely on function approximation to obviate the need for strong modeling assumptions. Specifically, we propose to learn a user response model and user utility model from existing data, which is often available in real-world systems, and to use these models in EVOI in place of the probabilistic models. We show promising empirical results on a preference elicitation task.
Model-Free Preference Elicitation
[ "Carlos Martin", "Craig Boutilier", "Ofer Meshi" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
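A hedged sketch of the model-free EVOI computation described above, with hypothetical `response_model` and `utility_model` interfaces standing in for the learned models (interfaces and belief representation are our assumptions, not the paper's):

```python
def evoi(query, belief, response_model, utility_model, candidate_items):
    """Model-free expected value of information for one query (illustrative).
    response_model(query, belief) -> iterable of (response, probability) pairs
    utility_model(belief, item)   -> estimated user utility of recommending item
    belief: the interaction history so far, as a list of (query, response)."""
    def best_utility(b):
        return max(utility_model(b, item) for item in candidate_items)

    base = best_utility(belief)
    value = 0.0
    for response, prob in response_model(query, belief):
        updated = belief + [(query, response)]
        value += prob * best_utility(updated)
    return value - base  # expected gain in downstream utility

# Ask the question with the highest EVOI:
# next_query = max(queries, key=lambda q: evoi(q, history, resp_m, util_m, items))
```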
null
https://openreview.net/forum?id=yuJEkWSkTN
@inproceedings{ zhang2023active, title={Active Learning for Iterative Offline Reinforcement Learning}, author={Lan Zhang and Luigi Franco Tedesco and Pankaj Rajak and Youcef Zemmouri and Hakan Brunzell}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=yuJEkWSkTN} }
Offline Reinforcement Learning (RL) has emerged as a promising approach to address real-world challenges where online interactions with the environment are limited, risky, or costly. Although recent advancements produce high-quality policies from offline data, there is currently no systematic methodology to continue to improve them without resorting to online fine-tuning. This paper proposes to repurpose Offline RL to produce a sequence of improving policies, namely, Iterative Offline Reinforcement Learning (IORL). To produce such a sequence, IORL has to cope with imbalanced offline datasets and to perform controlled environment exploration. Specifically, we introduce "Return-based Sampling" as a means to selectively prioritize experience from high-return trajectories, and active-learning-driven "Dataset Uncertainty Sampling" to probe state-actions inversely proportionally to their density in the dataset. We demonstrate that our proposed approach produces policies that achieve monotonically increasing average returns, from 65.4 to 140.2, in the Atari environment.
Active Learning for Iterative Offline Reinforcement Learning
[ "Lan Zhang", "Luigi Franco Tedesco", "Pankaj Rajak", "Youcef Zemmouri", "Hakan Brunzell" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yPlkx5u4cg
@inproceedings{ go2023transferable, title={Transferable Candidate Proposal with Bounded Uncertainty}, author={Kyeongryeol Go and Kye-Hyeon Kim}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=yPlkx5u4cg} }
From an empirical perspective, the subset chosen through active learning cannot guarantee an advantage over random sampling when transferred to another model. While this underscores the significance of verifying transferability, the experimental design of previous works often neglected that the informativeness of a data subset can change over model configurations. To tackle this issue, we introduce a new experimental design, coined Candidate Proposal, to find transferable data candidates from which active learning algorithms choose the informative subset. Correspondingly, a data selection algorithm is proposed, namely Transferable candidate proposal with Bounded Uncertainty (TBU), which constrains the pool of transferable data candidates by filtering out the presumably redundant data points based on uncertainty estimation. We verified the validity of TBU in image classification benchmarks, including CIFAR-10/100 and SVHN. When transferred to different model configurations, TBU consistently improves performance in existing active learning algorithms. Our code is available at https://github.com/gokyeongryeol/TBU.
Transferable Candidate Proposal with Bounded Uncertainty
[ "Kyeongryeol Go", "Kye-Hyeon Kim" ]
Workshop/ReALML
realml-2023
2312.04604
[ "https://github.com/gokyeongryeol/tbu" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=y7FZ6KXEvl
@inproceedings{ park2023sequentially, title={Sequentially Adaptive Experimentation for Learning Optimal Options subject to Unobserved Contexts}, author={Hongju Park and Mohamad Kazem Shirani Faradonbeh}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=y7FZ6KXEvl} }
Contextual bandits constitute a classical framework for interactive learning of best decisions subject to context information. In this setting, the goal is to sequentially learn the arms of highest reward subject to the contextual information, while the unknown reward parameters of each arm need to be learned by experimenting with it. Accordingly, a fundamental problem is that of balancing such experimentation (i.e., pulling different arms to learn the parameters) versus sticking with the best arm learned so far, in order to maximize rewards. To study this problem, the existing literature mostly considers perfectly observed contexts. However, the setting of partially observed contexts remains unexplored to date, despite being theoretically more general and practically more versatile. We study bandit policies for learning to select optimal arms based on observations, which are noisy linear functions of the unobserved context vectors. Our theoretical analysis shows that adaptive experiments based on samples from the posterior distribution efficiently learn optimal arms. Specifically, we establish regret bounds that grow logarithmically with time. Extensive simulations on real-world data are presented as well to illustrate this efficacy.
Sequentially Adaptive Experimentation for Learning Optimal Options subject to Unobserved Contexts
[ "Hongju Park", "Mohamad Kazem Shirani Faradonbeh" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xrz7hNsLNd
@inproceedings{ banerjee2023decentralized, title={Decentralized and Asynchronous Multi-Agent Active Search and Tracking when Targets Outnumber Agents}, author={Arundhati Banerjee and Jeff Schneider}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=xrz7hNsLNd} }
Multi-agent multi-target tracking has a wide range of applications, including wildlife patrolling, security surveillance, and environment monitoring. Such algorithms often assume that agents are pre-assigned to monitor disjoint partitions of the environment, reducing the burden of exploration. This limits applicability when there are fewer agents than targets, since agents are unable to continuously follow the targets in their fields of view. Multi-agent tracking algorithms additionally assume a central controller and synchronous inter-agent communication. Instead, we focus on the setting of decentralized multi-agent, multi-target, simultaneous active search-*and*-tracking with asynchronous inter-agent communication. Our proposed algorithm DecSTER uses a sequential Monte Carlo implementation of the probability hypothesis density filter for posterior inference, combined with Thompson sampling for decentralized multi-agent decision making. We compare different action selection policies, focusing on scenarios where targets outnumber agents. In simulation, DecSTER outperforms baselines in terms of the Optimal Sub-Pattern Assignment (OSPA) metric for different numbers of targets and varying team sizes.
Decentralized and Asynchronous Multi-Agent Active Search and Tracking when Targets Outnumber Agents
[ "Arundhati Banerjee", "Jeff Schneider" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xfj5jjpOaL
@inproceedings{ cook2023semiparametric, title={Semiparametric Efficient Inference in Adaptive Experiments}, author={Thomas Cook and Alan Mishler and Aaditya Ramdas}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=xfj5jjpOaL} }
We consider the problem of efficient inference of the Average Treatment Effect in a sequential experiment where the policy governing the assignment of subjects to treatment or control can change over time. We first provide a central limit theorem for the Adaptive Augmented Inverse-Probability Weighted estimator, which is semiparametric efficient, under weaker assumptions than those previously made in the literature. This central limit theorem enables efficient inference at fixed sample sizes. We then consider a sequential inference setting, deriving both asymptotic and nonasymptotic confidence sequences that are considerably tighter than previous methods. These anytime-valid methods enable inference under data-dependent stopping times (sample sizes). Additionally, we use propensity score truncation techniques from the recent off-policy estimation literature to reduce the finite sample variance of our estimator without affecting the asymptotic variance. Empirical results demonstrate that our methods yield narrower confidence sequences than those previously developed in the literature while maintaining time-uniform error control.
Semiparametric Efficient Inference in Adaptive Experiments
[ "Thomas Cook", "Alan Mishler", "Aaditya Ramdas" ]
Workshop/ReALML
realml-2023
2311.18274
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
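The estimator named in this abstract builds on the standard AIPW score; a sketch of that per-observation score under our notation (the paper's contributions, the adaptive weighting and anytime-valid sequences, layer on top and are not reproduced here):

```python
import numpy as np

def aipw_scores(y, a, e, mu1, mu0):
    """Per-observation AIPW scores for the average treatment effect.
    y: outcomes, a: treatments in {0, 1}, e: assignment propensities at each step,
    mu1/mu0: outcome-model predictions under treatment/control."""
    return (mu1 - mu0
            + a * (y - mu1) / e
            - (1 - a) * (y - mu0) / (1 - e))

# ATE estimate: average of the scores. Truncating e away from 0/1, as the
# abstract notes, reduces finite-sample variance without changing the
# asymptotic variance:
# ate_hat = aipw_scores(y, a, np.clip(e, 0.05, 0.95), mu1, mu0).mean()
```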
null
https://openreview.net/forum?id=wxpxPL3RkP
@inproceedings{ schachtsiek2023class, title={Class Balanced Dynamic Acquisition for Domain Adaptive Semantic Segmentation using Active Learning}, author={Marc Schachtsiek and Simone Rossi and Thomas Hannagan}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=wxpxPL3RkP} }
Domain adaptive active learning is leading the charge in label-efficient training of neural networks. For semantic segmentation, state-of-the-art models jointly use two criteria, uncertainty and diversity, to select training labels, combined with a pixel-wise acquisition strategy. However, we show that such methods currently suffer from a class imbalance issue which degrades their performance for larger active learning budgets. We then introduce Class Balanced Dynamic Acquisition (CBDA), a novel active learning method that mitigates this issue, especially in high-budget regimes. The more balanced labels increase minority-class performance, which in turn allows the model to outperform the previous baseline by 0.6, 1.7, and 2.4 mIoU for budgets of 5%, 10%, and 20%, respectively. Additionally, the focus on minority classes leads to improvements of the minimum class performance of 0.5, 2.9, and 4.6 IoU, respectively. The top-performing model even exceeds the fully supervised baseline, showing that a label set more balanced than the full ground truth can be beneficial.
Class Balanced Dynamic Acquisition for Domain Adaptive Semantic Segmentation using Active Learning
[ "Marc Schachtsiek", "Simone Rossi", "Thomas Hannagan" ]
Workshop/ReALML
realml-2023
2311.14146
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wtDzsitgO8
@inproceedings{ poiani2023pure, title={Pure Exploration under Mediators{\textquoteright} Feedback}, author={Riccardo Poiani and Alberto Maria Metelli and Marcello Restelli}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=wtDzsitgO8} }
Stochastic multi-armed bandits are a sequential-decision-making framework, where, at each interaction step, the learner selects an arm and observes a stochastic reward. Within the context of best-arm identification (BAI) problems, the goal of the agent lies in finding the optimal arm, i.e., the one with the highest expected reward, as accurately and efficiently as possible. Nevertheless, the sequential interaction protocol of classical BAI problems, where the agent has complete control over the arm being pulled at each round, does not effectively model several decision-making problems of interest (e.g., off-policy learning, human feedback). For this reason, in this work, we propose a novel strict generalization of the classical BAI problem that we refer to as best-arm identification under mediators’ feedback (BAI-MF). More specifically, we consider the scenario in which the learner has access to a set of mediators, each of which selects the arms on the agent’s behalf according to a stochastic and possibly unknown policy. The mediator, then, communicates back to the agent the pulled arm together with the observed reward. In this setting, the agent’s goal lies in sequentially choosing which mediator to query to identify with high probability the optimal arm while minimizing the identification time, i.e., the sample complexity. To this end, we first derive and analyze a statistical lower bound on the sample complexity specific to our general mediator feedback scenario. Then, we propose a sequential decision-making strategy for discovering the best arm; as our theory verifies, this algorithm matches the lower bound both almost surely and in expectation.
Pure Exploration under Mediators’ Feedback
[ "Riccardo Poiani", "Alberto Maria Metelli", "Marcello Restelli" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wXCqXdKaO8
@inproceedings{ ramesh2023distributionally, title={{DISTRIBUTIONALLY} {ROBUST} {MODEL}-{BASED} {REINFORCEMENT} {LEARNING} {WITH} {LARGE} {STATE} {SPACES}}, author={Shyam Sundhar Ramesh and Pier Giuseppe Sessa and Yifan Hu and Andreas Krause and Ilija Bogunovic}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=wXCqXdKaO8} }
Three major challenges in reinforcement learning are complex dynamical systems with large state spaces, costly data acquisition processes, and the deviation of real-world dynamics from the training environment at deployment. To overcome these issues, we study distributionally robust Markov decision processes with continuous state spaces under the widely used Kullback–Leibler, chi-square, and total variation uncertainty sets. We propose a model-based approach that utilizes Gaussian Processes and the maximum variance reduction algorithm to efficiently learn multi-output nominal transition dynamics, leveraging access to a generative model (i.e., simulator). We further demonstrate the statistical sample complexity of the proposed method for different uncertainty sets. These complexity bounds are independent of the number of states and extend beyond linear dynamics, ensuring the effectiveness of our approach in identifying near-optimal distributionally robust policies. The proposed method can be further combined with other model-free distributionally robust reinforcement learning methods to obtain a near-optimal robust policy. Experimental results demonstrate the robustness of our algorithm to distributional shifts and its superior performance in terms of the number of samples needed.
Distributionally Robust Model-Based Reinforcement Learning with Large State Spaces
[ "Shyam Sundhar Ramesh", "Pier Giuseppe Sessa", "Yifan Hu", "Andreas Krause", "Ilija Bogunovic" ]
Workshop/ReALML
realml-2023
2309.02236
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vbPRv4KwfG
@inproceedings{ ament2023sustainable, title={Sustainable Concrete via Bayesian Optimization}, author={Sebastian Ament and Andrew Christopher Witte and Nishant Garg and Julius Kusuma}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=vbPRv4KwfG} }
Eight percent of global carbon dioxide emissions can be attributed to the production of cement, the main component of concrete, which is also the dominant source of CO2 emissions in the construction of data centers. The discovery of lower-carbon concrete formulae is therefore of high significance for sustainability. However, experimenting with new concrete formulae is time-consuming and labor-intensive, as one usually has to wait to record the concrete's 28-day compressive strength, a quantity whose measurement cannot, by definition, be accelerated. This provides an opportunity for experimental design methodology like Bayesian Optimization (BO) to accelerate the search for strong and sustainable concrete formulae. Herein, we 1) propose modeling steps that make concrete strength amenable to being predicted accurately by a Gaussian process model with relatively few measurements, 2) formulate the search for sustainable concrete as a multi-objective optimization problem, and 3) leverage the proposed model to carry out multi-objective BO with real-world strength measurements of the algorithmically proposed mixes. Our experimental results show improved trade-offs between the mixtures' global warming potential (GWP) and their associated compressive strengths, compared to mixes based on current industry practices. Our methods are open-sourced at github.com/facebookresearch/SustainableConcrete.
Sustainable Concrete via Bayesian Optimization
[ "Sebastian Ament", "Andrew Christopher Witte", "Nishant Garg", "Julius Kusuma" ]
Workshop/ReALML
realml-2023
2310.18288
[ "https://github.com/facebookresearch/sustainableconcrete" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=vBwfTUDTtz
@inproceedings{ matsuura2023active, title={Active Model Selection: A Variance Minimization Approach}, author={Mitsuru Matsuura and Satoshi Hara}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=vBwfTUDTtz} }
The cost of labeling is a significant challenge in practical machine learning. This issue arises not only during the learning phase but also at the model evaluation phase, as a substantial amount of labeled test data is needed in addition to the training data. In this study, we address the challenge of active model selection with the goal of minimizing labeling costs for choosing the best-performing model from a set of model candidates. Based on an appropriate test loss estimator, we propose an adaptive labeling strategy that can estimate the difference of test losses with small variance, thereby enabling estimation of the best model at a lower labeling cost. Experimental results on real-world datasets confirm that our method efficiently selects the best model.
Active Model Selection: A Variance Minimization Approach
[ "Mitsuru Matsuura", "Satoshi Hara" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=utVPf9dRgy
@inproceedings{ shao2023preferenceguided, title={Preference-Guided Bayesian Optimization for Control Policy Learning: Application to Personalized Plasma Medicine}, author={Ketong Shao and Diego Romeres and Ankush Chakrabarty and Ali Mesbah}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=utVPf9dRgy} }
This paper investigates the adaptation of control policies for personalized dose delivery in plasma medicine using preference-learning based Bayesian optimization. Preference learning empowers users to incorporate their preferences or domain expertise during the exploration of optimal control policies, which often results in fast attainment of personalized treatment outcomes. We establish that, compared to multi-objective Bayesian optimization (BO), preference-guided BO offers statistically faster convergence and computes solutions that better reflect user preferences. Moreover, it enables users to actively provide feedback during the policy search procedure, which helps to focus the search in sub-regions of the search space likely to contain preferred local optima. Our findings highlight the suitability of preference-learning-based BO for adapting control policies in plasma treatments, where both user preferences and swift convergence are of paramount importance.
Preference-Guided Bayesian Optimization for Control Policy Learning: Application to Personalized Plasma Medicine
[ "Ketong Shao", "Diego Romeres", "Ankush Chakrabarty", "Ali Mesbah" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=u6NK3Jm4ka
@inproceedings{ wyrwal2023residual, title={Residual Deep Gaussian Processes on Manifolds for Geometry-aware Bayesian Optimization on Hyperspheres}, author={Kacper Wyrwal and Viacheslav Borovitskiy}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=u6NK3Jm4ka} }
Gaussian processes (GPs) are a widely-used model class for approximating unknown functions, especially useful in tasks such as Bayesian optimisation, where accurate uncertainty estimates are key. Deep Gaussian processes (DGPs) are a multi-layered generalisation of GPs, which promises improved performance at modelling complex functions. Some of the problems where GPs and DGPs may be utilised involve data on manifolds like hyperspheres. Recent work has recognised this, generalising scalar-valued and vector-valued Matérn GPs to a broad class of Riemannian manifolds. Despite that, an appropriate analogue of DGP for Riemannian manifolds is missing. We introduce a new model, residual manifold DGP, and a suitable doubly stochastic variational inference technique that helps train and deploy it on hyperspheres. Through examination on stylised examples, we highlight the usefulness of residual deep manifold GPs on regression tasks and in Bayesian optimisation.
Residual Deep Gaussian Processes on Manifolds for Geometry-aware Bayesian Optimization on Hyperspheres
[ "Kacper Wyrwal", "Viacheslav Borovitskiy" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=u2eV6JA0nY
@inproceedings{ shrestha2023exploratory, title={Exploratory Training: When Annotators Learn About Data}, author={Rajesh Shrestha and Omeed Habibelahian and Arash Termehchy and Paolo Papotti}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=u2eV6JA0nY} }
ML systems often present examples and solicit labels from users to learn a target model, i.e., active learning. However, due to the complexity of the underlying data, users may not initially have a perfect understanding of the effective model and may not know the accurate labeling. For example, a user who is training a model for detecting noisy or abnormal values may not perfectly know the properties of typical and clean values in the data. Users may improve their knowledge about the data and target model as they observe examples during training. As users gradually learn about the data and model, they may revise their labeling strategies. Current systems assume that users always provide correct labels, with potentially a fixed and small chance of annotation mistakes. Nonetheless, if users revise their beliefs during training, such mistakes become significant and non-stationary. Hence, current systems consume incorrect labels and may learn inaccurate models. In this paper, we build theoretical underpinnings and design algorithms to develop systems that collaborate with users to learn the target model accurately and efficiently. At the core of our proposal, a game-theoretic framework models the joint learning of user and system to reach a desirable eventual stable state, where both user and system share the same belief about the target model. We extensively evaluate our system using user studies over various real-world datasets and show that our algorithms lead to accurate results with a smaller number of interactions compared to existing methods.
Exploratory Training: When Annotators Learn About Data
[ "Rajesh Shrestha", "Omeed Habibelahian", "Arash Termehchy", "Paolo Papotti" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=t3PzfH98Mq
@inproceedings{ wang-henderson2023graph, title={Graph Neural Bayesian Optimization for Virtual Screening}, author={Miles Wang-Henderson and Bartu Soyuer and Parnian Kassraie and Andreas Krause and Ilija Bogunovic}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=t3PzfH98Mq} }
Virtual screening is an essential component of early-stage drug and materials discovery. This is challenged by the increasingly intractable size of virtual libraries and the high cost of evaluating properties. We propose GNN-SS, a Graph Neural Network (GNN) powered Bayesian Optimization (BO) algorithm. GNN-SS utilizes random sub-sampling to reduce the computational complexity of the BO problem, and diversifies queries for training the model. We further introduce data-independent projections to efficiently model second-order random feature interactions, and improve uncertainty estimates. GNN-SS is computationally light, sample-efficient, and rapidly narrows the search space by leveraging the generalization ability of GNNs. Our algorithm achieves state-of-the-art performance among screening methods for the Practical Molecular Optimization benchmark.
Graph Neural Bayesian Optimization for Virtual Screening
[ "Miles Wang-Henderson", "Bartu Soyuer", "Parnian Kassraie", "Andreas Krause", "Ilija Bogunovic" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sTkValZrOS
@inproceedings{ audiffren2023zooming, title={Zooming Optimistic Optimization Method to solve the Threshold Estimation Problem}, author={Julien Audiffren}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=sTkValZrOS} }
This paper introduces a new global optimization algorithm that solves the threshold estimation problem. In this active learning problem, which underlies many empirical neuroscience and psychophysics experiments, the objective is to estimate the input values that would produce the desired output value from an unknown, noisy, non-decreasing response function. Compared to previous approaches, ZOOM (Zooming Optimistic Optimization Method) offers the best of both worlds: ZOOM is model-agnostic and benefits from stronger theoretical guarantees and a faster convergence rate, while also quickly jumping between arms, offering strong performance even for small sampling budgets.
Zooming Optimistic Optimization Method to solve the Threshold Estimation Problem
[ "Julien Audiffren" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qwrnFONObA
@inproceedings{ mcinerney2023hessianfree, title={Hessian-Free Laplace in Bayesian Deep Learning}, author={James McInerney and Nathan Kallus}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=qwrnFONObA} }
The Laplace approximation (LA) of the Bayesian posterior is a Gaussian distribution centered at the maximum a posteriori estimate. Its appeal in Bayesian deep learning stems from the ability to quantify uncertainty post-hoc (i.e., after standard network parameter optimization), the ease of sampling from the approximate posterior, and the analytic form of model evidence. Uncertainty in turn can direct experimentation. However, an important computational bottleneck of LA is the necessary step of calculating and inverting the Hessian matrix of the log posterior. The Hessian may be approximated in a variety of ways, with quality varying with a number of factors including the network, dataset, and inference task. In this paper, we propose an alternative algorithm that sidesteps Hessian calculation and inversion. The Hessian-free Laplace (HFL) approximation uses curvature of both the log posterior and network prediction to estimate its variance. Two point estimates are required: the standard maximum a posteriori parameters and the optimal parameter under a loss regularized by the network prediction. We show that under standard assumptions of LA in Bayesian deep learning, HFL targets the same variance as LA, and this is empirically explored in small-scale simulated experiments comparing against the exact Hessian.
Hessian-Free Laplace in Bayesian Deep Learning
[ "James McInerney", "Nathan Kallus" ]
Workshop/ReALML
realml-2023
2403.10671
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
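One way to read this abstract's two-point recipe, under our own notation and an assumed generic `optimize` routine (a sketch of the idea, not the paper's code): the prediction-regularized optimum moves the parameters by approximately $\lambda H^{-1} g$, so a finite difference of predictions recovers the linearized Laplace variance $g^{\top} H^{-1} g$ without ever forming or inverting $H$:

```python
def hfl_variance(x, loss_fn, f, theta_map, optimize, lam=1e-3):
    """Hessian-free Laplace predictive variance at input x (our reading of the
    abstract; `optimize`, `lam`, and the scaling are assumptions).
    theta_map minimizes loss_fn; theta_lam minimizes the loss regularized by the
    network prediction at x. A second-order expansion gives
        f(x, theta_lam) - f(x, theta_map)  ~  lam * g^T H^{-1} g,
    i.e., lam times the Laplace variance of the linearized prediction."""
    theta_lam = optimize(lambda th: loss_fn(th) - lam * f(x, th), init=theta_map)
    return (f(x, theta_lam) - f(x, theta_map)) / lam
```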
null
https://openreview.net/forum?id=obBbfvg5d0
@inproceedings{ khajehnejad2023on, title={On Complex Network Dynamics of an In-Vitro Neuronal System during Rest and Gameplay}, author={Moein Khajehnejad and Forough Habibollahi and Alon Loeffler and Brett Joseph Kagan and Adeel Razi}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=obBbfvg5d0} }
In this study, we characterize complex network dynamics in live in vitro neuronal systems during two distinct activity states: spontaneous rest state and engagement in a real-time (closed-loop) game environment using the DishBrain system. First, we embed the spiking activity of these channels in a lower-dimensional space using various representation learning methods and then extract a subset of representative channels. Next, by analyzing these low-dimensional representations, we explore the patterns of macroscopic neuronal network dynamics during learning. Remarkably, our findings indicate that just using the low-dimensional embedding of representative channels is sufficient to differentiate the neuronal culture during the Rest and Gameplay. Notably, our investigation shows dynamic changes in the connectivity patterns within the same region and across multiple regions on the multi-electrode array only during Gameplay. These findings underscore the plasticity of neuronal networks in response to external stimuli and highlight the potential for modulating connectivity in a controlled environment. The ability to distinguish between neuronal states using reduced-dimensional representations points to the presence of underlying patterns that could be pivotal for real-time monitoring and manipulation of neuronal cultures. Additionally, this provides insight into how biological based information processing systems rapidly adapt and learn and may lead to new improved algorithms.
On Complex Network Dynamics of an In-Vitro Neuronal System during Rest and Gameplay
[ "Moein Khajehnejad", "Forough Habibollahi", "Alon Loeffler", "Brett Joseph Kagan", "Adeel Razi" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=n9zR0sMY4c
@inproceedings{ vishwakarma2023humanintheloop, title={Human-in-the-Loop Out-of-Distribution Detection with False Positive Rate Control}, author={Harit Vishwakarma and Heguang Lin and Ramya Vinayak}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=n9zR0sMY4c} }
Robustness to Out-of-Distribution (OOD) samples is essential for the successful deployment of machine learning models in the open world. Since it is not possible to have a priori access to the variety of OOD data before deployment, several recent works have focused on designing scoring functions to quantify OOD uncertainty. These methods often find a threshold that achieves 95% true positive rate (TPR) on the In-Distribution (ID) data used for training and use this threshold for detecting OOD samples. However, this can lead to very high FPR: in a comprehensive evaluation on the Open-OOD benchmark, the FPR can range between 60% and 96% on several ID and OOD dataset combinations. In contrast, practical systems deal with a variety of OOD samples on the fly, and critical applications, e.g., medical diagnosis, demand guaranteed control of the false positive rate (FPR). To meet these challenges, we propose a mathematically grounded framework for human-in-the-loop OOD detection, wherein expert feedback is used to update the threshold. This allows the system to adapt to variations in the OOD data while adhering to the quality constraints. We propose an algorithm that uses anytime-valid confidence intervals based on the Law of Iterated Logarithm (LIL). Our theoretical results show that the system meets FPR constraints while minimizing the human feedback for points that are in-distribution. Another key feature of the system is that it can work with any existing post-hoc OOD uncertainty-quantification method. We evaluate our system empirically on a mixture of benchmark OOD datasets in an image classification task, with CIFAR-10 and CIFAR-100 as in-distribution datasets, and show that our method can maintain an FPR of at most 5% while maximizing TPR.
Human-in-the-Loop Out-of-Distribution Detection with False Positive Rate Control
[ "Harit Vishwakarma", "Heguang Lin", "Ramya Vinayak" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
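A sketch of the feedback-driven threshold update this abstract describes, using a LIL-style anytime-valid confidence radius; the constants follow common forms from the LIL bandit literature (Jamieson et al.), not necessarily the paper's:

```python
import numpy as np

def lil_radius(t, delta=0.05):
    """Anytime-valid confidence radius in the LIL style (one common form;
    the exact constants are an assumption, not the paper's)."""
    t = max(t, 2)
    return np.sqrt(1.5 * (2.0 * np.log(np.log(1.5 * t)) + np.log(2.0 / delta)) / t)

def update_threshold(scores, is_ood_labels, thresholds, alpha=0.05, delta=0.05):
    """Pick the most permissive threshold whose FPR upper bound stays below alpha.
    scores / is_ood_labels: expert-audited samples so far; higher score = more OOD."""
    s = np.asarray(scores)
    ood = np.asarray(is_ood_labels, dtype=bool)
    id_scores = s[~ood]            # false positives come from ID samples
    t = len(id_scores)
    for th in sorted(thresholds):  # ascending: lower threshold = higher TPR
        fpr_hat = (id_scores >= th).mean() if t else 1.0
        if fpr_hat + lil_radius(t, delta) <= alpha:
            return th              # smallest threshold certified FPR-safe
    return None                    # keep a conservative default until more feedback
```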
null
https://openreview.net/forum?id=kPWO1v0slD
@inproceedings{ blau2023crossentropy, title={Cross-Entropy Estimators for Sequential Experiment Design with Reinforcement Learning}, author={Tom Blau and Iadine Chades and Amir Dezfouli and Daniel M Steinberg and Edwin V. Bonilla}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=kPWO1v0slD} }
Reinforcement learning can learn amortised design policies for designing sequences of experiments. However, current methods rely on contrastive estimators of expected information gain, which require an exponential number of contrastive samples to achieve an unbiased estimation. We propose the use of an alternative lower bound estimator, based on the cross-entropy of the joint model distribution and a flexible proposal distribution. This proposal distribution approximates the true posterior of the model parameters given the experimental history and the design policy. Our method requires no contrastive samples, can achieve more accurate estimates of high information gains, allows learning of superior design policies, and is compatible with implicit probabilistic models. We assess our algorithm's performance in various tasks, including continuous and discrete designs and explicit and implicit likelihoods.
Cross-Entropy Estimators for Sequential Experiment Design with Reinforcement Learning
[ "Tom Blau", "Iadine Chades", "Amir Dezfouli", "Daniel M Steinberg", "Edwin V. Bonilla" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=k2kBjcKVal
@inproceedings{ char2023correlated, title={Correlated Trajectory Uncertainty for Adaptive Sequential Decision Making}, author={Ian Char and Youngseog Chung and Rohan Shah and Willie Neiswanger and Jeff Schneider}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=k2kBjcKVal} }
One of the great challenges with decision-making tasks on real-world systems is the fact that data is sparse and acquiring additional data is expensive. In these cases, it is often crucial to make a model of the environment to assist in making decisions. At the same time, limited data means that learned models are erroneous, making it just as important to equip the model with good predictive uncertainties. In the context of learning sequential decision-making policies, these uncertainties can prove useful for informing which data to collect for the greatest improvement in policy performance \citep{mehta2021experimental, mehta2022exploration} or informing the policy about unsure regions of state and action space to avoid during test time \citep{yu2020mopo}. Additionally, assuming that realistic samples of the environment can be drawn, an adaptable policy can be trained that attempts to make optimal decisions for any given possible instance of the environment \citep{ghosh2022offline, chen2021offline}. In this work, we examine the so-called ``probabilistic neural network'' (PNN) model that is ubiquitous in model-based reinforcement learning (MBRL) works. We argue that while PNN models may have good marginal uncertainties, they form a distribution of non-smooth transition functions. Not only are these samples unrealistic and may hamper adaptability, but we also assert that this leads to poor uncertainty estimates when predicting multi-step trajectory estimates. To address this issue, we propose a simple sampling method that can be implemented on top of pre-existing models. We evaluate our sampling technique on a number of environments, including a realistic nuclear fusion task, and find that not only do smooth transition function samples produce more calibrated uncertainties, but they also lead to better downstream performance for an adaptive policy.
Correlated Trajectory Uncertainty for Adaptive Sequential Decision Making
[ "Ian Char", "Youngseog Chung", "Rohan Shah", "Willie Neiswanger", "Jeff Schneider" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
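A toy illustration of the contrast this abstract draws: re-sampling i.i.d. noise at every step yields jagged, unrealistic transition-function samples, whereas reusing one draw per trajectory (one simple way to get smooth samples; the paper's exact scheme may differ) keeps each rollout internally consistent:

```python
import numpy as np

def rollout(model, s0, actions, rng, correlated=True):
    """Roll out a probabilistic dynamics model  s' ~ N(mu(s, a), sigma(s, a)).
    correlated=True reuses one standard-normal draw for the whole trajectory,
    so each rollout behaves like a single smooth sample of the transition function."""
    s = s0
    z_fixed = rng.standard_normal(s0.shape)
    traj = [s0]
    for a in actions:
        mu, sigma = model(s, a)
        z = z_fixed if correlated else rng.standard_normal(s.shape)
        s = mu + sigma * z
        traj.append(s)
    return np.stack(traj)
```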
null
https://openreview.net/forum?id=juq0ZUWOoY
@inproceedings{ li2023efficient, title={Efficient and scalable reinforcement learning via Hypermodel}, author={Yingru Li and Jiawei Xu and Zhi-Quan Luo}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=juq0ZUWOoY} }
Data-efficient reinforcement learning (RL) requires deep exploration. Thompson sampling is a principled method for deep exploration in reinforcement learning. However, Thompson sampling needs to track the degree of uncertainty by maintaining the posterior distribution of models, which is computationally feasible only in simple environments with restrictive assumptions. A key problem in modern RL is how to develop data- and computation-efficient algorithms that are scalable to large-scale complex environments. We develop a principled framework, called HyperFQI, to tackle both the computation and data efficiency issues. HyperFQI can be regarded as approximate Thompson sampling for reinforcement learning based on a hypermodel. The hypermodel in this context serves the role of uncertainty estimation for the action-value function. HyperFQI demonstrates its ability for efficient and scalable deep exploration in the DeepSea benchmark with large state spaces. HyperFQI also achieves super-human performance in the Atari benchmark with 2M interactions at low computation costs. We also give a rigorous performance analysis for the proposed method, justifying its computation and data efficiency. To the best of our knowledge, this is the first principled RL algorithm that is provably efficient and also practically scalable to complex environments such as the Arcade Learning Environment, which requires deep networks for pixel-based control.
Efficient and scalable reinforcement learning via Hypermodel
[ "Yingru Li", "Jiawei Xu", "Zhi-Quan Luo" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=iK8FzJvQMH
@inproceedings{ novitasari2023alas, title={{ALAS}: Active Learning for Autoconversion Rates Prediction from Satellite Data}, author={Maria Carolina Novitasari and Johannes Quaas and Miguel R. D. Rodrigues}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=iK8FzJvQMH} }
High-resolution simulations, such as the ICOsahedral Non-hydrostatic Large-Eddy Model (ICON-LEM), provide valuable insights into the complex interactions among aerosols, clouds, and precipitation, which are the major contributors to climate change uncertainty. However, due to its exorbitant computational costs, it can only be employed for a limited period and geographical area. To address this, we propose a more cost-effective method powered by an emerging machine learning approach to better understand the intricate dynamics of the climate system. Our approach involves active learning techniques -- leveraging the high-resolution climate simulation as the oracle and an abundant amount of unlabeled data drawn from satellite observations -- to predict autoconversion rates, a crucial step in precipitation formation, while significantly reducing the need for a large number of labeled instances. In this study, we present two novel query-strategy fusion methods for labeling instances, WiFi and MeFi, along with active feature selection based on SHAP, chosen for their simplicity and practicality in real-world application and focused on the prediction of autoconversion rates.
ALAS: Active Learning for Autoconversion Rates Prediction from Satellite Data
[ "Maria Carolina Novitasari", "Johannes Quaas", "Miguel R. D. Rodrigues" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=hzJq3WVGd9
@inproceedings{ kang2023nearequivalence, title={Near-equivalence between bounded regret and delay robustness in interactive decision making}, author={Enoch H. Kang and Panganamala Kumar}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=hzJq3WVGd9} }
Interactive decision making, encompassing bandits, contextual bandits, and reinforcement learning, has recently been of interest to theoretical studies of experimentation design and recommender system algorithm research. Recently, it has been shown that the well-known Graves-Lai constant being zero is a necessary and sufficient condition for achieving bounded (or constant) regret in interactive decision making. As this condition may be a strong requirement for many applications, the practical usefulness of pursuing bounded regret has been questioned. In this paper, we show that the condition of the Graves-Lai constant being zero is also necessary to achieve delay model robustness when reward delays are unknown (i.e., when feedback is anonymous). Here, model robustness is measured in terms of $\epsilon$-robustness, one of the most widely used and least adversarial robustness concepts in the robust statistics literature. In particular, we show that $\epsilon$-robustness cannot be achieved for a consistent (i.e., uniformly sub-polynomial regret) algorithm, however small the nonzero $\epsilon$ value is, when the Graves-Lai constant is not zero. While this is a strongly negative result, we also provide a positive result for linear reward models (linear contextual bandits, reinforcement learning with linear MDPs): the Graves-Lai constant being zero is also sufficient for achieving bounded regret without any knowledge of delay models, i.e., the best of both the efficiency world and the delay robustness world.
Near-equivalence between bounded regret and delay robustness in interactive decision making
[ "Enoch H. Kang", "Panganamala Kumar" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=hrFfR1WZgi
@inproceedings{ savage2023expertguided, title={Expert-guided Bayesian Optimisation for Human-in-the-loop Experimental Design of Known Systems}, author={Tom Savage and Antonio Del rio chanona}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=hrFfR1WZgi} }
Domain experts often possess valuable physical insights that are overlooked in fully automated decision-making processes such as Bayesian optimisation. In this article we apply high-throughput (batch) Bayesian optimisation alongside anthropological decision theory to enable domain experts to influence the selection of optimal experiments. Our methodology exploits the hypothesis that humans are better at making discrete choices than continuous ones and enables experts to influence critical early decisions. At each iteration we solve an augmented multi-objective optimisation problem across a number of alternate solutions, maximising both the sum of their utility function values and the determinant of their covariance matrix, equivalent to their total variability. By taking the solution at the knee point of the Pareto front, we return a set of alternate solutions at each iteration that have both high utility values and are reasonably distinct, from which the expert selects one for evaluation. We demonstrate that even in the case of an uninformed practitioner, our algorithm recovers the regret of standard Bayesian optimisation.
Expert-guided Bayesian Optimisation for Human-in-the-loop Experimental Design of Known Systems
[ "Tom Savage", "Antonio Del rio chanona" ]
Workshop/ReALML
realml-2023
2312.02852
[ "https://github.com/trsav/hitl-bo" ]
-1
-1
-1
-1
0
[]
[]
[]
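As a companion to the expert-guided BO record above: its key mechanical step is picking the knee point of a Pareto front over (sum of utility values, batch variability). A minimal numpy sketch of that selection step follows; the candidate scores are synthetic stand-ins, and the knee rule (farthest point from the chord joining the front's extremes) is a common heuristic we assume here, not necessarily the authors' exact implementation.

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points (both objectives maximised)."""
    idx = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q >= p) and np.any(q > p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            idx.append(i)
    return np.array(idx)

def knee_point(front):
    """Front point farthest from the chord joining the front's extremes.

    Assumes the front contains at least two points.
    """
    front = front[np.argsort(front[:, 0])]
    a, b = front[0], front[-1]
    chord = (b - a) / np.linalg.norm(b - a)
    v = front - a
    # perpendicular distance of each front point to the chord
    d = np.abs(chord[0] * v[:, 1] - chord[1] * v[:, 0])
    return front[np.argmax(d)]

# synthetic alternatives: (sum of utilities, log-det of covariance)
rng = np.random.default_rng(0)
alts = rng.random((50, 2))
print("knee point:", knee_point(alts[pareto_front(alts)]))
```

The expert then picks one solution from around the knee, trading a small utility loss for meaningfully distinct alternatives.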
null
https://openreview.net/forum?id=gkChsof0Rg
@inproceedings{ ochiai2023active, title={Active Testing of Binary Classification Model Using Level Set Estimation}, author={Takuma Ochiai and Keiichiro Seno and Kota Matsui and Satoshi Hara}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=gkChsof0Rg} }
In this study, we propose a method for estimating the test loss of a binary classification model with minimal labeling of the test data. The central idea of the proposed method is to reduce the problem of test loss estimation to the problem of level set estimation for the loss function. This reduction allows us to achieve sequential test loss estimation through iterative labeling using active learning methods for level set estimation. Through multiple dataset experiments, we confirmed that the proposed method is effective for evaluating binary classification models and allows for test loss estimation with fewer labeled samples compared to existing methods.
Active Testing of Binary Classification Model Using Level Set Estimation
[ "Takuma Ochiai", "Keiichiro Seno", "Kota Matsui", "Satoshi Hara" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
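The active-testing record above hinges on level set estimation; the classical "straddle" acquisition (Bryan et al.) is a standard way to pick which test point to label next, scoring points that are uncertain and close to the level of interest. A toy sketch below, with a placeholder posterior; this is one plausible instantiation, not necessarily the authors' exact criterion.

```python
import numpy as np

def straddle(mu, sigma, tau, beta=1.96):
    """Straddle score: high near the level set {f = tau} and where uncertain."""
    return beta * sigma - np.abs(mu - tau)

# placeholder posterior over per-sample loss at unlabeled test points
rng = np.random.default_rng(1)
mu = rng.random(1000)            # posterior mean of the loss
sigma = 0.1 * np.ones(1000)      # posterior standard deviation
tau = 0.5                        # loss level whose boundary we want to locate
next_query = int(np.argmax(straddle(mu, sigma, tau)))
```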
null
https://openreview.net/forum?id=ePglZTbdeI
@inproceedings{ yin2023nonparametric, title={Nonparametric Discrete Choice Experiments with Machine Learning Guided Adaptive Design}, author={Mingzhang Yin and Ruijiang Gao and Weiran Lin and Steven M. Shugan}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=ePglZTbdeI} }
Designing products to meet consumers' preferences is essential for a business's success. We propose Gradient-based Survey (GBS), a discrete choice experiment for multiattribute product design. The experiment elicits consumer preferences through a sequence of paired comparisons for partial profiles. GBS adaptively constructs paired comparison questions based on the respondents' previous choices. Unlike the traditional random utility maximization paradigm, GBS is robust to model misspecification by not requiring a parametric utility model. Cross-pollinating the machine learning and experiment design, GBS is scalable to products with hundreds of attributes and can design personalized products for heterogeneous consumers. We demonstrate the advantage of GBS in accuracy and sample efficiency compared to the existing parametric and nonparametric methods in simulations.
Nonparametric Discrete Choice Experiments with Machine Learning Guided Adaptive Design
[ "Mingzhang Yin", "Ruijiang Gao", "Weiran Lin", "Steven M. Shugan" ]
Workshop/ReALML
realml-2023
2310.12026
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=eLIk3m5C79
@inproceedings{ bakker2023active, title={Active Learning Policies for Solving Inverse Problems}, author={Tim Bakker and Thomas Hehn and Tribhuvanesh Orekondy and Arash Behboodi and Fabio Valerio Massoli}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=eLIk3m5C79} }
In recent years, solving inverse problems for black-box simulators has become a point of focus for the machine learning community due to their ubiquity in science and engineering scenarios. In such settings, the simulator describes a forward process $f: (\psi, x) \rightarrow y$ from simulator parameters $\psi$ and input data $x$ to observations $y$, and the goal of the inverse problem is to optimise $\psi$ to minimise some observation loss. Simulator gradients are often unavailable or prohibitively expensive to obtain, making optimisation of these simulators particularly challenging. Moreover, in many applications, the goal is to solve a family of related inverse problems. Thus, starting optimisation ab-initio/from-scratch may be infeasible if the forward model is expensive to evaluate. In this paper, we propose a novel method for solving classes of similar inverse problems. We learn an active learning policy that guides the training of a surrogate and use the gradients of this surrogate to optimise the simulator parameters with gradient descent. After training the policy, downstream inverse problem optimisations require up to 90\% fewer forward model evaluations than the baseline.
Active Learning Policies for Solving Inverse Problems
[ "Tim Bakker", "Thomas Hehn", "Tribhuvanesh Orekondy", "Arash Behboodi", "Fabio Valerio Massoli" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=eHTXFqa7pl
@inproceedings{ chen2023physicsenhanced, title={Physics-Enhanced Multi-fidelity Learning for Optical Surface Imprint}, author={Yongchao Chen}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=eHTXFqa7pl} }
Human fingerprints serve as a unique and powerful characteristic of each person, from which the police can establish identity. Similar to humans, many natural bodies and intrinsic mechanical qualities can also be uniquely identified from surface characteristics. To measure the elasto-plastic properties of a material, a formally sharp indenter is pushed into the measured body under constant force and retracted, leaving a unique residual imprint of minute size, from several micrometers down to nanometers. However, one great challenge is how to map the optical image of this residual imprint to the desired mechanical properties, i.e., the tensile force curve. In this paper, we propose a novel method that uses multi-fidelity neural networks (MFNN) to solve this inverse problem. We first actively train the NN model via pure simulation data, and then bridge the sim-to-real gap via transfer learning. The most innovative part is that we use the NN to uncover unknown physics and also embed known physics into the transfer learning framework, thus greatly improving model stability and decreasing the data requirement. This work serves as a strong example of applying machine learning to real experimental research, especially under the constraints of limited data and varying fidelity.
Physics-Enhanced Multi-fidelity Learning for Optical Surface Imprint
[ "Yongchao Chen" ]
Workshop/ReALML
realml-2023
2311.10278
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=dfUF5EbJUj
@inproceedings{ qin2023generalized, title={Generalized Objectives in Adaptive Experiments: The Frontier between Regret and Speed}, author={Chao Qin and Daniel Russo}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=dfUF5EbJUj} }
This paper formulates a generalized model of multi-armed bandit experiments that accommodates both cumulative regret minimization and best-arm identification objectives. We identify the optimal instance-dependent scaling of the cumulative cost across experimentation and deployment, which is expressed in the familiar form uncovered by Lai and Robbins (1985). We show that the nature of asymptotically efficient algorithms is nearly independent of the cost functions, emphasizing a remarkable universality phenomenon. Balancing various cost considerations is reduced to an appropriate choice of exploitation rate. Additionally, we explore the Pareto frontier between the length of experiment and the cumulative regret across experimentation and deployment. A notable and universal feature is that even a slight reduction in the exploitation rate (from one to a slightly lower value) results in a substantial decrease in the experiment's length, accompanied by only a minimal increase in the cumulative regret.
Generalized Objectives in Adaptive Experiments: The Frontier between Regret and Speed
[ "Chao Qin", "Daniel Russo" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
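For readers unfamiliar with the reference in the abstract above, the classical Lai-Robbins form is, in standard bandit notation (our transcription of the original 1985 lower bound, not the paper's generalized cost version):

```latex
\liminf_{n \to \infty} \frac{\mathbb{E}[\mathrm{Regret}(n)]}{\log n}
  \;\geq\; \sum_{a \,:\, \Delta_a > 0} \frac{\Delta_a}{\mathrm{KL}\!\left(\theta_a \,\|\, \theta^{\star}\right)}
```

where $\Delta_a$ is the suboptimality gap of arm $a$ and $\mathrm{KL}(\theta_a \,\|\, \theta^{\star})$ is the divergence between the reward distribution of arm $a$ and that of the optimal arm.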
null
https://openreview.net/forum?id=brPrxb9Zz3
@inproceedings{ nguyen2023expt, title={Ex{PT}: Scaling Foundation Models for Experimental Design via Synthetic Pretraining}, author={Tung Nguyen and Sudhanshu Agrawal and Aditya Grover}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=brPrxb9Zz3} }
Experimental design is a fundamental problem in many science and engineering fields. In this problem, sample efficiency is crucial due to the time, money, and safety costs of real-world design evaluations. Existing approaches either rely on active data collection or access to large, labeled datasets of past experiments, making them impractical in many real-world scenarios. In this work, we address the more challenging yet realistic setting of few-shot experimental design, where only a few labeled data points of input designs and their corresponding values are available. We approach this problem as a conditional generation task, where a model conditions on a few labeled examples and the desired output to generate an optimal input design. To this end, we present Pretrained Transformers for Experimental Design (ExPT), which uses a novel combination of synthetic pretraining with in-context learning to enable few-shot generalization. In ExPT, we only assume knowledge of a finite collection of unlabelled data points from the input domain and pretrain a transformer neural network to optimize diverse synthetic functions defined over this domain. Unsupervised pretraining allows ExPT to adapt to any design task at test time in an in-context fashion by conditioning on a few labeled data points from the target task and generating the candidate optima. We evaluate ExPT on few-shot experimental design in challenging domains and demonstrate its superior generality and performance compared to existing methods.
ExPT: Scaling Foundation Models for Experimental Design via Synthetic Pretraining
[ "Tung Nguyen", "Sudhanshu Agrawal", "Aditya Grover" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=YrAXARes9d
@inproceedings{ tian2023autodex, title={Aut{ODE}x: Automated Optimal Design of Experiments Platform with Data- and Time-Efficient Multi-Objective Optimization}, author={Yunsheng Tian and Pavle Vanja Konakovic and Beichen Li and Ane Zuniga and Michael Foshey and Timothy Erps and Wojciech Matusik and Mina Konakovic Lukovic}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=YrAXARes9d} }
We introduce AutODEx, an automated machine learning platform for optimal design of experiments to expedite solution discovery with optimal objective trade-offs. We implement state-of-the-art multi-objective Bayesian optimization (MOBO) algorithms in a unified and flexible framework for optimal design of experiments, along with efficient asynchronous batch strategies extended to MOBO to harness experiment parallelization. For users with little or no experience with coding or machine learning, we provide an intuitive graphical user interface (GUI) to help quickly visualize and guide the experiment design. For experienced researchers, our modular code structure serves as a testbed to quickly customize, develop, and evaluate their own MOBO algorithms. Extensive benchmark experiments against other MOBO packages demonstrate AutODEx's competitive and stable performance. Furthermore, we showcase AutODEx's real-world utility by autonomously guiding hardware experiments with minimal human involvement.
AutODEx: Automated Optimal Design of Experiments Platform with Data- and Time-Efficient Multi-Objective Optimization
[ "Yunsheng Tian", "Pavle Vanja Konakovic", "Beichen Li", "Ane Zuniga", "Michael Foshey", "Timothy Erps", "Wojciech Matusik", "Mina Konakovic Lukovic" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=YRxd1szajS
@inproceedings{ ren2023accelerated, title={Accelerated High-Entropy Alloys Discovery for Electrocatalysis via Robotic-Aided Active Learning}, author={Zhichu Ren and Zhen Zhang and Yunsheng Tian and Ju Li}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=YRxd1szajS} }
This work explores the accelerated discovery of High-Entropy Alloy (HEA) electrocatalysts using a novel carbothermal shock fabrication method, underpinned by an active learning approach. A high-throughput robotic platform, integrating a BoTorch-based active learning module with an Opentrons liquid handling robot and a 7-axis robotic arm, expedites the iterative experimental cycles. The recent integration of large language models leverages ChatGPT’s API, facilitating voice-driven interactions between researchers and the automation setup, further enhancing the autonomous workflow under experimental materials science scenarios. Initial optimization efforts for a green hydrogen production catalyst yield promising results, showcasing the efficacy of the active learning framework in navigating the complex materials design space of HEAs. This study also emphasizes the crucial need for consistency and reproducibility in real-world experiments to fully harness the potential of active learning in materials science explorations.
Accelerated High-Entropy Alloys Discovery for Electrocatalysis via Robotic-Aided Active Learning
[ "Zhichu Ren", "Zhen Zhang", "Yunsheng Tian", "Ju Li" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Xw8KTnFpLA
@inproceedings{ dovonon2023longrun, title={Long-run Behaviour of Multi-fidelity Bayesian Optimisation}, author={Gbetondji Jean-Sebastien Dovonon and Jakob Zeitler}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=Xw8KTnFpLA} }
Multi-fidelity Bayesian Optimisation (MFBO) has been shown to generally converge faster than single-fidelity Bayesian Optimisation (SFBO) (Poloczek et al., 2017). Inspired by recent benchmark papers, we are investigating the long-run behaviour of MFBO, based on observations in the literature that it might under-perform in certain scenarios (Mikkola et al., 2023; Eggensperger et al., 2021). An under-performance of MFBO in the long run could significantly undermine its application to many research tasks, especially when we are not able to identify when the under-performance begins, and other BO algorithms would have performed better. We create a simple benchmark study, showcase empirical results and discuss scenarios, concluding with inconclusive results.
Long-run Behaviour of Multi-fidelity Bayesian Optimisation
[ "Gbetondji Jean-Sebastien Dovonon", "Jakob Zeitler" ]
Workshop/ReALML
realml-2023
2312.12633
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Xu8d36bb5c
@inproceedings{ vishwakarma2023understanding, title={Understanding Threshold-based Auto-labeling: The Good, the Bad, and the Terra Incognita}, author={Harit Vishwakarma and Heguang Lin and Frederic Sala and Ramya Vinayak}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=Xu8d36bb5c} }
Creating large-scale high-quality labeled datasets is a major bottleneck in supervised machine learning workflows. Threshold-based auto-labeling (TBAL), where validation data obtained from humans is used to find a confidence threshold above which the data is machine-labeled, reduces reliance on manual annotation. TBAL is emerging as a widely-used solution in practice. Given the long shelf-life and diverse usage of the resulting datasets, understanding when the data obtained by such auto-labeling systems can be relied on is crucial. This is the first work to analyze TBAL systems and derive sample complexity bounds on the amount of human-labeled validation data required for guaranteeing the quality of machine-labeled data. Our results provide two crucial insights. First, reasonable chunks of unlabeled data can be automatically and accurately labeled by seemingly bad models. Second, a hidden downside of TBAL systems is potentially prohibitive validation data usage. Together, these insights describe the promise and pitfalls of using such systems. We validate our theoretical guarantees with extensive experiments on synthetic and real datasets.
Understanding Threshold-based Auto-labeling: The Good, the Bad, and the Terra Incognita
[ "Harit Vishwakarma", "Heguang Lin", "Frederic Sala", "Ramya Vinayak" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
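The TBAL workflow analyzed in the record above reduces, at its core, to calibrating a confidence threshold on human-labeled validation data and machine-labeling everything above it. A minimal sketch under illustrative assumptions (the error target and the plug-in error estimator are ours, not the paper's exact procedure):

```python
import numpy as np

def tbal_threshold(conf_val, correct_val, err_target=0.05):
    """Smallest confidence threshold whose validation error is below target.

    conf_val:    model confidences on human-labeled validation points
    correct_val: 1 if the model's prediction matched the human label, else 0
    """
    for t in np.sort(conf_val):
        mask = conf_val >= t
        if mask.sum() == 0:
            break
        err = 1.0 - correct_val[mask].mean()
        if err <= err_target:
            return t
    return None  # no threshold meets the target; auto-label nothing

# pool points with confidence >= t receive machine labels;
# the remainder are routed to human annotators
```

The paper's warning is visible even here: the reliability of the plug-in error estimate above the threshold is limited by how much validation data falls in that region, which is exactly where validation usage can become prohibitive.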
null
https://openreview.net/forum?id=WWqJWiyQ2D
@inproceedings{ ikram2023probabilistic, title={Probabilistic Generative Modeling for Procedural Roundabout Generation for Developing Countries}, author={Zarif Ikram and Ling Pan and Dianbo Liu}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=WWqJWiyQ2D} }
Due to limited resources and fast economic growth, designing optimal transportation road networks with traffic simulation and validation in a cost-effective manner is vital for developing countries, where extensive manual testing is expensive and often infeasible. Current rule-based road design generators lack diversity, a key feature for design robustness. Generative Flow Networks (GFlowNets) learn stochastic policies to sample from an unnormalized reward distribution, thus generating high-quality solutions while preserving their diversity. In this work, we formulate the problem of linking incident roads to the circular junction of a roundabout as a Markov decision process, and we leverage GFlowNets as the Junction-Art road generator. We compare our method with related methods and our empirical results show that our method achieves better diversity while preserving a high validity score.
Probabilistic Generative Modeling for Procedural Roundabout Generation for Developing Countries
[ "Zarif Ikram", "Ling Pan", "Dianbo Liu" ]
Workshop/ReALML
realml-2023
2310.03687
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=WDLXX4NJSK
@inproceedings{ shen2023efficient, title={Efficient Variational Sequential Information Control}, author={Jianwei Shen and Jason Pacheco}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=WDLXX4NJSK} }
We develop a family of fast variational methods for sequential control in dynamical settings where an agent is incentivized to maximize information gain. We consider the case of optimal control in continuous nonlinear dynamical systems that prohibit exact evaluation of the mutual information (MI) reward. Our approach couples efficient message-passing inference with variational bounds on the MI objective under Gaussian projections. We also develop a Gaussian mixture approximation that enables exact MI evaluation under constraints on the component covariances. We validate our methodology in nonlinear systems with superior and faster control compared to standard particle-based methods. We show our approach improves the accuracy and efficiency of one-shot robotic learning with intrinsic MI rewards.
Efficient Variational Sequential Information Control
[ "Jianwei Shen", "Jason Pacheco" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=UWJpUpG8Cv
@inproceedings{ kassraie2023anytime, title={Anytime Model Selection in Linear Bandits}, author={Parnian Kassraie and Nicolas Emmenegger and Andreas Krause and Aldo Pacchiano}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=UWJpUpG8Cv} }
Model selection in the context of bandit optimization is a challenging problem, as it requires balancing exploration and exploitation not only for action selection, but also for model selection. One natural approach is to rely on online learning algorithms that treat different models as experts. Existing methods, however, scale poorly ($\mathrm{poly}(M)$) with the number of models $M$ in terms of their regret. We develop ALEXP, an anytime algorithm, which has an exponentially improved ($\log M$) dependence on $M$ for its regret. We neither require knowledge of the horizon $n$, nor rely on an initial purely exploratory stage. Our approach utilizes a novel time-uniform analysis of the Lasso, by defining a self-normalized martingale sequence based on the empirical process error, establishing a new connection between interactive learning and high-dimensional statistics.
Anytime Model Selection in Linear Bandits
[ "Parnian Kassraie", "Nicolas Emmenegger", "Andreas Krause", "Aldo Pacchiano" ]
Workshop/ReALML
realml-2023
2307.12897
[ "https://github.com/lasgroup/alexp" ]
-1
-1
-1
-1
0
[]
[]
[]
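The $\log M$ dependence in the record above is characteristic of exponential-weights (Hedge-style) aggregation over experts; a generic sketch follows. The learning rate and loss range are assumptions, and ALEXP's actual contribution, the time-uniform Lasso analysis, is not reproduced here.

```python
import numpy as np

def hedge(losses, eta=0.1, seed=0):
    """Exponential-weights (Hedge) aggregation over M experts.

    losses: (T, M) array of per-round expert losses in [0, 1].
    Regret against the best expert grows like sqrt(T log M) --
    the log M dependence referred to above.
    """
    rng = np.random.default_rng(seed)
    T, M = losses.shape
    log_w = np.zeros(M)
    picks = []
    for t in range(T):
        p = np.exp(log_w - log_w.max())   # subtract max for stability
        p /= p.sum()
        picks.append(int(rng.choice(M, p=p)))
        log_w -= eta * losses[t]          # multiplicative-weights update
    return picks
```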
null
https://openreview.net/forum?id=ThMSXaolvn
@inproceedings{ hern{\'a}ndez-garc{\'\i}a2023multifidelity, title={Multi-Fidelity Active Learning with {GF}lowNets}, author={Alex Hern{\'a}ndez-Garc{\'\i}a and Nikita Saxena and Moksh Jain and Cheng-Hao Liu and Yoshua Bengio}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=ThMSXaolvn} }
Many relevant scientific and engineering problems present challenges where current machine learning methods cannot yet efficiently leverage the available data and resources. For example, certain relevant problems involve exploring very large, structured and high-dimensional spaces in which querying a high-fidelity, black-box objective function is very expensive. Progress in machine learning methods that can efficiently tackle such problems would help accelerate currently crucial areas such as drug and materials discovery. In this paper, we propose a multi-fidelity active learning algorithm with GFlowNets as a sampler, to efficiently discover diverse, high-scoring candidates where multiple approximations of the black-box function are available at lower fidelity and cost. Our evaluation on molecular discovery tasks shows that multi-fidelity active learning with GFlowNets can discover high-scoring candidates at a fraction of the budget of its single-fidelity counterpart while maintaining diversity, unlike RL-based alternatives. These results open new avenues for multi-fidelity active learning to accelerate scientific discovery and engineering design.
Multi-Fidelity Active Learning with GFlowNets
[ "Alex Hernández-García", "Nikita Saxena", "Moksh Jain", "Cheng-Hao Liu", "Yoshua Bengio" ]
Workshop/ReALML
realml-2023
2306.11715
[ "https://github.com/nikita-0209/mf-al-gfn" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=SuSPkCI0qP
@inproceedings{ fowler2023learning, title={Learning in Clinical Trial Settings}, author={Zoe Fowler and Kiran Premdat Kokilepersaud and Mohit Prabhushankar and Ghassan AlRegib}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=SuSPkCI0qP} }
This paper presents an approach to active learning that considers the non-independent and identically distributed (non-i.i.d.) structure of a clinical trial setting. There exist two types of clinical trials: retrospective and prospective. Retrospective clinical trials analyze data after treatment has been performed; prospective clinical trials collect data as treatment is ongoing. Traditional active learning approaches are often unrealistic in practice and assume the dataset is i.i.d. when selecting training samples; however, in the case of clinical trials, treatment results in a dependency between the data collected at the current and past visits. Thus, we propose prospective active learning to overcome the limitations present in traditional active learning methods, where we condition on the time data was collected. We compare our proposed method to the traditional active learning paradigm, which we refer to as retrospective in nature, on one clinical trial dataset and one non-clinical trial dataset. We show that in clinical trial settings, our proposed method outperforms retrospective active learning.
Learning in Clinical Trial Settings
[ "Zoe Fowler", "Kiran Premdat Kokilepersaud", "Mohit Prabhushankar", "Ghassan AlRegib" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=SmvTEe9iSG
@inproceedings{ sinaga2023preferential, title={Preferential Heteroscedastic Bayesian Optimization with Informative Noise Priors}, author={Marshal Arijona Sinaga and Julien Martinelli and Samuel Kaski}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=SmvTEe9iSG} }
Preferential Bayesian optimization (PBO) is a sample-efficient framework for optimizing a black-box function by utilizing human preferences between two candidate solutions as a proxy. Conventional PBO relies on homoscedastic noise to model human preference structure. However, such noise fails to accurately capture the varying levels of human aleatoric uncertainty among different pairs of candidates. For instance, a chemist with solid expertise in glucose-related molecules may easily compare two such compounds but struggle with alcohol-related molecules. Furthermore, PBO ignores this uncertainty when searching for a new candidate, consequently underestimating the risk associated with human uncertainty. To address this, we propose heteroscedastic noise models to learn human preference structure. Moreover, we integrate the preference structure with acquisition functions that account for aleatoric uncertainty. The noise models assign noise based on the distance of a specific input to a predefined set of reliable inputs known as \emph{anchors}. We empirically evaluate the proposed approach on a range of synthetic black-box functions, demonstrating a consistent improvement over homoscedastic PBO.
Preferential Heteroscedastic Bayesian Optimization with Informative Noise Priors
[ "Marshal Arijona Sinaga", "Julien Martinelli", "Samuel Kaski" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
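The record above assigns preference noise by distance to a set of reliable "anchor" inputs. A toy version of such a distance-based noise model is sketched below; the specific functional form (affine in the nearest-anchor distance) is our assumption for illustration.

```python
import numpy as np

def anchor_noise(x, anchors, sigma_min=0.05, gamma=1.0):
    """Noise level grows with distance from the nearest anchor input."""
    d = np.min(np.linalg.norm(anchors - x, axis=-1))
    return sigma_min + gamma * d

# inputs the expert is assumed to judge reliably
anchors = np.array([[0.2, 0.8], [0.5, 0.5]])
print(anchor_noise(np.array([0.9, 0.1]), anchors))  # far from anchors -> noisy
```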
null
https://openreview.net/forum?id=ScOvmGz4xH
@inproceedings{ bal2023optimistic, title={Optimistic Games for Combinatorial Bayesian Optimization with Applications to Protein Design}, author={Melis Ilayda Bal and Pier Giuseppe Sessa and Mojmir Mutny and Andreas Krause}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=ScOvmGz4xH} }
Bayesian optimization (BO) is a powerful framework to optimize black box expensive-to-evaluate functions via sequential interactions. In several important problems (e.g. drug discovery, circuit design, neural architecture search, etc.), though, such functions are defined over $\textit{combinatorial and unstructured}$ spaces. This makes existing BO algorithms not feasible due to the intractable maximization of the acquisition function to find informative evaluation points. To address this issue, we propose $\textbf{GameOpt}$, a novel game-theoretical approach to combinatorial BO. $\textbf{GameOpt}$ establishes a cooperative game between the different optimization variables and computes informative points to be game $\textit{equilibria}$ of the acquisition function. These are stable configurations from which no variable has an incentive to deviate -- analogous to local optima in continuous domains. Crucially, this allows us to efficiently break down the complexity of the combinatorial domain into individual decision sets, making $\textbf{GameOpt}$ scalable to large combinatorial spaces. We demonstrate the application of $\textbf{GameOpt}$ to the challenging $\textit{protein design}$ problem and validate its performance on two real-world protein datasets. Each protein can take up to $20^{X}$ possible configurations, where $X$ is the length of a protein, making standard BO methods unusable. Instead, our approach iteratively selects informative protein configurations and very quickly discovers highly active protein variants compared to other baselines.
Optimistic Games for Combinatorial Bayesian Optimization with Applications to Protein Design
[ "Melis Ilayda Bal", "Pier Giuseppe Sessa", "Mojmir Mutny", "Andreas Krause" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
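GameOpt, per the record above, computes informative points as game equilibria of the acquisition function. A generic per-position best-response loop over a discrete sequence space captures the idea; the acquisition below is a placeholder, not the paper's.

```python
import numpy as np

def best_response_equilibrium(acq, alphabet, length, iters=50, seed=0):
    """Iterate per-position best responses until no variable wants to deviate."""
    rng = np.random.default_rng(seed)
    x = rng.choice(alphabet, size=length)
    for _ in range(iters):
        changed = False
        for i in range(length):
            vals = []
            for a in alphabet:
                cand = x.copy()
                cand[i] = a
                vals.append(acq(cand))
            best = alphabet[int(np.argmax(vals))]
            if best != x[i]:
                x[i], changed = best, True
        if not changed:   # equilibrium: no unilateral improvement exists
            return x
    return x

def toy_acq(seq):
    """Placeholder acquisition; a real run would use e.g. a UCB surrogate."""
    return sum(ord(c) for c in seq) % 97

aa = list("ACDEFGHIKLMNPQRSTVWY")   # 20-letter amino-acid alphabet
print("".join(best_response_equilibrium(toy_acq, aa, 20)))
```

Each inner sweep touches only length x alphabet-size candidates, which is how the combinatorial $20^X$ space is broken into individual decision sets.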
null
https://openreview.net/forum?id=QIwA1zUd2t
@inproceedings{ nam2023npcnis, title={{NPC}-{NIS}: Navigating Semiconductor Process Corners with Neural Importance Sampling}, author={Hong Chul Nam and Chanwoo Park}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=QIwA1zUd2t} }
Traditional corner case analysis in semiconductor circuit design typically involves the use of predetermined semiconductor process parameters, including Fast, Typical, and Slow corners for PMOS and NMOS devices, frequently yielding overly conservative designs due to the utilization of fixed, and potentially non-representative, process parameter values for circuit simulations. Identifying the worst cases of circuit figures of merit (FoMs) within typical semiconductor process variation ranges presents a considerable challenge, especially given the complexities associated with accurately sampling rare semiconductor events. In response, we introduce NPC-NIS, a model specifically developed for estimating rare cases in semiconductor circuit analysis, leveraging a learnable importance sampling strategy. We model the distribution of process parameters that exhibit the worst FoMs within a realistic range. This adaptable framework dynamically identifies and addresses rare semiconductor cases within typical process variation ranges, enhancing our circuit design optimization capabilities under realistic conditions. Our empirical results validate the effectiveness of the Neural Importance Sampling (NIS) approach in identifying and mitigating rare semiconductor scenarios, thereby contributing to the development of more robust and reliable semiconductor circuit designs and connecting traditional semiconductor corner case analysis with real-world semiconductor applications.
NPC-NIS: Navigating Semiconductor Process Corners with Neural Importance Sampling
[ "Hong Chul Nam", "Chanwoo Park" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=P7PuMQEKbF
@inproceedings{ che2023planning, title={Planning Contextual Adaptive Experiments with Model Predictive Control}, author={Ethan Che and Jimmy Wang and Hongseok Namkoong}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=P7PuMQEKbF} }
Implementing adaptive experimentation methods in the real world often encounters a multitude of operational difficulties, including batched/delayed feedback, non-stationary environments, and constraints on treatment allocations. To improve the flexibility of adaptive experimentation, we propose a Bayesian, optimization-based framework founded on model-predictive control (MPC) for the linear contextual bandit setting. While we focus on simple regret minimization, the framework can flexibly incorporate multiple objectives along with constraints, batches, personalized and non-personalized policies, as well as predictions of future context arrivals. Most importantly, it maintains this flexibility while guaranteeing improvement over non-adaptive A/B testing across all time horizons, and empirically outperforms standard policies such as Thompson Sampling. Overall, this framework offers a way to guide adaptive designs across the varied demands of modern large-scale experiments.
Planning Contextual Adaptive Experiments with Model Predictive Control
[ "Ethan Che", "Jimmy Wang", "Hongseok Namkoong" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=OWz37WOETP
@inproceedings{ martinelli2023learning, title={Learning relevant contextual variables within Bayesian optimization}, author={Julien Martinelli and Ayush Bharti and Armi Tiihonen and Louis Filstroff and S. T. John and Sabina J. Sloman and Patrick Rinke and Samuel Kaski}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=OWz37WOETP} }
Contextual Bayesian Optimization (CBO) efficiently optimizes black-box, expensive-to-evaluate functions with respect to design variables, while simultaneously integrating relevant contextual information regarding the environment, such as experimental conditions. However, the relevance of contextual variables is not necessarily known beforehand. Moreover, contextual variables can sometimes be optimized themselves, a setting overlooked by current CBO algorithms. Optimizing contextual variables may be costly, which raises the question of determining a minimal relevant subset. We address this problem using a novel method, Sensitivity-Analysis-Driven Contextual BO (SADCBO). We learn the relevance of context variables by sensitivity analysis of the posterior surrogate model, whilst minimizing the cost of optimization by leveraging recent developments on early stopping for BO. We empirically evaluate our proposed SADCBO against alternatives on both synthetic and real-world experiments, and demonstrate a consistent improvement across examples.
Learning relevant contextual variables within Bayesian optimization
[ "Julien Martinelli", "Ayush Bharti", "Armi Tiihonen", "Louis Filstroff", "S. T. John", "Sabina J. Sloman", "Patrick Rinke", "Samuel Kaski" ]
Workshop/ReALML
realml-2023
2305.14120
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=JVKQ5ovWgN
@inproceedings{ lau2023pinnacle, title={{PINNACLE}: {PINN} Adaptive ColLocation and Experimental points selection}, author={Gregory Kang Ruey Lau and Apivich Hemachandra and See-Kiong Ng and Bryan Kian Hsiang Low}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=JVKQ5ovWgN} }
Physics-Informed Neural Networks (PINNs), which incorporate PDEs as soft constraints, train with a composite loss function that contains multiple training point types: different types of collocation points chosen during training to enforce each PDE and initial/boundary conditions, and experimental points which are usually costly to obtain via experiments or simulations. Training PINNs using this loss function is challenging as it typically requires selecting large numbers of points of different types, each with different training dynamics. Unlike past works that focused on the selection of either collocation or experimental points, this work introduces PINN Adaptive ColLocation and Experimental points selection (PINNACLE), the first algorithm that jointly optimizes the selection of all training point types, while automatically adjusting the proportion of collocation point types as training progresses. PINNACLE uses information on the interactions among training point types, which had not been considered before, based on an analysis of PINN training dynamics via the Neural Tangent Kernel (NTK). We theoretically show that the criterion used by PINNACLE is related to the PINN generalization error, and empirically demonstrate that PINNACLE is able to outperform existing point selection methods for forward, inverse, and transfer learning problems.
PINNACLE: PINN Adaptive ColLocation and Experimental points selection
[ "Gregory Kang Ruey Lau", "Apivich Hemachandra", "See-Kiong Ng", "Bryan Kian Hsiang Low" ]
Workshop/ReALML
realml-2023
2404.07662
[ "https://github.com/apivich-h/pinnacle" ]
https://huggingface.co/papers/2404.07662
0
0
0
4
1
[]
[]
[]
null
https://openreview.net/forum?id=JJrwnclCFZ
@inproceedings{ sorourifar2023accelerating, title={Accelerating Black-Box Molecular Property Optimization by Adaptively Learning Sparse Subspaces}, author={Farshud Sorourifar and Thomas Banker and Joel Paulson}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=JJrwnclCFZ} }
Molecular property optimization (MPO) problems are inherently challenging since they are formulated over discrete, unstructured spaces and the labeling process involves expensive simulations or experiments, which fundamentally limits the amount of available data. Bayesian optimization (BO), which is a powerful and popular framework for efficient optimization of noisy, black-box objective functions (e.g., measured property values), thus is a potentially attractive framework for MPO. To apply BO to MPO problems, one must select a structured molecular representation that enables construction of a probabilistic surrogate model. Many molecular representations have been developed; however, they are all high-dimensional, which introduces important challenges in the BO process – mainly because the curse of dimensionality makes it difficult to define and perform inference over a suitable class of surrogate models. This challenge has been recently addressed by learning a lower-dimensional encoding of a SMILES or graph representation of a molecule in an unsupervised manner and then performing BO in the encoded space. In this work, we show that such methods have a tendency to “get stuck,” which we hypothesize occurs since the mapping from the encoded space to property values is not necessarily well-modeled by a Gaussian process. We argue for an alternative approach that combines numerical molecular descriptors with a sparse axis-aligned Gaussian process model, which is capable of rapidly identifying sparse subspaces that are most relevant to modeling the unknown property function. We demonstrate that our proposed method substantially outperforms existing MPO methods on a variety of benchmark and real-world problems. Specifically, we show that our method can routinely find near-optimal molecules out of a set of more than 100k alternatives within 100 or fewer expensive queries.
Accelerating Black-Box Molecular Property Optimization by Adaptively Learning Sparse Subspaces
[ "Farshud Sorourifar", "Thomas Banker", "Joel Paulson" ]
Workshop/ReALML
realml-2023
2401.01398
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
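Sparse axis-aligned subspace (SAAS) Gaussian processes of the kind the record above argues for are available in BoTorch; a hedged sketch follows. The API names reflect recent BoTorch releases (check your installed version), and the descriptor matrix and property values are synthetic placeholders, not the authors' pipeline.

```python
import torch
from botorch.models.fully_bayesian import SaasFullyBayesianSingleTaskGP
from botorch.fit import fit_fully_bayesian_model_nuts

# X: n x d numerical molecular descriptors, y: n x 1 property values
X = torch.rand(30, 100, dtype=torch.double)
y = X[:, :3].sum(dim=-1, keepdim=True)   # only 3 of 100 dims actually matter

gp = SaasFullyBayesianSingleTaskGP(X, y)
fit_fully_bayesian_model_nuts(gp, warmup_steps=256, num_samples=128, thinning=16)
# the SAAS half-Cauchy prior shrinks most inverse-lengthscales toward zero,
# exposing the small axis-aligned subspace relevant to the property
```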
null
https://openreview.net/forum?id=GUKt7ENgSr
@inproceedings{ ding2023ever, title={Ever Evolving Evaluator ({EV}3): Towards Flexible and Reliable Meta-Optimization for Knowledge Distillation}, author={Li Ding and Masrour Zoghi and Guy Tennenholtz and Maryam Karimzadehgan}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=GUKt7ENgSr} }
We introduce EV3, a novel meta-optimization framework designed to efficiently train scalable machine learning models through an intuitive explore-assess-adapt protocol. In each iteration of EV3, we explore various model parameter updates, assess them using pertinent evaluation methods, and then adapt the model based on the optimal updates and previous progress history. EV3 offers substantial flexibility without imposing stringent constraints like differentiability on the key objectives relevant to the tasks of interest, allowing for exploratory updates with intentionally-biased gradients and through a diversity of losses and optimizers. Additionally, the assessment phase provides reliable safety controls to ensure robust generalization, and can dynamically prioritize tasks in scenarios with multiple objectives. With inspiration drawn from evolutionary algorithms, meta-learning, and neural architecture search, we investigate an application of EV3 to knowledge distillation. Our experimental results illustrate EV3's capability to safely explore the modeling landscape, while hinting at its potential applicability across numerous domains due to its inherent flexibility and adaptability. Finally, we provide a JAX implementation of EV3, along with source code for experiments, available at: https://github.com/google-research/google-research/tree/master/ev3.
Ever Evolving Evaluator (EV3): Towards Flexible and Reliable Meta-Optimization for Knowledge Distillation
[ "Li Ding", "Masrour Zoghi", "Guy Tennenholtz", "Maryam Karimzadehgan" ]
Workshop/ReALML
realml-2023
2310.18893
[ "https://github.com/google-research/google-research" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=GKq0Vco2TW
@inproceedings{ mishra2023provablyconvergent, title={Provably-Convergent Bayesian Source Seeking with Mobile Agents in Multimodal Fields}, author={Vivek Mishra and Raul Astudillo and Peter I. Frazier and Fumin Zhang}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=GKq0Vco2TW} }
We consider source-seeking tasks, where the goal is to locate a source using a mobile agent that gathers potentially noisy measurements from the emitted signal. Such tasks are prevalent, for example, when searching radioactive or chemical sources using mobile sensors that track wind-carried particles. In this work, we propose an iterative Bayesian algorithm for source seeking, especially well-suited for challenging environments characterized by multimodal signal intensity and noisy observations. At each step, this algorithm computes a Bayesian posterior distribution characterizing the source's location using prior physical knowledge of the observation process and the accumulated data. Subsequently, it decides where the agent should move and observe next by following a search strategy that implicitly considers paths to the source's most likely location under the posterior. We show that the trajectory of an agent executing the proposed algorithm converges to the source's location asymptotically with probability one. We validate the algorithm's convergence through simulated experiments of an agent seeking a chemical plume in a turbulent environment.
Provably-Convergent Bayesian Source Seeking with Mobile Agents in Multimodal Fields
[ "Vivek Mishra", "Raul Astudillo", "Peter I. Frazier", "Fumin Zhang" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=G6ujG6LaKV
@inproceedings{ n{\'e}meth2023computeefficient, title={Compute-Efficient Active Learning}, author={G{\'a}bor N{\'e}meth and Tamas Matuszka}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=G6ujG6LaKV} }
Active learning, a powerful paradigm in machine learning, aims at reducing labeling costs by selecting the most informative samples from an unlabeled dataset. However, the traditional active learning process often demands extensive computational resources, hindering scalability and efficiency. In this paper, we address this critical issue by presenting a novel method designed to alleviate the computational burden associated with active learning on massive datasets. To achieve this goal, we introduce a simple, yet effective method-agnostic framework that outlines how to strategically choose and annotate data points, optimizing the process for efficiency while maintaining model performance. Through case studies, we demonstrate the effectiveness of our proposed method in reducing computational costs while maintaining or, in some cases, even surpassing baseline model outcomes. Code is available at https://github.com/aimotive/Compute-Efficient-Active-Learning
Compute-Efficient Active Learning
[ "Gábor Németh", "Tamas Matuszka" ]
Workshop/ReALML
realml-2023
2401.07639
[ "https://github.com/aimotive/Compute-Efficient-Active-Learning" ]
-1
-1
-1
-1
0
[]
[]
[]
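One common method-agnostic way to cut active-learning compute, in the spirit of the framework in the record above, is to score only a random sub-pool each round instead of the full unlabeled set. The sketch below is that generic technique, plainly named, and not necessarily the paper's exact procedure.

```python
import numpy as np

def cheap_al_round(pool_ids, score_fn, subpool_size=10_000, batch=256, seed=0):
    """Score a random sub-pool rather than the whole unlabeled pool."""
    rng = np.random.default_rng(seed)
    sub = rng.choice(pool_ids, size=min(subpool_size, len(pool_ids)),
                     replace=False)
    scores = score_fn(sub)                     # e.g. predictive entropy
    return sub[np.argsort(scores)[-batch:]]    # most informative of the sub-pool
```

Scoring cost per round drops from O(|pool|) to O(subpool_size) forward passes, which is the main lever on massive datasets.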
null
https://openreview.net/forum?id=FpMRFG3z2Q
@inproceedings{ song2023circuitvae, title={Circuit{VAE}: Efficient and Scalable Latent Circuit Optimization}, author={Jialin Song and Aidan Swope and Robert Kirby and Rajarshi Roy and Saad Godil and Jonathan Raiman and Bryan Catanzaro}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=FpMRFG3z2Q} }
Automatically designing fast and space-efficient digital circuits is challenging because circuits are discrete, must exactly implement the desired logic, and are costly to simulate. We address these challenges with CircuitVAE, a search algorithm that embeds computation graphs in a continuous space and optimizes a learned surrogate of physical simulation by gradient descent. By carefully controlling overfitting of the simulation surrogate and ensuring diverse exploration, our algorithm is highly sample-efficient, yet gracefully scales to large problem instances and high sample budgets. We test CircuitVAE by designing binary adders across a large range of sizes, IO timing constraints, and sample budgets. Our method excels at designing large circuits, where other algorithms struggle: compared to reinforcement learning and genetic algorithms, CircuitVAE typically finds 64-bit adders which are smaller and faster using less than half the sample budget. We also find CircuitVAE can design state-of-the-art adders in a real-world chip, demonstrating that our method can outperform commercial tools in a realistic setting.
CircuitVAE: Efficient and Scalable Latent Circuit Optimization
[ "Jialin Song", "Aidan Swope", "Robert Kirby", "Rajarshi Roy", "Saad Godil", "Jonathan Raiman", "Bryan Catanzaro" ]
Workshop/ReALML
realml-2023
2406.09535
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=FDoS9cHQTp
@inproceedings{ novitasari2023unleashing, title={Unleashing the Autoconversion Rates Forecasting: Evidential Regression from Satellite Data}, author={Maria Carolina Novitasari and Johannes Quaas and Miguel R. D. Rodrigues}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=FDoS9cHQTp} }
High-resolution simulations such as the ICOsahedral Non-hydrostatic Large-Eddy Model (ICON-LEM) can be used to understand the interactions between aerosols, clouds, and precipitation processes that currently represent the largest source of uncertainty involved in determining the radiative forcing of climate change. Nevertheless, due to the exceptionally high computing cost required, this simulation-based approach can only be employed for a short period of time within a limited area. Despite the fact that machine learning can solve this problem, the related model uncertainties may make it less reliable. To address this, we developed a neural network (NN) model powered with evidential learning to assess the data and model uncertainties applied to satellite observation data. Our study focuses on estimating the rate at which small droplets (cloud droplets) collide and coalesce to become larger droplets (raindrops) -- autoconversion rates -- since this is one of the key processes in the precipitation formation of liquid clouds, hence crucial to better understanding cloud responses to anthropogenic aerosols. The results of estimating the autoconversion rates demonstrate that the model performs reasonably well, with the inclusion of both aleatoric and epistemic uncertainty estimation, which improves the credibility of the model and provides useful insights for future improvement.
Unleashing the Autoconversion Rates Forecasting: Evidential Regression from Satellite Data
[ "Maria Carolina Novitasari", "Johannes Quaas", "Miguel R. D. Rodrigues" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
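Evidential regression in the widely used sense of Amini et al. (2020), which we assume is the formulation behind the record above, has the network output Normal-Inverse-Gamma parameters from which aleatoric and epistemic uncertainties follow in closed form:

```python
import numpy as np

def evidential_uncertainties(gamma, nu, alpha, beta):
    """Normal-Inverse-Gamma evidential outputs -> predictive uncertainties.

    prediction:  E[mu]      = gamma
    aleatoric:   E[sigma^2] = beta / (alpha - 1)
    epistemic:   Var[mu]    = beta / (nu * (alpha - 1))
    (requires alpha > 1)
    """
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (nu * (alpha - 1.0))
    return gamma, aleatoric, epistemic
```

This separation is what lets the model report both data noise (aleatoric) and its own ignorance (epistemic) on unseen satellite scenes.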
null
https://openreview.net/forum?id=F6jSo0PIKy
@inproceedings{ agarwal2023towards, title={Towards Scalable Identification of Brick Kilns from Satellite Imagery with Active Learning}, author={Aditi Agarwal and Suraj Jaiswal and Madhav Kanda and Dhruv Patel and Rishabh Mondal and Vannsh Jani and Zeel B Patel and Nipun Batra and Sarath Guttikunda}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=F6jSo0PIKy} }
Air pollution is a leading cause of death globally, especially in south-east Asia. Brick production contributes significantly to air pollution. However, unlike other sources such as power plants, brick production is unregulated and thus hard to monitor. Traditional survey-based methods for kiln identification are time and resource-intensive. Similarly, it is time-consuming for air quality experts to annotate satellite imagery manually. Recently, computer vision machine learning models have helped reduce labeling costs, but they need sufficiently large labeled imagery. In this paper, we propose scalable methods using active learning to accurately detect brick kilns with minimal manual labeling effort. Through this work, we have identified more than 700 new brick kilns across the Indo-Gangetic region: a highly populous and polluted region spanning 0.4 million square kilometers in India. In addition, we have deployed our model as a web application for automatically identifying brick kilns within a user-specified area.
Towards Scalable Identification of Brick Kilns from Satellite Imagery with Active Learning
[ "Aditi Agarwal", "Suraj Jaiswal", "Madhav Kanda", "Dhruv Patel", "Rishabh Mondal", "Vannsh Jani", "Zeel B Patel", "Nipun Batra", "Sarath Guttikunda" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Ej4YjxAvcp
@inproceedings{ gajjar2023improved, title={Improved Bounds for Agnostic Active Learning of Single Index Models}, author={Aarshvi Gajjar and Xingyu Xu and Christopher Musco and Chinmay Hegde}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=Ej4YjxAvcp} }
We study active learning for single index models of the form $F({\mathbf{x}}) = f(\langle {\mathbf{w}}, {\mathbf{x}}\rangle)$, where $f:\mathbb{R} \to \mathbb{R}$ and ${\mathbf{x},\mathbf{w}} \in \mathbb{R}^d$. Such functions are important in scientific computing, where they are used to construct surrogate models for partial differential equations (PDEs) and to approximate high-dimensional Quantities of Interest. In these applications, collecting function samples requires solving a partial differential equation, so sample-efficient active learning methods translate to reduced computational cost. Our work provides two main results. First, when $f$ is known and Lipschitz, we show that $\tilde{O}(d)$ samples collected via \emph{statistical leverage score sampling} are sufficient to find an optimal single index model for a given target function, even in the challenging and practically important agnostic (adversarial noise) setting. This result is optimal up to logarithmic factors and improves quadratically on a recent $\tilde{O}(d^{2})$ bound of Gajjar et al. (2023). Second, we show that $\tilde{O}(d^{3/2})$ samples suffice in the more difficult non-parametric setting when $f$ is \emph{unknown}, which is also the best result known in this general setting.
Improved Bounds for Agnostic Active Learning of Single Index Models
[ "Aarshvi Gajjar", "Xingyu Xu", "Christopher Musco", "Chinmay Hegde" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
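Statistical leverage score sampling, the tool behind the $\tilde{O}(d)$ bound in the record above, reduces to sampling rows of a design matrix with probability proportional to their leverage. A standard numpy sketch (this is the generic primitive, not the paper's full algorithm):

```python
import numpy as np

def leverage_scores(X):
    """Leverage of each row: diagonal of the hat matrix X (X^T X)^+ X^T."""
    Q, _ = np.linalg.qr(X)           # thin QR factorization
    return np.sum(Q**2, axis=1)      # leverages = squared row norms of Q

def leverage_sample(X, m, seed=0):
    """Draw m row indices with probability proportional to leverage."""
    p = leverage_scores(X)
    p = p / p.sum()
    return np.random.default_rng(seed).choice(len(X), size=m, replace=True, p=p)
```

In the PDE surrogate setting, each sampled row index corresponds to one expensive simulator query, so reducing the sample count from $\tilde{O}(d^2)$ to $\tilde{O}(d)$ directly cuts compute.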
null
https://openreview.net/forum?id=E8zSTm2bGu
@inproceedings{ folch2023practical, title={Practical Path-based Bayesian Optimization}, author={Jose Pablo Folch and James A C Odgers and Shiqiang Zhang and Robert Matthew Lee and Behrang Shafei and David Walz and Calvin Tsay and Mark van der Wilk and Ruth Misener}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=E8zSTm2bGu} }
There has been a surge in interest in data-driven experimental design with applications to chemical engineering and drug manufacturing. Bayesian optimization (BO) has proven to be adaptable to such cases, since we can model the reactions of interest as expensive black-box functions. Sometimes, the cost of these black-box functions can be separated into two parts: (a) the cost of the experiment itself, and (b) the cost of changing the input parameters. In this short paper, we extend the SnAKe algorithm to deal with both types of costs simultaneously. We further propose extensions to the case of a maximum allowable input change, as well as to the multi-objective setting.
Practical Path-based Bayesian Optimization
[ "Jose Pablo Folch", "James A C Odgers", "Shiqiang Zhang", "Robert Matthew Lee", "Behrang Shafei", "David Walz", "Calvin Tsay", "Mark van der Wilk", "Ruth Misener" ]
Workshop/ReALML
realml-2023
2312.00622
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
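A toy version of the two-part cost structure described in the record above, folding the input-change cost into a cost-weighted expected improvement; the per-unit-cost weighting is our assumption for illustration, not necessarily SnAKe's rule.

```python
import numpy as np

def cost_aware_score(ei, x_cand, x_prev, c_exp=1.0, c_move=0.5):
    """Expected improvement per unit of total (experiment + movement) cost.

    ei:     expected improvement of each candidate, shape (N,)
    x_cand: candidate inputs, shape (N, d)
    x_prev: current input settings, shape (d,)
    """
    move = c_move * np.linalg.norm(x_cand - x_prev, axis=-1)
    return ei / (c_exp + move)
```

Dividing by total cost biases the search toward candidates that are both promising and close to the current reactor settings, which is the practical point of path-based BO.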
null
https://openreview.net/forum?id=D0fdIDnsWZ
@inproceedings{ hellan2023datadriven, title={Data-driven Prior Learning for Bayesian Optimisation}, author={Sigrid Passano Hellan and Christopher G. Lucas and Nigel H. Goddard}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=D0fdIDnsWZ} }
Transfer learning for Bayesian optimisation has generally assumed a strong similarity between optimisation tasks, with at least a subset having similar optimal inputs. This assumption can reduce computational costs, but it is violated in a wide range of optimisation problems where transfer learning may nonetheless be useful. We replace this assumption with a weaker one only requiring the shape of the optimisation landscape to be similar, and analyse the recent method Prior Learning for Bayesian Optimisation — PLeBO — in this setting. By learning priors for the hyperparameters of the Gaussian process surrogate model we can better approximate the underlying function, especially for few function evaluations. We validate the learned priors and compare to a breadth of transfer learning approaches, using synthetic data and a recent air pollution optimisation problem as benchmarks. We show that PLeBO and prior transfer find good inputs in fewer evaluations.
Data-driven Prior Learning for Bayesian Optimisation
[ "Sigrid Passano Hellan", "Christopher G. Lucas", "Nigel H. Goddard" ]
Workshop/ReALML
realml-2023
2311.14653
[ "https://github.com/sighellan/plebo" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=CzdSCFOG1n
@inproceedings{ mishler2023active, title={Active Learning with Missing Not At Random Outcomes}, author={Alan Mishler and Mohsen Ghassemi and Alec Koppel and Sumitra Ganesh}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=CzdSCFOG1n} }
When outcomes in training data are missing not at random (MNAR), predictors that are trained on that data can be arbitrarily biased. In some cases, however, batches of missing outcomes can be recovered at some cost, giving rise to a pool-based active learning setting. Previous active learning approaches implicitly treat all labeled data as having come from the same distribution, whereas in the MNAR setting, the training data and the initial unlabeled pool have different distributions. We propose MNAR-Aware Active Learning (MAAL), an active learning procedure that takes this into account and takes advantage of information that the missingness indicator carries about the outcome. We additionally consider acquisition functions that are attuned to the MNAR setting. Experiments on a large set of classification benchmark datasets demonstrate the benefits of our proposed approach over standard active and passive learning approaches.
Active Learning with Missing Not At Random Outcomes
[ "Alan Mishler", "Mohsen Ghassemi", "Alec Koppel", "Sumitra Ganesh" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=BctsZxNsfO
@inproceedings{ bruns-smith2023robust, title={Robust Fitted-Q-Evaluation and Iteration under Sequentially Exogenous Unobserved Confounders}, author={David Bruns-Smith and Angela Zhou}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=BctsZxNsfO} }
Offline reinforcement learning is important in domains such as medicine, economics, and e-commerce where online experimentation is costly, dangerous or unethical, and where the true model is unknown. We study robust policy evaluation and policy optimization in the presence of sequentially-exogenous unobserved confounders under a sensitivity model. We propose and analyze orthogonalized robust fitted-Q-iteration that uses closed-form solutions of the robust Bellman operator to derive a loss minimization problem for the robust Q function, and adds a bias-correction to quantile estimation. Our algorithm enjoys the computational ease of fitted-Q-iteration and statistical improvements (reduced dependence on quantile estimation error) from orthogonalization. We provide sample complexity bounds, insights, and show effectiveness both in simulations and on real-world longitudinal healthcare data of treating sepsis. In particular, our model of sequential unobserved confounders yields an online Markov decision process, rather than partially observed Markov decision process: we illustrate how this can enable warm-starting optimistic reinforcement learning algorithms with valid robust bounds from observational data.
Robust Fitted-Q-Evaluation and Iteration under Sequentially Exogenous Unobserved Confounders
[ "David Bruns-Smith", "Angela Zhou" ]
Workshop/ReALML
realml-2023
2302.00662
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=A1RVn1m3J3
@inproceedings{ rankovi{\'c}2023bochemian, title={BoChemian: Large Language Model Embeddings for Bayesian Optimization of Chemical Reactions}, author={Bojana Rankovi{\'c} and Philippe Schwaller}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=A1RVn1m3J3} }
This paper explores the integration of Large Language Model (LLM) embeddings with Bayesian Optimization (BO) in the domain of chemical reaction optimization, with a showcase study on Buchwald-Hartwig reactions. By leveraging LLMs, we can transform textual chemical procedures into an informative feature space suitable for Bayesian optimization. Our findings show that even out-of-the-box open-source LLMs can map chemical reactions for optimization tasks, highlighting their latent specialized knowledge. The results motivate the consideration of further model specialization through adaptive fine-tuning within the BO framework for on-the-fly optimization. This work serves as a foundational step toward a unified computational framework that synergizes textual chemical descriptions with machine-driven optimization, aiming for more efficient and accessible chemical research. The code is available at: https://github.com/schwallergroup/bochemian.
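A minimal sketch of the pipeline the abstract describes, under the assumption that a generic open-source sentence encoder (here `all-MiniLM-L6-v2`, an assumed choice, not necessarily the paper's) and an expected-improvement acquisition are used.

```python
# Sketch: embed reaction text, fit a GP on embeddings, pick the next reaction.
import numpy as np
from scipy.stats import norm
from sentence_transformers import SentenceTransformer
from sklearn.gaussian_process import GaussianProcessRegressor

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

procedures = ["Pd2(dba)3, XPhos, 60 C, 12 h", "Pd(OAc)2, BINAP, 80 C, 6 h"]
yields = np.array([0.42, 0.71])                    # toy observed yields
candidates = ["Pd2(dba)3, BINAP, 80 C, 12 h", "Pd(OAc)2, XPhos, 100 C, 3 h"]

Z = encoder.encode(procedures)
gp = GaussianProcessRegressor(normalize_y=True).fit(Z, yields)
mu, sd = gp.predict(encoder.encode(candidates), return_std=True)
z = (mu - yields.max()) / np.maximum(sd, 1e-9)
ei = (mu - yields.max()) * norm.cdf(z) + sd * norm.pdf(z)
print(candidates[int(np.argmax(ei))])  # next reaction to run
```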
BoChemian: Large Language Model Embeddings for Bayesian Optimization of Chemical Reactions
[ "Bojana Ranković", "Philippe Schwaller" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=7juV7SKVvM
@inproceedings{ das2023less, title={\textit{Less But Better}: Towards better \textit{AQ} Monitoring by learning Inducing Points for Multi-Task Gaussian Processes}, author={Progyan Das and Mihir Agarwal}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=7juV7SKVvM} }
Air pollution is a pressing global issue affecting both human health and environmental sustainability. The high financial burden of conventional Air Quality (AQ) monitoring stations and their sparse spatial distribution necessitate advanced inference techniques for effective regulation and public health policies. We introduce a comprehensive framework employing Variational Multi-Output Gaussian Processes (VMOGP) with a Spectral Mixture (SM) kernel designed to model and predict multiple AQ indicators, particularly $PM_{2.5}$ and Carbon Monoxide ($CO$). Our method unifies the strengths of Multi-Output Gaussian Processes (MOGPs) and Variational Multi-Task Gaussian Processes (VMTGP) to capture intricate spatio-temporal correlations among air pollutants, thus delivering enhanced robustness and accuracy over Single-Output Gaussian Processes (SOGPs) and state-of-the-art neural attention-based methods. Importantly, by analyzing the variational distribution of auxiliary inducing points, we identify high-information geographical locales for optimized AQ monitoring frameworks. Through extensive empirical evaluations, we demonstrate superior performance in both accuracy and uncertainty quantification. Our methodology promises significant implications for urban planning, adaptive station placement, and public health policy formulation.
Less But Better: Towards better AQ Monitoring by learning Inducing Points for Multi-Task Gaussian Processes
[ "Progyan Das", "Mihir Agarwal" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=7iybUXjQgp
@inproceedings{ akengin2023actsort, title={ActSort: An active-learning accelerated cell sorting algorithm for large-scale calcium imaging datasets}, author={Hakki Orhun Akengin and Mehmet Anil Aslihak and Yiqi Jiang and Yang Li and Oscar Hernandez and Hakan Inan and Christopher Miranda and Marta Blanco Pozo and Fatih Dinc and Mark Schnitzer}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=7iybUXjQgp} }
Due to rapid progress in optical imaging technologies, contemporary neural calcium imaging studies can monitor the dynamics of 10,000 or more neurons at once in the brains of awake behaving mammals. After automated extraction of the neurons' putative locations, a typical experiment involves extensive human labor to cull false-positive cells from the data, a process called \emph{cell sorting.} Efforts to automate cell sorting via the use of trained models either employ pre-trained, suboptimal classifiers or require reduced but still substantial human labor to train dataset-specific classifiers. In this workshop paper, we introduce an active-learning accelerated cell-sorting paradigm, termed ActSort, which establishes an online feedback loop between the human annotator and the cell classifier. To test this paradigm, we designed a benchmark by curating large-scale calcium imaging datasets from 5 mice, with approximately 40,000 cell candidates in total. Each movie was annotated by 4 (out of 6 total) human annotators, yielding about 160,000 total annotations. With this approach, we tested two active learning strategies, discriminative active learning (DAL) and confidence-based active learning (CAL). To create a baseline representing the traditional strategy, we performed random and first-to-last annotations, in which cells are annotated in either a random order or the order they are received from the cell-extraction algorithm. Our analysis revealed that, even when using the active learning-derived results of $<5\%$ of the human-annotated cells, CAL surpassed human performance levels in both precision and recall. In comparison, the first-to-last strategy required $80\%$ of the cells to be annotated to achieve the same mark. By decreasing the human labor needed from hours to minutes while also enabling more accurate predictions than a typical human annotator, ActSort overcomes a bottleneck in neuroscience research and enables rapid pre-processing of large-scale brain-imaging datasets.
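A generic sketch of one confidence-based active learning (CAL) round in an annotator-in-the-loop setup like the one described above: retrain a classifier on the labels collected so far and route the least-confident candidates to the human. The features and classifier here are placeholders, not ActSort's.

```python
# Sketch of one CAL round: label the cells the current model is least sure about.
import numpy as np
from sklearn.linear_model import LogisticRegression

def cal_round(X, labeled_idx, labels, batch_size=16):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled_idx], labels)
    pool = np.setdiff1d(np.arange(len(X)), labeled_idx)
    conf = clf.predict_proba(X[pool]).max(axis=1)
    # Least confident candidates are the most informative to annotate next.
    return pool[np.argsort(conf)[:batch_size]], clf

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 8))          # placeholder cell-candidate features
labeled = np.array([0, 1, 2, 3])
labels = np.array([0, 1, 0, 1])             # human annotations so far
query, _ = cal_round(X, labeled, labels)    # next batch to show the annotator
```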
ActSort: An active-learning accelerated cell sorting algorithm for large-scale calcium imaging datasets
[ "Hakki Orhun Akengin", "Mehmet Anil Aslihak", "Yiqi Jiang", "Yang Li", "Oscar Hernandez", "Hakan Inan", "Christopher Miranda", "Marta Blanco Pozo", "Fatih Dinc", "Mark Schnitzer" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=7R23KXdqGV
@inproceedings{ yu2023actively, title={Actively learning a Bayesian matrix fusion model with deep side information}, author={Yangyang Yu and Jordan W. Suchow}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=7R23KXdqGV} }
High-dimensional deep neural network representations of images and concepts can be aligned to predict human annotations of diverse stimuli. However, such alignment requires the costly collection of behavioral responses, such that, in practice, the deep-feature spaces are only ever sparsely sampled. Here, we propose an active learning approach to adaptively sample experimental stimuli to efficiently learn a Bayesian matrix factorization model with deep side information. We observe a significant efficiency gain over a passive baseline. Furthermore, with a sequential batched sampling strategy, the algorithm is applicable not only to small datasets collected from traditional laboratory experiments but also to settings where large-scale crowdsourced data collection is needed to accurately align the high-dimensional deep feature representations derived from pre-trained networks. This provides cost-effective solutions for collecting and generating quality-assured predictions in large-scale behavioral and cognitive studies.
Actively learning a Bayesian matrix fusion model with deep side information
[ "Yangyang Yu", "Jordan W. Suchow" ]
Workshop/ReALML
realml-2023
2306.05331
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=6hkY6dYtBA
@inproceedings{ stretcu2023agile, title={Agile Modeling: From Concept to Classifier in Minutes}, author={Otilia Stretcu and Edward Vendrow and Kenji Hata and Krishnamurthy Viswanathan and Vittorio Ferrari and Sasan Tavakkol and Wenlei Zhou and Aditya Avinash and Enming Luo and Neil Gordon Alldrin and Mohammadhossein Bateni and Gabriel Berger and Andrew Bunner and Chun-Ta Lu and Javier A Rey and Giulia DeSalvo and Ranjay Krishna and Ariel Fuxman}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=6hkY6dYtBA} }
The application of computer vision methods to nuanced, subjective concepts is growing. While crowdsourcing has served the vision community well for most objective tasks (such as labeling a "zebra"), it now falters on tasks where there is substantial subjectivity in the concept (such as identifying "gourmet tuna"). However, empowering any user to develop a classifier for their concept is technically difficult: users are neither machine learning experts nor have the patience to label thousands of examples. In response, we introduce the problem of Agile Modeling: the process of turning any subjective visual concept into a computer vision model through real-time user-in-the-loop interactions. We instantiate an Agile Modeling prototype for image classification and show through a user study (N=14) that users can create classifiers with minimal effort in under 30 minutes. We compare this user-driven process with the traditional crowdsourcing paradigm and find that the crowd's notion often differs from that of the user's, especially as the concepts become more subjective. Finally, we scale our experiments with simulations of users training classifiers for ImageNet21k categories to further demonstrate the efficacy of the approach.
Agile Modeling: From Concept to Classifier in Minutes
[ "Otilia Stretcu", "Edward Vendrow", "Kenji Hata", "Krishnamurthy Viswanathan", "Vittorio Ferrari", "Sasan Tavakkol", "Wenlei Zhou", "Aditya Avinash", "Enming Luo", "Neil Gordon Alldrin", "Mohammadhossein Bateni", "Gabriel Berger", "Andrew Bunner", "Chun-Ta Lu", "Javier A Rey", "Giulia DeSalvo", "Ranjay Krishna", "Ariel Fuxman" ]
Workshop/ReALML
realml-2023
2302.12948
[ "" ]
https://huggingface.co/papers/2302.12948
3
1
0
18
1
[]
[]
[]
null
https://openreview.net/forum?id=5br5UllmBy
@inproceedings{ chaudhari2023learning, title={Learning Models and Evaluating Policies with Offline Off-Policy Data under Partial Observability}, author={Shreyas Chaudhari and Philip S. Thomas and Bruno Castro da Silva}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=5br5UllmBy} }
Models in reinforcement learning are often estimated from offline data, which in many real-world scenarios is subject to partial observability. In this work, we study the challenges that emerge from using models estimated from partially-observable offline data for policy evaluation. Notably, a complete definition of the models includes dependence on the data-collecting policy. To address this issue, we introduce a method for model estimation that incorporates importance weighting in the model learning process. The off-policy samples are reweighted to be reflective of their probabilities under a different policy, such that the resultant model is a consistent estimator of the off-policy model and provides consistent estimates of the expected off-policy return. This is a crucial step towards the reliable and responsible use of models learned under partial observability, particularly in scenarios where inaccurate policy evaluation can have catastrophic consequences. We empirically demonstrate the efficacy of our method and its resilience to common approximations such as weight clipping on a range of domains with diverse types of partial observability.
Learning Models and Evaluating Policies with Offline Off-Policy Data under Partial Observability
[ "Shreyas Chaudhari", "Philip S. Thomas", "Bruno Castro da Silva" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=4X33gHxHf1
@inproceedings{ zhang2023labelbench, title={LabelBench: A Comprehensive Framework for Benchmarking Adaptive Label-Efficient Learning}, author={Jifan Zhang and Yifang Chen and Gregory Canal and Arnav Mohanty Das and Gantavya Bhatt and Yinglun Zhu and Stephen Mussmann and Simon Shaolei Du and Jeff Bilmes and Kevin Jamieson and Robert D Nowak}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=4X33gHxHf1} }
Labeled data are critical to modern machine learning applications, but obtaining labels can be expensive. To mitigate this cost, machine learning methods, such as transfer learning, semi-supervised learning and active learning, aim to be $\text{\textit{label-efficient}}$: achieving high predictive performance from relatively few labeled examples. While obtaining the best label-efficiency in practice often requires combinations of these techniques, existing benchmark and evaluation frameworks do not capture a concerted combination of all such techniques. This paper addresses this deficiency by introducing LabelBench, a new computationally-efficient framework for joint evaluation of multiple label-efficient learning techniques. As an application of LabelBench, we introduce a novel benchmark of state-of-the-art active learning methods in combination with semi-supervised learning for fine-tuning pretrained vision transformers. Our benchmark demonstrates significantly better label-efficiencies than previously reported in active learning. LabelBench's modular codebase is open-sourced for the broader community to contribute label-efficient learning methods and benchmarks. The repository can be found at: https://github.com/EfficientTraining/LabelBench.
LabelBench: A Comprehensive Framework for Benchmarking Adaptive Label-Efficient Learning
[ "Jifan Zhang", "Yifang Chen", "Gregory Canal", "Arnav Mohanty Das", "Gantavya Bhatt", "Yinglun Zhu", "Stephen Mussmann", "Simon Shaolei Du", "Jeff Bilmes", "Kevin Jamieson", "Robert D Nowak" ]
Workshop/ReALML
realml-2023
2306.09910
[ "https://github.com/efficienttraining/labelbench" ]
https://huggingface.co/papers/2306.09910
0
0
0
8
1
[]
[]
[]
null
https://openreview.net/forum?id=3UuSQNVHS6
@inproceedings{ bankes2023reducr, title={{REDUCR}: Robust Data Downsampling Using Class Priority Reweighting}, author={William Bankes and George Hughes and Ilija Bogunovic and Zi Wang}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=3UuSQNVHS6} }
Modern machine learning models are becoming increasingly expensive to train for real-world image and text classification tasks, where massive web-scale data is collected in a streaming fashion. To reduce the training cost, online batch selection techniques have been developed to choose the most informative datapoints. However, these techniques can suffer from poor worst-class generalization performance due to class imbalance and distributional shifts. This work introduces REDUCR, a robust and efficient data downsampling method that uses class priority reweighting. REDUCR reduces the training data while preserving worst-class generalization performance. REDUCR assigns priority weights to datapoints in a class-aware manner using an online learning algorithm. We demonstrate the data efficiency and robust performance of REDUCR on vision and text classification tasks. On web-scraped datasets with imbalanced class distributions, REDUCR achieves significant test accuracy boosts for the worst-performing class (but also on average), surpassing state-of-the-art methods by around 14%.
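One plausible reading of class priority reweighting, sketched below: keep a weight per class, update it multiplicatively toward classes with high loss (a Hedge-style rule), and retain the datapoints with the largest class-weighted loss. The step size `eta` and the exact update are assumptions; REDUCR's objective is specified in the paper.

```python
# Sketch: class-priority-reweighted online batch selection.
import numpy as np

def select_batch(losses, ys, class_w, keep, eta=0.1):
    # Upweight classes that are currently doing badly (worst-class focus).
    per_class = np.array([losses[ys == c].mean() if (ys == c).any() else 0.0
                          for c in range(len(class_w))])
    class_w = class_w * np.exp(eta * per_class)
    class_w /= class_w.sum()
    scores = losses * class_w[ys]            # weight each point by its class
    return np.argsort(-scores)[:keep], class_w

rng = np.random.default_rng(0)
ys = rng.integers(0, 3, size=128)            # class labels of the stream batch
losses = rng.random(128) + (ys == 2) * 0.5   # class 2 is the hard class
idx, w = select_batch(losses, ys, np.ones(3) / 3, keep=32)
```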
REDUCR: Robust Data Downsampling Using Class Priority Reweighting
[ "William Bankes", "George Hughes", "Ilija Bogunovic", "Zi Wang" ]
Workshop/ReALML
realml-2023
2312.00486
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=2aOKjoPwT4
@inproceedings{ kokubun2023local, title={Local Acquisition Function for Active Level Set Estimation}, author={Yuta Kokubun and Kota Matsui and Kentaro Kutsukake and Wataru Kumagai and Takafumi Kanamori}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=2aOKjoPwT4} }
In this paper, we propose a new acquisition function based on local search for active super-level set estimation. Conventional acquisition functions for level set estimation problems are considered to struggle with problems where the threshold is high, and many points in the upper-level set have function values close to the threshold. The proposed method addresses this issue by effectively switching between two acquisition functions: one rapidly finds the local level set and the other performs global exploration. The effectiveness of the proposed method is evaluated through experiments with synthetic and real-world datasets.
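For context, a sketch of the classic "straddle" acquisition for GP-based level set estimation, which is the kind of global-exploration score such methods switch with a local one; the switching rule itself is the paper's contribution and is not reproduced here.

```python
# Sketch: straddle acquisition for GP level set estimation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def straddle(gp, X_cand, threshold, beta=1.96):
    mu, sd = gp.predict(X_cand, return_std=True)
    # High where the GP is both uncertain and close to the threshold.
    return beta * sd - np.abs(mu - threshold)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(15, 1)); y = np.sin(3 * X[:, 0])
gp = GaussianProcessRegressor().fit(X, y)
cand = np.linspace(-2, 2, 400)[:, None]
x_next = cand[np.argmax(straddle(gp, cand, threshold=0.5))]
```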
Local Acquisition Function for Active Level Set Estimation
[ "Yuta Kokubun", "Kota Matsui", "Kentaro Kutsukake", "Wataru Kumagai", "Takafumi Kanamori" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=0CPnNCOFiI
@inproceedings{ tifrea2023improving, title={Improving class and group imbalanced classification with uncertainty-based active learning}, author={Alexandru Tifrea and John Hill and Fanny Yang}, booktitle={NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World}, year={2023}, url={https://openreview.net/forum?id=0CPnNCOFiI} }
Recent experimental and theoretical analyses have revealed that uncertainty-based active learning algorithms (U-AL) are often not able to improve the average accuracy compared to even the simple baseline of passive learning (PL). However, we show in this work that U-AL is a competitive method in problems with severe data imbalance, when instead of the \emph{average} accuracy, the focus is the \emph{worst-subpopulation} accuracy. We show in extensive experiments that U-AL outperforms algorithms that explicitly aim to improve worst-subpopulation performance such as reweighting. We provide insights that explain the good performance of U-AL and show a theoretical result that is supported by our experimental observations.
Improving class and group imbalanced classification with uncertainty-based active learning
[ "Alexandru Tifrea", "John Hill", "Fanny Yang" ]
Workshop/ReALML
realml-2023
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zBQYr8O2gT
@inproceedings{ kapusniak2023learning, title={Learning Genomic Sequence Representations using Graph Neural Networks over De Bruijn Graphs}, author={Kacper Kapusniak and Manuel Burger and Gunnar Ratsch and Amir Joudaki}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=zBQYr8O2gT} }
The rapid expansion of genomic sequence data calls for new methods to achieve robust sequence representations. Existing techniques often neglect intricate structural details, emphasizing mainly contextual information. To address this, we developed k-mer embeddings that merge contextual and structural string information by enhancing De Bruijn graphs with structural similarity connections. Subsequently, we crafted a self-supervised method based on Contrastive Learning that employs a heterogeneous Graph Convolutional Network encoder and constructs positive pairs based on node similarities. Our embeddings consistently outperform prior techniques for Edit Distance Approximation and Closest String Retrieval tasks.
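A minimal sketch of the base structure the paper enriches: a De Bruijn graph whose nodes are k-mers and whose edges connect consecutive, (k-1)-overlapping k-mers. The structural-similarity edges and the contrastive GNN training are the paper's additions and are not shown.

```python
# Sketch: build a De Bruijn graph over the k-mers of a sequence.
import networkx as nx

def debruijn_graph(sequence, k=4):
    g = nx.DiGraph()
    kmers = [sequence[i:i + k] for i in range(len(sequence) - k + 1)]
    for a, b in zip(kmers, kmers[1:]):
        g.add_edge(a, b)  # suffix of a == prefix of b by construction
    return g

g = debruijn_graph("ACGTACGTGACG", k=4)
print(g.number_of_nodes(), g.number_of_edges())
```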
Learning Genomic Sequence Representations using Graph Neural Networks over De Bruijn Graphs
[ "Kacper Kapusniak", "Manuel Burger", "Gunnar Ratsch", "Amir Joudaki" ]
Workshop/GLFrontiers
poster
2312.03865
[ "https://github.com/ratschlab/genomic-gnn" ]
https://huggingface.co/papers/2312.03865
0
0
0
4
1
[]
[]
[]
null
https://openreview.net/forum?id=wz6l6yLTv1
@inproceedings{ guerranti2023on, title={On the Adversarial Robustness of Graph Contrastive Learning Methods}, author={Filippo Guerranti and Zinuo Yi and Anna Starovoit and Rafiq Mazen Kamel and Simon Geisler and Stephan G{\"u}nnemann}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=wz6l6yLTv1} }
Contrastive learning (CL) has emerged as a powerful framework for learning representations of images and text in a self-supervised manner while enhancing model robustness against adversarial attacks. More recently, researchers have extended the principles of contrastive learning to graph-structured data, giving birth to the field of graph contrastive learning (GCL). However, whether GCL methods can deliver the same advantages in adversarial robustness as their counterparts in the image and text domains remains an open question. In this paper, we introduce a comprehensive robustness evaluation protocol tailored to assess the robustness of GCL models. We subject these models to adaptive adversarial attacks targeting the graph structure, specifically in the evasion scenario. We evaluate node and graph classification tasks using diverse real-world datasets and attack strategies. With our work, we aim to offer insights into the robustness of GCL methods and hope to open avenues for potential future research directions.
On the Adversarial Robustness of Graph Contrastive Learning Methods
[ "Filippo Guerranti", "Zinuo Yi", "Anna Starovoit", "Rafiq Mazen Kamel", "Simon Geisler", "Stephan Günnemann" ]
Workshop/GLFrontiers
poster
2311.17853
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wjVKHEPoU2
@inproceedings{ jang2023a, title={A Simple and Scalable Representation for Graph Generation}, author={Yunhui Jang and Seul Lee and Sungsoo Ahn}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=wjVKHEPoU2} }
Recently, there has been a surge of interest in employing neural networks for graph generation, a fundamental statistical learning problem with critical applications like molecule design and community analysis. However, most approaches encounter significant limitations when generating large-scale graphs. This is due to their requirement to output the full adjacency matrices whose size grows quadratically with the number of nodes. In response to this challenge, we introduce a new, simple, and scalable graph representation named gap encoded edge list (GEEL) that has a small representation size that aligns with the number of edges. In addition, GEEL significantly reduces the vocabulary size by incorporating the gap encoding and bandwidth restriction schemes. GEEL can be autoregressively generated with the incorporation of node positional encoding, and we further extend GEEL to deal with attributed graphs by designing a new grammar. Our findings reveal that the adoption of this compact representation not only enhances scalability but also bolsters performance by simplifying the graph generation process. We conduct a comprehensive evaluation across ten non-attributed and two molecular graph generation tasks, demonstrating the effectiveness of GEEL.
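A toy illustration of the gap-encoding idea: store small integer gaps instead of absolute node ids for each edge, shrinking the vocabulary a sequence model must emit. This is one plausible reading only; GEEL's exact tokenization, bandwidth restriction, and node positional encodings follow the paper.

```python
# Sketch: encode a sorted edge list as (source gap, intra-edge gap) pairs.
def gap_encode(edges):
    edges = sorted(edges)
    out, prev_src = [], 0
    for u, v in edges:
        out.append((u - prev_src, v - u))  # small ints instead of node ids
        prev_src = u
    return out

def gap_decode(tokens):
    edges, src = [], 0
    for ds, dv in tokens:
        src += ds
        edges.append((src, src + dv))
    return edges

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
assert gap_decode(gap_encode(edges)) == sorted(edges)  # round-trips exactly
```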
A Simple and Scalable Representation for Graph Generation
[ "Yunhui Jang", "Seul Lee", "Sungsoo Ahn" ]
Workshop/GLFrontiers
poster
2312.02230
[ "https://github.com/yunhuijang/geel" ]
https://huggingface.co/papers/2312.02230
0
0
0
3
1
[]
[]
[]
null
https://openreview.net/forum?id=wFJIkt2WAa
@inproceedings{ gao2023double, title={Double Equivariance for Inductive Link Prediction for Both New Nodes and New Relation Types}, author={Jianfei Gao and Yangze Zhou and Jincheng Zhou and Bruno Ribeiro}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=wFJIkt2WAa} }
The task of inductive link prediction in discrete attributed multigraphs (e.g., knowledge graphs, multilayer networks, heterogeneous networks, etc.) generally focuses on test predictions with solely new nodes but not both new nodes and new relation types. In this work, we formally define the task of predicting (completely) new nodes and new relation types in test as a doubly inductive link prediction task and introduce a theoretical framework for the solution. We start by defining the concept of double permutation-equivariant representations that are equivariant to permutations of both node identities and edge relation types. We then propose a general blueprint to design neural architectures that impose a structural representation of relations that can inductively generalize from training nodes and relations to arbitrarily new test nodes and relations without the need for adaptation, side information, or retraining. We also introduce the concept of distributionally double equivariant positional embeddings designed to perform the same task. Finally, we empirically demonstrate the capability of the two proposed models on a set of novel real-world benchmarks, showcasing relative performance gains of up to 41.40% on predicting new relations types compared to baselines.
Double Equivariance for Inductive Link Prediction for Both New Nodes and New Relation Types
[ "Jianfei Gao", "Yangze Zhou", "Jincheng Zhou", "Bruno Ribeiro" ]
Workshop/GLFrontiers
oral
2302.01313
[ "https://github.com/purdueminds/isdea" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=tHQdZ74NLe
@inproceedings{ bhaila2023local, title={Local Differential Privacy in Graph Neural Networks: a Reconstruction Approach}, author={Karuna Bhaila and Wen Huang and Yongkai Wu and Xintao Wu}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=tHQdZ74NLe} }
Graph Neural Networks have achieved tremendous success in modeling complex graph data in a variety of applications. However, there are limited studies investigating privacy protection in GNNs. In this work, we propose a learning framework that can provide node-level privacy, while incurring low utility loss. We focus on a decentralized notion of Differential Privacy, namely Local Differential Privacy, and apply randomization mechanisms to perturb both feature and label data before being collected by a central server for model training. Specifically, we investigate the application of randomization mechanisms in high-dimensional feature settings and propose an LDP protocol with strict privacy guarantees. Based on frequency estimation in statistical analysis of randomized data, we develop reconstruction methods to approximate features and labels from perturbed data. We also formulate this learning framework to utilize frequency estimates of graph clusters to supervise the training procedure at a sub-graph level. Extensive experiments on real-world and semi-synthetic datasets demonstrate the validity of our proposed model.
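A sketch of the label side of such a pipeline, assuming standard k-ary randomized response as the LDP mechanism and the usual frequency-debiasing step on the server; the paper's feature mechanism and cluster-level training signal are not reproduced.

```python
# Sketch: k-ary randomized response and server-side frequency debiasing.
import numpy as np

def k_rr(labels, k, eps, rng):
    # Report the true label w.p. e^eps / (e^eps + k - 1), else a uniform other one.
    p_true = np.exp(eps) / (np.exp(eps) + k - 1)
    flip = rng.random(len(labels)) >= p_true
    offsets = rng.integers(1, k, size=len(labels))
    return np.where(flip, (labels + offsets) % k, labels)

def debias(perturbed, k, eps):
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    q = (1 - p) / (k - 1)
    obs = np.bincount(perturbed, minlength=k) / len(perturbed)
    return (obs - q) / (p - q)  # unbiased estimate of true label frequencies

rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=20000)
noisy = k_rr(labels, k=4, eps=1.0, rng=rng)
print(debias(noisy, k=4, eps=1.0))  # close to [0.25, 0.25, 0.25, 0.25]
```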
Local Differential Privacy in Graph Neural Networks: a Reconstruction Approach
[ "Karuna Bhaila", "Wen Huang", "Yongkai Wu", "Xintao Wu" ]
Workshop/GLFrontiers
poster
2309.08569
[ "https://github.com/karuna-bhaila/rgnn" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=shwYa41B3G
@inproceedings{ liao2023gentkg, title={Gen{TKG}: Generative Forecasting on Temporal Knowledge Graph}, author={Ruotong Liao and Xu Jia and Yunpu Ma and Volker Tresp}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=shwYa41B3G} }
Rapid advancements in large language models (LLMs) have ignited interest in the temporal knowledge graph (TKG) domain, where carefully designed embedding-based and rule-based models have traditionally dominated. It remains an open question whether pre-trained LLMs can understand structured temporal relational data and replace these models as the foundation for temporal relational forecasting. Moreover, challenges arise from the gap between complex graph-structured data and the linear natural-language expressions LLMs can handle, and between the enormous data volume of TKGs and the heavy computational cost of fine-tuning LLMs. To address these challenges, we bring temporal knowledge forecasting into the generative setting and propose GenTKG, a novel retrieval-augmented generation framework that combines a temporal logical rule-based retrieval strategy with lightweight few-shot parameter-efficient instruction tuning. Extensive experiments show that GenTKG is a simple yet effective, efficient, and generalizable approach that outperforms conventional methods on temporal relational forecasting with extremely limited computation. Our work opens a new frontier for the temporal knowledge graph domain.
GenTKG: Generative Forecasting on Temporal Knowledge Graph
[ "Ruotong Liao", "Xu Jia", "Yunpu Ma", "Volker Tresp" ]
Workshop/GLFrontiers
poster
[ "https://github.com/mayhugotong/gentkg" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=shKEocsDbB
@inproceedings{ tsitsulin2023the, title={The Graph Lottery Ticket Hypothesis: Finding Sparse, Informative Graph Structure}, author={Anton Tsitsulin and Bryan Perozzi}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=shKEocsDbB} }
Graph learning methods help utilize implicit relationships among data items, thereby reducing training label requirements and improving task performance. However, determining the optimal graph structure for a particular learning task remains a challenging research problem. In this work, we introduce the Graph Lottery Ticket (GLT) Hypothesis – that there is an extremely sparse backbone for every graph, and that graph learning algorithms attain comparable performance when trained on that subgraph as on the full graph. We identify and systematically study 8 key metrics of interest that directly influence the performance of graph learning algorithms. Subsequently, we define the notion of a "winning ticket" for graph structure – an extremely sparse subset of edges that can deliver a robust approximation of the entire graph's performance. We propose a straightforward and efficient algorithm for finding these GLTs in arbitrary graphs. Empirically, we observe that the performance of different graph learning algorithms can be matched or even exceeded on graphs with an average degree as low as 5.
The Graph Lottery Ticket Hypothesis: Finding Sparse, Informative Graph Structure
[ "Anton Tsitsulin", "Bryan Perozzi" ]
Workshop/GLFrontiers
poster
2312.04762
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sCk5NkZ8Ow
@inproceedings{ choi2023node, title={Node Mutual Information: Enhancing Graph Neural Networks for Heterophily}, author={Seongjin Choi and Gahee Kim and Se-Young Yun}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=sCk5NkZ8Ow} }
Graph neural networks (GNNs) have achieved great success in graph analysis by leveraging homophily, where connected nodes share similar properties. However, GNNs struggle on heterophilic graphs where connected nodes tend to differ. Some of the existing methods use neighborhood expansion which is intractable for large graphs. This paper proposes utilizing node mutual information (MI) to capture dependencies between nodes in heterophilic graphs for use in GNNs. We first define a probability space associated with the graph and introduce $k^{th}$ node random variables to partition the graph based on node distances. The MI between two nodes' random variables then quantifies their dependency regardless of distance by considering both direct and indirect connections. We propose $k^{th}$ MIGNN where the $k^{th}$ MI values are used as weights in the message aggregation function. Experiments on real-world datasets with varying heterophily ratios show the proposed method achieves competitive performance compared to baseline GNNs. The results demonstrate that leveraging node mutual information effectively captures complex node dependencies in heterophilic graphs.
Node Mutual Information: Enhancing Graph Neural Networks for Heterophily
[ "Seongjin Choi", "Gahee Kim", "Se-Young Yun" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sC6NcC4ns9
@inproceedings{ shirzad2023lowwidth, title={Low-Width Approximations and Sparsification for Scaling Graph Transformers}, author={Hamed Shirzad and Balaji Venkatachalam and Ameya Velingker and Danica Sutherland and David Woodruff}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=sC6NcC4ns9} }
Graph Transformers have shown excellent results on a diverse set of datasets. However, memory limitations prohibit these models from scaling to larger graphs. With standard single-GPU setups, even training on medium-sized graphs is impossible for most Graph Transformers. While the $\mathcal{O}(nd^2+n^2d)$ complexity of each layer can be reduced to $\mathcal{O}((n+m)d+nd^2)$ using sparse attention models such as Exphormer for graphs with $n$ nodes and $m$ edges, these models are still infeasible to train on small-memory devices even for medium-sized datasets. Here, we propose to sparsify the Exphormer model even further, by using a small ``pilot'' network to estimate attention scores along the graph edges, then training a larger model only using $\mathcal O(n)$ edges deemed important by the small network. We show empirically that attention scores from smaller networks provide a good estimate of the attention scores in larger networks, and that this process can yield a large-width sparse model nearly as good as the large-width non-sparse model.
Low-Width Approximations and Sparsification for Scaling Graph Transformers
[ "Hamed Shirzad", "Balaji Venkatachalam", "Ameya Velingker", "Danica Sutherland", "David Woodruff" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qYpqrt6X3I
@inproceedings{ teneva2023knowledge, title={Knowledge Graphs are not Created Equal: Exploring the Properties and Structure of Real {KG}s}, author={Nedelina Teneva and Estevam Hruschka}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=qYpqrt6X3I} }
Despite the recent popularity of knowledge graph (KG) related tasks and benchmarks such as KG embeddings, link prediction, entity alignment and evaluation of the reasoning abilities of pretrained language models as KGs, the structure and properties of real KGs are not well studied. In this paper, we perform a large scale comparative study of 29 real KG datasets from diverse domains such as the natural sciences, medicine, and NLP to analyze their properties and structural patterns. Based on our findings, we make several recommendations regarding KG-based model development and evaluation. We believe that the rich structural information contained in KGs can benefit the development of better KG models across fields and we hope this study will contribute to breaking the existing data silos between different areas of research (e.g., ML, NLP, AI for sciences).
Knowledge Graphs are not Created Equal: Exploring the Properties and Structure of Real KGs
[ "Nedelina Teneva", "Estevam Hruschka" ]
Workshop/GLFrontiers
poster
2311.06414
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qCgptbOsdS
@inproceedings{ pan2023fedgkd, title={Fed{GKD}: Unleashing the Power of Collaboration in Federated Graph Neural Networks}, author={Qiying Pan and Ruofan Wu and Tengfei LIU and Tianyi Zhang and Yifei Zhu and Weiqiang Wang}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=qCgptbOsdS} }
Federated training of Graph Neural Networks (GNN) has become popular in recent years due to its ability to perform graph-related tasks under data isolation scenarios while preserving data privacy. However, graph heterogeneity issues in federated GNN systems continue to pose challenges. Existing frameworks address the problem by representing local tasks using different statistics and relating them through a simple aggregation mechanism. However, these approaches suffer from limited efficiency in two respects: low-quality task-relatedness quantification and ineffective exploitation of the collaboration structure. To address these issues, we propose FedGKD, a novel federated GNN framework that utilizes a novel client-side graph dataset distillation method to extract task features that better describe task-relatedness, and introduces a novel server-side aggregation mechanism that is aware of the global collaboration structure. We conduct extensive experiments on six real-world datasets of different scales, demonstrating that our framework outperforms existing approaches.
FedGKD: Unleashing the Power of Collaboration in Federated Graph Neural Networks
[ "Qiying Pan", "Ruofan Wu", "Tengfei LIU", "Tianyi Zhang", "Yifei Zhu", "Weiqiang Wang" ]
Workshop/GLFrontiers
poster
2309.09517
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=q2xXh4M9Dx
@inproceedings{ graziani2023no, title={No {PAIN} no Gain: More Expressive {GNN}s with Paths}, author={Caterina Graziani and Tamara Drucks and Monica Bianchini and franco scarselli and Thomas G{\"a}rtner}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=q2xXh4M9Dx} }
Motivated by the lack of theoretical investigation into the discriminative power of _paths_, we characterize classes of graphs where paths are sufficient to identify every instance. Our analysis motivates the integration of paths into the learning procedure of graph neural networks in order to enhance their expressiveness. We formally justify the use of paths based on finite-variable counting logic and prove the effectiveness of paths to recognize graph structural features related to cycles and connectivity. We show that paths are able to identify graphs for which higher-order models fail. Building on this, we propose PAth Isomorphism Network (PAIN), a novel graph neural network that replaces the topological neighborhood with paths in the aggregation step of the message-passing procedure. This modification leads to an algorithm that is strictly more expressive than the Weisfeiler-Leman graph isomorphism test, at the cost of a polynomial-time step for every iteration and fixed path length. We support our theoretical findings by empirically evaluating PAIN on synthetic datasets.
No PAIN no Gain: More Expressive GNNs with Paths
[ "Caterina Graziani", "Tamara Drucks", "Monica Bianchini", "franco scarselli", "Thomas Gärtner" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=nMdpkeYKzJ
@inproceedings{ erdogan2023poisoning, title={Poisoning $\times$ Evasion: Symbiotic Adversarial Robustness for Graph Neural Networks}, author={Ege Erdogan and Simon Geisler and Stephan G{\"u}nnemann}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=nMdpkeYKzJ} }
It is well-known that deep learning models are vulnerable w.r.t. small input perturbations. Such perturbed instances are called adversarial examples. Adversarial examples are commonly crafted to fool a model either at training time (poisoning) or test time (evasion). In this work, we study the symbiosis of poisoning and evasion. We show that combining both threat models can substantially improve the devastating efficacy of adversarial attacks. Specifically, we study the robustness of Graph Neural Networks (GNNs) under structure perturbations and devise a memory-efficient adaptive end-to-end attack for the novel threat model using first-order optimization.
Poisoning × Evasion: Symbiotic Adversarial Robustness for Graph Neural Networks
[ "Ege Erdogan", "Simon Geisler", "Stephan Günnemann" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=mv5sZrQwLM
@inproceedings{ cohen-karlik2023order, title={Order Agnostic Autoregressive Graph Generation}, author={Edo Cohen-Karlik and Eyal Rozenberg and Daniel Freedman}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=mv5sZrQwLM} }
Graph generation is a fundamental problem in various domains, including chemistry and social networks. Recent work has shown that molecular graph generation using recurrent neural networks (RNNs) is advantageous compared to traditional generative approaches which require converting continuous latent representations into graphs. One issue which arises when treating graph generation as sequential generation is the arbitrary order of the sequence which results from a particular choice of graph flattening method. In this work we propose using RNNs, taking into account the non-sequential nature of graphs by adding an Orderless Regularization (OLR) term that encourages the hidden state of the recurrent model to be invariant to different valid orderings present under the training distribution. We demonstrate that sequential graph generation models benefit from our proposed regularization scheme, especially when data is scarce. Our findings contribute to the growing body of research on graph generation and provide a valuable tool for various applications requiring the synthesis of realistic and diverse graph structures.
Order Agnostic Autoregressive Graph Generation
[ "Edo Cohen-Karlik", "Eyal Rozenberg", "Daniel Freedman" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=mstzBSOx2e
@inproceedings{ ravichandran2023graphrnn, title={Graph{RNN} Revisited: An Ablation Study and Extensions for Directed Acyclic Graphs}, author={Maya Ravichandran and Mark Koch and Taniya Das and Nikhil Khatri}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=mstzBSOx2e} }
GraphRNN is a deep learning-based architecture proposed by You et al. for learning generative models for graphs. We replicate the results of You et al. using a reproduced implementation of the GraphRNN architecture and evaluate this against baseline models using new metrics. Through an ablation study, we find that the BFS traversal suggested by You et al. to collapse representations of isomorphic graphs contributes significantly to model performance. Additionally, we extend GraphRNN to generate directed acyclic graphs by replacing the BFS traversal with a topological sort. We demonstrate that this method improves significantly over a directed-multiclass variant of GraphRNN on a real-world dataset.
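For concreteness, the two node orderings discussed above, shown with networkx: GraphRNN's BFS ordering for undirected graphs, and the topological sort that the DAG extension substitutes for it.

```python
# Sketch: BFS ordering (GraphRNN) vs. topological sort (DAG extension).
import networkx as nx

def bfs_order(g, start=0):
    # Node ordering used to build GraphRNN's adjacency sequences.
    return [start] + [v for _, v in nx.bfs_edges(g, start)]

g = nx.cycle_graph(5)
print(bfs_order(g))                      # e.g. [0, 1, 4, 2, 3]

dag = nx.DiGraph([(0, 1), (0, 2), (1, 3), (2, 3)])
print(list(nx.topological_sort(dag)))    # replacement ordering for DAGs
```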
GraphRNN Revisited: An Ablation Study and Extensions for Directed Acyclic Graphs
[ "Maya Ravichandran", "Mark Koch", "Taniya Das", "Nikhil Khatri" ]
Workshop/GLFrontiers
poster
2307.14109
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=m4mF2T0bVm
@inproceedings{ wang2023knowledge, title={Knowledge Graph Prompting for Multi-Document Question Answering}, author={Yu Wang and Nedim Lipka and Ryan Rossi and Alexa Siu and Ruiyi Zhang and Tyler Derr}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=m4mF2T0bVm} }
The 'pre-train, prompt, predict' paradigm of large language models (LLMs) has achieved remarkable success in open-domain question answering (OD-QA). However, few works explore this paradigm in the scenario of multi-document question answering (MD-QA), a task demanding a thorough understanding of the logical associations among the contents and structures of different documents. To fill this crucial gap, we propose a Knowledge Graph Prompting (KGP) method to formulate the right context in prompting LLMs for MD-QA, which consists of a graph construction module and a graph traversal module. For graph construction, we create a knowledge graph (KG) over multiple documents with nodes symbolizing passages or document structures (e.g., pages/tables), and edges denoting the semantic/lexical similarity between passages or intra-document structural relations. For graph traversal, we design an LM-guided graph traverser that navigates across nodes and gathers supporting passages assisting LLMs in MD-QA. The constructed graph serves as the global ruler that regulates the transitional space among passages and reduces retrieval latency. Concurrently, the LM-guided traverser acts as a local navigator that gathers pertinent context to progressively approach the question and guarantee retrieval quality. Extensive experiments underscore the efficacy of KGP for MD-QA, signifying the potential of leveraging graphs in enhancing the prompt design for LLMs. Our code will be released upon publication.
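A sketch of the lexical-similarity half of the graph-construction module, assuming TF-IDF cosine similarity with an arbitrary 0.2 threshold; the semantic edges, document-structure nodes, and LM-guided traverser are not reproduced.

```python
# Sketch: link passages whose TF-IDF cosine similarity crosses a threshold.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def passage_graph(passages, threshold=0.2):
    sims = cosine_similarity(TfidfVectorizer().fit_transform(passages))
    g = nx.Graph()
    g.add_nodes_from(range(len(passages)))
    for i in range(len(passages)):
        for j in range(i + 1, len(passages)):
            if sims[i, j] > threshold:
                g.add_edge(i, j, weight=float(sims[i, j]))
    return g

g = passage_graph(["Graph neural networks pass messages between nodes.",
                   "Message passing updates nodes in graph neural networks.",
                   "The quarterly report covers fiscal results."])
print(list(g.edges))  # the two lexically similar passages get linked
```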
Knowledge Graph Prompting for Multi-Document Question Answering
[ "Yu Wang", "Nedim Lipka", "Ryan Rossi", "Alexa Siu", "Ruiyi Zhang", "Tyler Derr" ]
Workshop/GLFrontiers
oral
2308.11730
[ "https://github.com/yuwvandy/kg-llm-mdqa" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=m1WEVCY1O2
@inproceedings{ zuo2023dipgnn, title={DiP-{GNN}: Discriminative Pre-Training of Graph Neural Networks}, author={Simiao Zuo and Haoming Jiang and Qingyu Yin and Xianfeng Tang and Bing Yin and Tuo Zhao}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=m1WEVCY1O2} }
Graph neural network (GNN) pre-training methods have been proposed to enhance the power of GNNs. Specifically, a GNN is first pre-trained on a large-scale unlabeled graph and then fine-tuned on a separate small labeled graph for downstream applications, such as node classification. One popular pre-training method is to mask out a proportion of the edges, and a GNN is trained to recover them. However, such a generative method suffers from graph mismatch. That is, the masked graph input to the GNN deviates from the original graph. To alleviate this issue, we propose DiP-GNN (Discriminative Pre-training of Graph Neural Networks). Specifically, we train a generator to recover identities of the masked edges, and simultaneously, we train a discriminator to distinguish the generated edges from the original graph's edges. The discriminator is subsequently used for downstream fine-tuning. In our pre-training framework, the graph seen by the discriminator better matches the original graph because the generator can recover a proportion of the masked edges. Extensive experiments on large-scale homogeneous and heterogeneous graphs demonstrate the effectiveness of DiP-GNN. Our code will be publicly available.
DiP-GNN: Discriminative Pre-Training of Graph Neural Networks
[ "Simiao Zuo", "Haoming Jiang", "Qingyu Yin", "Xianfeng Tang", "Bing Yin", "Tuo Zhao" ]
Workshop/GLFrontiers
poster
2209.07499
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=lSa6SEEqTL
@inproceedings{ kang2023coupling, title={Coupling Graph Neural Networks with Non-Integer Order Dynamics: A Robustness Study}, author={Qiyu Kang and Kai Zhao and Yang Song and Yihang Xie and Yanan Zhao and Sijie Wang and Rui She and Wee Peng Tay}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=lSa6SEEqTL} }
In this work, we rigorously investigate the robustness of graph neural fractional-order differential equation (FDE) models. This framework extends beyond traditional graph neural ordinary differential equation (ODE) models by implementing the time-fractional Caputo derivative. Utilizing fractional calculus allows our model to consider long-term dependencies during the feature updating process, diverging from the Markovian updates seen in traditional graph neural ODE models. The efficacy of FDE models in surpassing ODE models has been confirmed in a different submitted work, particularly in environments free from attacks or perturbations. While traditional graph neural ODE models have been verified to possess a degree of stability and resilience in the presence of adversarial attacks in existing literature, the robustness of graph neural FDE models, especially under adversarial conditions, remains largely unexplored. This paper undertakes a detailed assessment of the robustness of graph neural FDE models. We establish a theoretical foundation outlining the robustness features of graph neural FDE models, highlighting that they maintain more stringent output perturbation bounds in the face of input and functional disturbances, relative to their integer-order counterparts. Through rigorous experimental assessments, which include graph alteration scenarios and adversarial attack contexts, we empirically validate the improved robustness of graph neural FDE models against their conventional graph neural ODE counterparts.
Coupling Graph Neural Networks with Non-Integer Order Dynamics: A Robustness Study
[ "Qiyu Kang", "Kai Zhao", "Yang Song", "Yihang Xie", "Yanan Zhao", "Sijie Wang", "Rui She", "Wee Peng Tay" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=lR5NYB9zrv
@inproceedings{ lachi2023graph, title={Graph Pooling Provably Improves Expressivity}, author={Veronica Lachi and Alice Moallemy-Oureh and Andreas Roth and Pascal Welke}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=lR5NYB9zrv} }
In the domain of graph neural networks (GNNs), pooling operators are fundamental to reduce the size of the graph by simplifying graph structures and vertex features. Recent advances have shown that well-designed pooling operators, coupled with message-passing layers, can endow hierarchical GNNs with an expressive power regarding the graph isomorphism test that is equal to the Weisfeiler-Leman test. However, the ability of hierarchical GNNs to increase expressive power by utilizing graph coarsening was not yet explored. This results in uncertainties about the benefits of pooling operators and a lack of sufficient properties to guide their design. In this work, we identify conditions for pooling operators to generate WL-*distinguishable* coarsened graphs from originally WL-*indistinguishable* but non-isomorphic graphs. Our conditions are versatile and can be tailored to specific tasks and data characteristics, offering a promising avenue for further research.
Graph Pooling Provably Improves Expressivity
[ "Veronica Lachi", "Alice Moallemy-Oureh", "Andreas Roth", "Pascal Welke" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=jyfiPivRBH
@inproceedings{ huang2023can, title={Can {LLM}s Effectively Leverage Graph Structural Information: When and Why}, author={Jin Huang and Xingjian Zhang and Qiaozhu Mei and Jiaqi Ma}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=jyfiPivRBH} }
This paper studies Large Language Models (LLMs) augmented with structured data--particularly graphs--a crucial data modality that remains underexplored in the LLM literature. We aim to understand when and why the incorporation of structural information inherent in graph data can improve the prediction performance of LLMs on node classification tasks with textual features. To address the "when" question, we examine a variety of prompting methods for encoding structural information, in settings where textual node features are either rich or scarce. For the "why" questions, we probe into two potential contributing factors to the LLM performance: data leakage and homophily. Our exploration of these questions reveals that (i) LLMs can benefit from structural information, especially when textual node features are scarce; (ii) there is no substantial evidence indicating that the performance of LLMs is significantly attributed to data leakage; and (iii) the performance of LLMs on a target node is strongly positively related to the local homophily ratio of the node.
Can LLMs Effectively Leverage Graph Structural Information: When and Why
[ "Jin Huang", "Xingjian Zhang", "Qiaozhu Mei", "Jiaqi Ma" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=jbx2WMP9VX
@inproceedings{ liu2023semisupervised, title={Semi-Supervised Graph Imbalanced Regression}, author={Gang Liu and Tong Zhao and Eric Inae and Tengfei Luo and Meng Jiang}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=jbx2WMP9VX} }
Data imbalance is easily found in annotated data when the observations of certain continuous label values are difficult to collect for regression tasks. When it comes to molecule and polymer property prediction, annotated graph datasets are often small because labeling requires expensive equipment and effort. To address the lack of examples of rare label values in graph regression tasks, we propose a semi-supervised framework to progressively balance training data and reduce model bias via self-training. The training data balance is achieved by (1) pseudo-labeling more graphs for under-represented labels with a novel regression confidence measurement and (2) augmenting graph examples in latent space for remaining rare labels after data balancing with pseudo-labels. The former is to identify quality examples from unlabeled data whose labels are confidently predicted and sample a subset of them with a reverse distribution from the imbalanced annotated data. The latter collaborates with the former to target a perfect balance using a novel label-anchored mixup algorithm. We perform experiments in seven regression tasks on graph datasets. Results demonstrate that the proposed framework significantly reduces the error of predicted graph properties, especially in under-represented label areas.
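One way the "reverse distribution" sampling step could look, sketched under assumptions: keep confident pseudo-labels (confidence here is a placeholder score) and sample them with probability inversely proportional to how common their label bin already is in the annotated set. The paper's regression-confidence measure and label-anchored mixup are not shown.

```python
# Sketch: reverse-distribution subsampling of confident pseudo-labels.
import numpy as np

def reverse_sample(pseudo_labels, confidence, train_labels, bins, keep, rng):
    conf_ok = confidence > np.quantile(confidence, 0.5)   # assumed cutoff
    hist, edges = np.histogram(train_labels, bins=bins)
    bin_of = np.clip(np.digitize(pseudo_labels, edges) - 1, 0, bins - 1)
    p = 1.0 / (hist[bin_of] + 1.0)       # rare label bins get high weight
    p = np.where(conf_ok, p, 0.0)        # drop low-confidence predictions
    p /= p.sum()
    return rng.choice(len(pseudo_labels), size=keep, replace=False, p=p)

rng = np.random.default_rng(0)
train_y = np.concatenate([rng.normal(0, 1, 900), rng.normal(5, 1, 100)])
pseudo = rng.uniform(-2, 7, 400)         # placeholder model predictions
conf = rng.random(400)                   # placeholder confidence scores
picked = reverse_sample(pseudo, conf, train_y, bins=10, keep=50, rng=rng)
```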
Semi-Supervised Graph Imbalanced Regression
[ "Gang Liu", "Tong Zhao", "Eric Inae", "Tengfei Luo", "Meng Jiang" ]
Workshop/GLFrontiers
poster
2305.12087
[ "https://github.com/liugangcode/SGIR" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=jNL90brxG0
@inproceedings{ hu2023beyond, title={Beyond Text: A Deep Dive into Large Language Models' Ability on Understanding Graph Data}, author={Yuntong Hu and Zheng Zhang and Liang Zhao}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=jNL90brxG0} }
Large language models (LLMs) have achieved impressive performance on many natural language processing tasks. However, their capabilities on graph-structured data remain relatively unexplored. In this paper, we conduct a series of experiments benchmarking leading LLMs on diverse graph prediction tasks spanning node, edge, and graph levels. We aim to assess whether LLMs can effectively process graph data and leverage topological structures to enhance performance, compared to specialized graph neural networks. Through varied prompt formatting and task/dataset selection, we analyze how well LLMs can interpret and utilize graph structures. By comparing LLMs' performance with that of specialized graph models, we offer insights into the strengths and limitations of employing LLMs for graph analytics. Our findings shed light on LLMs' capabilities and suggest avenues for further exploration in applying them to graph analytics.
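The "varied prompt formatting" mentioned above typically boils down to how a graph is serialised into text. A minimal sketch of two common serialisations follows; the exact formats the paper compares are not specified here, so these are illustrative stand-ins.

```python
# Two hypothetical graph-to-text serialisations for LLM prompts.
def as_edge_list(edges):
    return "Edges: " + ", ".join(f"({u}, {v})" for u, v in edges)

def as_adjacency_text(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    return "\n".join(f"Node {u} is connected to nodes {sorted(vs)}."
                     for u, vs in sorted(adj.items()))

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
print(as_edge_list(edges))        # terse, token-efficient
print(as_adjacency_text(edges))   # verbose, closer to natural language
```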
Beyond Text: A Deep Dive into Large Language Models' Ability on Understanding Graph Data
[ "Yuntong Hu", "Zheng Zhang", "Liang Zhao" ]
Workshop/GLFrontiers
poster
2310.04944
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=i8JgQVS30K
@inproceedings{ davies2023its, title={Its All Graph To Me: Single-Model Graph Representation Learning on Multiple Domains}, author={Alex Davies and Riku Green and Nirav Ajmeri and Telmo Silva Filho}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=i8JgQVS30K} }
Graph neural networks (GNNs) have revolutionised the field of graph representation learning and play a critical role in graph-based research. Recent work explores applying GNNs to pre-training and fine-tuning, where a model is trained on a large dataset and its learnt representations are then transferred to a smaller dataset. However, current work only explores pre-training on a single domain; for example, a model pre-trained on molecular graphs is fine-tuned on other molecular graphs. This leads to poor generalisability of pre-trained models to novel domains and tasks. In this work, we curate a multi-graph-domain dataset and apply state-of-the-art Graph Adversarial Contrastive Learning (GACL) methods. We present a pre-trained graph model that may be capable of acting as a foundational graph model. We will evaluate the efficacy of its learnt representations on various downstream tasks against baseline models pre-trained on single domains. In addition, we aim to compare our model to un-trained and non-transferred models, and to show that our foundational model can achieve performance equal to or better than task-specific methods.
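Graph contrastive pre-training of the kind referenced above is usually driven by an NT-Xent-style loss between two augmented views of the same graphs. Below is a minimal, generic sketch of that loss; it is an assumption-level stand-in, not the paper's GACL training code.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Contrastive loss between embeddings z1, z2 of two views of the same graphs."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)            # (2n, d): both views stacked
    sim = z @ z.t() / temperature             # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))        # a view is not its own positive
    # the positive for row i is the same graph's other view
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1 = torch.randn(8, 32)   # 8 graphs under augmentation 1
z2 = torch.randn(8, 32)   # the same 8 graphs under augmentation 2
loss = nt_xent(z1, z2)
```

Because this objective never touches task labels, the same encoder can, in principle, be trained on graphs pooled from many domains, which is the setting the abstract targets.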
Its All Graph To Me: Single-Model Graph Representation Learning on Multiple Domains
[ "Alex Davies", "Riku Green", "Nirav Ajmeri", "Telmo Silva Filho" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=hySoEBuTuM
@inproceedings{ li2023longrange, title={Long-Range Neural Atom Learning for Molecular Graphs}, author={Xuan Li and Zhanke Zhou and Jiangchao Yao and Yu Rong and Lu Zhang and Bo Han}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=hySoEBuTuM} }
Graph Neural Networks (GNNs) have been widely adopted for drug discovery with molecular graphs. Nevertheless, current GNNs are mainly good at leveraging short-range interactions (SRI) but struggle to capture long-range interactions (LRI), both of which are crucial for determining molecular properties. To tackle this issue, we propose a method that implicitly projects all original atoms into a few \textit{Neural Atoms}, which abstract the collective information of atomic groups within a molecule. Specifically, we explicitly exchange information among neural atoms and project them back to the atoms' representations as an enhancement. With this mechanism, neural atoms establish communication channels among distant nodes, effectively reducing the interaction scope of arbitrary node pairs to a single hop. To inspect our method from a physical perspective, we reveal its connection with the traditional LRI calculation method, Ewald Summation. We conduct extensive experiments on three long-range graph benchmarks, covering both graph-level and link-level tasks on molecular graphs. We empirically justify that our method can be equipped with an arbitrary GNN and help to capture LRI.
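One plausible realisation of the project-exchange-project-back pattern described above is attention-based pooling onto a small set of learned queries. The sketch below uses standard multi-head attention and a residual connection; the layer shapes and the choice of attention are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class NeuralAtomLayer(nn.Module):
    """Hypothetical neural-atom layer: atoms -> few neural atoms -> atoms."""
    def __init__(self, dim, num_neural_atoms=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_neural_atoms, dim))
        self.pool = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        self.mix = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        self.unpool = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, x):                       # x: (batch, n_atoms, dim)
        q = self.queries.unsqueeze(0).expand(x.size(0), -1, -1)
        na, _ = self.pool(q, x, x)              # project atoms onto neural atoms
        na, _ = self.mix(na, na, na)            # global exchange among neural atoms
        back, _ = self.unpool(x, na, na)        # project back to every atom
        return x + back                         # residual enhancement of atom features

layer = NeuralAtomLayer(dim=64)
h = torch.randn(2, 30, 64)                      # 2 molecules, 30 atoms each
out = layer(h)                                  # same shape, now with a global channel
```

Since every atom attends to every neural atom and vice versa, any pair of atoms communicates within one such layer, matching the single-hop interaction scope claimed in the abstract.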
Long-Range Neural Atom Learning for Molecular Graphs
[ "Xuan Li", "Zhanke Zhou", "Jiangchao Yao", "Yu Rong", "Lu Zhang", "Bo Han" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]