Dataset schema (one record per paper; fields appear one per line in the records below):

| Column | Type | Observed range |
| --- | --- | --- |
| bibtex_url | null | - |
| proceedings | string | length 42-42 |
| bibtext | string | length 197-792 |
| abstract | string | length 303-3.45k |
| title | string | length 10-159 |
| authors | sequence | length 1-28 |
| id | string | 44 distinct values |
| type | string | 16 distinct values |
| arxiv_id | string | length 0-10 |
| GitHub | sequence | length 1-1 |
| paper_page | string | 444 distinct values |
| n_linked_authors | int64 | -1 to 9 |
| upvotes | int64 | -1 to 42 |
| num_comments | int64 | -1 to 13 |
| n_authors | int64 | -1 to 92 |
| paper_page_exists_pre_conf | int64 | 0 to 1 |
| Models | sequence | length 0-100 |
| Datasets | sequence | length 0-11 |
| Spaces | sequence | length 0-100 |
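The table above is the column summary from the dataset viewer; each record below follows that schema with one field per line. As a minimal sketch of how such a table could be loaded and queried, assuming a hypothetical local JSON-lines export named `glfrontiers_2023.jsonl` with these columns (no actual dataset path appears in this dump):

```python
# Minimal sketch: load a paper-metadata table with the columns listed above and
# filter it. The file name "glfrontiers_2023.jsonl" is a hypothetical local export.
from datasets import load_dataset

ds = load_dataset("json", data_files="glfrontiers_2023.jsonl", split="train")

# Keep the oral presentations that list an arXiv identifier.
orals = ds.filter(lambda row: row["type"] == "oral" and row["arxiv_id"])

for row in orals:
    print(row["title"], "->", row["proceedings"])
```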
null
https://openreview.net/forum?id=gXCqzpvhuD
@inproceedings{ bause2023maximally, title={Maximally Expressive {GNN}s for Outerplanar Graphs}, author={Franka Bause and Fabian Jogl and Patrick Indri and Tamara Drucks and David Penz and Nils Kriege and Thomas G{\"a}rtner and Pascal Welke and Maximilian Thiessen}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=gXCqzpvhuD} }
We propose a linear time graph transformation that enables the Weisfeiler-Leman (WL) test and message passing graph neural networks (MPNNs) to be maximally expressive on outerplanar graphs. Our approach is motivated by the fact that most pharmaceutical molecules correspond to outerplanar graphs. Existing research predominantly enhances the expressivity of graph neural networks without specific graph families in mind. This often leads to methods that are impractical due to their computational complexity. In contrast, the restriction to outerplanar graphs enables us to encode the Hamiltonian cycle of each biconnected component in linear time. As the main contribution of the paper we prove that our method achieves maximum expressivity on outerplanar graphs. Experiments confirm that our graph transformation improves the predictive performance of MPNNs on molecular benchmark datasets at negligible computational overhead.
Maximally Expressive GNNs for Outerplanar Graphs
[ "Franka Bause", "Fabian Jogl", "Patrick Indri", "Tamara Drucks", "David Penz", "Nils Kriege", "Thomas Gärtner", "Pascal Welke", "Maximilian Thiessen" ]
Workshop/GLFrontiers
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=gVn2F8MBnP
@inproceedings{ m{\"u}ller2023privacyutility, title={Privacy-Utility Trade-offs in Neural Networks for Medical Population Graphs: Insights from Differential Privacy and Graph Structure}, author={Tamara M{\"u}ller and Maulik Chevli and Ameya Daigavane and Daniel Rueckert and Georgios Kaissis}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=gVn2F8MBnP} }
Differential privacy (DP) is the gold standard for protecting individuals' data while enabling deep learning. It is well-established and frequently used for applications in medicine and healthcare to protect sensitive patient data. When using graph deep learning on so-called population graphs, however, the application of DP becomes more challenging compared to grid-like data structures like images or tables. In this work, we initiate an empirical investigation of differentially private graph neural networks on population graphs in the medical domain by examining privacy-utility trade-offs under different graph learning methods on both real-world and synthetic datasets. We compare two state-of-the-art methods for differentially private graph deep learning and empirically audit privacy guarantees through node membership inference and link stealing attacks. We focus on the impact of the graph structure, one of the most important inherent challenges of medical population graphs. Our findings highlight the potential and challenges of this specific DP application area. Moreover, we find that the underlying graph structure constitutes a potential factor for larger performance gaps on one of the explored methods by showing a correlation between the graphs' homophily and the accuracy of the trained model.
Privacy-Utility Trade-offs in Neural Networks for Medical Population Graphs: Insights from Differential Privacy and Graph Structure
[ "Tamara Müller", "Maulik Chevli", "Ameya Daigavane", "Daniel Rueckert", "Georgios Kaissis" ]
Workshop/GLFrontiers
poster
2307.06760
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=fLEPS2BwRL
@inproceedings{ behrouz2023unsupervised, title={Unsupervised Representation Learning of Brain Activity via Bridging Voxel Activity and Functional Connectivity}, author={Ali Behrouz and Parsa Delavari and Farnoosh Hashemi}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=fLEPS2BwRL} }
Effective brain representation learning is a key step toward understanding cognitive processes and unlocking the detection of, and potential therapeutic interventions for, neurological diseases and disorders. Existing studies have focused on either (1) voxel-level activity, where only a single beta weight for each voxel (i.e., an aggregation of voxel activity over a time window) is considered, missing their temporal dynamics, or (2) functional connectivity of the brain at the level of regions of interest, missing voxel-level activities. In this paper, we bridge this gap and design BrainMixer, an unsupervised learning framework that effectively utilizes both functional connectivity and the associated time series of voxels to learn voxel-level representations. BrainMixer employs two simple yet effective MLP-based encoders to simultaneously learn the dynamics of voxel-level signals and their functional correlations. To encode voxel activity, BrainMixer fuses information across both time and voxel dimensions via a dynamic self-attention mechanism. To learn the structure of the functional connectivity graph, BrainMixer introduces temporal graph patching and encodes each patch by combining its nodes' features via a new adaptive temporal pooling. Our experiments show that BrainMixer attains outstanding performance and outperforms 13 baselines in different downstream tasks and experimental setups.
Unsupervised Representation Learning of Brain Activity via Bridging Voxel Activity and Functional Connectivity
[ "Ali Behrouz", "Parsa Delavari", "Farnoosh Hashemi" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=esQEXmBqxR
@inproceedings{ koke2023holonets, title={HoloNets: Spectral Convolutions do extend to Directed Graphs}, author={Christian Koke and Daniel Cremers}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=esQEXmBqxR} }
Within the graph learning community, conventional wisdom dictates that spectral convolutional networks may only be deployed on undirected graphs: Only there could the existence of a well-defined graph Fourier transform be guaranteed, so that information may be translated between spatial and spectral domains. Here we show this traditional reliance on the graph Fourier transform to be superfluous: Making use of certain advanced tools from complex analysis and spectral theory, we extend spectral convolutions to directed graphs. We provide a frequency-response interpretation of the newly developed filters, investigate the influence of the basis used to express filters, and discuss the interplay with the characteristic operators on which networks are based. In order to thoroughly test the developed general theory, we conduct experiments in real world settings, showcasing that directed spectral convolutional networks provide new state-of-the-art results for heterophilic node classification and, as opposed to baselines, may be rendered stable to resolution-scale varying topological perturbations.
HoloNets: Spectral Convolutions do extend to Directed Graphs
[ "Christian Koke", "Daniel Cremers" ]
Workshop/GLFrontiers
oral
2310.02232
[ "https://github.com/ChristianKoke/HoloNets" ]
https://huggingface.co/papers/2310.02232
0
0
0
2
1
[]
[]
[]
null
https://openreview.net/forum?id=eYyZisiBG3
@inproceedings{ cavazos2023explaining, title={Explaining Drug Repositioning: A Case-Based Reasoning Graph Neural Network Approach}, author={Adriana Gonzalez Cavazos}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=eYyZisiBG3} }
Drug repositioning, the identification of novel uses of existing therapies, has become an attractive strategy to accelerate drug development. Knowledge graphs (KGs) have emerged as a powerful representation of interconnected data within the biomedical domain. While link prediction on biomedical KGs can ascertain new connections between drugs and diseases, most approaches only state whether two nodes are related. Yet, they fail to explain why two nodes are related. In this project, we introduce an implementation of semi-parametric Case-Based Reasoning over subgraphs (CBR-SUBG), designed to derive a drug query’s underlying mechanisms by gathering graph patterns of similar nodes. We show that our adaptation outperforms existing KG link prediction models on a drug repositioning task. Furthermore, our findings demonstrate that the CBR-SUBG strategy can provide interpretable biological paths as evidence supporting putative repositioning candidates, leading to more informed decisions.
Explaining Drug Repositioning: A Case-Based Reasoning Graph Neural Network Approach
[ "Adriana Gonzalez Cavazos" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=e8ba9Hu1mM
@inproceedings{ bar-shalom2023subgraphormer, title={Subgraphormer: Subgraph {GNN}s meet Graph Transformers}, author={Guy Bar-Shalom and Beatrice Bevilacqua and Haggai Maron}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=e8ba9Hu1mM} }
In the realm of Graph Neural Networks (GNNs), two intriguing research directions have recently emerged: Subgraph GNNs and Graph Transformers. These approaches have distinct origins -- Subgraph GNNs aim to address the limitations of message passing, while Graph Transformers seek to build on the success of sequential transformers in language and vision tasks. In this paper, we propose a model that integrates both approaches, dubbed _Subgraphormer_, which combines the message passing and global aggregation schemes from Subgraph GNNs with attention mechanisms and positional and structural encodings, which are arguably the most important components in Graph Transformers. Our preliminary experimental results demonstrate significant performance improvements over both Subgraph GNNs and Graph Transformers.
Subgraphormer: Subgraph GNNs meet Graph Transformers
[ "Guy Bar-Shalom", "Beatrice Bevilacqua", "Haggai Maron" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=dC3CaRZ8tf
@inproceedings{ j{\"u}r{\ss}2023everybody, title={Everybody Needs a Little {HELP}: Explaining Graphs via Hierarchical Concepts}, author={Jonas J{\"u}r{\ss} and Lucie Charlotte Magister and Pietro Barbiero and Pietro Lio and Nikola Simidjievski}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=dC3CaRZ8tf} }
Graph neural networks (GNNs) have led to major breakthroughs in a variety of domains such as drug discovery, social network analysis, and travel time estimation. However, they lack interpretability, which hinders human trust and thereby deployment to settings with high-stakes decisions. A line of interpretable methods approaches this by discovering a small set of relevant concepts as subgraphs in the last GNN layer that together explain the prediction. This can yield oversimplified explanations, failing to explain the interaction between GNN layers. To address this oversight, we provide HELP (Hierarchical Explainable Latent Pooling), a novel, inherently interpretable graph pooling approach that reveals how concepts from different GNN layers compose to new ones in later steps. HELP is more than 1-WL expressive and is the first non-spectral, end-to-end-learnable, hierarchical graph pooling method that can learn to pool a variable number of arbitrary connected components. We empirically demonstrate that it performs on-par with standard GCNs and popular pooling methods in terms of accuracy while yielding explanations that are aligned with expert knowledge in the domains of chemistry and social networks. In addition to a qualitative analysis, we employ concept completeness scores as well as concept conformity, a novel metric to measure the noise in discovered concepts, quantitatively verifying that the discovered concepts are significantly easier to fully understand than those from previous work. Our work represents a first step towards an understanding of graph neural networks that goes beyond a set of concepts from the final layer and instead explains the complex interplay of concepts on different levels.
Everybody Needs a Little HELP: Explaining Graphs via Hierarchical Concepts
[ "Jonas Jürß", "Lucie Charlotte Magister", "Pietro Barbiero", "Pietro Lio", "Nikola Simidjievski" ]
Workshop/GLFrontiers
poster
2311.15112
[ "https://github.com/jonasjuerss/help" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=d1oVSv97ZM
@inproceedings{ wang2023protohg, title={Proto{HG}: Prototype-Enhanced Hypergraph Learning for Heterogeneous Information Networks}, author={Shuai Wang and Jiayi Shen and Athanasios Efthymiou and Stevan Rudinac and Monika Kackovic and Nachoem Wijnberg and Marcel Worring}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=d1oVSv97ZM} }
The variety and complexity of relations in real-world data lead to Heterogeneous Information Networks (HINs). Capturing the semantics from such networks requires approaches capable of utilizing the full richness of the HINs. Existing methods for modeling HINs employ techniques originally designed for graph neural networks in combination with HIN decomposition analysis, especially using manually predefined metapaths. In this paper, we introduce a novel hypergraph learning approach for node classification in HINs. Using hypergraphs instead of graphs, our method captures higher-order relationships among heterogeneous nodes and extracts semantic information without relying on metapaths. Our method leverages the power of prototypes to improve the robustness of the hypergraph learning process, and we further discuss the advantages our method brings to scalability, an important issue for hypergraphs given their expressiveness. Extensive experiments on three real-world HINs demonstrate the effectiveness of our method.
ProtoHG: Prototype-Enhanced Hypergraph Learning for Heterogeneous Information Networks
[ "Shuai Wang", "Jiayi Shen", "Athanasios Efthymiou", "Stevan Rudinac", "Monika Kackovic", "Nachoem Wijnberg", "Marcel Worring" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ck099Dyu4j
@inproceedings{ jin2023learning, title={Learning Multiplex Embeddings on Text-rich Networks with One Text Encoder}, author={Bowen Jin and Wentao Zhang and Yu Zhang and Yu Meng and Han Zhao and Jiawei Han}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=ck099Dyu4j} }
In real-world scenarios, texts in a network are often linked by multiple semantic relations (e.g., papers in an academic network are referenced by other publications, written by the same author, or published in the same venue). Text documents and their relations form a multiplex text-rich network. Mainstream text representation learning methods use pretrained language models (PLMs) to generate one embedding for each text unit, expecting that all types of relations between texts can be captured by these single-view embeddings. However, this presumption does not hold particularly in multiplex text-rich networks. Along another line of work, multiplex graph neural networks (GNNs) directly initialize node attributes as a feature vector for node representation learning, but they cannot fully capture the semantics of the nodes' associated texts. In this paper, we propose METERN, a new framework for learning Multiplex Embeddings on TExt-Rich Networks. In contrast to existing methods, METERN uses one text encoder to model the shared knowledge across relations and leverages a small number of parameters per relation to derive relation-specific representations. This allows the encoder to effectively capture the multiplex structures in the network while also preserving parameter efficiency. We conduct experiments on nine downstream tasks in five networks from both academic and e-commerce domains, where METERN outperforms baselines significantly and consistently. The code is available at https://anonymous.4open.science/r/METER-submit-2C7B.
Learning Multiplex Embeddings on Text-rich Networks with One Text Encoder
[ "Bowen Jin", "Wentao Zhang", "Yu Zhang", "Yu Meng", "Han Zhao", "Jiawei Han" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
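The METERN record above describes one shared text encoder combined with a small number of per-relation parameters that derive relation-specific representations. A minimal sketch of that parameter layout, where the encoder stand-in, the element-wise scaling, and all names are illustrative assumptions rather than the paper's implementation:

```python
# Minimal sketch of the idea described in the METERN abstract: one shared text
# encoder plus a few learnable parameters per relation type that modulate the
# shared embedding into relation-specific views. All names and the choice of
# element-wise scaling are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class SharedEncoderMultiplex(nn.Module):
    def __init__(self, vocab_size: int, dim: int, num_relations: int):
        super().__init__()
        # Shared "text encoder" stand-in: bag-of-words embedding + linear layer.
        self.embed = nn.EmbeddingBag(vocab_size, dim)
        self.proj = nn.Linear(dim, dim)
        # A small number of parameters per relation (one scaling vector each).
        self.relation_scale = nn.Parameter(torch.ones(num_relations, dim))

    def forward(self, token_ids: torch.Tensor, offsets: torch.Tensor, relation: int):
        shared = self.proj(self.embed(token_ids, offsets))   # shared representation
        return shared * self.relation_scale[relation]        # relation-specific view

model = SharedEncoderMultiplex(vocab_size=30522, dim=64, num_relations=5)
tokens = torch.tensor([1, 5, 9, 2, 7])       # two toy "documents"
offsets = torch.tensor([0, 3])
print(model(tokens, offsets, relation=2).shape)  # torch.Size([2, 64])
```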
null
https://openreview.net/forum?id=cZMdiwFjCh
@inproceedings{ park2023nonbacktracking, title={Non-backtracking Graph Neural Networks}, author={Seonghyun Park and Narae Ryu and Gahee Kim and Dongyeop Woo and Se-Young Yun and Sungsoo Ahn}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=cZMdiwFjCh} }
The celebrated message-passing updates for graph neural networks allow the representation of large-scale graphs with local and computationally tractable updates. However, the local updates suffer from backtracking, i.e., a message flows through the same edge twice and revisits the previously visited node. Since the number of message flows increases exponentially with the number of updates, the redundancy in local updates prevents the graph neural network from accurately recognizing a particular message flow for downstream tasks. In this work, we propose to resolve such a redundancy via the non-backtracking graph neural network (NBA-GNN) that updates a message without incorporating the message from the previously visited node. We further investigate how NBA-GNN alleviates the over-squashing of GNNs, and establish a connection between NBA-GNN and the impressive performance of non-backtracking updates for stochastic block model recovery. We empirically verify the effectiveness of our NBA-GNN on long-range graph benchmark and transductive node classification problems.
Non-backtracking Graph Neural Networks
[ "Seonghyun Park", "Narae Ryu", "Gahee Kim", "Dongyeop Woo", "Se-Young Yun", "Sungsoo Ahn" ]
Workshop/GLFrontiers
oral
2310.07430
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
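The NBA-GNN abstract above hinges on one mechanism: the message on edge (u -> v) never incorporates the message that just arrived from v. A minimal sketch of one non-backtracking update over directed-edge messages, with the plain-dict graph representation and mean aggregation as illustrative assumptions:

```python
# Minimal sketch of non-backtracking message passing as described in the
# NBA-GNN abstract: messages live on directed edges, and the message on edge
# (u -> v) aggregates messages from edges (w -> u) with w != v, so information
# never flows straight back to the node it just came from.
from collections import defaultdict

def non_backtracking_step(edge_msg, in_edges):
    """edge_msg: {(u, v): float}; in_edges: {node: [predecessors]}."""
    new_msg = {}
    for (u, v) in edge_msg:
        incoming = [edge_msg[(w, u)] for w in in_edges[u] if w != v]
        new_msg[(u, v)] = sum(incoming) / len(incoming) if incoming else 0.0
    return new_msg

# Toy undirected triangle 0-1-2 represented as directed edges in both directions.
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)]
in_edges = defaultdict(list)
for u, v in edges:
    in_edges[v].append(u)

msg = {e: 1.0 for e in edges}
msg = non_backtracking_step(msg, in_edges)
print(msg)  # each edge keeps only the non-backtracking contributions
```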
null
https://openreview.net/forum?id=cVIRwcJ3Cb
@inproceedings{ zhang2023graph, title={Graph Meets {LLM}s: Towards Large Graph Models}, author={Ziwei Zhang and Haoyang Li and Zeyang Zhang and Yijian Qin and Xin Wang and Wenwu Zhu}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=cVIRwcJ3Cb} }
Large models have emerged as the most recent groundbreaking achievements in artificial intelligence, and particularly machine learning. However, when it comes to graphs, large models have not achieved the same level of success as in other fields, such as natural language processing and computer vision. In order to push the application of large models to graphs forward, we present a perspective paper to discuss the challenges and opportunities associated with developing large graph models. First, we discuss the desired characteristics of large graph models. Then, we present detailed discussions from three key perspectives: representation basis, graph data, and graph models. In each category, we provide a brief overview of recent advances and highlight the remaining challenges together with our visions. Finally, we discuss valuable applications of large graph models. We believe this perspective can encourage further investigations into large graph models, ultimately pushing us one step closer towards artificial general intelligence (AGI). To the best of our knowledge, we are the first to comprehensively study large graph models.
Graph Meets LLMs: Towards Large Graph Models
[ "Ziwei Zhang", "Haoyang Li", "Zeyang Zhang", "Yijian Qin", "Xin Wang", "Wenwu Zhu" ]
Workshop/GLFrontiers
poster
2308.14522
[ "https://github.com/thumnlab/awesome-large-graph-model" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=cMz2pdzu8W
@inproceedings{ wu2023privacypreserving, title={Privacy-preserving design of graph neural networks with applications to vertical federated learning}, author={Ruofan Wu and Mingyang Zhang and Lingjuan Lyu and Xiaolong Xu and Xiuquan Hao and xinyi fu and Tengfei LIU and Tianyi Zhang and Weiqiang Wang}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=cMz2pdzu8W} }
The paradigm of vertical federated learning (VFL), where institutions collaboratively train machine learning models via combining each other's local feature or label information, has achieved great success in applications to financial risk management (FRM). The surging developments of graph representation learning (GRL) have opened up new opportunities for FRM applications under FL via efficiently utilizing the graph-structured data generated from underlying transaction networks. Meanwhile, transaction information is often considered highly sensitive. To prevent data leakage during training, it is critical to develop FL protocols with **formal privacy guarantees**. In this paper, we present an end-to-end GRL framework in the VFL setting called VESPER, which is built upon a general privatization scheme termed perturbed message passing (PMP) that allows the privatization of many popular graph neural architectures. Based on PMP, we discuss the strengths and weaknesses of specific design choices of concrete graph neural architectures and provide solutions and improvements for both dense and sparse graphs. Extensive empirical evaluations over both public datasets and an industry dataset demonstrate that VESPER is capable of training high-performance GNN models over both sparse and dense graphs under reasonable privacy budgets.
Privacy-preserving design of graph neural networks with applications to vertical federated learning
[ "Ruofan Wu", "Mingyang Zhang", "Lingjuan Lyu", "Xiaolong Xu", "Xiuquan Hao", "xinyi fu", "Tengfei LIU", "Tianyi Zhang", "Weiqiang Wang" ]
Workshop/GLFrontiers
poster
2310.20552
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=bxOD6eMAfa
@inproceedings{ varbella2023powergraph, title={PowerGraph: A power grid benchmark dataset for graph neural networks}, author={Anna Varbella and Kenza Amara and Blazhe Gjorgiev and Giovanni Sansavini}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=bxOD6eMAfa} }
Public Graph Neural Network (GNN) benchmark datasets facilitate the use of GNNs and enhance their applicability to diverse disciplines. The community currently lacks public datasets of electrical power grids for GNN applications. Power grids are complex engineered networks that are naturally amenable to graph representations, and GNNs therefore have the potential to capture complex power grid phenomena better than alternative machine learning techniques. To this aim, we develop a graph dataset for cascading failure events, which are the major cause of blackouts in electric power grids. Historical blackout datasets are scarce and incomplete. The assessment of vulnerability and the identification of critical components are usually conducted via computationally expensive offline simulations of cascading failures. Instead, we propose the use of machine learning models for the online detection of cascading failures leveraging the knowledge of the system state at the onset of the cascade. We develop PowerGraph, a graph dataset modelling cascading failures in power grids, designed for two purposes, namely, i) training GNN models for different graph-level tasks including multi-class classification, binary classification, and regression, and ii) explaining GNN models. The dataset, generated via a physics-based cascading failure model, ensures generality of the operating and environmental conditions by spanning diverse failure scenarios. In addition, we foster the use of the dataset to benchmark GNN explainability methods by assigning ground-truth edge-level explanations. PowerGraph helps the development of better GNN models for graph-level tasks and explainability, critical in many domains ranging from chemistry to biology, where the systems and processes can be described as graphs. The dataset is available at https://figshare.com/articles/dataset/PowerGraph/22820534 and the code at https://anonymous.4open.science/r/PowerGraph/.
PowerGraph: A power grid benchmark dataset for graph neural networks
[ "Anna Varbella", "Kenza Amara", "Blazhe Gjorgiev", "Giovanni Sansavini" ]
Workshop/GLFrontiers
poster
2402.02827
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=a9loLCbWs1
@inproceedings{ fu2023implicit, title={Implicit Graph Neural Diffusion Based on Constrained Dirichlet Energy Minimization}, author={Guoji Fu and Mohammed Haroon Dupty and Yanfei Dong and Wee Sun Lee}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=a9loLCbWs1} }
Implicit graph neural networks (GNNs) have emerged as a potential approach to enable GNNs to capture long-range dependencies effectively. However, poorly designed implicit GNN layers can experience over-smoothing or may have limited adaptability to learn the graph geometry, potentially hindering their performance in graph learning problems. To address these issues, we introduce a geometric framework to design implicit graph diffusion layers based on a parameterized graph Laplacian operator. Our framework allows learning the metrics of vertex and edge spaces, as well as the graph gradient operator from data. We further show how implicit GNN layers can be viewed as the fixed-point solution of a Dirichlet energy minimization problem and give conditions under which it may suffer from over-smoothing. To overcome the over-smoothing problem, we design our implicit graph diffusion layer as the solution of a Dirichlet energy minimization problem with constraints on vertex features, enabling it to trade off smoothing with the preservation of node feature information. With an appropriate hyperparameter set to be larger than the largest eigenvalue of the parameterized graph Laplacian, our framework guarantees a unique equilibrium and quick convergence. Our models demonstrate better performance than leading implicit and explicit GNNs on benchmark datasets for node and graph classification tasks, with substantial accuracy improvements observed for some datasets.
Implicit Graph Neural Diffusion Based on Constrained Dirichlet Energy Minimization
[ "Guoji Fu", "Mohammed Haroon Dupty", "Yanfei Dong", "Wee Sun Lee" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=YXSJxi8dOV
@inproceedings{ berto2023rlco, title={{RL}4{CO}: a Unified Reinforcement Learning for Combinatorial Optimization Library}, author={Federico Berto and Chuanbo Hua and Junyoung Park and Minsu Kim and Hyeonah Kim and Jiwoo Son and Haeyeon Kim and Joungho Kim and Jinkyoo Park}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=YXSJxi8dOV} }
Deep reinforcement learning offers notable benefits in addressing combinatorial problems over traditional solvers, reducing the reliance on domain-specific knowledge and expert solutions, and improving computational efficiency. Despite the recent surge in interest in neural combinatorial optimization, practitioners often do not have access to a standardized code base. Moreover, different algorithms are frequently based on fragmentized implementations that hinder reproducibility and fair comparison. To address these challenges, we introduce RL4CO, a unified Reinforcement Learning (RL) for Combinatorial Optimization (CO) library. We employ state-of-the-art software and best practices in implementation, such as modularity and configuration management, to be flexible, easily modifiable, and extensible by researchers. Thanks to our unified codebase, we benchmark baseline RL solvers with different evaluation schemes on zero-shot performance, generalization, and adaptability on diverse tasks. Notably, we find that some recent methods may fall behind their predecessors depending on the evaluation settings. We hope RL4CO will encourage the exploration of novel solutions to complex real-world tasks, allowing the community to compare with existing methods through a unified framework that decouples the science from software engineering. We open-source our library at https://github.com/ai4co/rl4co.
RL4CO: a Unified Reinforcement Learning for Combinatorial Optimization Library
[ "Federico Berto", "Chuanbo Hua", "Junyoung Park", "Minsu Kim", "Hyeonah Kim", "Jiwoo Son", "Haeyeon Kim", "Joungho Kim", "Jinkyoo Park" ]
Workshop/GLFrontiers
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=YILik4gFBk
@inproceedings{ yoon2023multimodal, title={Multimodal Graph Learning for Generative Tasks}, author={Minji Yoon and Jing Yu Koh and Bryan Hooi and Russ Salakhutdinov}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=YILik4gFBk} }
Multimodal learning combines multiple data modalities, broadening the types and complexity of data our models can utilize; for example, from plain text to image-caption pairs. Most multimodal learning algorithms focus on modeling simple one-to-one pairs of data from two modalities, such as image-caption pairs, or audio-text pairs. However, in most real-world settings, entities of different modalities interact with each other in more complex and multifaceted ways, going beyond one-to-one mappings. We propose to represent these complex relationships as graphs, allowing us to capture data with any number of modalities, and with complex relationships between modalities that can flexibly vary from one sample to another. Toward this goal, we propose Multimodal Graph Learning (MMGL), a general and systematic framework for capturing information from multiple multimodal neighbors with relational structures among them. In particular, we focus on MMGL for \emph{generative} tasks, building upon pretrained Language Models (LMs), aiming to augment their text generation with multimodal neighbor contexts. We study three research questions raised by MMGL: (1) how can we infuse multiple neighbor information into the pretrained LMs, while avoiding scalability issues? (2) how can we infuse the graph structure information among multimodal neighbors into the LMs? and (3) how can we finetune the pretrained LMs to learn from the neighbor context parameter-efficiently? We conduct extensive experiments to answer these three questions on MMGL and analyze the empirical results to pave the way for future MMGL research.
Multimodal Graph Learning for Generative Tasks
[ "Minji Yoon", "Jing Yu Koh", "Bryan Hooi", "Russ Salakhutdinov" ]
Workshop/GLFrontiers
poster
2310.07478
[ "https://github.com/minjiyoon/mmgl" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=XanCwCz4mR
@inproceedings{ kausika2023curiouswalk, title={CuriousWalk: Enhancing Multi-Hop Reasoning in Graphs with Random Network Distillation}, author={Varun Kausika and Saurabh Jha and Adya Jha and Amy Zhang and Michael Sury}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=XanCwCz4mR} }
Structured knowledge bases in the forms of graphs often suffer from incompleteness and inaccuracy in representing information. One popular method of densifying graphs involves constructing a reinforcement learning agent that learns to traverse entities and relations in a sequential way from a query entity, according to a query relation until it reaches the desired answer entity. However, these agents are often limited by sparse reward structures of the environment, as well as their inability to find diverse paths from the question to the answer entities. In this paper, we attempt to address these issues by augmenting the agent with intrinsic rewards which can help in exploration as well as offering meaningful feedback at intermediate steps to push the agent in the right direction.
CuriousWalk: Enhancing Multi-Hop Reasoning in Graphs with Random Network Distillation
[ "Varun Kausika", "Saurabh Jha", "Adya Jha", "Amy Zhang", "Michael Sury" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=VAATRBCyPg
@inproceedings{ chowdhury2023sparse, title={Sparse but Strong: Crafting Adversarially Robust Graph Lottery Tickets}, author={Subhajit Dutta Chowdhury and Zhiyu Ni and Qingyuan Peng and Souvik Kundu and Pierluigi Nuzzo}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=VAATRBCyPg} }
Graph Lottery Tickets (GLTs), comprising a sparse adjacency matrix and a sparse graph neural network (GNN), can significantly reduce the inference latency and compute footprint compared to their dense counterparts. Despite these benefits, their performance against adversarial structure perturbations remains to be fully explored. In this work, we first investigate the resilience of GLTs against different structure perturbation attacks and observe that they are highly vulnerable and show a large drop in classification accuracy. Based on this observation, we then present an adversarially robust graph sparsification (ARGS) framework that prunes the adjacency matrix and the GNN weights by optimizing a novel loss function capturing the graph homophily property and information associated with both the true labels of the train nodes and the pseudo labels of the test nodes. By iteratively applying ARGS to prune both the perturbed graph adjacency matrix and the GNN model weights, we can find adversarially robust graph lottery tickets that are highly sparse yet achieve competitive performance under different untargeted training-time structure attacks. Evaluations conducted on various benchmarks, considering different poisoning structure attacks, namely, PGD, MetaAttack, Meta-PGD, and PR-BCD demonstrate that the GLTs generated by ARGS can significantly improve the robustness, even when subjected to high levels of sparsity.
Sparse but Strong: Crafting Adversarially Robust Graph Lottery Tickets
[ "Subhajit Dutta Chowdhury", "Zhiyu Ni", "Qingyuan Peng", "Souvik Kundu", "Pierluigi Nuzzo" ]
Workshop/GLFrontiers
poster
2312.06568
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=V5eDwEDfXT
@inproceedings{ koke2023resolvnet, title={ResolvNet: A Graph Convolutional Network with multi-scale Consistency}, author={Christian Koke and Abhishek Saroha and Yuesong Shen and Marvin Eisenberger and Daniel Cremers}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=V5eDwEDfXT} }
It is by now a well known fact in the graph learning community that the presence of bottlenecks severely limits the ability of graph neural networks to propagate information over long distances. What so far has not been appreciated is that, counter-intuitively, the presence of strongly connected sub-graphs may also severely restrict information flow in common architectures. Motivated by this observation, we introduce the concept of multi-scale consistency. At the node level this concept refers to the retention of a connected propagation graph even if connectivity varies over a given graph. At the graph level, multi-scale consistency refers to the fact that distinct graphs describing the same object at different resolutions should be assigned similar feature vectors. As we show, both properties are not satisfied by popular graph neural network architectures. To remedy these shortcomings, we introduce ResolvNet, a flexible graph neural network based on the mathematical concept of resolvents. We rigorously establish its multi-scale consistency theoretically and verify it in extensive experiments on real world data: networks based on this ResolvNet architecture prove expressive, outperforming baselines significantly on many tasks, both inside and outside the multi-scale setting.
ResolvNet: A Graph Convolutional Network with multi-scale Consistency
[ "Christian Koke", "Abhishek Saroha", "Yuesong Shen", "Marvin Eisenberger", "Daniel Cremers" ]
Workshop/GLFrontiers
oral
2310.00431
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=U0EPHUDrot
@inproceedings{ dinh2023on, title={On the modelling and impact of negative edges in graph convolutional networks for node classification}, author={Thu Trang Dinh and Julia Handl and Luis Ospina-Forero}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=U0EPHUDrot} }
Signed graphs are important data structures to simultaneously express positive and negative relationships. Their application ranges from structural health monitoring to financial models, where the meaning and properties of negative relationships can play a significant role. In this paper, we provide a comprehensive examination of existing approaches for the integration of signed edges into the Graph Convolutional Network (GCN) framework for node classification. Here we use a combination of theoretical and empirical analysis to gain a deeper understanding of the strengths and limitations of different mechanisms and to identify areas for possible improvement. We compare six different approaches to the integration of negative link information within the framework of the simple GCN. In particular, we analyze sensitivity towards feature noise, negative edge noise and positive edge noise, as well as robustness towards feature scaling and translation, explaining the results obtained on the basis of individual model assumptions and biases. Our findings highlight the importance of capturing the meaning of negative links in a given domain context, and appropriately reflecting it in the choice of GCN model.
On the modelling and impact of negative edges in graph convolutional networks for node classification
[ "Thu Trang Dinh", "Julia Handl", "Luis Ospina-Forero" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
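The abstract above compares several ways of integrating negative links into GCNs. As one concrete and deliberately simple illustration of that design space, a sketch that propagates over a signed adjacency matrix A_pos - A_neg; this variant is an illustrative assumption and not necessarily one of the six mechanisms compared in the paper:

```python
# Minimal sketch of one simple way to feed signed edges into GCN-style
# propagation: build a signed adjacency A = A_pos - A_neg (plus self-loops) so
# that negatively linked neighbours contribute with opposite sign.
import numpy as np

def signed_propagate(x, pos_edges, neg_edges, n):
    A = np.zeros((n, n))
    for u, v in pos_edges:
        A[u, v] = A[v, u] = 1.0
    for u, v in neg_edges:
        A[u, v] = A[v, u] = -1.0
    A += np.eye(n)                              # self-loops
    deg = np.abs(A).sum(axis=1, keepdims=True)  # degree from |A| keeps the scale positive
    return (A / deg) @ x                        # one propagation step

x = np.eye(4)                                   # one-hot features for 4 nodes
print(signed_propagate(x, pos_edges=[(0, 1)], neg_edges=[(1, 2), (2, 3)], n=4))
```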
null
https://openreview.net/forum?id=TsjGaj45Li
@inproceedings{ sestak2023vnegnn, title={{VN}-{EGNN}: Equivariant Graph Neural Networks with Virtual Nodes Enhance Protein Binding Site Identification}, author={Florian Sestak and Lisa Schneckenreiter and Sepp Hochreiter and Andreas Mayr and G{\"u}nter Klambauer}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=TsjGaj45Li} }
Being able to identify regions within or around proteins, to which ligands can potentially bind, is an essential step to develop new drugs. Binding site iden- tification methods can now profit from the availability of large amounts of 3D structures in protein structure databases or from AlphaFold predictions. Current binding site identification methods rely on geometric deep learning, which takes geometric invariances and equivariances into account. Such methods turned out to be very beneficial for physics-related tasks like binding energy or motion tra- jectory prediction. However, their performance at binding site identification is still limited, which might be due to limited expressivity or oversquashing effects of E(n)-Equivariant Graph Neural Networks (EGNNs). Here, we extend EGNNs by adding virtual nodes and applying an extended message passing scheme. The virtual nodes in these graphs both improve the predictive performance and can also learn to represent binding sites. In our experiments, we show that VN-EGNN sets a new state of the art at binding site identification on three common benchmarks, COACH420, HOLO4K, and PDBbind2020.
VN-EGNN: Equivariant Graph Neural Networks with Virtual Nodes Enhance Protein Binding Site Identification
[ "Florian Sestak", "Lisa Schneckenreiter", "Sepp Hochreiter", "Andreas Mayr", "Günter Klambauer" ]
Workshop/GLFrontiers
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=T40bRpEd6P
@inproceedings{ jiang2023hierarchical, title={Hierarchical Relationships: A New Perspective to Enhance Scene Graph Generation}, author={Bowen Jiang and Camillo Taylor}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=T40bRpEd6P} }
This paper presents a finding that leveraging the hierarchical structures among labels for relationships and objects can substantially improve the performance of scene graph generation systems. The focus of this work is to create an informative hierarchical structure that can divide object and relationship categories into disjoint super-categories in a systematic way. Specifically, we introduce a Bayesian prediction head to jointly predict the super-category of relationships between a pair of object instances, as well as the detailed relationship within that super-category simultaneously, facilitating more informative predictions. The resulting model exhibits the capability to produce a more extensive set of predicates beyond the dataset annotations, and to tackle the prevalent issue of low annotation quality. While our paper presents preliminary findings, experiments on the Visual Genome dataset show its strong performance, particularly in predicate classifications and zero-shot settings, that demonstrates the promise of our approach.
Hierarchical Relationships: A New Perspective to Enhance Scene Graph Generation
[ "Bowen Jiang", "Camillo Taylor" ]
Workshop/GLFrontiers
poster
2303.06842
[ "https://github.com/bowen-upenn/scene_graph_commonsense" ]
-1
-1
-1
-1
0
[]
[]
[]
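The scene-graph abstract above describes a Bayesian head that predicts a relationship's super-category and the detailed predicate within it jointly. A minimal sketch of one way to factorise that prediction as p(predicate) = p(super-category) * p(predicate | super-category); the toy category layout and plain linear heads are illustrative assumptions, not the paper's architecture:

```python
# Minimal sketch of a hierarchical (super-category -> predicate) prediction head:
# the joint probability of a predicate is its super-category probability times
# the within-super-category probability.
import torch
import torch.nn as nn

super_groups = {0: [0, 1, 2], 1: [3, 4], 2: [5, 6, 7, 8]}  # toy predicate ids per super-category

class HierarchicalHead(nn.Module):
    def __init__(self, dim, groups):
        super().__init__()
        self.groups = groups
        self.super_head = nn.Linear(dim, len(groups))
        self.detail_heads = nn.ModuleList([nn.Linear(dim, len(g)) for g in groups.values()])

    def forward(self, h):
        p_super = torch.softmax(self.super_head(h), dim=-1)
        n_pred = max(max(g) for g in self.groups.values()) + 1
        p_pred = torch.zeros(h.shape[0], n_pred)
        for s, ids in self.groups.items():
            p_detail = torch.softmax(self.detail_heads[s](h), dim=-1)
            p_pred[:, ids] = p_super[:, s:s + 1] * p_detail   # joint probability
        return p_pred

head = HierarchicalHead(dim=16, groups=super_groups)
probs = head(torch.randn(2, 16))
print(probs.sum(dim=-1))  # each row sums to 1
```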
null
https://openreview.net/forum?id=ScNNo7v4t0
@inproceedings{ chen2023exploring, title={Exploring the Potential of Large Language Models ({LLM}s) in Learning on Graph}, author={Zhikai Chen and Haitao Mao and Hang Li and Wei Jin and Hongzhi Wen and Xiaochi Wei and Shuaiqiang Wang and Dawei Yin and Wenqi Fan and Hui Liu and Jiliang Tang}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=ScNNo7v4t0} }
Learning on Graphs has attracted immense attention due to its wide real-world applications. The most popular pipeline for learning on graphs with textual node attributes primarily relies on Graph Neural Networks (GNNs), and utilizes shallow text embedding as initial node representations, which has limitations in general knowledge and profound semantic understanding. In recent years, Large Language Models (LLMs) have been proven to possess extensive common knowledge and powerful semantic comprehension abilities that have revolutionized existing workflows to handle text data. In this paper, we aim to explore the potential of LLMs in graph machine learning, especially the node classification task, and investigate two possible pipelines: LLMs-as-Enhancers and LLMs-as-Predictors. The former leverages LLMs to enhance nodes' text attributes with their massive knowledge and then generate predictions through GNNs. The latter attempts to directly employ LLMs as standalone predictors. We conduct comprehensive and systematic studies on these two pipelines under various settings. From comprehensive empirical results, we make original observations and find new insights that open new possibilities and suggest promising directions to leverage LLMs for learning on graphs.
Exploring the Potential of Large Language Models (LLMs) in Learning on Graph
[ "Zhikai Chen", "Haitao Mao", "Hang Li", "Wei Jin", "Hongzhi Wen", "Xiaochi Wei", "Shuaiqiang Wang", "Dawei Yin", "Wenqi Fan", "Hui Liu", "Jiliang Tang" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ROB6TdsmFa
@inproceedings{ gao2023one, title={One Node Per User: Node-Level Federated Learning for Graph Neural Networks}, author={Zhidong Gao and Yuanxiong Guo and Yanmin Gong}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=ROB6TdsmFa} }
Graph Neural Networks (GNNs) training often necessitates gathering raw user data on a central server, which raises significant privacy concerns. Federated learning emerges as a solution, enabling collaborative model training without users directly sharing their raw data. However, integrating federated learning with GNNs presents unique challenges, especially when a client represents a graph node and holds merely a single feature vector. In this paper, we propose a novel framework for node-level federated graph learning. Specifically, we decouple the message-passing and feature vector transformation processes of the first GNN layer, allowing them to be executed separately on the user devices and the cloud server. Moreover, we introduce a graph Laplacian term based on the feature vector's latent representation to regulate the user-side model updates. The experiment results on multiple datasets show that our approach achieves better performance compared with baselines.
One Node Per User: Node-Level Federated Learning for Graph Neural Networks
[ "Zhidong Gao", "Yuanxiong Guo", "Yanmin Gong" ]
Workshop/GLFrontiers
poster
2409.19513
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=QgdnOBnh9i
@inproceedings{ giovanni2023how, title={How does over-squashing affect the power of {GNN}s?}, author={Francesco Di Giovanni and T. Konstantin Rusch and Michael Bronstein and Andreea Deac and Marc Lackenby and Siddhartha Mishra and Petar Veli{\v{c}}kovi{\'c}}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=QgdnOBnh9i} }
Graph Neural Networks (GNNs) are the state-of-the-art model for machine learning on graph-structured data. The most popular class of GNNs operate by exchanging information between adjacent nodes, and are known as Message Passing Neural Networks (MPNNs). While understanding the expressive power of MPNNs is a key question, existing results typically consider settings with uninformative node features. In this paper, we provide a rigorous analysis to determine which function classes of node features can be learned by an MPNN of a given capacity. We do so by measuring the level of \emph{pairwise interactions} between nodes that MPNNs allow for. This measure provides a novel quantitative characterization of the so-called over-squashing effect, which is observed to occur when a large volume of messages is aggregated into fixed-size vectors. Using our measure, we prove that, to guarantee sufficient communication between pairs of nodes, the capacity of the MPNN must be large enough, depending on properties of the input graph structure, such as commute times. For many relevant scenarios, our analysis results in impossibility statements in practice, showing that \emph{over-squashing hinders the expressive power of MPNNs}. Our theory also holds for geometric graphs and hence extends to equivariant MPNNs on point clouds. We validate our analysis through extensive controlled experiments and ablation studies.
How does over-squashing affect the power of GNNs?
[ "Francesco Di Giovanni", "T. Konstantin Rusch", "Michael Bronstein", "Andreea Deac", "Marc Lackenby", "Siddhartha Mishra", "Petar Veličković" ]
Workshop/GLFrontiers
oral
2306.03589
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
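The abstract above quantifies over-squashing through the level of pairwise interaction an MPNN allows between two nodes. A minimal sketch of one common proxy for that quantity, the norm of the Jacobian of a node's final representation with respect to another node's input features, computed for a toy two-layer mean-aggregation MPNN on a path graph; the tiny model and the Jacobian-norm proxy are illustrative assumptions, not the paper's exact measure:

```python
# Minimal sketch: Jacobian-based sensitivity of node v's representation to node
# u's input features after two rounds of mean-aggregation message passing on a
# path graph. Nodes outside the 2-hop receptive field show zero sensitivity.
import torch

torch.manual_seed(0)
n, d = 6, 4

# Path graph 0-1-2-3-4-5 with self-loops, row-normalised adjacency.
A = torch.zeros(n, n)
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
A = A + torch.eye(n)
A = A / A.sum(dim=1, keepdim=True)

W1, W2 = torch.randn(d, d), torch.randn(d, d)
x = torch.randn(n, d, requires_grad=True)
h = torch.tanh(A @ torch.tanh(A @ x @ W1) @ W2)   # two message-passing layers

u = 0
for v in range(n):
    grad = torch.autograd.grad(h[v].sum(), x, retain_graph=True)[0]
    print(f"sensitivity of h_{v} to x_{u}: {grad[u].norm().item():.4f}")
```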
null
https://openreview.net/forum?id=QA3Z1cPDrc
@inproceedings{ georgiev2023beyond, title={Beyond Erdos-Renyi: Generalization in Algorithmic Reasoning on Graphs}, author={Dobrik Georgiev and Pietro Lio and Jakub Bachurski and Junhua Chen and Tunan Shi}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=QA3Z1cPDrc} }
Neural algorithmic reasoning excels in many graph algorithms, but assessment mainly focuses on the Erdős-Rényi (ER) graph family. This study delves into graph algorithmic models' generalization across diverse distributions. Testing a leading model exposes an overreliance on ER graphs for generalization assessment. We further investigate two scenarios: generalization to every target distribution and to single target distributions. Our results suggest that achieving the former is not trivial, and that achieving the latter can be aided by selecting the source distribution via a novel Tree Mover's Distance interpretation.
Beyond Erdos-Renyi: Generalization in Algorithmic Reasoning on Graphs
[ "Dobrik Georgiev", "Pietro Lio", "Jakub Bachurski", "Junhua Chen", "Tunan Shi" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=PlqQ8V5BfC
@inproceedings{ paliotta2023graph, title={Graph Neural Networks Go Forward-Forward}, author={Daniele Paliotta and Mathieu Alain and B{\'a}lint M{\'a}t{\'e} and Fran{\c{c}}ois Fleuret}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=PlqQ8V5BfC} }
We present the Graph Forward-Forward (GFF) algorithm, an extension of the Forward-Forward procedure to graphs, able to handle features distributed over a graph's nodes. This allows training graph neural networks with forward passes only, without backpropagation. Our method is agnostic to the message-passing scheme, and provides a more biologically plausible learning scheme than backpropagation, while also carrying computational advantages. With GFF, graph neural networks are trained greedily layer by layer, using both positive and negative samples. We run experiments on 11 standard graph property prediction tasks, showing how GFF provides an effective alternative to backpropagation for training graph neural networks. This shows in particular that this procedure is remarkably efficient in spite of combining the per-layer training with the locality of the processing in a GNN.
Graph Neural Networks Go Forward-Forward
[ "Daniele Paliotta", "Mathieu Alain", "Bálint Máté", "François Fleuret" ]
Workshop/GLFrontiers
poster
2302.05282
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
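The GFF abstract above builds on the Forward-Forward procedure: each layer is trained greedily with a local "goodness" objective on positive and negative samples, with no backpropagation through earlier layers. A minimal sketch of that per-layer loop in which a plain linear layer stands in for a message-passing layer; the layer choice, loss form, and threshold are illustrative assumptions, not the paper's code:

```python
# Minimal sketch of layer-wise Forward-Forward training: goodness is the sum of
# squared activations, pushed above a threshold for positive samples and below
# it for negative samples, one layer at a time and without global backprop.
import torch
import torch.nn as nn

def goodness(h):
    return (h ** 2).sum(dim=1)

def train_layer(layer, x_pos, x_neg, threshold=2.0, steps=200, lr=1e-2):
    opt = torch.optim.Adam(layer.parameters(), lr=lr)
    for _ in range(steps):
        g_pos = goodness(torch.relu(layer(x_pos)))
        g_neg = goodness(torch.relu(layer(x_neg)))
        loss = torch.nn.functional.softplus(
            torch.cat([threshold - g_pos, g_neg - threshold])).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Detach so the next layer is trained independently (greedy, layer-wise).
    return torch.relu(layer(x_pos)).detach(), torch.relu(layer(x_neg)).detach()

torch.manual_seed(0)
x_pos, x_neg = torch.randn(32, 16), torch.randn(32, 16)
for layer in [nn.Linear(16, 32), nn.Linear(32, 32)]:
    x_pos, x_neg = train_layer(layer, x_pos, x_neg)
```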
null
https://openreview.net/forum?id=PYcp183GBL
@inproceedings{ garcia2023towards, title={Towards Particle Flow Event Reconstruction at the Future Circular Collider with {GNN}s}, author={Dolores Garcia and Gregor Kr{\v{z}}manc and Philipp Zehetner and Jan Kieseler and Michele Selvaggi}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=PYcp183GBL} }
Reconstructing particle properties from raw signals measured in particle physics detectors is a challenging task due to the complex shapes of the showers and their variety in density and sparsity. Classical particle reconstruction algorithms in current detectors use a multi-step pipeline, but the increase in data complexity of future detectors will reduce their performance. We consider a geometric graph representation due to the sparsity and difference in density of particle showers. We introduce a dataset for particle-level reconstruction at the Future Circular Collider and benchmark the performance of state-of-the-art GNN architectures on this dataset. We show that our pipeline performs with high efficiency and response and discuss how this type of data can further drive the development of novel geometric GNN approaches.
Towards Particle Flow Event Reconstruction at the Future Circular Collider with GNNs
[ "Dolores Garcia", "Gregor Kržmanc", "Philipp Zehetner", "Jan Kieseler", "Michele Selvaggi" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=PPIVoxynCt
@inproceedings{ chen2023edge, title={{EDGE}++: Improved Training and Sampling of {EDGE}}, author={Xiaohui Chen and Mingyang Wu and Liping Liu}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=PPIVoxynCt} }
Traditional graph-generative models like the Stochastic-Block Model (SBM) fall short in capturing complex structures inherent in large graphs. Recently developed deep learning models like NetGAN, CELL, and Variational Graph Autoencoders have made progress but face limitations in replicating key graph statistics. Diffusion-based methods such as EDGE have emerged as promising alternatives, however, they present challenges in computational efficiency and generative performance. In this paper, we propose enhancements to the EDGE model to address these issues. Specifically, we introduce a degree-specific noise schedule that optimizes the number of active nodes at each timestep, significantly reducing memory consumption. Additionally, we present an improved sampling scheme that fine-tunes the generative process, allowing for better control over the similarity between the synthesized and the true network. Our experimental results demonstrate that the proposed modifications not only improve the efficiency but also enhance the accuracy of the generated graphs, offering a robust and scalable solution for graph generation tasks.
EDGE++: Improved Training and Sampling of EDGE
[ "Xiaohui Chen", "Mingyang Wu", "Liping Liu" ]
Workshop/GLFrontiers
poster
2310.14441
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Nrs8BA84br
@inproceedings{ cha2023on, title={On the Temperature of Bayesian Graph Neural Networks for Conformal Prediction}, author={Seohyeon Cha and Honggu Kang and Joonhyuk Kang}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=Nrs8BA84br} }
Accurate uncertainty quantification in graph neural networks (GNNs) is essential, especially in high-stakes domains where GNNs are frequently employed. Conformal prediction (CP) offers a promising framework for quantifying uncertainty by providing $\textit{valid}$ prediction sets for any black-box model. CP ensures formal probabilistic guarantees that a prediction set contains a true label with a desired probability. However, the size of prediction sets, known as $\textit{inefficiency}$, is influenced by the underlying model and data generating process. On the other hand, Bayesian learning also provides a credible region based on the estimated posterior distribution, but this region is $\textit{well-calibrated}$ only when the model is correctly specified. Building on a recent work that introduced a scaling parameter for constructing valid credible regions from a posterior estimate, our study explores the advantages of incorporating a temperature parameter into Bayesian GNNs within the CP framework. We empirically demonstrate the existence of temperatures that result in more efficient prediction sets. Furthermore, we conduct an analysis to identify the factors contributing to inefficiency and offer valuable insights into the relationship between CP performance and model calibration.
On the Temperature of Bayesian Graph Neural Networks for Conformal Prediction
[ "Seohyeon Cha", "Honggu Kang", "Joonhyuk Kang" ]
Workshop/GLFrontiers
poster
2310.11479
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
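The abstract above studies how a temperature parameter interacts with conformal prediction (CP) set sizes. A minimal sketch of split conformal prediction with temperature-scaled softmax scores, using random logits as a stand-in for a (Bayesian) GNN's node outputs; the score choice and the synthetic data are illustrative assumptions:

```python
# Minimal sketch of split conformal prediction with a temperature parameter:
# nonconformity scores are 1 - softmax(logits / T)[true label] on a calibration
# set, and the prediction set keeps every class whose score stays below the
# calibrated quantile.
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def conformal_sets(cal_logits, cal_labels, test_logits, alpha=0.1, T=1.0):
    n = len(cal_labels)
    cal_probs = softmax(cal_logits, T)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]           # nonconformity scores
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)  # calibrated threshold
    test_probs = softmax(test_logits, T)
    return [np.where(1.0 - p <= q)[0] for p in test_probs]       # per-node label sets

rng = np.random.default_rng(0)
cal_logits, cal_labels = rng.normal(size=(200, 5)), rng.integers(0, 5, 200)
test_logits = rng.normal(size=(10, 5))
for T in (0.5, 1.0, 2.0):
    sizes = [len(s) for s in conformal_sets(cal_logits, cal_labels, test_logits, T=T)]
    print(f"T={T}: mean prediction-set size {np.mean(sizes):.2f}")
```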
null
https://openreview.net/forum?id=NE8Da26bEW
@inproceedings{ xuanyuan2023shedding, title={Shedding Light on Random Dropping and Oversmoothing}, author={Han Xuanyuan and Tianxiang Zhao and Dongsheng Luo}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=NE8Da26bEW} }
Graph Neural Networks (GNNs) are widespread in graph representation learning. *Random dropping* approaches, notably DropEdge and DropMessage, claim to alleviate the key issues of overfitting and oversmoothing by randomly removing elements of the graph representation. However, their effectiveness is largely unverified. In this work, we show empirically that they have a limited effect in reducing oversmoothing at test time due to their training time exclusive nature. We show that DropEdge in particular can be seen as a form of training data augmentation, and its benefits to model generalization are not strictly related to oversmoothing, suggesting that in practice, the precise link between oversmoothing and performance is more nuanced than previously thought. We address the limitations of current dropping methods by *learning* to drop via optimizing an information bottleneck, which enables dropping to be performed effectively at test time.
Shedding Light on Random Dropping and Oversmoothing
[ "Han Xuanyuan", "Tianxiang Zhao", "Dongsheng Luo" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=LzMWMJlxHg
@inproceedings{ galkin2023towards, title={Towards Foundation Models for Knowledge Graph Reasoning}, author={Mikhail Galkin and Xinyu Yuan and Hesham Mostafa and Jian Tang and Zhaocheng Zhu}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=LzMWMJlxHg} }
Foundation models in language and vision have the ability to run inference on any textual and visual inputs thanks to the transferable representations such as a vocabulary of tokens in language. Knowledge graphs (KGs) have different entity and relation vocabularies that generally do not overlap. The key challenge of designing foundation models on KGs is to learn such transferable representations that enable inference on any graph with arbitrary entity and relation vocabularies. In this work, we make a step towards such foundation models and present ULTRA, an approach for learning universal and transferable graph representations. ULTRA builds relational representations as a function conditioned on their interactions. Such a conditioning strategy allows a pre-trained ULTRA model to inductively generalize to any unseen KG with any relation vocabulary and to be fine-tuned on any graph. Conducting link prediction experiments on 57 different KGs, we find that the zero-shot inductive inference performance of a single pre-trained ULTRA model on unseen graphs of various sizes is often on par or better than strong baselines trained on specific graphs. Fine-tuning further boosts the performance.
Towards Foundation Models for Knowledge Graph Reasoning
[ "Mikhail Galkin", "Xinyu Yuan", "Hesham Mostafa", "Jian Tang", "Zhaocheng Zhu" ]
Workshop/GLFrontiers
poster
2310.04562
[ "https://github.com/DeepGraphLearning/ULTRA" ]
https://huggingface.co/papers/2310.04562
1
3
1
5
1
[ "mgalkin/ultra_50g", "mgalkin/ultra_3g", "mgalkin/ultra_4g" ]
[]
[]
null
https://openreview.net/forum?id=KIISfvU806
@inproceedings{ zhou2023a, title={A Multi-Task Perspective for Link Prediction with New Relation Types and Nodes}, author={Jincheng Zhou and Beatrice Bevilacqua and Bruno Ribeiro}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=KIISfvU806} }
The task of inductive link prediction in (discrete) attributed multigraphs infers missing attributed links (relations) between nodes in new test multigraphs. Traditional relational learning methods face the challenge of limited generalization to test multigraphs containing both novel nodes and novel relation types not seen in training. Recently, under the only assumption that all relation types share the same structural predictive patterns (single task), Gao et al. (2023) proposed a link prediction method using the theoretical concept of double equivariance (equivariance for nodes & relation types), in contrast to the (single) equivariance (only for nodes) used to design Graph Neural Networks (GNNs). In this work we further extend the double equivariance concept to multi-task double equivariance, where we define link prediction in attributed multigraphs that can have distinct and potentially conflicting predictive patterns for different sets of relation types (multiple tasks). Our empirical results on real-world datasets demonstrate that our approach can effectively generalize to test graphs with multi-task structures without access to additional information.
A Multi-Task Perspective for Link Prediction with New Relation Types and Nodes
[ "Jincheng Zhou", "Beatrice Bevilacqua", "Bruno Ribeiro" ]
Workshop/GLFrontiers
poster
2307.06046
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=JTRErQb2oN
@inproceedings{ ouyang2023prompt, title={Prompt Learning Unlocked for App Promotion in the Wild}, author={Zhongyu Ouyang and Shifu Hou and Shang Ma and Chaoran Chen and Chunhui Zhang and Toby Li and Xusheng Xiao and Chuxu Zhang and Yanfang Ye}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=JTRErQb2oN} }
In recent times, mobile apps have increasingly incorporated app promotion ads to promote other apps, raising cybersecurity and online commerce concerns related to societal trust and recommendation systems. To effectively discover the intricate nature of the app promotion graph data, we center around the graph completion task, aiming to learn the connection patterns among diverse relations and entities. However, accurately deciphering the connection patterns in such a large and diverse graph presents significant challenges for deep learning models. To overcome these challenges, we introduce Prompt Promotion, a transformer-based framework that unlocks prompt learning capabilities by incorporating metapath- and embedding-based prompts that provide valuable hints to guide the model's predictions for undetermined connection patterns. Experimental results show that our Prompt Promotion model represents a pioneering prompt-based capability in effectively completing the app promotion graph. It not only demonstrates superior performance in heterogeneous graph completion in real-world scenarios, but also exhibits strong generalization capabilities for diverse, complex, and noisy connection patterns when paired with their respective prompts.
Prompt Learning Unlocked for App Promotion in the Wild
[ "Zhongyu Ouyang", "Shifu Hou", "Shang Ma", "Chaoran Chen", "Chunhui Zhang", "Toby Li", "Xusheng Xiao", "Chuxu Zhang", "Yanfang Ye" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=It9QWbVe02
@inproceedings{ chen2023uncertaintyaware, title={Uncertainty-Aware Robust Learning on Noisy Graphs}, author={Shuyi Chen and Kaize Ding and Shixiang Zhu}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=It9QWbVe02} }
Graph neural networks have shown impressive capabilities in solving various graph learning tasks, particularly excelling in node classification. However, their effectiveness can be hindered by the challenges arising from the widespread existence of noisy measurements associated with the topological or nodal information present in real-world graphs. These inaccuracies in observations can corrupt the crucial patterns within the graph data, ultimately resulting in undesirable performance in practical applications. To address these issues, this paper proposes a novel uncertainty-aware graph learning framework motivated by distributionally robust optimization. The framework aims to alleviate the challenges by considering the distributional uncertainty associated with the graph data. Specifically, we use a graph neural network-based encoder to embed the node features and find the optimal node embeddings by minimizing the worst-case risk through a minimax formulation. Such an uncertainty-aware learning process leads to improved node representations and a more robust graph predictive model that effectively mitigates the impact of uncertainty arising from data noise. The learned LFDs also provide a means to quantify the predictive uncertainty, which is valuable in some uncertainty-sensitive scenarios where incorrect decisions can have severe consequences. In addition, we adopt the idea of differentiable optimization and develop an end-to-end learning algorithm that seamlessly integrates graph learning and distributionally robust optimization. Our experimental results show that the proposed framework achieves superior predictive performance compared to the state-of-the-art baselines under various noisy settings.
Uncertainty-Aware Robust Learning on Noisy Graphs
[ "Shuyi Chen", "Kaize Ding", "Shixiang Zhu" ]
Workshop/GLFrontiers
poster
2306.08210
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=I5hf3opvgK
@inproceedings{ roy2023gadebm, title={{GAD}-{EBM}: Graph Anomaly Detection using Energy-Based Models}, author={Amit Roy and Juan Shu and Olivier Elshocht and Jeroen Smeets and Ruqi Zhang and Pan Li}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=I5hf3opvgK} }
Graph Anomaly Detection (GAD) is essential in fields ranging from network security and bioinformatics to finance. Previous works often adopt auto-encoders to compute reconstruction errors for anomaly detection: anomalies are hard to reconstruct. In this work, we revisit the first principle for anomaly detection, i.e., the Neyman-Pearson rule, where the optimal anomaly detector is based on the likelihood of a data point given the normal distribution of data. However, in practice, the distribution is often unknown and the estimation of the distribution of graph-structured data may be hard. Moreover, the likelihood computation of a graph-structured data point may be challenging as well. In this paper, we propose a novel approach GAD-EBM that can estimate the distribution of graphs and compute likelihoods efficiently by using Energy-Based Models (EBMs) over graphs. GAD-EBM approaches the likelihood of a rooted subgraph of node v, and further can leverage the likelihood to accurately identify whether node v is anomalous or not. Traditional score matching for training EBMs cannot be directly applied to EBMs that model the distribution of graphs because of the complicated discreteness and multi-modality of graph data. We propose a Subgraph Score Matching (SSM) approach, which is specifically designed for graph data based on a novel framework of neighborhood state-space graphs. Experimentation conducted on six real-world datasets validates the effectiveness and efficiency of GAD-EBM, and the source code for GAD-EBM is openly available.
GAD-EBM: Graph Anomaly Detection using Energy-Based Models
[ "Amit Roy", "Juan Shu", "Olivier Elshocht", "Jeroen Smeets", "Ruqi Zhang", "Pan Li" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=HKdsrm5nCW
@inproceedings{ kataria2023linear, title={Linear Complexity Framework for Feature-Aware Graph Coarsening via Hashing}, author={Mohit Kataria and Aditi Khandelwal and Rocktim Das and Sandeep Kumar and Jayadeva Jayadeva}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=HKdsrm5nCW} }
Large-scale graphs are increasingly common in various applications, leading to significant computational challenges in data processing and analysis. To address this, coarsening algorithms are employed to reduce graph size while preserving key properties. However, existing methods for large-scale graphs are computationally intensive, undermining the coarsening goal. Additionally, real-world graphs often contain node-specific features or contexts, which current coarsening approaches overlook, focusing solely on structural information like adjacency matrices. This limitation may not suit downstream tasks reliant on node features. In this paper, we introduce a Feature-Aware graph Coarsening algorithm via Hashing, called FACH, inspired by locality sensitive hashing, to coarsen the graph based on the node features. To our knowledge, this is the first-ever method that coarsens a graph with node features in linear time. FACH is over 7× faster than the quickest existing technique and around 150× faster than others for datasets like Coauthor Physics, which has 34,493 nodes. We also demonstrate the efficacy of the proposed framework in terms of superior run-time complexity. The coarsened graph obtained by our method also preserves the spectral properties of the original graph while achieving a massive improvement in the time complexity of coarsening, which is the primary goal of our study. We showcase the effectiveness of FACH for the downstream task by evaluating the performance on scalable training of graph neural networks using coarsened data on benchmark real-world datasets.
Linear Complexity Framework for Feature-Aware Graph Coarsening via Hashing
[ "Mohit Kataria", "Aditi Khandelwal", "Rocktim Das", "Sandeep Kumar", "Jayadeva Jayadeva" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=EVp40Cz0PR
@inproceedings{ g2023satg, title={{SATG} : Structure Aware Transformers on Graphs for Node Classification}, author={Sumedh B G and Sanjay Patnala and Himil Vasava and Akshay Sethi and Sonia Gupta}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=EVp40Cz0PR} }
Transformers have achieved state-of-the-art performance in the fields of Computer Vision (CV) and Natural Language Processing (NLP). Inspired by this, architectures have come up in recent times that incorporate transformers into the domain of graph neural networks. Most of the existing Graph Transformers either take a set of all the nodes as an input sequence leading to quadratic time complexity or they take only one hop or k-hop neighbours as the input sequence, thereby completely ignoring any long-range interactions. To this end, we propose Structure Aware Transformer on Graphs (SATG), where we capture both short-range and long-range interactions in a computationally efficient manner. When it comes to dealing with non-euclidean spaces like graphs, positional encoding becomes an integral component to provide structural knowledge to the transformer. Upon observing the shortcomings of the existing set of positional encodings, we introduce a new class of positional encodings trained on a Neighbourhood Contrastive Loss that effectively captures the entire topology of the graph. We also introduce a method to effectively capture long-range interactions without having a quadratic time complexity. Extensive experiments done on five benchmark datasets show that SATG consistently outperforms GNNs by a substantial margin and also successfully outperforms other Graph Transformers.
SATG : Structure Aware Transformers on Graphs for Node Classification
[ "Sumedh B G", "Sanjay Patnala", "Himil Vasava", "Akshay Sethi", "Sonia Gupta" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=BaxFC3z9R6
@inproceedings{ li2023what, title={What Improves the Generalization of Graph Transformer? A Theoretical Dive into Self-attention and Positional Encoding}, author={Hongkang Li and Meng Wang and Tengfei Ma and Sijia Liu and ZAIXI ZHANG and Pin-Yu Chen}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=BaxFC3z9R6} }
Graph Transformers, which incorporate self-attention and positional encoding, have recently emerged as a powerful architecture for various graph learning tasks. Despite their impressive performance, the complex non-convex interactions across layers and the recursive graph structure have made it challenging to establish a theoretical foundation for learning and generalization. This study introduces the first theoretical investigation of a shallow Graph Transformer for semi-supervised node classification, comprising a self-attention layer with relative positional encoding and a two-layer perceptron. Focusing on a graph data model with discriminative nodes that determine node labels and non-discriminative nodes that are class-irrelevant, we characterize the sample complexity required to achieve a zero generalization error by training with stochastic gradient descent (SGD). This paper provides a quantitative characterization of the sample complexity and the number of iterations for convergence, dependent on the fraction of discriminative nodes, the dominant patterns, the fraction of erroneous labels, and the initial model errors. Furthermore, we demonstrate that self-attention and positional encoding enhance generalization by making the attention map sparse and promoting the core neighborhood during training, which explains the superior feature representation of Graph Transformers. Our theoretical results are supported by empirical experiments on synthetic and real-world benchmarks.
What Improves the Generalization of Graph Transformer? A Theoretical Dive into Self-attention and Positional Encoding
[ "Hongkang Li", "Meng Wang", "Tengfei Ma", "Sijia Liu", "ZAIXI ZHANG", "Pin-Yu Chen" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=9bD3oqcD6g
@inproceedings{ sohn2023todflow, title={{TOD}-Flow: Modeling the Structure of Task-Oriented Dialogues}, author={Sungryull Sohn and Yiwei Lyu and Anthony Liu and Lajanugen Logeswaran and Dong-Ki Kim and Dongsub Shim and Honglak Lee}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=9bD3oqcD6g} }
Task-Oriented Dialogue (TOD) systems have become crucial components in interactive artificial intelligence applications. While recent advances have capitalized on pre-trained language models (PLMs), they exhibit limitations regarding transparency and controllability. To address these challenges, we propose a novel approach focusing on inferring the TOD-flow graph from dialogue data annotated with dialog acts, uncovering the underlying task structure in the form of a graph. The inferred TOD-flow graph can be easily integrated with any dialogue model to improve its prediction performance, transparency, and controllability. Our TOD-flow graph learns what a model can, should, and should not predict, effectively reducing the search space and providing a rationale for the model's prediction. We show that the proposed TOD-flow graph better resembles human-annotated graphs compared to prior approaches. Furthermore, when combined with several dialogue policies and end-to-end dialogue models, we demonstrate that our approach significantly improves dialog act classification and end-to-end response generation performance on the MultiWOZ and SGD benchmarks.
TOD-Flow: Modeling the Structure of Task-Oriented Dialogues
[ "Sungryull Sohn", "Yiwei Lyu", "Anthony Liu", "Lajanugen Logeswaran", "Dong-Ki Kim", "Dongsub Shim", "Honglak Lee" ]
Workshop/GLFrontiers
poster
2312.04668
[ "https://github.com/srsohn/tod-flow" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=9ADkymyCPA
@inproceedings{ subramonian2023networked, title={Networked Inequality: Preferential Attachment Bias in Graph Neural Network Link Prediction}, author={Arjun Subramonian and Levent Sagun and Yizhou Sun}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=9ADkymyCPA} }
Graph neural network (GNN) link prediction is increasingly deployed in citation, collaboration, and online social networks to recommend academic literature, collaborators, and friends. While prior research has investigated the dyadic fairness of GNN link prediction, the within-group fairness and ``rich get richer'' dynamics of link prediction remain underexplored. However, these aspects have significant consequences for degree and power imbalances in networks. In this paper, we shed light on how degree bias in networks affects Graph Convolutional Network (GCN) link prediction. In particular, we theoretically uncover that GCNs with a symmetric normalized graph filter have a within-group preferential attachment bias. We validate our theoretical analysis on real-world citation, collaboration, and online social networks. We further bridge GCN's preferential attachment bias with unfairness in link prediction and propose a new within-group fairness metric. This metric quantifies disparities in link prediction scores between social groups, towards combating the amplification of degree and power disparities. Finally, we propose a simple training-time strategy to alleviate within-group unfairness, and we show that it is effective on citation, online social, and credit networks.
Networked Inequality: Preferential Attachment Bias in Graph Neural Network Link Prediction
[ "Arjun Subramonian", "Levent Sagun", "Yizhou Sun" ]
Workshop/GLFrontiers
oral
2309.17417
[ "https://github.com/arjunsubramonian/link_bias_amplification" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=947KhgKKGG
@inproceedings{ wang2023towards, title={Towards Flexible, Efficient, and Effective Tensor Product Networks}, author={Nanxiang Wang and Chen Lin and Michael Bronstein and Philip Torr}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=947KhgKKGG} }
Geometric graph neural networks have showcased exceptional performance in modelling geometric data. These models rely heavily on equivariant operations, encompassing vital techniques such as scalarization and the Clebsch-Gordan tensor product. However, tensor-product-based architectures face substantial computational challenges as the representation order increases, significantly limiting their versatility. Moreover, the interpretability of interactions between steerable components remains elusive. In contrast, scalarization methods benefit from cost-efficient invariant scalar operations while still being capable of outperforming certain tensor-product-based models. To bridge the gap between these approaches, we introduce a conceptual framework that emphasizes the potential flexibility in designing tensor product networks. To provide guidance for efficient framework design and gain deeper insights into steerable components, we conduct a preliminary investigation by pruning tensor product interactions. This approach enables us to directly assess the redundancy and significance of steerable components, paving the way for efficient and effective designs.
Towards Flexible, Efficient, and Effective Tensor Product Networks
[ "Nanxiang Wang", "Chen Lin", "Michael Bronstein", "Philip Torr" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=7xlWBzaeJN
@inproceedings{ li2023chatpathway, title={ChatPathway: Conversational Large Language Models for Biology Pathway Detection}, author={Yanjing Li and Hannan Xu and Haiteng Zhao and Hongyu Guo and Shengchao Liu}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=7xlWBzaeJN} }
Biological pathways, like protein-protein interactions and metabolic networks, are vital for understanding diseases and drug development. Some databases such as KEGG are designed to store and map these pathways. However, many bioinformatics methods face limitations due to database constraints, and certain deep learning models struggle with the complexities of biochemical reactions involving large molecules and diverse enzymes. Importantly, the thorough exploration of biological pathways demands a deep understanding of scientific literature and past research. Despite this, recent advancements in Large Language Models (LLMs), especially ChatGPT, show promise. We first restructured data from KEGG and augmented it with molecule structural and functional information sourced from UniProt and PubChem. Our study evaluated LLMs, particularly GPT-3.5-turbo and Galactica, in predicting biochemical reactions and pathways using our constructed data. We also assessed its ability to predict novel pathways, not covered in its training dataset, using findings from recently published studies. While GPT demonstrated strengths in pathway mapping, Galactica encountered challenges. This research emphasizes the potential of merging LLMs with biology, suggesting a harmonious blend of human expertise and AI in decoding biological systems.
ChatPathway: Conversational Large Language Models for Biology Pathway Detection
[ "Yanjing Li", "Hannan Xu", "Haiteng Zhao", "Hongyu Guo", "Shengchao Liu" ]
Workshop/GLFrontiers
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=7CAJpRo1Q8
@inproceedings{ fatemi2023talk, title={Talk like a Graph: Encoding Graphs for Large Language Models}, author={Bahare Fatemi and Jonathan Halcrow and Bryan Perozzi}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=7CAJpRo1Q8} }
Graphs are a powerful tool for representing and analyzing complex relationships in real-world applications such as social networks, recommender systems, and computational finance. Reasoning on graphs is essential for drawing inferences about the relationships between entities in a complex system, and to identify hidden patterns and trends. Despite the remarkable progress in automated reasoning with natural text, reasoning on graphs with large language models (LLMs) remains an understudied problem. In this work, we perform the first comprehensive study of encoding graph-structured data as text for consumption by LLMs. We show that LLM performance on graph reasoning tasks varies on three fundamental levels: (1) the graph encoding method, (2) the nature of the graph task itself, and (3) interestingly, the very structure of the graph considered. These novel results provide valuable insight on strategies for encoding graphs as text. Using these insights we illustrate how the correct choice of encoders can boost performance on graph reasoning tasks inside LLMs by 4.8% to 61.8%, depending on the task.
Talk like a Graph: Encoding Graphs for Large Language Models
[ "Bahare Fatemi", "Jonathan Halcrow", "Bryan Perozzi" ]
Workshop/GLFrontiers
poster
2310.04560
[ "" ]
https://huggingface.co/papers/2310.04560
0
3
0
3
1
[]
[]
[]
null
https://openreview.net/forum?id=6UuNTxAB1t
@inproceedings{ inae2023motifaware, title={Motif-aware Attribute Masking for Molecular Graph Pre-training}, author={Eric Inae and Gang Liu and Meng Jiang}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=6UuNTxAB1t} }
Attribute reconstruction is used to predict node or edge features in the pre-training of graph neural networks. Given a large number of molecules, they learn to capture structural knowledge, which is transferable for various downstream property prediction tasks and vital in chemistry, biomedicine, and material science. Previous strategies that randomly select nodes to do attribute masking leverage the information of local neighbors. However, the over-reliance on these neighbors inhibits the model's ability to learn long-range dependencies from higher-level substructures. For example, the model would learn little from predicting three carbon atoms in a benzene ring based on the other three but could learn more from the inter-connections between the functional groups, also called chemical motifs. In this work, we propose and investigate motif-aware attribute masking strategies to capture long-range inter-motif structures by leveraging the information of atoms in neighboring motifs. Once each graph is decomposed into disjoint motifs, the features for every node within a sample motif are masked. The graph decoder then predicts the masked features of each node within the motif for reconstruction. We evaluate our approach on eight molecular property prediction datasets and demonstrate its advantages.
Motif-aware Attribute Masking for Molecular Graph Pre-training
[ "Eric Inae", "Gang Liu", "Meng Jiang" ]
Workshop/GLFrontiers
oral
2309.04589
[ "https://github.com/einae-nd/moama-dev" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=60fJkjHV0r
@inproceedings{ abbahaddou2023graph, title={Graph Neural Networks on Discriminative Graphs of Words}, author={Yassine ABBAHADDOU and Johannes Lutzeyer and Michalis Vazirgiannis}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=60fJkjHV0r} }
In light of the recent success of Graph Neural Networks (GNNs) and their ability to perform inference on complex data structures, many studies apply GNNs to the task of text classification. In most previous methods, a heterogeneous graph, containing both word and document nodes, is constructed using the entire corpus and a GNN is used to classify document nodes. In this work, we explore a new Discriminative Graph of Words Graph Neural Network (DGoW-GNN) approach encapsulating both a novel discriminative graph construction and model to classify text. In our graph construction, containing only word nodes and no document nodes, we split the training corpus into disconnected subgraphs according to their labels and weight edges by the pointwise mutual information of the represented words. Our graph construction, for which we provide theoretical motivation, allows us to reformulate the task of text classification as the task of walk classification. We also propose a new model for the graph-based classification of text, which combines a GNN and a sequence model. We evaluate our approach on seven benchmark datasets and find that it is outperformed by several state-of-the-art baseline models. We analyse reasons for this performance difference and hypothesise under which conditions it is likely to change. Our code is publicly available at: https://github.com/abbahaddou/DGOW.
Graph Neural Networks on Discriminative Graphs of Words
[ "Yassine ABBAHADDOU", "Johannes Lutzeyer", "Michalis Vazirgiannis" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=5RA0ysv8PX
@inproceedings{ trivedi2023estimating, title={Estimating Epistemic Uncertainty of Graph Neural Networks using Stochastic Centering}, author={Puja Trivedi and Mark Heimann and Rushil Anirudh and Danai Koutra and Jayaraman J. Thiagarajan}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=5RA0ysv8PX} }
While graph neural networks (GNNs) are widely used for node and graph representation learning tasks, the reliability of GNN uncertainty estimates under distribution shifts remains relatively under-explored. Indeed, while \textit{post-hoc} calibration strategies can be used to improve in-distribution calibration, they need not also improve calibration under distribution shift. However, techniques which produce GNNs with better \textit{intrinsic} uncertainty estimates are particularly valuable, as they can always be combined with post-hoc strategies later. Therefore, in this work, we propose G-$\Delta$UQ, a novel training framework designed to improve intrinsic GNN uncertainty estimates. Our framework adapts the principle of stochastic data centering to graph data through novel graph anchoring strategies, and is able to support partially stochastic GNNs. While the prevalent wisdom is that fully stochastic networks are necessary to obtain reliable estimates, we find that the functional diversity induced by our anchoring strategies when sampling hypotheses renders this unnecessary and allows us to support G-$\Delta$UQ on pretrained models. Indeed, through extensive evaluation under covariate, concept and graph size shifts, we show that G-$\Delta$UQ leads to better calibrated GNNs for node and graph classification. Further, it also improves performance on the uncertainty-based tasks of out-of-distribution detection and generalization gap estimation. Overall, our work provides insights into uncertainty estimation for GNNs, and demonstrates the utility of G-$\Delta$UQ in obtaining reliable estimates.
Estimating Epistemic Uncertainty of Graph Neural Networks using Stochastic Centering
[ "Puja Trivedi", "Mark Heimann", "Rushil Anirudh", "Danai Koutra", "Jayaraman J. Thiagarajan" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=4tGqks76l7
@inproceedings{ hajiramezanali2023on, title={On the Consistency of {GNN} Explainability Methods}, author={Ehsan Hajiramezanali and Sepideh Maleki and Alex Tseng and Aicha BenTaieb and Gabriele Scalia and Tommaso Biancalani}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=4tGqks76l7} }
Despite the widespread utilization of post-hoc explanation methods for graph neural networks (GNNs) in high-stakes settings, there has been a lack of comprehensive evaluation regarding their quality and reliability. This evaluation is challenging primarily due to the non-Euclidean nature of the data, its arbitrary size, and its complex topological structure. In this context, we argue that the \emph{consistency} of GNN explanations, denoting the ability to produce similar explanations for input graphs with minor structural changes that do not alter their output predictions, is a key requirement for effective post-hoc GNN explanations. To fill this gap, we introduce a novel metric based on the Fused Gromov-Wasserstein distance to quantify consistency.
On the Consistency of GNN Explainability Methods
[ "Ehsan Hajiramezanali", "Sepideh Maleki", "Alex Tseng", "Aicha BenTaieb", "Gabriele Scalia", "Tommaso Biancalani" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=421Y6z81UZ
@inproceedings{ vora2023gnn, title={{GNN} Predictions on k-hop Egonets Boosts Adversarial Robustness}, author={Jian Vora}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=421Y6z81UZ} }
Like many other deep learning models, Graph Neural Networks (GNNs) have been shown to be susceptible to adversarial attacks, i.e., the addition of crafted imperceptible noise to input data changes the model predictions drastically. We propose a very simple method, k-HOP-PURIFY, which makes node predictions on a k-hop Egonet centered at the node instead of on the entire graph, and show that this boosts adversarial accuracies. It can be used either as i) a post-processing step after applying popular defenses or ii) a standalone defense method that is comparable to many other competitors. The method is extremely lightweight and scalable (it takes 4 lines of code to implement), unlike many other defense methods which are computationally expensive or rely on heuristics. We show performance gains through extensive experimentation across various types of attacks (poison/evasion, targeted/untargeted), perturbation rates, and defenses implemented in the DeepRobust Library.
GNN Predictions on k-hop Egonets Boosts Adversarial Robustness
[ "Jian Vora" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=1JkgXzKKdo
@inproceedings{ younesian2023grapes, title={{GRAPES}: Learning to Sample Graphs for Scalable Graph Neural Networks}, author={Taraneh Younesian and Thiviyan Thanapalasingam and Emile van Krieken and Daniel Daza and Peter Bloem}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=1JkgXzKKdo} }
Graph neural networks (GNNs) learn the representation of nodes in a graph by aggregating the neighborhood information in various ways. As these networks grow in depth, their receptive field grows exponentially due to the increase in neighborhood sizes, resulting in high memory costs. Graph sampling solves memory issues in GNNs by sampling a small ratio of the nodes in the graph. This way, GNNs can scale to much larger graphs. Most sampling methods focus on fixed sampling heuristics, which may not generalize to different structures or tasks. We introduce GRAPES, an adaptive graph sampling method that learns to identify sets of influential nodes for training a GNN classifier. GRAPES uses a GFlowNet to learn node sampling probabilities given the classification objectives. We evaluate GRAPES across several small- and large-scale graph benchmarks and demonstrate its effectiveness in accuracy and scalability. In contrast to existing sampling methods, GRAPES maintains high accuracy even with small sample sizes and, therefore, can scale to very large graphs. Our code is publicly available at https://anonymous.4open.science/r/GRAPES.
GRAPES: Learning to Sample Graphs for Scalable Graph Neural Networks
[ "Taraneh Younesian", "Thiviyan Thanapalasingam", "Emile van Krieken", "Daniel Daza", "Peter Bloem" ]
Workshop/GLFrontiers
poster
2310.03399
[ "https://github.com/dfdazac/grapes" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=0oY0LPPH4e
@inproceedings{ zhang2023robust, title={Robust Hierarchical Scene Graph Generation}, author={Ce Zhang and Simon Stepputtis and Joseph Campbell and Katia Sycara and Yaqi Xie}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=0oY0LPPH4e} }
The ability to quickly understand scenes from visual observations via structured representations, known as Scene Graph Generation (SGG), is a crucial component of perception models. Despite recent advancements, most existing models assume perfect observations, an often-unrealistic condition in real-world scenarios. Such models can struggle with visual inputs affected by natural corruptions such as sunlight glare, extreme weather conditions, and smoke. Drawing inspiration from human hierarchical reasoning skills (i.e., from higher to lower levels) as a defense against corruption, we propose a new framework called Hierarchical Knowledge Enhanced Robust Scene Graph Generation (HiKER-SGG). First, we create a hierarchical knowledge graph, facilitating machine comprehension of this structured knowledge. Then we bridge between the constructed graph and the initial scene graph and perform message passing for hierarchical graph reasoning. Finally, we propose a hierarchical prediction head to enable the model to predict from a higher to lower level, thus enhancing robustness against corruptions that frequently impact only fine-grained details. Experiments on various settings confirm the superior performance of the proposed framework with both clean and corrupted images.
Robust Hierarchical Scene Graph Generation
[ "Ce Zhang", "Simon Stepputtis", "Joseph Campbell", "Katia Sycara", "Yaqi Xie" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=0lZjoxTZFb
@inproceedings{ christie2023higherorder, title={Higher-Order Expander Graph Propagation}, author={Thomas Christie and Yu He}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=0lZjoxTZFb} }
Graph neural networks operate on graph-structured data via exchanging messages along edges. One limitation of this message passing paradigm is the over-squashing problem. Over-squashing occurs when messages from a node's expanded receptive field are compressed into fixed-size vectors, potentially causing information loss. To address this issue, recent works have explored using expander graphs, which are highly-connected sparse graphs with low diameters, to perform message passing. However, current methods on expander graph propagation only consider pair-wise interactions, ignoring higher-order structures in complex data. To explore the benefits of capturing these higher-order correlations while still leveraging expander graphs, we introduce higher-order expander graph propagation. We propose two methods for constructing bipartite expanders and evaluate their performance on both synthetic and real-world datasets.
Higher-Order Expander Graph Propagation
[ "Thomas Christie", "Yu He" ]
Workshop/GLFrontiers
poster
2311.07966
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=0YeJyvv2rO
@inproceedings{ yue2023mcgc, title={{MCGC}: an {MLP}-based supervised Contrastive learning framework for Graph Classification}, author={Xiao Yue and Bo Liu and Andrew Meng and Guangzhi Qu}, booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning}, year={2023}, url={https://openreview.net/forum?id=0YeJyvv2rO} }
Graph Neural Networks (GNNs) have been widely used for tasks involving graph-structured data. These networks create matrix representations of graphs by recursively aggregating node information from neighboring nodes. By integrating GNNs with contrastive learning, graph contrastive learning has shown enhanced performance on graph-level tasks. However, architectures of graph contrastive learning frameworks become complicated due to the sophisticated structures of GNN-based encoders and the necessity of both an encoder and a projection head. In this paper, we propose a significantly simplified MLP-based supervised contrastive learning framework for graph classification tasks, coined as MCGC, which does not incorporate any GNN layers. Experimental results on graph benchmark datasets and ablation studies indicate that, despite not utilizing GNN layers, our framework achieves comparable or even superior performance on graph classification tasks relative to some state-of-the-art models.
MCGC: an MLP-based supervised Contrastive learning framework for Graph Classification
[ "Xiao Yue", "Bo Liu", "Andrew Meng", "Guangzhi Qu" ]
Workshop/GLFrontiers
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zsNbN3NkzR
@inproceedings{ anonymous2023irreducible, title={Irreducible Curriculum for Language Model Pretraining}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=zsNbN3NkzR} }
Automatic data selection and curriculum design for training large language models is challenging, with only a few existing methods showing improvements over standard training. Furthermore, current schemes focus on domain-level selection, overlooking the more fine-grained contributions of each individual training point. It is difficult to apply traditional datapoint selection methods to large language models: most online batch selection methods perform forward or backward passes twice, which introduces considerable extra cost with large-scale models. To mitigate these obstacles, we propose irreducible curriculum as a curriculum learning algorithm for language model pretraining, which prioritizes samples with higher learnability. Specifically, to avoid prohibitive extra computational overhead, we simulate the sample loss along the main model's training trajectory using a small-scale proxy model. Our experiments on the RedPajama-1B dataset demonstrate a consistent improvement in validation perplexity across all 7 domains compared to the random uniform baseline and the anti-curriculum strategy. Our method also reduces the sharpness of the network and yields better 5-shot accuracy on MMLU benchmarks.
Irreducible Curriculum for Language Model Pretraining
null
Workshop/ATTRIB
poster
2310.15389
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xjmOQRf1OI
@inproceedings{ anonymous2023evaluating, title={Evaluating the Utility of Model Explanations for Model Development}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=xjmOQRf1OI} }
One of the motivations for explainable AI is to allow humans to make better and more informed decisions regarding the use and deployment of AI models. But careful evaluations are needed to assess whether this expectation has been fulfilled. Current evaluations mainly focus on algorithmic properties of explanations, and those that involve human subjects often employ subjective questions to test humans' perception of explanation usefulness, without being grounded in objective metrics and measurements. In this work, we evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development. We conduct a mixed-methods user study involving image data to evaluate saliency maps generated by SmoothGrad, GradCAM, and an oracle explanation on two tasks: model selection and counterfactual simulation. To our surprise, we did not find evidence of significant improvement on these tasks when users were provided with any of the saliency maps, even the synthetic oracle explanation designed to be simple to understand and highly indicative of the answer. Nonetheless, explanations did help users more accurately describe the models. These findings suggest caution regarding the usefulness and potential for misunderstanding in saliency-based explanations.
Evaluating the Utility of Model Explanations for Model Development
null
Workshop/ATTRIB
poster
2312.06032
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=wyalhhLXCa
@inproceedings{ anonymous2023the, title={The Importance of Prompt Tuning for Automated Neuron Explanations}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=wyalhhLXCa} }
Recent advances have greatly increased the capabilities of large language models (LLMs), but our understanding of the models and their safety has not progressed as fast. In this paper we aim to understand LLMs deeper by studying their individual neurons. We build upon previous work showing large language models such as GPT-4 can be useful in explaining what each neuron in a language model does. Specifically, we analyze the effect of the prompt used to generate explanations and show that reformatting the explanation prompt in a more natural way can significantly improve neuron explanation quality and greatly reduce computational cost. We demonstrate the effects of our new prompts in three different ways, incorporating both automated and human evaluations.
The Importance of Prompt Tuning for Automated Neuron Explanations
null
Workshop/ATTRIB
poster
2310.06200
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uSvN2oozRK
@inproceedings{ anonymous2023does, title={Does It Know?: Probing for Uncertainty in Language Model Latent Beliefs}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=uSvN2oozRK} }
Understanding a language model's beliefs about its truthfulness is crucial for building more trustworthy, factually accurate large language models. The recent method of Contrast-Consistent Search (CCS) measures this "latent belief" via a linear probe on intermediate activations of a language model, trained in an unsupervised manner to classify inputs as true or false. As an extension of CCS, we propose Uncertainty-detecting CCS (UCCS), which encapsulates finer-grained notions of truth, such as uncertainty or ambiguity. Concretely, UCCS teaches a probe, using only unlabeled data, to classify a model's latent belief on input text as true, false, or uncertain. We find that UCCS is an effective unsupervised-trained selective classifier, using its uncertainty class to filter out low-confidence truth predictions, leading to improved accuracy across a diverse set of models and tasks. To properly evaluate UCCS predictions of truth and uncertainty, we introduce a toy dataset, named Temporally Measured Events (TYMES), which comprises true or falsified facts, paired with timestamps, extracted from recent news articles from the past several years. TYMES can be combined with any language model's training cutoff date to systematically produce a subset of data beyond (literally, occurring after) the knowledge limitations of the model. TYMES serves as a valuable proof-of-concept for how we can benchmark uncertainty or time-sensitive world knowledge in language models, a setting which includes but extends beyond our UCCS evaluations.
Does It Know?: Probing and Benchmarking Uncertainty in Language Model Latent Beliefs
null
Workshop/ATTRIB
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=tiLbFR4bJW
@inproceedings{ anonymous2023attribution, title={Attribution Patching Outperforms Automated Circuit Discovery}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=tiLbFR4bJW} }
Automated interpretability research has recently attracted attention as a potential research direction that could scale explanations of neural network behavior to large models. Existing automated circuit discovery work applies activation patching to identify subnetworks responsible for solving specific tasks (circuits). In this work, we show that a simple method based on attribution patching outperforms all existing methods while requiring just two forward passes and a backward pass. We apply a linear approximation to activation patching to estimate the importance of each edge in the computational subgraph. Using this approximation, we prune the least important edges of the network. We survey the performance and limitations of this method, finding that, averaged over all tasks, our method achieves greater AUC for circuit recovery than other methods.
Attribution Patching Outperforms Automated Circuit Discovery
null
Workshop/ATTRIB
poster
2310.10348
[ "https://github.com/aaquib111/acdcpp" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=nb3aaC9Hr8
@inproceedings{ anonymous2023in, title={In Search of a Data Transformation that Accelerates Neural Field Training}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=nb3aaC9Hr8} }
A neural field is a special type of neural network that represents a single datum. We study whether we can speed up the training of such neural networks by fitting a transformed version of the target datum; one can recover the original signal by inverting the signal represented by the trained neural field. We empirically find that very simple data transformations, such as color inversion or random pixel shuffling, can substantially speed up or slow down the training. In particular, to our surprise, we observe that an image with randomly shuffled pixels can be fit much faster, despite having very high frequency content.
In Search of a Data Transformation that Accelerates Neural Field Training
null
Workshop/ATTRIB
poster
2311.17094
[ "https://github.com/effl-lab/dt4neural-field" ]
https://huggingface.co/papers/2311.17094
3
2
0
4
1
[]
[]
[ "lyunm1206/Interactive_Loss_Landscapes" ]
null
https://openreview.net/forum?id=lG08cfNhlc
@inproceedings{ anonymous2023automatic, title={Automatic Discovery of Visual Circuits}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=lG08cfNhlc} }
To date, most discoveries of subnetworks that implement human-interpretable computations in deep vision models have involved close study of single units and large amounts of human labor. We explore scalable methods for extracting the subgraph of a vision model’s computational graph that underlies a particular capability. In this paper, we formulate capabilities as mappings of human-interpretable visual concepts to intermediate feature representations. We introduce a new method for identifying these subnetworks: specifying a visual concept using a few examples, and then tracing the interdependence of neuron activations across layers, or their functional connectivity. We find that our approach extracts circuits that causally affect model output, and that editing these circuits can defend large pretrained models from adversarial attacks.
Automatic Discovery of Visual Circuits
null
Workshop/ATTRIB
poster
2404.14349
[ "https://github.com/multimodal-interpretability/visual-circuits" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=lDeysxpH6W
@inproceedings{ anonymous2023mining, title={Mining the Diamond Miner: Mechanistic Interpretability on the Video PreTraining Agent}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=lDeysxpH6W} }
Although decision-making systems based on reinforcement learning (RL) can be widely used in a variety of applications, their lack of interpretability raises concerns, especially in high-stakes scenarios. In contrast, Mechanistic Interpretability (MI) has shown potential in breaking down complex deep neural networks into understandable components in language and vision tasks. Accordingly, in this study, we apply MI to understand the behavior of a Video PreTraining (VPT) agent, exhibiting human-level proficiency in numerous Minecraft tasks. Our exploration is centered on the task of diamond mining and its associated subtasks, such as crafting wooden logs and iron pickaxes. By employing circuit analysis, we aim to decode the network's representation of these tasks and subtasks. We find a significant head in the VPT model encoding for an attacking action, although its ablation doesn't markedly affect the agent's performance. Our findings indicate that this approach can provide useful insights into the agent's behavior.
Mining the Diamond Miner: Mechanistic Interpretability on the Video PreTraining Agent
null
Workshop/ATTRIB
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=klVoedPgtY
@inproceedings{ anonymous2023threshold, title={Threshold {KNN}-Shapley: A Linear-Time and Privacy-Friendly Approach to Data Valuation}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=klVoedPgtY} }
Data valuation aims to quantify the usefulness of individual data sources in training machine learning (ML) models, and is a critical aspect of data-centric ML research. However, data valuation faces significant yet frequently overlooked privacy challenges despite its importance. This paper studies these challenges with a focus on KNN-Shapley, one of the most practical data valuation methods nowadays. We first emphasize the inherent privacy risks of KNN-Shapley, and demonstrate the significant technical difficulties in adapting KNN-Shapley to accommodate differential privacy (DP). To overcome these challenges, we introduce \emph{TKNN-Shapley}, a refined variant of KNN-Shapley that is privacy-friendly, allowing for straightforward modifications to incorporate DP guarantee (\emph{DP-}TKNN-Shapley). We show that DP-TKNN-Shapley has several advantages and offers a superior privacy-utility tradeoff compared to naively privatized KNN-Shapley in discerning data quality. Moreover, even non-private TKNN-Shapley achieves comparable performance as KNN-Shapley. Overall, our findings suggest that TKNN-Shapley is a promising alternative to KNN-Shapley, particularly for real-world applications involving sensitive data.
Threshold KNN-Shapley: A Linear-Time and Privacy-Friendly Approach to Data Valuation (Workshop Version)
null
Workshop/ATTRIB
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=k4cSPcBxX0
@inproceedings{ anonymous2023how, title={How Does Colour and Shape Goal Misgeneralization Happen in Reinforcement Learning}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=k4cSPcBxX0} }
We explore colour versus shape goal misgeneralization originally demonstrated by Di Langosco et al. (2022) in the Procgen Maze environment, where, given an ambiguous choice, the agents seem to prefer generalization based on colour rather than shape. After training over 1,000 agents in a simplified version of the environment and evaluating them on over 10 million episodes, we conclude that the behaviour can be attributed to the agents learning to detect the goal object through a specific colour channel. This choice is arbitrary. Additionally, we show how, due to underspecification, the preferences can change when retraining the agents using exactly the same procedure except for using a different random seed for the training run. Finally, we demonstrate the existence of outliers in out-of-distribution behaviour based on training random seed alone.
Colour versus Shape Goal Misgeneralization in Reinforcement Learning: A Case Study
null
Workshop/ATTRIB
poster
2312.03762
[ "https://github.com/KarolisRam/colour-shape-goal-misgeneralization" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ie6hRGMCp2
@inproceedings{ anonymous2023adversarial, title={Adversarial Attacks on Neuron Interpretation via Activation Maximization}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=ie6hRGMCp2} }
Feature visualization is one of the most popular techniques to interpret the internal behavior of individual units of trained deep neural networks. Based on activation maximization, they consist of finding $\textit{synthetic}$ or $\textit{natural}$ inputs that maximize neuron activations. This paper introduces an optimization framework that aims to deceive feature visualization through adversarial model manipulation. It consists of fine-tuning a pre-trained model with a specifically introduced loss that aims to maintain model performance, while also significantly changing feature visualization. We provide evidence of the success of this manipulation on several pre-trained models for the ImageNet classification task.
Adversarial Attacks on Neuron Interpretation via Activation Maximization
null
Workshop/ATTRIB
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=i0x5EKSDdq
@inproceedings{ anonymous2023divergence, title={Divergence at the Interpolation Threshold: Identifying, Interpreting \& Ablating the Sources of a Deep Learning Puzzle}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=i0x5EKSDdq} }
Machine learning models misbehave, often in unexpected ways. One prominent misbehavior is when the test loss diverges at the interpolation threshold, perhaps best known from its distinctive appearance in double descent. While considerable theoretical effort has gone into understanding generalization of overparameterized models, less attention has been paid to why the test loss misbehaves at the interpolation threshold. Moreover, analytically solvable models in this area employ a range of assumptions and use complex techniques from random matrix theory, statistical mechanics, and kernel methods, making it difficult to assess when and why test error might diverge. In this work, we analytically study the simplest supervised model - ordinary linear regression - and show intuitively and rigorously when and why a divergence occurs at the interpolation threshold using basic linear algebra. We identify three interpretable factors that, when all present, cause the divergence. We demonstrate on real data that linear models' test losses diverge at the interpolation threshold and that the divergence disappears when we ablate any one of the three identified factors. We conclude with insights on recent discoveries in nonlinear models regarding superposition and double descent.
Divergence at the Interpolation Threshold: Identifying, Interpreting & Ablating the Sources of a Deep Learning Puzzle
null
Workshop/ATTRIB
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=hGtjFfO27f
@inproceedings{ anonymous2023the, title={The Reversal Curse: {LLM}s trained on ''A is B'' fail to learn ''B is A''}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=hGtjFfO27f} }
We expose a surprising failure of generalization in auto-regressive large language models (LLMs). If a model is trained on a sentence of the form "*A is B*", it will not automatically generalize to the reverse direction "*B is A*". This is the **Reversal Curse**. For instance, if a model is trained on "Olaf Scholz was the ninth Chancellor of Germany", it will not automatically be able to answer the question, "Who was the ninth Chancellor of Germany?". Moreover, the likelihood of the correct answer ("Olaf Scholz") will not be higher than for a random name. Thus, models exhibit a basic failure of logical deduction and do not generalize a prevalent pattern in their training set (i.e. if "*A is B*" occurs, "*B is A*" is more likely to occur). We provide evidence for the Reversal Curse by finetuning GPT-3 and Llama-1 on fictitious statements such as "Uriah Hawthorne is the composer of *Abyssal Melodies*" and showing that they fail to correctly answer "Who composed *Abyssal Melodies?*". The Reversal Curse is robust across model sizes and model families and is not alleviated by data augmentation. We also evaluate ChatGPT (GPT-3.5 and GPT-4) on questions about real-world celebrities, such as "Who is Tom Cruise's mother? [A: Mary Lee Pfeiffer]" and the reverse "Who is Mary Lee Pfeiffer's son?". GPT-4 correctly answers questions like the former 79% of the time, compared to 33% for the latter. This shows a failure of logical deduction that we hypothesize is caused by the Reversal Curse.
The Reversal Curse: LLMs trained on "A is B" fail to learn "B is A"
null
Workshop/ATTRIB
poster
2309.12288
[ "https://github.com/lukasberglund/reversal_curse" ]
https://huggingface.co/papers/2309.12288
2
3
0
7
1
[]
[ "lberglund/reversal_curse" ]
[]
null
https://openreview.net/forum?id=giMJzZIuzr
@inproceedings{ anonymous2023the, title={The Geometry of Truth: Emergent Linear Structure in Large Language Model Representations of True/False Datasets}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=giMJzZIuzr} }
Large Language Models (LLMs) have impressive capabilities, but are also prone to outputting falsehoods. Recent work has developed techniques for inferring whether an LLM is telling the truth by training probes on the LLM's internal activations. However, this line of work is controversial, with some authors pointing out failures of these probes to generalize in basic ways, among other conceptual issues. In this work, we curate high-quality datasets of true/false statements and use them to study in detail the structure of LLM representations of truth, drawing on three lines of evidence: 1. Visualizations of LLM true/false statement representations, which reveal clear linear structure. 2. Transfer experiments in which probes trained on one dataset generalize to different datasets. 3. Causal evidence obtained by surgically intervening in an LLM's forward pass, causing it to treat false statements as true and vice versa. Overall, we present evidence that language models linearly represent the truth or falsehood of factual statements. We also introduce a novel technique, mass-mean probing, which generalizes better and is more causally implicated in model outputs than other probing techniques.
The Geometry of Truth: Emergent Linear Structure in Large Language Model Representations of True/False Datasets
null
Workshop/ATTRIB
poster
2310.06824
[ "https://github.com/saprmarks/geometry-of-truth" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=csilNwWnoG
@inproceedings{ anonymous2023efficient, title={Efficient Data Valuation for Weighted Nearest Neighbor Algorithms}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=csilNwWnoG} }
Data Shapley is a principled way to assess the importance of individual training data sources for machine learning (ML) applications. However, it often comes with computational challenges in calculating exact Data Shapley scores. KNN-Shapley \citep{jia2019efficient}, which assigns data value leveraging the efficiently computable Data Shapley score of $K$ nearest neighbors (KNN), has gained popularity as a viable alternative due to its computationally efficient nature. However, \cite{jia2019efficient} only provides a practical algorithm for computing Data Shapley for unweighted KNN, while weighted KNN is more prevalent in practice. This work addresses the computational challenges of calculating the exact Data Shapley for weighted KNN classifiers (WKNN-Shapley). By making small adjustments to KNN configurations, we recast the computation of WKNN-Shapley into a counting problem and introduce an $O(K^2 N^2)$ algorithm, presenting a notable improvement over the naive, impractical $O(N^K)$ algorithm. We also develop a deterministic approximation algorithm that further improves computational efficiency while maintaining the key fairness properties of the Shapley value. These advancements position WKNN-Shapley as a compelling alternative to KNN-Shapley. In particular, WKNN-Shapley can select high-quality data points and improve the performance of retrieval-augmented language models.
Efficient Data Valuation for Weighted Nearest Neighbor Algorithms
null
Workshop/ATTRIB
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=cGui9S5GqL
@inproceedings{ anonymous2023is, title={Is This the Subspace You Are Looking for? An Interpretability Illusion for Subspace Activation Patching}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=cGui9S5GqL} }
Mechanistic interpretability aims to understand model behaviors in terms of specific, interpretable features, often hypothesized to manifest as low-dimensional subspaces of activations. Specifically, recent studies have explored subspace interventions (such as activation patching) as a way to both manipulate model behavior and attribute the features behind it to given subspaces. In this work, we demonstrate that these two aims diverge, potentially leading to an illusory sense of interpretability. Counterintuitively, even if a subspace intervention modifies end-to-end model behavior in the desired way, this effect may be achieved by activating a \emph{dormant parallel pathway} leveraging a component that is \emph{causally disconnected} from model outputs. We demonstrate this phenomenon in a distilled mathematical example, in two real-world domains (the indirect object identification task and factual recall), and present evidence for its prevalence in practice. In the context of factual recall, we further show a link to rank-1 fact editing, providing a mechanistic explanation for previous work observing an inconsistency between fact editing performance and fact localization. Finally, we remark on what a success case of subspace activation patching looks like.
Is This the Subspace You Are Looking for? An Interpretability Illusion for Subspace Activation Patching
null
Workshop/ATTRIB
poster
2311.17030
[ "https://github.com/amakelov/activation-patching-illusion" ]
https://huggingface.co/papers/2311.17030
3
0
0
3
1
[]
[]
[]
null
https://openreview.net/forum?id=bqOzmRPGCX
@inproceedings{ anonymous2023object, title={Object Detection in Deep Neural Networks Differs from Humans in the Periphery}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=bqOzmRPGCX} }
To understand how strategies used by object detection models compare to those in human vision, we simulate peripheral vision in object detection models at the input stage. We collect human data on object change detection in the periphery and compare it to detection models with a simulated periphery. We find that unlike humans, models are highly sensitive to the texture-like transformation in peripheral vision. Not only do models under-perform compared to humans, but they also do not follow the same clutter effects as humans, even when fixing the model task to closely mimic the human one. Training on peripheral input boosts performance on the change detection task, but appears to aid object localization in the periphery much more than object identification. This suggests that human-like performance is not attributable to input data alone, and to fully address the differences we see in human and model detection, further downstream changes may be necessary. In the future, improving alignment between object detection models and human representations could help us build models with more human-explainable detection strategies.
Object Detection in Deep Neural Networks Differs from Humans in the Periphery
null
Workshop/ATTRIB
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Zqj3YQ1QAC
@inproceedings{ anonymous2023formal, title={Formal Definition of Fingerprints Improves Attribution of Generative Models}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=Zqj3YQ1QAC} }
Recent works have shown that generative models leave traces of their underlying generative process on the generated samples, broadly referred to as fingerprints of a generative model, and have studied their utility in distinguishing synthetic images from real ones. However, the extent to which these fingerprints can distinguish between various types of synthetic images and help identify the underlying generative process remains under-explored. In particular, the very definition of a fingerprint remains unclear, to our knowledge. To that end, in this work, we formalize the definition of artifact and fingerprint in generative models, propose an algorithm for computing them in practice, and finally study how different design parameters affect the model fingerprints and their attributability. We find that using our proposed definition can significantly improve the performance on the task of identifying the underlying generative process from samples (model attribution) compared to existing methods. Additionally, we study the structure of the fingerprints and observe that it is very predictive of the effect of different design choices on the generative process.
Formal Definition of Fingerprints Improves Attribution of Generative Models
null
Workshop/ATTRIB
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ZP7hgFNXQy
@inproceedings{ anonymous2023attributing, title={Attributing Learned Concepts in Neural Networks to Training Data}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=ZP7hgFNXQy} }
By now there is substantial evidence that deep learning models learn certain human-interpretable features as part of their internal representations of data. As having the right (or wrong) concepts is critical to trustworthy machine learning systems, it is natural to ask which inputs from the model's original training set were most important for learning a concept at a given layer. To answer this, we combine data attribution methods with methods for probing the concepts learned by a model. Training network and probe ensembles for two concept datasets on a range of network layers, we use the recently developed TRAK method for large-scale data attribution. We find some evidence for *convergence*, where removing the 10,000 top attributing images for a concept and retraining the model does not change the location of the concept in the network nor the probing sparsity of the concept. This suggests that rather than being highly dependent on a few specific examples, the features that inform the development of a concept are spread in a more diffuse manner across its exemplars, implying robustness in concept formation.
Attributing Learned Concepts in Neural Networks to Training Data
null
Workshop/ATTRIB
oral
2310.03149
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=XUIYn3jo5T
@inproceedings{ anonymous2023when, title={When Less is More: Investigating Data Pruning for Pretraining {LLM}s at Scale}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=XUIYn3jo5T} }
Large volumes of text data have contributed significantly to the development of large language models (LLMs) in recent years. To date, efforts to prune these datasets to higher quality subsets have relied on hand-crafted heuristics encoded as rule-based filters. In this work, we explore scalable estimates of data quality that can be used to systematically measure the quality of pretraining data, namely perplexity, the Error L2-Norm, and memorization. These metrics are used to rank and prune pretraining corpora, and we subsequently compare LLMs trained on these pruned datasets. We find that perplexity outperforms other scoring methods and improves over our no-pruning baseline while training on as little as 30\% of the original training dataset. Our work sets a foundation for strategies in automatically curating high quality corpora and suggests that large amounts of pretraining data can be removed while retaining performance.
When Less is More: Investigating Data Pruning for Pretraining LLMs at Scale
null
Workshop/ATTRIB
poster
2309.04564
[ "" ]
https://huggingface.co/papers/2309.04564
5
15
0
6
1
[]
[]
[]
null
https://openreview.net/forum?id=WGjQt3aDn7
@inproceedings{ anonymous2023a, title={A Simple and Efficient Baseline for Data Attribution on Images}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=WGjQt3aDn7} }
Data attribution methods play a crucial role in understanding machine learning models, providing insight into which training data points are most responsible for model outputs during deployment. However, current state-of-the-art approaches require a large ensemble of as many as 300,000 models to accurately attribute model predictions. These approaches therefore come at a high computational cost, are memory intensive, and are hard to scale to large models or datasets. In this work, we focus on a minimalist baseline that relies on the image features from a pretrained self-supervised backbone to retrieve images from the dataset. Our method is model-agnostic and scales easily to large datasets. We show results on CIFAR-10 and ImageNet, achieving strong performance that rivals or outperforms state-of-the-art approaches at a fraction of the compute or memory cost. Contrary to prior work, our results reinforce the intuition that a model's prediction on one image is most impacted by visually similar training samples. Our approach serves as a simple and efficient baseline for data attribution on images.
A Simple and Efficient Baseline for Data Attribution on Images
null
Workshop/ATTRIB
poster
2311.03386
[ "https://github.com/vasusingla/simple-data-attribution" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=LMV6HuZJ2L
@inproceedings{ anonymous2023exploring, title={Exploring Dataset-Scale Indicators of Data Quality}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=LMV6HuZJ2L} }
Modern computer vision foundation models are trained on massive amounts of data, incurring large economic and environmental costs. Recent research has suggested that improving data quality can significantly reduce the need for data quantity. But what constitutes data quality in computer vision? We posit that the quality of a given dataset can be decomposed into distinct sample-level and dataset-level constituents, and that the former have been more extensively studied than the latter. We ablate the effects of two important dataset-level constituents: label set design, and class balance. By monitoring these constituents using key indicators we provide, researchers and practitioners can better anticipate model performance, measured in terms of its accuracy and robustness to distribution shifts.
Exploring Dataset-Scale Indicators of Data Quality
null
Workshop/ATTRIB
poster
2311.04016
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=KLVL0Kj6E8
@inproceedings{ anonymous2023selfselect, title={Self-Select: Optimizing Instruction Selection for Large Language Models}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=KLVL0Kj6E8} }
The same question can often be presented in different ways, depending on the audience and the intent with which it is being posed. To determine whether large language models (LLMs) demonstrate preferences for one phrasing over another regardless of semantic content, we introduce \textit{Self-Select}, a method for selecting a preferred instruction template and generating high-quality synthetic data samples. This algorithm makes use of a \textit{meta-prompt} to decide on an instruction template, given a task and candidate templates, and then generates $n$ new samples using the chosen template. We evaluate \textit{Self-Select} on numerical reasoning and sentiment classification tasks, using a variety of instruction-tuned and base models, providing insights into their abilities and biases. We find that permuting the instruction template ordering in the prompt leads to vastly different choice distributions, suggesting that selections of a specific template can be attributed to inductive biases rather than semantic understanding, even after instruction-tuning.
Self-Select: Optimizing Instruction Selection for Large Language Models
null
Workshop/ATTRIB
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Hi9zCFiiNw
@inproceedings{ anonymous2023speculative, title={Speculative Behavior: An Approach to Large Language Model Evaluation and Optimization}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=Hi9zCFiiNw} }
Trained Large Language Models (LLMs) have gained significant interest due to their ability to interpret natural language instructions and address a wide range of tasks with high proficiency. However, in practice, these models pose multiple challenges. On one hand, it is exceedingly difficult to control and ensure that the model's behavior remains consistent, harmless, and safe. On the other hand, the most advanced models are delivered via APIs as black-box services, making it challenging to guarantee their proper behavior. Addressing these challenges has become an urgent concern, especially in environments where a model's response can impact safety and trustworthiness. Many recent studies focus on the evaluation of models using benchmarks based on community-curated datasets. However, this form of evaluation is prone to data leakage and premature dataset obsolescence. Moreover, it doesn't necessarily align with all the specific goals that may be desired. One alternative for aligning specific objectives with the model behavior is fine-tuning, but this process is time-consuming and might be prohibitively expensive for many organizations. In this study, we propose the idea of measuring the model's behavior towards specific objectives through the concept of Speculative Behavior Equivalence (SBE). We introduce a general, agnostic approach that can be adapted to various models and tailored to the unique metrics of individual cases whilst remaining constrained to specific budgets. Additionally, we formulate the Speculative Behavior-Based Optimization problem (CSBO), which presents an opportunity to leverage AutoML techniques in the field of LLMs for optimizing behavior.
Speculative Behavior: An Approach to Large Language Model Evaluation and Optimization
null
Workshop/ATTRIB
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=HVyDhCc7LW
@inproceedings{ anonymous2023large, title={Large Language Model Attributions: A Unified Perspective}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=HVyDhCc7LW} }
As businesses, products, and services spring up around large language models, the trustworthiness of these models hinges on the verifiability of their outputs. However, methods for explaining language model outputs largely fall across two distinct fields of study which both use the term "attribution" to refer to entirely separate techniques: citation generation and training data attribution. In many modern applications, such as legal document generation and medical question answering, both types of attributions are important. In this work, we argue for and present a unified framework of large language model attributions. We show how existing methods of different types of attribution fall under the unified framework. We also use the framework to discuss real-world use cases where one or both types of attributions are required. We believe that this unified framework will guide the use case driven development of systems that leverage both types of attribution, as well as the standardization of their evaluation.
Unifying Corroborative and Contributive Attributions in Large Language Models
null
Workshop/ATTRIB
oral
2311.12233
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=G8RRE6zDGm
@inproceedings{ anonymous2023algorithm, title={Algorithm Selection with Priority Order for Instances}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=G8RRE6zDGm} }
Reliability in medical image diagnostics is a required trait for any artificial system. Currently, most approaches rely on highly trained and specific models to leverage the feature quality learned from a particular type of medium, such as X-rays, NMR, PET scans and others. While this approach aligns with the standard human expert perspective, it also limits artificial systems to the representations learned from the dataset distribution. To gain a better understanding of how different media affect specific tasks, we explore task-specific feature transfer between domains. In this work, we propose the possibility of merging features from various areas to harness feature transfer in outlier cases. For this purpose, we develop an Algorithm Selection (AS) method that chooses algorithms trained on different sets of medical images and for different classification tasks. The AS system is then applied to a different classification task. The AS represents a set of methods that, given a problem and a range of existing algorithms, selects the best algorithm on a case-by-case basis. The results demonstrate the advantages of incorporating algorithms from different tasks and datasets in a supervised manner. By considering algorithms trained on diverse datasets, we can effectively capture outliers that might otherwise be neglected by more specific algorithms.
Algorithm Selection with Priority Order for Instances
null
Workshop/ATTRIB
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Ei4k9xurmC
@inproceedings{ anonymous2023prototype, title={Prototype Generation: Robust Feature Visualisation for Data Independent Interpretability}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=Ei4k9xurmC} }
We introduce Prototype Generation, a stricter and more robust form of feature visualisation for model-agnostic, data-independent interpretability of image classification models. We demonstrate its ability to generate inputs that result in natural activation paths, countering previous claims that feature visualisation algorithms are untrustworthy due to the unnatural internal activations. We substantiate these claims by quantitatively measuring similarity between the internal activations of our generated prototypes and natural images. We also demonstrate how the interpretation of generated prototypes yields important insights, highlighting spurious correlations and biases learned by models which quantitative methods over test-sets cannot identify.
Prototype Generation: Robust Feature Visualisation for Data Independent Interpretability
null
Workshop/ATTRIB
poster
2309.17144
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=EKvqw9k3lC
@inproceedings{ anonymous2023backtracking, title={Backtracking Mathematical Reasoning of Language Models to the Pretraining Data}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=EKvqw9k3lC} }
In-context learning and chain-of-thought prompting have demonstrated surprising performance improvements on mathematical reasoning benchmarks. Therefore, understanding the underlying factors enabling these capabilities is crucial. However, the specific aspects of pretraining data that equip models with mathematical reasoning capabilities remain largely unexplored and are less studied systematically. In this study, we identify subsets of model pretraining data that contribute to the model's math reasoning ability, and evaluate this ability on several mathematical operations (e.g. addition, multiplication) and tasks (e.g. the ASDiv dataset). We measure the importance of such subsets by continually training the model on each pretraining data subset and then quantifying the change in performance on the mathematical benchmark. If a subset results in improved performance, we conjecture that it contributes to the model's overall mathematical ability. Our results unveil that while training on math-only data contributes to simple arithmetic abilities, it does not solely explain performance on more complex reasoning abilities like chain-of-thought reasoning. We also find that code data contributes to chain-of-thought reasoning while reducing the arithmetic performance.
Backtracking Mathematical Reasoning of Language Models to the Pretraining Data
null
Workshop/ATTRIB
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=CgHj6c4t5g
@inproceedings{ anonymous2023forbidden, title={Forbidden Facts: An Investigation of Competing Objectives in Llama 2}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=CgHj6c4t5g} }
LLMs often face competing pressures (for example helpfulness vs. harmlessness). To understand how models resolve such conflicts, we study Llama-2-7b-chat on the \textit{forbidden fact} task. Specifically, we instruct Llama 2 to truthfully complete a factual recall statement while forbidding it from saying the correct answer. This often makes the model give incorrect answers. We decompose Llama 2 into 1057 different components, and rank each one with respect to how useful it is for forbidding the correct answer. We find that in aggregate, 41 components are enough to reliably implement the full suppression behavior. However, we find that these components are fairly heterogeneous and that many operate using faulty heuristics. We find that one of these heuristics can be exploited via manually designed adversarial attacks, which we call California Attacks. Our results highlight some roadblocks standing in the way of being able to successfully interpret advanced ML systems.
Forbidden Facts: An Investigation of Competing Objectives in Llama 2
null
Workshop/ATTRIB
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=9eJv5PS27Q
@inproceedings{ anonymous2023towards, title={Towards Best Practices of Activation Patching in Language Models: Metrics and Methods}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=9eJv5PS27Q} }
Mechanistic interpretability seeks to understand the internal mechanisms of machine learning models, where localization—identifying the important model components—is a key step. Activation patching, also known as causal tracing or interchange intervention, is a standard technique for this task (Vig et al., 2020), but the literature contains many variants with little consensus on the choice of hyperparameters or methodology. In this work, we systematically examine the impact of methodological details in activation patching, including evaluation metrics and corruption methods. In several settings of localization and circuit discovery in language models, we find that varying these hyperparameters could lead to disparate interpretability results. Backed by empirical observations, we give conceptual arguments for why certain metrics or methods may be preferred. Finally, we provide recommendations for the best practices of activation patching going forwards.
Towards Best Practices of Activation Patching in Language Models: Metrics and Methods
null
Workshop/ATTRIB
poster
2309.16042
[ "" ]
https://huggingface.co/papers/2309.16042
2
3
0
2
1
[]
[]
[]
null
https://openreview.net/forum?id=6KMr75JLvC
@inproceedings{ anonymous2023outliers, title={Outliers with Opposing Signals Have an Outsized Effect on Neural Network Optimization}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=6KMr75JLvC} }
We identify a new phenomenon in neural network optimization which arises from the interaction of depth and a particular heavy-tailed structure in natural data. Our result offers intuitive explanations for several previously reported observations about network training dynamics and demonstrates how a small number of training points can have an unusually large effect on a network's optimization trajectory and predictions. Experimentally, we demonstrate the significant influence of paired groups of outliers in the training data with strong \emph{opposing signals}: consistent, large magnitude features which dominate the network output and occur in both groups with similar frequency. Due to these outliers, early optimization enters a narrow valley which carefully balances the opposing groups; subsequent sharpening causes their loss to rise rapidly, oscillating between high on one group and then the other, until the overall loss spikes. We complement these experiments with a theoretical analysis of a two-layer linear network on a simple model of opposing signals. Our finding enables new qualitative predictions of behavior during and after training which we confirm experimentally. It also provides a new lens through which to study how specific data influence the learned parameters.
Outliers with Opposing Signals Have an Outsized Effect on Neural Network Optimization
null
Workshop/ATTRIB
oral
2311.04163
[ "" ]
https://huggingface.co/papers/2311.04163
1
1
0
2
1
[]
[]
[]
null
https://openreview.net/forum?id=5CDRc8VMhS
@inproceedings{ anonymous2023attention, title={Attention Lens: A Tool for Mechanistically Interpreting the Attention Head Information Retrieval Mechanism}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=5CDRc8VMhS} }
Transformer-based Large Language Models (LLMs) are the state-of-the-art for natural language tasks. Recent work has attempted to decode, by reverse engineering the role of linear layers, the internal mechanisms by which LLMs arrive at their final predictions for text completion tasks. Yet little is known about the specific role of attention heads in producing the final token prediction. We propose Attention Lens, a tool that enables researchers to translate the outputs of attention heads into vocabulary tokens via learned attention-head-specific transformations called lenses. Preliminary findings from our trained lenses indicate that attention heads play highly specialized roles in language models. The code for Attention Lens is available at github.com/anonymized-for-review.
Attention Lens: A Tool for Mechanistically Interpreting the Attention Head Information Retrieval Mechanism
null
Workshop/ATTRIB
poster
2310.16270
[ "https://github.com/msakarvadia/attentionlens" ]
https://huggingface.co/papers/2310.16270
0
1
0
8
1
[]
[]
[]
null
https://openreview.net/forum?id=4yJx3HgedY
@inproceedings{ anonymous2023estimating, title={Estimating the Generalization in Deep Neural Networks via Sparsity}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=4yJx3HgedY} }
Generalization is the key capability of deep neural networks (DNNs). However, it is challenging to give a reliable measure of the generalization ability of a DNN using only properties of the trained network itself. In this paper, we propose a novel method for estimating the generalization gap based on network sparsity. Two key sparsity quantities are extracted from the training results alone, and both exhibit a close relationship with model generalization. A simple linear model involving the two key quantities is then constructed to give an accurate estimate of the generalization gap. By training DNNs with a wide range of generalization gaps on popular datasets, we show that our key quantities and linear model can be efficient tools for estimating the generalization gap of DNNs.
Estimating the Generalization in Deep Neural Networks via Sparsity
null
Workshop/ATTRIB
poster
2104.00851
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=13VkFDTKHH
@inproceedings{ anonymous2023data, title={Data Attribution for Segmentation Models}, author={Anonymous}, booktitle={NeurIPS Workshop on Attributing Model Behavior at Scale}, year={2023}, url={https://openreview.net/forum?id=13VkFDTKHH} }
The quality of segmentation models is driven by their training datasets labeled with detailed segmentation masks. How does the composition of such a training dataset contribute to the performance of the resulting segmentation model? In this work, we take a step towards attaining such an understanding by applying the lens of data attribution to it. To this end, we first identify specific behaviors of these models to attribute, and then provide a method for computing such attributions efficiently. We validate the resulting attributions, and leverage them to both identify harmful labeling errors and curate a $50$\% subset of the MS COCO training dataset that leads to a $2.79$\% $\pm$ $0.49$\% increase in mIOU over the full dataset.
Data Attribution for Segmentation Models
null
Workshop/ATTRIB
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zV0gv4a4Yj
@inproceedings{ cintas2023characterizing, title={Characterizing pre-trained and task-adapted molecular representations}, author={Celia Cintas and Payel Das and Jarret Ross and Brian Belgodere and Girmaw Abebe Tadesse and Vijil Chenthamarakshan and Jannis Born and Skyler Speakman}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=zV0gv4a4Yj} }
Pre-trained deep learning models are emerging fast as a tool for enhancing scientific workflow and accelerating scientific discovery. Representation learning is a fundamental task to study the molecular structure–property relationship, which is then leveraged for predicting the molecular properties or designing new molecules with desired attributes. However, evaluating the emerging "zoo" of pre-trained models for various downstream tasks remains challenging. We propose an unsupervised method to characterize embeddings of pre-trained models through the lens of non-parametric group property-driven subset scanning (SS). We assess its detection capabilities with extensive experiments on diverse molecular benchmarks (ZINC-250K, MOSES, MoleculeNet) across predictive chemical language models (MoLFormer, ChemBERTa) and molecular graph generative models (GraphAF, GCPN). We further evaluate how representations evolve as a result of domain adaptation by finetuning or low-dimensional projection. Experiments reveal notable information condensation in the pre-trained embeddings upon task-specific fine-tuning as well as projection techniques. For example, among the top-$120$ most-common elements in the embedding (out of $\approx 700$), only $11$ property-driven elements are shared between the three tasks (BACE, BBBP, and HIV), while $\approx 70$-$80$ of those are unique to each task. This work provides a post-hoc quality evaluation method for representation learning models and domain adaptation methods that is task- and modality-agnostic.
Characterizing pre-trained and task-adapted molecular representations
[ "Celia Cintas", "Payel Das", "Jarret Ross", "Brian Belgodere", "Girmaw Abebe Tadesse", "Vijil Chenthamarakshan", "Jannis Born", "Skyler Speakman" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ydiDTfdbzP
@inproceedings{ harvey2023duality, title={Duality of Bures and Shape Distances with Implications for Comparing Neural Representations}, author={Sarah E Harvey and Brett W. Larsen and Alex H Williams}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=ydiDTfdbzP} }
A multitude of (dis)similarity measures between neural network representations have been proposed, resulting in a fragmented research landscape. Most (dis)similarity measures fall into one of two categories. First, measures such as linear regression, canonical correlation analysis (CCA), and shape distances all learn explicit mappings between neural units to quantify similarity while accounting for expected invariances. Second, measures such as representational similarity analysis (RSA), centered kernel alignment (CKA), and normalized Bures similarity (NBS) all quantify similarity in summary statistics that are already invariant to such symmetries (e.g. by comparing stimulus-by-stimulus kernel matrices). Here, we take steps towards unifying these two broad categories of methods by observing that the cosine of the Riemannian shape distance (from category 1) is equal to NBS (from category 2). We explore how this connection leads to new interpretations of shape distances and NBS, and draw contrasts of these measures with CKA, a popular similarity measure in the deep learning literature.
Duality of Bures and Shape Distances with Implications for Comparing Neural Representations
[ "Sarah E Harvey", "Brett W. Larsen", "Alex H Williams" ]
Workshop/UniReps
oral
2311.11436
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=yC6b3hqyf8
@inproceedings{ zhuang2023wavspa, title={WavSpA: Wavelet Space Attention for Boosting Transformers' Long Sequence Learning Ability}, author={Yufan Zhuang and Zihan Wang and Fangbo Tao and Jingbo Shang}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=yC6b3hqyf8} }
Transformer and its variants are fundamental neural architectures in deep learning. Recent works show that learning attention in the Fourier space can improve the long sequence learning capability of Transformers. We argue that wavelet transform shall be a better choice because it captures both position and frequency information with linear time complexity. Therefore, in this paper, we systematically study the synergy between wavelet transform and Transformers. We propose Wavelet Space Attention (WavSpA) that facilitates attention learning in a learnable wavelet coefficient space which replaces the attention in Transformers by (1) applying forward wavelet transform to project the input sequences to multi-resolution bases, (2) conducting attention learning in the wavelet coefficient space, and (3) reconstructing the representation in input space via backward wavelet transform. Extensive experiments on the Long Range Arena demonstrate that learning attention in the wavelet space using either fixed or adaptive wavelets can consistently improve Transformer’s performance and also significantly outperform learning in Fourier space. We further show our method can enhance Transformer’s reasoning extrapolation capability over distance on the LEGO chain-of-reasoning task.
WavSpA: Wavelet Space Attention for Boosting Transformers' Long Sequence Learning Ability
[ "Yufan Zhuang", "Zihan Wang", "Fangbo Tao", "Jingbo Shang" ]
Workshop/UniReps
poster
2210.01989
[ "" ]
https://huggingface.co/papers/2210.01989
1
0
0
4
1
[]
[]
[]
null
https://openreview.net/forum?id=v1qKZooY4z
@inproceedings{ zhao2023neucore, title={{NEUCORE}: Neural Concept Reasoning for Composed Image Retrieval}, author={Shu Zhao and Huijuan Xu}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=v1qKZooY4z} }
Composed image retrieval, which combines a reference image and a text modifier to identify the desired target image, is a challenging task that requires the model to comprehend both vision and language modalities and their interactions. Existing approaches focus on holistic multi-modal interaction modeling and ignore the composed and complementary property between the reference image and text modifier. In order to better utilize the complementarity of multi-modal inputs for effective information fusion and retrieval, we move multi-modal understanding to a fine granularity at the concept level, and learn multi-modal concept alignment to identify the visual locations in reference or target images corresponding to the text modifier. Toward this end, we propose a NEUral COncept REasoning (NEUCORE) model which incorporates multi-modal concept alignment and progressive multi-modal fusion over aligned concepts. Specifically, considering that the text modifier may refer to semantic concepts that do not exist in the reference image and need to be added to the target image, we learn the multi-modal concept alignment between the text modifier and the concatenation of reference and target images, under a multiple-instance learning framework with image- and sentence-level weak supervision. Furthermore, based on the aligned concepts, to form discriminative fusion features of the input modalities for accurate target image retrieval, we propose a progressive fusion strategy with a unified execution architecture instantiated by the attended language semantic concepts. Our proposed approach is evaluated on three datasets and achieves state-of-the-art results.
NEUCORE: Neural Concept Reasoning for Composed Image Retrieval
[ "Shu Zhao", "Huijuan Xu" ]
Workshop/UniReps
poster
2310.01358
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uviLSCIsvt
@inproceedings{ liu2023grokking, title={Grokking as Simplification: A Nonlinear Complexity Perspective}, author={Ziming Liu and Ziqian Zhong and Max Tegmark}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=uviLSCIsvt} }
We attribute grokking, the phenomenon where generalization is much delayed after memorization, to compression. We define the linear mapping number (LMN) to measure network complexity, which is a generalized version of the linear region number for ReLU networks. LMN can nicely characterize neural network compression before generalization. Although the $L_2$ norm has been popular for characterizing model complexity, we argue in favor of LMN for a number of reasons: (1) LMN can be naturally interpreted as information/computation, while $L_2$ cannot. (2) In the compression phase, LMN has nice linear relations with test losses, while $L_2$ is correlated with test losses in a complicated nonlinear way. (3) LMN also reveals an intriguing phenomenon of the XOR network switching between two generalization solutions, while $L_2$ does not. Besides explaining grokking, we argue that LMN is a promising candidate for the neural network version of the Kolmogorov complexity, since it explicitly considers local or conditioned linear computations aligned with the nature of modern artificial neural networks.
Grokking as Simplification: A Nonlinear Complexity Perspective
[ "Ziming Liu", "Ziqian Zhong", "Max Tegmark" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uoUOz427RD
@inproceedings{ schmitt2023leveraging, title={Leveraging Self-Consistency for Data-Efficient Amortized Bayesian Inference}, author={Marvin Schmitt and Daniel Habermann and Paul-Christian B{\"u}rkner and Ullrich Koethe and Stefan T. Radev}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=uoUOz427RD} }
We propose a method to improve the efficiency and accuracy of amortized Bayesian inference (ABI) by leveraging universal symmetries in the probabilistic joint model $p(\theta, y)$ of parameters $\theta$ and data $y$. In a nutshell, we invert Bayes' theorem and estimate the marginal likelihood based on approximate representations of the joint model. Upon perfect approximation, the marginal likelihood is constant across all parameter values by definition. However, approximation error leads to undesirable variance in the marginal likelihood estimates across different parameter values. We formulate violations of this symmetry as a loss function to accelerate the learning dynamics of conditional neural density estimators. We apply our method to a bimodal toy problem with an explicit likelihood (likelihood-based) and a realistic model with an implicit likelihood (simulation-based).
Leveraging Self-Consistency for Data-Efficient Amortized Bayesian Inference
[ "Marvin Schmitt", "Daniel Habermann", "Paul-Christian Bürkner", "Ullrich Koethe", "Stefan T. Radev" ]
Workshop/UniReps
poster
2310.04395
[ "https://github.com/marvinschmitt/self-consistency-abi" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ul5OdS6pfU
@inproceedings{ aczel2023efficient, title={Efficient Multimodal Alignment: To Freeze or Not to Freeze?}, author={Till Aczel and Roger Wattenhofer}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=ul5OdS6pfU} }
Language-image pretraining creates a joint representation space between the two modalities where images and texts with similar semantic information lie close to each other. Language-image models are often trained from scratch without taking advantage of unimodal pretrained models. By aligning the representation spaces of two modality-specific encoders, our model achieves 74.7% accuracy on the ImageNet1K validation set, at two orders of magnitude lower training cost. In this work, we highlight the importance of unfreezing the CLS tokens of uni-modal transformer encoders to create a joint embedding space. Freezing the image and text CLS tokens reduces the mean accuracy from 37.5% to 19.4% on the 38 evaluation benchmarks.
Efficient Multimodal Alignment: To Freeze or Not to Freeze?
[ "Till Aczel", "Roger Wattenhofer" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uVZB637Dgx
@inproceedings{ wu2023what, title={What Mechanisms Does Knowledge Distillation Distill?}, author={Cindy Wu and Ekdeep Singh Lubana and Bruno Kacper Mlodozeniec and Robert Kirk and David Krueger}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=uVZB637Dgx} }
Knowledge distillation is a commonly-used compression method in ML due to the popularity of increasingly large-scale models, but it is unclear if all the information a teacher model contains is distilled into the smaller student model. We aim to formalize the concept of `knowledge' to investigate how knowledge is transferred during distillation, focusing on shared invariant outputs to counterfactual changes of dataset latent variables (we call these latents mechanisms). We define a student model to be a good stand-in model for a teacher if it shares the teacher's learned mechanisms, and find that Jacobian matching and contrastive representation learning are viable methods by which to train such models. While these methods do not result in perfect transfer of mechanisms, we show they often improve student fidelity or mitigate simplicity bias (as measured by the teacher-to-student KL divergence and accuracy on various out-of-distribution test datasets), especially on datasets with spurious statistical correlations.
What Mechanisms Does Knowledge Distillation Distill?
[ "Cindy Wu", "Ekdeep Singh Lubana", "Bruno Kacper Mlodozeniec", "Robert Kirk", "David Krueger" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=uKWqDnLI3o
@inproceedings{ friedman2023comparing, title={Comparing Representational and Functional Similarity in Small Transformer Language Models}, author={Dan Friedman and Andrew Kyle Lampinen and Lucas Dixon and Danqi Chen and Asma Ghandeharioun}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=uKWqDnLI3o} }
In many situations, it would be helpful to be able to characterize the solution learned by a neural network, including for answering scientific questions (e.g. how do architecture changes affect generalization) and addressing practical concerns (e.g. auditing for potentially unsafe behavior). One approach is to try to understand these models by studying the representations that they learn---for example, comparing whether two networks learn similar representations. However, it is not always clear how much representation-level analyses can tell us about how a model makes predictions. In this work, we explore this question in the context of small Transformer language models, which we train on a synthetic, hierarchical language task. We train models with different sizes and random initializations, evaluating performance over the course of training and on a variety of systematic generalization splits. We find that existing methods for measuring representation similarity are not always correlated with behavioral metrics---i.e. models with similar representations do not always make similar predictions---and the results vary depending on the choice of representation. Our results highlight the importance of understanding representations in terms of the role they play in the neural algorithm.
Comparing Representational and Functional Similarity in Small Transformer Language Models
[ "Dan Friedman", "Andrew Kyle Lampinen", "Lucas Dixon", "Danqi Chen", "Asma Ghandeharioun" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=tOW3IWHw8G
@inproceedings{ toosi2023representational, title={Representational constraints underlying similarity between task-optimized neural systems}, author={Tahereh Toosi}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=tOW3IWHw8G} }
In this study, we investigate the similarity of representations between biological and artificial visual systems that are optimized for object recognition. We propose that this similarity could be a result of constraints on the representations of task-optimized systems, which necessitate the development of an abstraction from the input stimuli. To measure this, we constructed a two-dimensional coordinate system in which we measured the distance of each neural representation from the pixel space and the class space. Our results show that proximity in this space predicts the similarity of neural representations between different visual systems. We observe that the trajectories of representations in any given task-optimized visual neural network start close to the pixel space and gradually move towards more abstract representations such as categories. This suggests that the similarity between different task-optimized systems is due to constraints on representational trajectories, as revealed by the abstraction space. We present abstraction space as a simple yet effective analysis tool for drawing inferences about the representations of neural networks and for uncovering the constraints that lead to similar representations in different visual systems.
Representational constraints underlying similarity between task-optimized neural systems
[ "Tahereh Toosi" ]
Workshop/UniReps
poster
2312.08545
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=s9mbWqr3VI
@inproceedings{ xiao2023a, title={A Compact Representation for Bayesian Neural Networks By Removing Permutation Symmetry}, author={Tim Z. Xiao and Weiyang Liu and Robert Bamler}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=s9mbWqr3VI} }
Bayesian neural networks (BNNs) are a principled approach to modeling predictive uncertainties in deep learning, which are important in safety-critical applications. Since exact Bayesian inference over the weights in a BNN is intractable, various approximate inference methods exist, among which sampling methods such as Hamiltonian Monte Carlo (HMC) are often considered the gold standard. While HMC provides high-quality samples, it lacks interpretable summary statistics because its sample mean and variance are meaningless in neural networks due to permutation symmetry. In this paper, we first show that the role of permutations can be meaningfully quantified by a number-of-transpositions metric. We then show that the recently proposed rebasin method allows us to summarize HMC samples into a compact representation that provides a meaningful explicit uncertainty estimate for each weight in a neural network, thus unifying sampling methods with variational inference. We show that this compact representation allows us to compare trained BNNs directly in weight space across sampling methods and variational inference, and to efficiently prune neural networks trained without explicit Bayesian frameworks by exploiting uncertainty estimates from HMC.
A Compact Representation for Bayesian Neural Networks By Removing Permutation Symmetry
[ "Tim Z. Xiao", "Weiyang Liu", "Robert Bamler" ]
Workshop/UniReps
poster
2401.00611
[ "https://github.com/timxzz/abi_with_rebasin" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rNiKAZifvI
@inproceedings{ cappell2023reward, title={ReWa{RD}: Retinal Waves for Pre-Training Artificial Neural Networks Mimicking Real Prenatal Development}, author={Benjamin Cappell and Andreas Stoll and Chukwudi Williams Umah and Bernhard Egger}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=rNiKAZifvI} }
Computational models trained on large amounts of natural images are the state of the art for studying human vision -- usually adult vision. Computational models of infant vision and its further development are gaining more and more attention in the community. In this work we aim at the very beginning of our visual experience -- pre- and post-natal retinal waves, which have been suggested to act as a pre-training mechanism for the human visual system at a very early stage of development. We see this approach as an instance of biologically plausible, data-driven inductive bias through pre-training. We built a computational model that mimics this developmental mechanism by pre-training different artificial convolutional neural networks with simulated retinal wave images. The resulting features of this biologically plausible pre-training closely match the V1 features of the human visual system. We show that the performance gain from pre-training with retinal waves is similar to that of a state-of-the-art pre-training pipeline. Our framework contains the retinal wave generator as well as a training strategy, which can be a first step in a curriculum-learning-based training diet for various models of development. We release code, data, and trained networks to build the basis for future work on visual development based on a curriculum learning approach that includes prenatal development, supporting studies of innate vs. learned properties of the human visual system. An additional benefit of our pre-trained networks for neuroscience or computer vision applications is the absence of biases inherited from datasets like ImageNet.
ReWaRD: Retinal Waves for Pre-Training Artificial Neural Networks Mimicking Real Prenatal Development
[ "Benjamin Cappell", "Andreas Stoll", "Chukwudi Williams Umah", "Bernhard Egger" ]
Workshop/UniReps
poster
2311.17232
[ "https://github.com/bennyca/reward" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rGCabZfV3d
@inproceedings{ ferrante2023multimodal, title={Multimodal decoding of human brain activity into images and text}, author={Matteo Ferrante and Tommaso Boccato and Furkan Ozcelik and Rufin VanRullen and Nicola Toschi}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=rGCabZfV3d} }
Every day, the human brain processes an immense volume of visual information, relying on intricate neural mechanisms to perceive and interpret these stimuli. Recent breakthroughs in functional magnetic resonance imaging (fMRI) have enabled scientists to extract visual information from human brain activity patterns. In this study, we present an innovative method for decoding brain activity into meaningful images and captions, with a specific focus on brain captioning due to its enhanced flexibility compared to brain decoding into images. Our approach takes advantage of cutting-edge image captioning models and incorporates a unique image reconstruction pipeline that utilizes latent diffusion models and depth estimation. We utilized the Natural Scenes Dataset, a comprehensive fMRI dataset from eight subjects who viewed images from the COCO dataset. We employed the Generative Image-to-text Transformer (GIT) as our backbone for captioning and propose a new image reconstruction pipeline based on latent diffusion models. The method involves training regularized linear regression models between brain activity and extracted features. Additionally, we incorporated depth maps from the ControlNet model to further guide the reconstruction process. We propose a multimodal approach that leverages similarities between neural and deep learning representations; by learning an alignment between these spaces, we produce textual descriptions and image reconstructions from brain activity. We evaluate our methods using quantitative metrics for both generated captions and images. Our brain captioning approach outperforms existing methods, while our image reconstruction pipeline generates plausible images with improved spatial relationships. In conclusion, we demonstrate significant progress in brain decoding, showcasing the enormous potential of integrating vision and language to better understand human cognition. Our approach provides a flexible platform for future research, with potential applications based on a combination of high-level semantic information from text and low-level image shape information from depth maps and initial-guess images.
Multimodal decoding of human brain activity into images and text
[ "Matteo Ferrante", "Tommaso Boccato", "Furkan Ozcelik", "Rufin VanRullen", "Nicola Toschi" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]