bibtex_url (null) | proceedings (stringlengths 42..42) | bibtext (stringlengths 197..792) | abstract (stringlengths 303..3.45k) | title (stringlengths 10..159) | authors (sequencelengths 1..28 ⌀) | id (stringclasses 44 values) | type (stringclasses 16 values) | arxiv_id (stringlengths 0..10) | GitHub (sequencelengths 1..1) | paper_page (stringclasses 444 values) | n_linked_authors (int64 -1..9) | upvotes (int64 -1..42) | num_comments (int64 -1..13) | n_authors (int64 -1..92) | paper_page_exists_pre_conf (int64 0..1) | Models (sequencelengths 0..100) | Datasets (sequencelengths 0..11) | Spaces (sequencelengths 0..100)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=TOLUNEz5kI | @inproceedings{
briola2023homological,
title={Homological Convolutional Neural Networks},
author={Antonio Briola and Yuanrong Wang and Silvia Bartolucci and Tomaso Aste},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=TOLUNEz5kI}
} | Deep learning methods have demonstrated outstanding performance on classification and regression tasks on homogeneous data types (e.g., image, audio, and text data). However, tabular data still pose a challenge, with classic machine learning approaches often being computationally cheaper than, and just as effective as, increasingly complex deep learning architectures. The challenge arises from the fact that, in tabular data, the correlation among features is weaker than the one arising from spatial or semantic relationships in images or natural language, and the dependency structures need to be modeled without any prior information. In this work, we propose a novel deep learning architecture that exploits the structural organization of the data through topologically constrained network representations to gain relational information from sparse tabular inputs. The resulting model leverages the power of convolution and is centered on a limited number of concepts from network topology to guarantee: (i) a data-centric and deterministic building pipeline; (ii) a high level of interpretability over the inference process; and (iii) adequate room for scalability. We test our model on $18$ benchmark datasets against $5$ classic machine learning and $3$ deep learning models, demonstrating that our approach reaches state-of-the-art performance on these challenging datasets. The code to reproduce all our experiments is provided at https://github.com/FinancialComputingUCL/HomologicalCNN. | Homological Convolutional Neural Networks | [
"Antonio Briola",
"Yuanrong Wang",
"Silvia Bartolucci",
"Tomaso Aste"
] | Workshop/NeurReps | poster | 2308.13816 | [
"https://github.com/financialcomputingucl/homologicalcnn"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
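The Homological CNN entry above builds relational structure over tabular features from topologically constrained graphs on feature correlations. A minimal sketch of the simplest such construction, a minimum spanning tree on correlation dissimilarities, follows; the released pipeline uses richer filtered graphs (e.g. the TMFG), so this is an illustrative simplification, not the authors' exact method.

```python
# Sketch: sparse relational structure over tabular features via a minimum
# spanning tree on correlation dissimilarities. Assumes a feature matrix X
# of shape (n_samples, n_features).
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def feature_mst(X: np.ndarray) -> np.ndarray:
    corr = np.corrcoef(X, rowvar=False)                     # feature correlations
    dissim = np.sqrt(np.clip(2.0 * (1.0 - corr), 0, None))  # corr=1 -> 0, corr=-1 -> 2
    mst = minimum_spanning_tree(np.triu(dissim, k=1))       # n_features - 1 edges
    dense = mst.toarray()
    return ((dense + dense.T) > 0).astype(float)            # symmetric adjacency mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 12))
    print(feature_mst(X).sum() / 2)                         # 11.0 edges for 12 features
```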
null | https://openreview.net/forum?id=TF2RcrcTP2 | @inproceedings{
shewmake2023visual,
title={Visual Scene Representation with Hierarchical Equivariant Sparse Coding},
author={Christian A Shewmake and Domas Buracas and Hansen Lillemark and Jinho Shin and Erik J Bekkers and Nina Miolane and Bruno Olshausen},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=TF2RcrcTP2}
} | We propose a hierarchical neural network architecture for unsupervised learning of equivariant part-whole decompositions of visual scenes. In contrast to the global equivariance of group-equivariant networks, the proposed architecture exhibits equivariance to part-whole transformations throughout the hierarchy, which we term hierarchical equivariance. The model achieves such internal representations via hierarchical Bayesian inference, which gives rise to rich bottom-up, top-down, and lateral information flows, hypothesized to underlie the mechanisms of perceptual inference in visual cortex. We demonstrate these useful properties of the model on a simple dataset of scenes with multiple objects under independent rotations and translations. | Visual Scene Representation with Hierarchical Equivariant Sparse Coding | [
"Christian A Shewmake",
"Domas Buracas",
"Hansen Lillemark",
"Jinho Shin",
"Erik J Bekkers",
"Nina Miolane",
"Bruno Olshausen"
] | Workshop/NeurReps | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=QcxL26Y23o | @inproceedings{
cannistraci2023from,
title={From Bricks to Bridges: Product of Invariances to Enhance Latent Space Communication},
author={Irene Cannistraci and Luca Moschella and Marco Fumero and Valentino Maiorca and Emanuele Rodol{\`a}},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=QcxL26Y23o}
} | It has been observed that representations learned by distinct neural networks conceal structural similarities when the models are trained under similar inductive biases. From a geometric perspective, identifying the classes of transformations and the related invariances that connect these representations is fundamental to unlocking applications, such as merging, stitching, and reusing different neural modules. However, estimating task-specific transformations a priori can be challenging and expensive due to several factors (e.g., weights initialization, training hyperparameters, or data modality). To this end, we introduce a versatile method to directly incorporate a set of invariances into the representations, constructing a product space of invariant components on top of the latent representations without requiring prior knowledge about the optimal invariance to infuse. We validate our solution on classification and reconstruction tasks, observing consistent latent similarity and downstream performance improvements in a zero-shot stitching setting. The experimental analysis comprises three modalities (vision, text, and graphs), twelve pretrained foundational models, eight benchmarks, and several architectures trained from scratch. | From Bricks to Bridges: Product of Invariances to Enhance Latent Space Communication | [
"Irene Cannistraci",
"Luca Moschella",
"Marco Fumero",
"Valentino Maiorca",
"Emanuele Rodolà"
] | Workshop/NeurReps | oral | 2310.01211 | [
""
] | https://huggingface.co/papers/2310.01211 | 0 | 0 | 0 | 5 | 1 | [] | [] | [] |
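The entry above constructs a product space of invariant components on top of latent representations. A minimal sketch of that idea, with two illustrative components (cosine similarities and scale-normalized distances to a set of anchor latents); the paper's actual component set and training setup differ:

```python
# Sketch: a product of invariant components on top of a latent space. Each
# component re-expresses a latent relative to anchor latents under a different
# similarity, so each is invariant to a different transformation class.
import torch
import torch.nn.functional as F

def product_of_invariances(z: torch.Tensor, anchors: torch.Tensor) -> torch.Tensor:
    cos = F.normalize(z, dim=-1) @ F.normalize(anchors, dim=-1).T  # angle-based
    dist = torch.cdist(z, anchors)
    dist = dist / (dist.norm(dim=-1, keepdim=True) + 1e-8)         # scale-free
    return torch.cat([cos, dist], dim=-1)                          # product space

if __name__ == "__main__":
    torch.manual_seed(0)
    z, anchors = torch.randn(8, 16), torch.randn(10, 16)
    Q, _ = torch.linalg.qr(torch.randn(16, 16))                    # random isometry
    r1 = product_of_invariances(z, anchors)
    r2 = product_of_invariances(3.0 * z @ Q, 3.0 * anchors @ Q)    # rotate + rescale
    print(torch.allclose(r1, r2, atol=1e-5))                       # True
```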
null | https://openreview.net/forum?id=NWyf3wb330 | @inproceedings{
han2023symmetrybased,
title={Symmetry-based Learning of Radiance Fields for Rigid Objects},
author={Zhiwei Han and Stefan Matthes and Hao Shen and Yuanting Liu},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=NWyf3wb330}
} | In this work, we present SymObjectRF, a symmetry-based method that learns object-centric representations for rigid objects from one dynamic scene without hand-crafted annotations. SymObjectRF learns the appearance and surface geometry of all dynamic objects in their canonical poses and represents each individual object within its canonical pose using a canonical object field (COF). SymObjectRF imposes group equivariance on the rendering pipeline by transforming 3D point samples from world coordinates to object canonical poses. Subsequently, a permutation-invariant compositional renderer combines the color and density values queried from the learned COFs and reconstructs the input scene via volume rendering. SymObjectRF is then optimized by minimizing a scene reconstruction loss. We show the feasibility of SymObjectRF in learning object-centric representations both theoretically and empirically. | Symmetry-based Learning of Radiance Fields for Rigid Objects | [
"Zhiwei Han",
"Stefan Matthes",
"Hao Shen",
"Yuanting Liu"
] | Workshop/NeurReps | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=NJS6568y79 | @inproceedings{
ballester2023decorrelating,
title={Decorrelating neurons using persistence},
author={Rub{\'e}n Ballester and Carles Casacuberta and Sergio Escalera},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=NJS6568y79}
} | We propose a novel way to regularise deep learning models by reducing high correlations between neurons. For this, we present two regularisation terms computed from the weights of a minimum spanning tree of the clique whose vertices are the neurons of a given network (or a sample of those), where weights on edges are correlation dissimilarities. We explore their efficacy by performing a set of proof-of-concept experiments, for which our new regularisation terms outperform some popular ones. We demonstrate that, in these experiments, naive minimisation of all correlations between neurons obtains lower accuracies than our regularisation terms. This suggests that redundancies play a significant role in artificial neural networks, as evidenced by some studies in neuroscience for real networks. We include a proof of differentiability of our regularisers, thus developing the first effective topological persistence-based regularisation terms that consider the whole set of neurons and that can be applied to a feedforward architecture in any deep learning task such as classification, data generation, or regression. | Decorrelating neurons using persistence | [
"Rubén Ballester",
"Carles Casacuberta",
"Sergio Escalera"
] | Workshop/NeurReps | poster | 2308.04870 | [
"https://github.com/rballeba/decorrelatingneuronsusingpersistence"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
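A minimal sketch of the kind of persistence-based penalty described in the entry above: the 0-dimensional persistence of a correlation-dissimilarity filtration over neurons is carried by the edges of a minimum spanning tree, so encouraging large MST edge weights pushes the selected neuron pairs toward decorrelation. This is a simplified reading of the paper's two regularisers, not their exact definition.

```python
# Sketch: MST-based decorrelation penalty on one layer's activations.
# The MST topology is chosen on detached values; gradients flow only
# through the dissimilarities of the selected edges.
import numpy as np
import torch
from scipy.sparse.csgraph import minimum_spanning_tree

def decorrelation_penalty(acts: torch.Tensor) -> torch.Tensor:
    z = (acts - acts.mean(0)) / (acts.std(0) + 1e-8)        # standardize neurons
    corr = (z.T @ z) / acts.shape[0]                        # neuron-neuron correlations
    dissim = torch.sqrt(torch.clamp(2.0 * (1.0 - corr), min=1e-12))
    mst = minimum_spanning_tree(np.triu(dissim.detach().cpu().numpy(), k=1))
    rows, cols = mst.nonzero()
    edges = dissim[torch.as_tensor(rows, dtype=torch.long),
                   torch.as_tensor(cols, dtype=torch.long)]
    return -edges.mean()   # minimizing this increases MST edge dissimilarities

if __name__ == "__main__":
    acts = torch.randn(256, 32, requires_grad=True)
    penalty = decorrelation_penalty(acts)    # add (scaled) to the task loss
    penalty.backward()
    print(float(penalty), acts.grad.shape)
```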
null | https://openreview.net/forum?id=Mrssld4TyD | @inproceedings{
geng2023scalar,
title={Scalar Invariant Networks with Zero Bias},
author={Chuqin Geng and Xiaojie Xu and Haolin Ye and Xujie Si},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=Mrssld4TyD}
} | Just like weights, bias terms are learnable parameters in many popular machine learning models, including neural networks. Biases are believed to enhance the representational power of neural networks, enabling them to tackle various tasks in computer vision. Nevertheless, we argue that biases can be disregarded for some image-related tasks such as image classification, by considering the intrinsic distribution of images in the input space and desired model properties from first principles. Our empirical results suggest that zero-bias neural networks can perform comparably to normal networks for practical image classification tasks. Furthermore, we demonstrate that zero-bias neural networks possess a valuable property known as scalar (multiplicative) invariance. This implies that the network's predictions remain unchanged even when the contrast of the input image is altered. We further extend the scalar invariance property to more general cases, thereby attaining robustness within specific convex regions of the input space. We believe dropping bias terms can be considered a geometric prior when designing neural network architectures for image classification, which shares the spirit of adopting convolutions as a translational-invariance prior. | Scalar Invariant Networks with Zero Bias | [
"Chuqin Geng",
"Xiaojie Xu",
"Haolin Ye",
"Xujie Si"
] | Workshop/NeurReps | poster | 2211.08486 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
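The scalar invariance claimed in the entry above is easy to verify: a bias-free network built from linear layers and ReLUs is positively homogeneous, so rescaling the input rescales every logit by the same factor and leaves the predicted class unchanged. A minimal check, with a toy architecture assumed for illustration:

```python
# Sketch: a bias-free ReLU network is positively homogeneous, so scaling the
# input (e.g. changing image contrast by c > 0) scales all logits by the same
# factor and leaves the argmax prediction unchanged.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(784, 128, bias=False), nn.ReLU(),
    nn.Linear(128, 10, bias=False),
)

x = torch.randn(4, 784)
base_pred = net(x).argmax(1)
for c in (0.1, 1.0, 7.5):                    # positive "contrast" rescalings
    assert torch.equal(net(c * x).argmax(1), base_pred)
print("predictions are invariant under positive input scaling")
```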
null | https://openreview.net/forum?id=Mo5qZaBl8v | @inproceedings{
nguyen2023fast,
title={Fast Temporal Wavelet Graph Neural Networks},
author={Duc Thien Nguyen and Tuan Nguyen and Truong Son Hy and Risi Kondor},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=Mo5qZaBl8v}
} | Spatio-temporal signal forecasting plays an important role in numerous domains, especially in neuroscience and transportation. The task is challenging due to the highly intricate spatial structure, as well as the non-linear temporal dynamics, of the network. To facilitate reliable and timely forecasts for the human brain and traffic networks, we propose the Fast Temporal Wavelet Graph Neural Network (FTWGNN), which is both time- and memory-efficient for learning tasks on time-series data with an underlying graph structure, building on multiresolution analysis and wavelet theory on discrete spaces. We employ Multiresolution Matrix Factorization (MMF) (Kondor et al., 2014) to factorize the highly dense graph structure and compute the corresponding sparse wavelet basis, which allows us to construct a fast wavelet convolution as the backbone of our novel architecture. Experimental results on the real-world PEMS-BAY and METR-LA traffic datasets and the AJILE12 ECoG dataset show that FTWGNN is competitive with the state of the art while maintaining a low computational footprint. Our PyTorch implementation is publicly available at https://github.com/HySonLab/TWGNN | Fast Temporal Wavelet Graph Neural Networks | [
"Duc Thien Nguyen",
"Tuan Nguyen",
"Truong Son Hy",
"Risi Kondor"
] | Workshop/NeurReps | poster | 2302.08643 | [
"https://github.com/hysonlab/twgnn"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=LQoejMxeiv | @inproceedings{
kelshaw2023manifoldaugmented,
title={Manifold-augmented Eikonal Equations: Geodesic Distances and Flows on Differentiable Manifolds.},
author={Daniel Kelshaw and Luca Magri},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=LQoejMxeiv}
} | Manifolds discovered by machine learning models provide a compact representation of the underlying data. Geodesics on these manifolds define locally length-minimising curves and provide a notion of distance, which are key for reduced-order modelling, statistical inference, and interpolation. In this work, we propose a model-based parameterisation for distance fields and geodesic flows on manifolds, exploiting solutions of a manifold-augmented Eikonal equation. We demonstrate how the geometry of the manifold impacts the distance field, and exploit the geodesic flow to obtain globally length-minimising curves directly. This work opens opportunities for statistics and reduced-order modelling on differentiable manifolds. | Manifold-augmented Eikonal Equations: Geodesic Distances and Flows on Differentiable Manifolds. | [
"Daniel Kelshaw",
"Luca Magri"
] | Workshop/NeurReps | poster | [
"https://github.com/danielkelshaw/riemax"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=KWUA0n6Dpv | @inproceedings{
suresh2023pitfalls,
title={Pitfalls in Measuring Neural Transferability},
author={Suryaka Suresh and Vinayak Abrol and Anshul Thakur},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=KWUA0n6Dpv}
} | Transferability scores quantify the aptness of pre-trained models for a downstream task and help in selecting an optimal pre-trained model for transfer learning. This work aims to draw attention to the significant shortcomings of state-of-the-art transferability scores. To this end, we propose neural-collapse-based transferability scores that analyse the intra-class variability collapse and inter-class discriminative ability of the penultimate embedding space of a pre-trained model. Experimentation across the image and audio domains demonstrates that such a simple variability analysis of the feature space is more than enough to satisfy the current definition of transferability scores, and that there is a need for a new, generic definition of transferability. Further, building on these results, we highlight new research directions and postulate characteristics of an ideal transferability measure that will be helpful in streamlining future studies targeting this problem. | Pitfalls in Measuring Neural Transferability | [
"Suryaka Suresh",
"Vinayak Abrol",
"Anshul Thakur"
] | Workshop/NeurReps | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
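A minimal sketch of a neural-collapse-style transferability score in the spirit of the entry above: measure intra-class variability collapse as the ratio of within-class to between-class scatter of penultimate embeddings. The exact scores in the paper may be defined differently.

```python
# Sketch: collapse-based score from penultimate-layer embeddings `feats`
# of shape (n, d) with integer labels `y`; lower = stronger collapse.
import numpy as np

def collapse_score(feats: np.ndarray, y: np.ndarray) -> float:
    mu = feats.mean(0)
    within, between = 0.0, 0.0
    for c in np.unique(y):
        fc = feats[y == c]
        mu_c = fc.mean(0)
        within += ((fc - mu_c) ** 2).sum()
        between += len(fc) * ((mu_c - mu) ** 2).sum()
    return float(within / between)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.integers(0, 5, size=1000)
    centers = rng.normal(size=(5, 64))
    feats = centers[y] + 0.1 * rng.normal(size=(1000, 64))
    print(collapse_score(feats, y))   # small: tight, well-separated classes
```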
null | https://openreview.net/forum?id=IqUVsae1iK | @inproceedings{
mansfield2023random,
title={Random Field Augmentations for Self-Supervised Representation Learning},
author={Philip Mansfield and Arash Afkanpour and Warren Morningstar and Karan Singhal},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=IqUVsae1iK}
} | Self-supervised representation learning is heavily dependent on data augmentations to specify the invariances encoded in representations. Previous work has shown that applying diverse data augmentations is crucial to downstream performance, but augmentation techniques remain under-explored. In this work, we propose a new family of local transformations based on Gaussian random fields to generate image augmentations for self-supervised representation learning. These transformations generalize the well-established affine and color transformations (translation, rotation, color jitter, etc.) and greatly increase the space of augmentations by allowing transformation parameter values to vary from pixel to pixel. The parameters are treated as continuous functions of spatial coordinates, and modeled as independent Gaussian random fields. Empirical results show the effectiveness of the new transformations for self-supervised representation learning. Specifically, we achieve a 1.7% top-1 accuracy improvement over baseline on ImageNet downstream classification, and a 3.6% improvement on out-of-distribution iNaturalist downstream classification. However, due to the flexibility of the new transformations, learned representations are sensitive to hyperparameters. While mild transformations improve representations, we observe that strong transformations can degrade the structure of an image, indicating that balancing the diversity and strength of augmentations is important for improving generalization of learned representations. | Random Field Augmentations for Self-Supervised Representation Learning | [
"Philip Mansfield",
"Arash Afkanpour",
"Warren Morningstar",
"Karan Singhal"
] | Workshop/NeurReps | poster | 2311.03629 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
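A minimal sketch of the random-field idea from the entry above: draw per-pixel translation offsets from smooth Gaussian random fields (white noise blurred with a Gaussian kernel), generalizing a single global translation to one that varies across the image. Field strength and smoothness values here are illustrative, not the paper's settings.

```python
# Sketch: spatially varying translation augmentation from Gaussian random fields.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def smooth_field(rng, shape, smoothness):
    f = gaussian_filter(rng.normal(size=shape), sigma=smoothness)
    return f / (f.std() + 1e-8)                             # unit-variance smooth field

def random_field_translate(img, strength=3.0, smoothness=8.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = img.shape
    dx = strength * smooth_field(rng, (h, w), smoothness)   # offsets in pixels
    dy = strength * smooth_field(rng, (h, w), smoothness)
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return map_coordinates(img, [yy + dy, xx + dx], order=1, mode="reflect")

if __name__ == "__main__":
    img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0       # toy square image
    print(random_field_translate(img).shape)                # (64, 64)
```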
null | https://openreview.net/forum?id=GX4axrya0A | @inproceedings{
yang2023changes,
title={Changes in the geometry of hippocampal representations across brain states},
author={Wannan Yang and Chen Sun and Gyorgy Buzsaki},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=GX4axrya0A}
} | The hippocampus (HPC) is a key structure underlying the brain's capacity to learn and generalize. One pervasive phenomenon in the brain, but missing in AI, is the presence of different gross brain states. It is known that these different brain states give rise to diverse modes of information processing that are imperative for the hippocampus to learn and function, but the mechanisms by which they do so remain unknown. To study this, we harnessed the power of recently developed dimensionality reduction techniques to shed light on how HPC representations change across brain states. We compared the geometry of HPC neuronal representations as rodents learned to generalize across different environments, and showed that HPC representations could support both pattern separation and generalization. Next, we compared HPC activity during different stages of sleep. Consistent with the literature, we found a robust recapitulation of the previous awake experience during non-rapid eye movement (NREM) sleep. Interestingly, however, such geometric correspondence to previous awake experience was not observed during rapid eye movement (REM) sleep, suggesting a very different mode of information processing. This is the first known report of UMAP analysis on hippocampal neuronal data during REM sleep. We propose that characterizing and contrasting the geometry of hippocampal representations during different brain states can help understand the brain's mechanisms for learning and, in the future, can even help design the next generation of AI systems that learn and generalize better. | Changes in the geometry of hippocampal representations across brain states | [
"Wannan Yang",
"Chen Sun",
"Gyorgy Buzsaki"
] | Workshop/NeurReps | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=GUNTnnd4Hw | @inproceedings{
li2023entropymcmc,
title={Entropy-{MCMC}: Sampling from Flat Basins with Ease},
author={Bolian Li and Ruqi Zhang},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=GUNTnnd4Hw}
} | Bayesian deep learning counts on the quality of posterior distribution estimation. However, the posterior of deep neural networks is highly multi-modal in nature, with local modes exhibiting varying generalization performances. Given a practical budget, sampling from the original posterior can lead to suboptimal performances, as some samples may become trapped in "bad" modes and suffer from overfitting. Leveraging the observation that "good" modes with low generalization error often reside in flat basins of the energy landscape, we propose to bias the sampling on the posterior toward these flat regions. Specifically, we introduce an auxiliary guiding variable, the stationary distribution of which resembles a smoothed posterior free from sharp modes, to lead the MCMC sampler to flat basins. We prove the convergence of our method and further show that it converges faster than several existing flatness-aware methods in the strongly convex setting. Empirical results demonstrate that our method can successfully sample from flat basins of the posterior, and outperforms all compared baselines on multiple benchmarks including classification, calibration and out-of-distribution detection. | Entropy-MCMC: Sampling from Flat Basins with Ease | [
"Bolian Li",
"Ruqi Zhang"
] | Workshop/NeurReps | poster | 2310.05401 | [
"https://github.com/lblaoke/emcmc"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
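A 1D toy sketch of the coupled dynamics described above: run unadjusted Langevin updates on a joint density proportional to exp(-U(theta) - (theta - theta_a)^2 / (2 * eta)), so the guiding variable theta_a samples a Gaussian-smoothed version of the posterior. The toy energy, step size, and eta are illustrative assumptions, not the paper's configuration.

```python
# Sketch: coupled Langevin sampling of (theta, theta_a) from the joint
# exp(-U(theta) - (theta - theta_a)^2 / (2 * eta)).
import numpy as np

rng = np.random.default_rng(0)

def U(x):
    # Toy energy: a sharp mode near -2 and a broad, flat mode near +2.
    return -np.log(np.exp(-(x + 2) ** 2 / (2 * 0.05 ** 2))
                   + np.exp(-(x - 2) ** 2 / 2.0) + 1e-300)

def num_grad(f, x, h=1e-4):
    return (f(x + h) - f(x - h)) / (2 * h)

def entropy_mcmc(steps=20000, lr=1e-3, eta=0.25):
    theta, theta_a, samples = 0.0, 0.0, []
    for _ in range(steps):
        g = num_grad(U, theta) + (theta - theta_a) / eta    # joint gradient in theta
        theta += -lr * g + np.sqrt(2 * lr) * rng.normal()
        theta_a += -lr * (theta_a - theta) / eta + np.sqrt(2 * lr) * rng.normal()
        samples.append(theta_a)
    return np.array(samples)

if __name__ == "__main__":
    s = entropy_mcmc()
    print((s > 0).mean())   # bulk of guiding-variable samples in the flat basin
```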
null | https://openreview.net/forum?id=EoyeHdfJ6l | @inproceedings{
maurel2023rototranslation,
title={Roto-translation Equivariant {YOLO} for Aerial Images},
author={Benjamin Maurel and Samy Blusseau and Santiago Velasco-Forero and Teodora Petrisor},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=EoyeHdfJ6l}
} | This work introduces Eq-YOLO, an Equivariant One-Stage Object Detector based on YOLO-v8 that incorporates group convolutions to handle rotational transformations. We show the benefit of using equivariant transforms, improving detection performance on rotated data over the regular YOLO-v8 model while reducing the number of trainable parameters by a factor of more than three. | Roto-translation Equivariant YOLO for Aerial Images | [
"Benjamin Maurel",
"Samy Blusseau",
"Santiago Velasco-Forero",
"Teodora Petrisor"
] | Workshop/NeurReps | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=DPkBeXZV7a | @inproceedings{
sbail{\`o}2023emergence,
title={Emergence of Latent Binary Encoding in Deep Neural Network Classifiers},
author={Luigi Sbail{\`o} and Luca Ghiringhelli},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=DPkBeXZV7a}
} | We observe the emergence of binary encoding within the latent space of deep-neural-network classifiers. Such binary encoding is induced by introducing a linear penultimate layer, which is equipped during training with a loss function that grows as $\exp(\vec{x}^2)$, where $\vec{x}$ are the coordinates in the latent space. The phenomenon we describe represents a specific instance of a well-documented occurrence known as \textit{neural collapse}, which arises in the terminal phase of training and entails the collapse of latent class means to the vertices of a simplex equiangular tight frame (ETF). We show that binary encoding accelerates convergence toward the simplex ETF and enhances classification accuracy. | Emergence of Latent Binary Encoding in Deep Neural Network Classifiers | [
"Luigi Sbailò",
"Luca Ghiringhelli"
] | Workshop/NeurReps | poster | 2310.08224 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
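The simplex equiangular tight frame mentioned in the entry above has a closed form: for $C$ classes it can be realized as the columns of $\sqrt{C/(C-1)}\,(I_C - \frac{1}{C}\mathbf{1}\mathbf{1}^\top)$, giving unit-norm vertices with pairwise inner product $-1/(C-1)$. A quick numerical check:

```python
# Sketch: construct a simplex ETF for C classes and verify its two defining
# properties (unit-norm vertices, equal pairwise angles at -1/(C-1)).
import numpy as np

C = 5
M = np.sqrt(C / (C - 1)) * (np.eye(C) - np.ones((C, C)) / C)  # ETF columns
G = M.T @ M                                                   # Gram matrix
print(np.allclose(np.diag(G), 1.0))                # unit-norm vertices
off = G[~np.eye(C, dtype=bool)]
print(np.allclose(off, -1.0 / (C - 1)))            # equiangular at -1/(C-1)
```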
null | https://openreview.net/forum?id=CwJIpWzgDP | @inproceedings{
schaeffer2023testing,
title={Testing Assumptions Underlying a Unified Theory for the Origin of Grid Cells},
author={Rylan Schaeffer and Mikail Khona and Adrian Bertagnoli and Sanmi Koyejo and Ila Fiete},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=CwJIpWzgDP}
} | Representing and reasoning about physical space is fundamental to animal survival, and the mammalian lineage expresses a wealth of specialized neural representations that encode space. Grid cells, whose discovery earned a Nobel prize, are a striking example: a grid cell is a neuron that fires if and only if the animal is spatially located at the vertices of a regular triangular lattice that tiles all explored two-dimensional environments. Significant theoretical work has gone into understanding why mammals have learned these particular representations, and recent work has proposed a ``unified theory for the computational and mechanistic origin of grid cells," claiming to answer why the mammalian lineage has learned grid cells. However, the Unified Theory makes a series of highly specific assumptions about the target readouts of grid cells - putatively place cells. In this work, we explicitly identify what these mathematical assumptions are, then test two of the critical assumptions using biological place cell data. At both the population and single-cell levels, we find evidence suggesting that neither of the assumptions are likely true in biological neural representations. These results call the Unified Theory into question, suggesting that biological grid cells likely have a different origin than those obtained in trained artificial neural networks. | Testing Assumptions Underlying a Unified Theory for the Origin of Grid Cells | [
"Rylan Schaeffer",
"Mikail Khona",
"Adrian Bertagnoli",
"Sanmi Koyejo",
"Ila Fiete"
] | Workshop/NeurReps | poster | 2311.16295 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=CeG8jzTL2k | @inproceedings{
granberry2023soequivariant,
title={{SO}(3)-Equivariant Representation Learning in 2D Images},
author={Darnell Granberry and Alireza Nasiri and Jiayi Shou and Alex J Noble and Tristan Bepler},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=CeG8jzTL2k}
} | Imaging physical objects that are free to rotate and translate in 3D is challenging. While an object’s pose and location do not change its nature, varying them presents problems for current vision models. Equivariant models account for these nuisance transformations, but current architectures only model either 2D transformations of 2D signals or 3D transformations of 3D signals. Here, we propose a novel convolutional layer consisting of 2D projections of 3D filters that models 3D equivariances of 2D signals, which is critical for capturing the full space of spatial transformations of objects in imaging domains such as cryo-EM. We additionally present methods for aggregating our rotation-specific outputs. We demonstrate improvement on several tasks, including particle picking and pose estimation. | SO(3)-Equivariant Representation Learning in 2D Images | [
"Darnell Granberry",
"Alireza Nasiri",
"Jiayi Shou",
"Alex J Noble",
"Tristan Bepler"
] | Workshop/NeurReps | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=BapLxMxUIm | @inproceedings{
tegn{\'e}r2023selfsupervised,
title={Self-Supervised Latent Symmetry Discovery via Class-Pose Decomposition},
author={Gustaf Tegn{\'e}r and Hedvig Kjellstrom},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=BapLxMxUIm}
} | In this paper, we explore the discovery of latent symmetries of data in a self-supervised manner. By considering sequences of observations undergoing uniform motion, we can extract a shared group transformation from the latent observations. In contrast to previous work, we utilize a latent space in which the group and orbit component are decomposed. We show that this construction facilitates more accurate identification of the properties of the underlying group, which consequently results in an improved performance on a set of sequential prediction tasks. | Self-Supervised Latent Symmetry Discovery via Class-Pose Decomposition | [
"Gustaf Tegnér",
"Hedvig Kjellstrom"
] | Workshop/NeurReps | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=B3bxgPqxGT | @inproceedings{
d{\"o}nmez2023discovering,
title={Discovering Latent Causes and Memory Modification: A Computational Approach Using Symmetry and Geometry},
author={Arif D{\"o}nmez},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=B3bxgPqxGT}
} | We learn from our experiences, even though they are never exactly the same. This implies that we need to assess their similarity to apply what we have learned from one experience to another. It is proposed that we “cluster” our experiences based on hidden latent causes that we infer. It is also suggested that surprises, which occur when our predictions are incorrect, help us categorize our experiences into distinct groups. In this paper, we develop a computational theory that emulates these processes based on two basic concepts from intuitive physics and Gestalt psychology using symmetry and geometry. We apply our approach to simple tasks that involve inductive reasoning. Remarkably, the output of our computational approach aligns closely with human responses. | Discovering Latent Causes and Memory Modification: A Computational Approach Using Symmetry and Geometry | [
"Arif Dönmez"
] | Workshop/NeurReps | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ApeIFsnRvk | @inproceedings{
joseph2023on,
title={On the Information Geometry of Vision Transformers},
author={Sonia Joseph and Kumar Krishna Agrawal and Arna Ghosh and Blake Aaron Richards},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=ApeIFsnRvk}
} | Understanding the structure of high-dimensional representations learned by Vision Transformers (ViTs) provides a pathway toward developing a mechanistic understanding and further improving architecture design. In this work, we leverage tools from information geometry to characterize representation quality at a per-token (intra-token) level as well as across pairs of tokens (inter-token) in ViTs pretrained for object classification. In particular, we observe that these high-dimensional tokens exhibit a characteristic spectral decay in the feature covariance matrix. By measuring the rate of this decay (denoted by $\alpha$) for each token across transformer blocks, we discover an $\alpha$ signature, indicative of a transition from lower to higher effective dimensionality. We also demonstrate that tokens can be clustered based on their $\alpha$ signature, revealing that tokens corresponding to nearby spatial patches of the original image exhibit similar $\alpha$ trajectories. Furthermore, for measuring the complexity at the sequence level, we aggregate the correlation between pairs of tokens independently at each transformer block. A higher average correlation indicates a significant overlap between token representations and lower effective complexity. Notably, we observe a U-shaped trend across the model hierarchy, suggesting that token representations are more expressive in the intermediate blocks. Our findings provide a framework for understanding information processing in ViTs while providing tools to prune/merge tokens across blocks, thereby making the architectures more efficient. | On the Information Geometry of Vision Transformers | [
"Sonia Joseph",
"Kumar Krishna Agrawal",
"Arna Ghosh",
"Blake Aaron Richards"
] | Workshop/NeurReps | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
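A minimal sketch of the $\alpha$ measurement described in the entry above: estimate the power-law decay rate of the eigenspectrum of a token's feature covariance by a linear fit in log-log space. The fitting range and the synthetic test are illustrative assumptions.

```python
# Sketch: fit eigenvalue_i ~ i^(-alpha) for the covariance of one token's
# representations collected across many inputs.
import numpy as np

def spectral_alpha(feats: np.ndarray, k: int = 50) -> float:
    cov = np.cov(feats, rowvar=False)
    eig = np.sort(np.linalg.eigvalsh(cov))[::-1][:k]        # top-k eigenvalues
    ranks = np.arange(1, len(eig) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(eig + 1e-12), 1)
    return -slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, spectrum = 128, np.arange(1, 129) ** -1.2            # planted alpha = 1.2
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)))            # random orthonormal basis
    feats = rng.normal(size=(5000, d)) @ (Q * np.sqrt(spectrum)).T
    print(round(spectral_alpha(feats), 2))                  # approximately 1.2
```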
null | https://openreview.net/forum?id=9kxkYY7B3j | @inproceedings{
venkatesh2023the,
title={The Variability of Representations in Mice and Humans Changes with Learning, Engagement, and Attention},
author={Praveen Venkatesh and Corbett C Bennett and Sam Gale and Juri Minxha and Hristos Courellis and Greggory Robert Heller and Tamina Keira Ramirez and Severine Durand and Ueli Rutishauser and Shawn R Olsen and Stefan Mihalas},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=9kxkYY7B3j}
} | In responding to a visual stimulus, cortical neurons exhibit a high degree of variability, and this variability can be correlated across neurons. In this study, we use recordings from both mice and humans to systematically characterize how the variability in the representation of visual stimuli changes with learning, engagement and attention. We observe that in mice, familiarization with a set of images over many weeks reduces the variability of responses, but does not change its shape. Further, switching from passive to active task engagement changes the overall shape by shrinking the neural variability only along the task-relevant direction, leading to a higher signal-to-noise ratio. In a selective attention task in humans wherein multiple distributions are compared, a higher signal-to-noise ratio is obtained via a different mechanism, by mainly increasing the signal of the attended category. These findings show that representation variability can be adjusted with task needs. A potential speculative role for variability, consistent with these findings, is that it helps generalization. | The Variability of Representations in Mice and Humans Changes with Learning, Engagement, and Attention | [
"Praveen Venkatesh",
"Corbett C Bennett",
"Sam Gale",
"Juri Minxha",
"Hristos Courellis",
"Greggory Robert Heller",
"Tamina Keira Ramirez",
"Severine Durand",
"Ueli Rutishauser",
"Shawn R Olsen",
"Stefan Mihalas"
] | Workshop/NeurReps | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=9TQE2xGCbf | @inproceedings{
walker2023explicit,
title={Explicit Neural Surfaces: Learning Continuous Geometry with Deformation Fields},
author={Thomas Walker and Octave Mariotti and Amir Vaxman and Hakan Bilen},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=9TQE2xGCbf}
} | We introduce Explicit Neural Surfaces (ENS), an efficient smooth surface representation that directly encodes topology with a deformation field from a known base domain. We apply this representation to reconstruct explicit surfaces from multiple views, where we use a series of neural deformation fields to progressively transform the base domain into a target shape. By using meshes as discrete surface proxies, we train the deformation fields through efficient differentiable rasterization. Using a fixed base domain allows us to have Laplace-Beltrami eigenfunctions as an intrinsic positional encoding alongside standard extrinsic Fourier features, with which our approach can capture fine surface details. Compared to implicit surfaces, ENS trains faster and has several orders of magnitude faster inference times. The explicit nature of our approach also allows higher-quality mesh extraction whilst maintaining competitive surface reconstruction performance and real-time capabilities. | Explicit Neural Surfaces: Learning Continuous Geometry with Deformation Fields | [
"Thomas Walker",
"Octave Mariotti",
"Amir Vaxman",
"Hakan Bilen"
] | Workshop/NeurReps | poster | 2306.02956 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=9G0e8QrpxP | @inproceedings{
kohler2023symmetric,
title={Symmetric Models for Radar Response Modeling},
author={Colin Kohler and Nathan Vaska and Ramya Muthukrishnan and Whangbong Choi and Jung Yeon Park and Justin Goodwin and Rajmonda Caceres and Robin Walters},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=9G0e8QrpxP}
} | Many radar applications require complex radar signature models that incorporate characteristics of an object's shape and dynamics as well as sensing effects. Even though high-fidelity, first-principles radar simulators are available, they tend to be resource-intensive and do not easily support the requirements of agile and large-scale AI development and evaluation frameworks. Deep learning represents an attractive alternative to these numerical methods, but can have large data requirements and limited generalization ability. In this work, we present the Radar Equivariant Model (REM), the first $SO(3)$-equivariant model for predicting radar responses from object meshes. By constraining our model to the symmetries inherent to radar sensing, REM is able to achieve high-quality reconstruction of signals generated by a first-principles radar model and shows improved performance and sample efficiency over other encoder-decoder models. | Symmetric Models for Radar Response Modeling | [
"Colin Kohler",
"Nathan Vaska",
"Ramya Muthukrishnan",
"Whangbong Choi",
"Jung Yeon Park",
"Justin Goodwin",
"Rajmonda Caceres",
"Robin Walters"
] | Workshop/NeurReps | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=8zWcBUoeR6 | @inproceedings{
wang2023the,
title={The Surprising Effectiveness of Equivariant Models in Domains with Latent Symmetry},
author={Dian Wang and Jung Yeon Park and Neel Sortur and Lawson Wong and Robin Walters and Robert Platt},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=8zWcBUoeR6}
} | Extensive work has demonstrated that equivariant neural networks can significantly improve sample efficiency and generalization by enforcing an inductive bias in the network architecture. These applications typically assume that the domain symmetry is fully described by explicit transformations of the model inputs and outputs. However, many real-life applications contain only latent or partial symmetries which cannot be easily described by simple transformations of the input. In these cases, it is necessary to \emph{learn} symmetry in the environment instead of imposing it mathematically on the network architecture. We discover, surprisingly, that imposing equivariance constraints that do not exactly match the domain symmetry is very helpful in learning the true symmetry in the environment. We differentiate between \emph{extrinsic} and \emph{incorrect} symmetry constraints and show that while imposing incorrect symmetry can impede the model's performance, imposing extrinsic symmetry can actually improve performance. We demonstrate that an equivariant model can significantly outperform non-equivariant methods on domains with latent symmetries. | The Surprising Effectiveness of Equivariant Models in Domains with Latent Symmetry | [
"Dian Wang",
"Jung Yeon Park",
"Neel Sortur",
"Lawson Wong",
"Robin Walters",
"Robert Platt"
] | Workshop/NeurReps | poster | 2211.09231 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=8NA7a1mCdu | @inproceedings{
christiansen2023large,
title={Large language models partially converge toward human-like concept organization},
author={Jonathan Gabel Christiansen and Mathias Gammelgaard and Anders S{\o}gaard},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=8NA7a1mCdu}
} | Large language models show human-like performance in knowledge extraction, reasoning and dialogue, but it remains controversial whether this performance is best explained by memorization and pattern matching, or whether it reflects human-like inferential semantics and world knowledge. Knowledge bases such as WikiData provide large-scale, high-quality representations of inferential semantics and world knowledge. We show that large language models learn to organize concepts in ways that are strikingly similar to how concepts are organized in such knowledge bases. Knowledge bases model collective, institutional knowledge, and large language models seem to induce such knowledge from raw text. We show that bigger and better models exhibit more human-like concept organization, across four families of language models and three knowledge graph embeddings. | Large language models partially converge toward human-like concept organization | [
"Jonathan Gabel Christiansen",
"Mathias Gammelgaard",
"Anders Søgaard"
] | Workshop/NeurReps | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=7jHGa1nS47 | @inproceedings{
wilson2023cayley,
title={Cayley Graph Propagation},
author={JJ Wilson and Petar Veli{\v{c}}kovi{\'c}},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=7jHGa1nS47}
} | In spite of the plethora of success stories with graph neural networks (GNNs) on modelling graph-structured data, they are notoriously vulnerable to tasks which necessitate mixing of information between distant pairs of nodes, especially in the presence of bottlenecks in the graph. For this reason, a significant body of research has dedicated itself to discovering or pre-computing graph structures which ameliorate such bottlenecks. Bottleneck-free graphs are well-known in the mathematical community as *expander graphs*, with prior work—Expander Graph Propagation (EGP)—proposing the use of a well-known expander graph family—the Cayley graphs of the $\mathrm{SL}(2,\mathbb{Z}_n)$ special linear group—as a computational template for GNNs. However, despite its solid theoretical grounding, the actual computational graphs used by EGP are *truncated* Cayley graphs, which causes them to lose expansion properties. In this work, we propose to use the full Cayley graph within EGP, recovering significant improvements on datasets from the Open Graph Benchmark (OGB). Our empirical evidence suggests that the retention of the nodes in the expander graph can provide benefit for graph representation learning, which may provide valuable insight for future models. | Cayley Graph Propagation | [
"JJ Wilson",
"Petar Veličković"
] | Workshop/NeurReps | poster | 2410.03424 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
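The Cayley graphs used in the entry above are straightforward to construct: take SL(2, Z_n) with the two standard unipotent generators (and their inverses) and close under multiplication from the identity. A small sketch follows (n = 5 shown); EGP-style models would then use this graph, or a truncation of it, as the message-passing template.

```python
# Sketch: Cayley graph of SL(2, Z_n) under the two standard unipotent
# generators and their inverses, built by breadth-first closure from the
# identity. For n = 5 this gives |SL(2, Z_5)| = 120 nodes, each of degree 4.
from collections import deque
import numpy as np

def cayley_sl2(n: int):
    gens = [np.array(g) % n for g in (
        [[1, 1], [0, 1]], [[1, 0], [1, 1]],         # generators
        [[1, -1], [0, 1]], [[1, 0], [-1, 1]],       # their inverses mod n
    )]
    start = (1, 0, 0, 1)                             # flattened identity matrix
    nodes, edges, queue = {start}, set(), deque([start])
    while queue:
        u = queue.popleft()
        U = np.array(u).reshape(2, 2)
        for g in gens:
            v = tuple(int(x) for x in ((U @ g) % n).ravel())
            edges.add(tuple(sorted((u, v))))
            if v not in nodes:
                nodes.add(v)
                queue.append(v)
    return nodes, edges

if __name__ == "__main__":
    nodes, edges = cayley_sl2(5)
    print(len(nodes), len(edges))   # 120 nodes, 240 undirected edges
```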
null | https://openreview.net/forum?id=5W9so5v0OU | @inproceedings{
mochizuki-freeman2023geometry,
title={Geometry of abstract learned knowledge in deep {RL} agents},
author={James Mochizuki-Freeman and Md Rysul Kabir and Mitesh Gulecha and Zoran Tiganj},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=5W9so5v0OU}
} | Data from neural recordings suggest that mammalian brains represent physical and abstract task-relevant variables through low-dimensional neural manifolds. In a recent electrophysiological study (Nieh et al., 2021), mice performed an evidence accumulation task while moving along a virtual track. Nonlinear dimensionality reduction of the population activity revealed that task-relevant variables were jointly mapped in an orderly manner in the low-dimensional space. Here we trained deep reinforcement learning (RL) agents on the same evidence accumulation task and found that their neural activity can be described with a low-dimensional manifold spanned by task-relevant variables. These results provide further insight into similarities and differences between neural dynamics in mammals and deep RL agents. Furthermore, we showed that manifold learning can be used to characterize the representational space of the RL agents with the potential to improve the interpretability of decision-making in RL. | Geometry of abstract learned knowledge in deep RL agents | [
"James Mochizuki-Freeman",
"Md Rysul Kabir",
"Mitesh Gulecha",
"Zoran Tiganj"
] | Workshop/NeurReps | oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=4OSJeCAMi6 | @inproceedings{
han2023curvature,
title={Curvature Fields from Shading Fields},
author={Xinran Han and Todd Zickler},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=4OSJeCAMi6}
} | We re-examine the estimation of 3D shape from images that are caused by shading of diffuse Lambertian surfaces. We propose a neural model that is motivated by the well-documented perceptual effect in which shape is perceived from shading without a precise perception of lighting. Our model operates independently in each receptive field and produces a scalar statistic of surface curvature for that field. The model’s architecture builds on previous mathematical analyses of lighting-invariant shape constraints, and it leverages geometric structure to provide equivariance under image rotations and translations. Applying our model in parallel across a dense set of receptive fields produces a curvature field that we show is quite stable under changes to a surface’s albedo pattern (texture) and also to changes in lighting, even when lighting varies spatially across the surface. | Curvature Fields from Shading Fields | [
"Xinran Han",
"Todd Zickler"
] | Workshop/NeurReps | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=3ItzNHPov9 | @inproceedings{
klee2023a,
title={A Comparison of Equivariant Vision Models with ImageNet Pre-training},
author={David Klee and Jung Yeon Park and Robert Platt and Robin Walters},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=3ItzNHPov9}
} | Neural networks pre-trained on large datasets provide useful embeddings for downstream tasks and allow researchers to iterate with less compute. For computer vision tasks, ImageNet pre-trained models can be easily downloaded for fine-tuning. However, no such pre-trained models are available that are equivariant to image transformations. In this work, we implement several equivariant versions of the residual network architecture and publicly release the weights after training on ImageNet. Additionally, we perform a comparison of enforced vs. learned equivariance in the largest data regime to date. | A Comparison of Equivariant Vision Models with ImageNet Pre-training | [
"David Klee",
"Jung Yeon Park",
"Robert Platt",
"Robin Walters"
] | Workshop/NeurReps | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=2sLBXyVsPE | @inproceedings{
mcneela2023almost,
title={Almost Equivariance via Lie Algebra Convolutions},
author={Daniel McNeela},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=2sLBXyVsPE}
} | Recently, the $\textit{equivariance}$ of models with respect to a group action has become an important topic of research in machine learning. Analysis of the built-in equivariance of existing neural network architectures, as well as the study of methods for building model architectures that explicitly ``bake in'' equivariance, has become a significant research area in its own right. However, imbuing an architecture with a specific group equivariance imposes a strong prior on the types of data transformations that the model expects to see. While strictly-equivariant models enforce symmetries, such as those due to rotations or translations, real-world data does not always follow such strict equivariances, be it due to noise in the data or underlying physical laws that encode only approximate or partial symmetries. In such cases, the prior of strict equivariance can actually prove too strong and cause models to underperform on real-world data. Therefore, in this work we study a closely related topic, that of $\textit{almost equivariance}$. We give a practical method for encoding almost equivariance in models by appealing to the Lie algebra of a Lie group and defining $\textit{Lie algebra convolutions}$. We demonstrate that Lie algebra convolutions offer several benefits over Lie group convolutions, including being computationally tractable and well-defined for non-compact groups. Finally, we demonstrate the validity of our approach by benchmarking against datasets in fully equivariant and almost equivariant settings. | Almost Equivariance via Lie Algebra Convolutions | [
"Daniel McNeela"
] | Workshop/NeurReps | poster | 2310.13164 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=2EuaV9an6m | @inproceedings{
sonoda2023deep,
title={Deep Ridgelet Transform: Voice with Koopman Operator Constructively Proves Universality of Formal Deep Networks},
author={Sho Sonoda and Yuka Hashimoto and Isao Ishikawa and Masahiro Ikeda},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=2EuaV9an6m}
} | We identify hidden layers inside a deep neural network (DNN) with group actions on the data domain, and formulate a formal deep network as a dual voice transform with respect to the Koopman operator, a linear representation of the group action. Based on the group theoretic arguments, particularly by using Schur's lemma, we show a simple proof of the universality of DNNs. | Deep Ridgelet Transform: Voice with Koopman Operator Constructively Proves Universality of Formal Deep Networks | [
"Sho Sonoda",
"Yuka Hashimoto",
"Isao Ishikawa",
"Masahiro Ikeda"
] | Workshop/NeurReps | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=0vj5llDXVO | @inproceedings{
nguyen2023learning,
title={Learning Symmetrization for Equivariance with Orbit Distance Minimization},
author={Dat Tien Nguyen and Jinwoo Kim and Hongseok Yang and Seunghoon Hong},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=0vj5llDXVO}
} | We present a general framework for symmetrizing an arbitrary neural-network architecture and making it equivariant with respect to a given group. We build upon the proposals of Kim et al. (2023); Kaba et al. (2023) for symmetrization, and improve them by replacing their conversion of neural features into group representations, with an optimization whose loss intuitively measures the distance between group orbits. This change makes our approach applicable to a broader range of matrix groups, such as the Lorentz group O(1, 3), than these two proposals. We experimentally show our method’s competitiveness on the SO(2) image classification task, and also its increased generality on the task with O(1, 3). Our implementation will be made accessible at https://github.com/tiendatnguyen-vision/Orbit-symmetrize. | Learning Symmetrization for Equivariance with Orbit Distance Minimization | [
"Dat Tien Nguyen",
"Jinwoo Kim",
"Hongseok Yang",
"Seunghoon Hong"
] | Workshop/NeurReps | poster | 2311.07143 | [
"https://github.com/tiendatnguyen-vision/orbit-symmetrize"
] | https://huggingface.co/papers/2311.07143 | 0 | 1 | 0 | 4 | 1 | [] | [] | [] |
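The entry above builds on symmetrization frameworks (Kim et al., 2023; Kaba et al., 2023). A minimal sketch of the basic symmetrization operation such methods build on, shown for the finite group C4 on images; the paper's actual contribution (orbit-distance optimization enabling groups like O(1, 3)) is not reproduced here.

```python
# Sketch: symmetrize an arbitrary backbone by averaging over a finite group
# action (C4 = 90-degree rotations). Rotating the input permutes the terms
# of the average, so the output is exactly invariant.
import torch
import torch.nn as nn

class C4Symmetrized(nn.Module):
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone

    def forward(self, x):  # x: (batch, channels, H, W), H == W
        outs = [self.backbone(torch.rot90(x, k, dims=(-2, -1))) for k in range(4)]
        return torch.stack(outs).mean(0)

if __name__ == "__main__":
    f = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy backbone
    model = C4Symmetrized(f)
    x = torch.randn(2, 3, 32, 32)
    same = torch.allclose(model(x), model(torch.rot90(x, 1, dims=(-2, -1))), atol=1e-5)
    print(same)  # True: invariant to 90-degree rotations
```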
null | https://openreview.net/forum?id=0Atc0bcU6x | @inproceedings{
cesa2023algebraic,
title={Algebraic Topological Networks via the Persistent Local Homology Sheaf},
author={Gabriele Cesa and Arash Behboodi},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=0Atc0bcU6x}
} | In this work, we introduce a novel approach based on algebraic topology to enhance graph convolution and attention modules by incorporating local topological properties of the data. To do so, we consider the framework of sheaf neural networks, which has been previously leveraged to incorporate additional structure into graph neural networks’ features and construct more expressive, non-isotropic messages. Specifically, given an input simplicial complex (e.g. generated by the cliques of a graph or the neighbors in a point cloud), we construct its local homology sheaf, which assigns to each node the vector space of its local homology. The intermediate features of our networks live in these vector spaces and we leverage the associated sheaf Laplacian to construct more complex linear messages between them. Moreover, we extend this approach by considering the persistent version of local homology associated with a weighted simplicial complex (e.g., built from pairwise distances of nodes embeddings). This i) solves the problem of the lack of a natural choice of basis for the local homology vector spaces and ii) makes the sheaf itself differentiable, which enables our models to directly optimize the topology of their intermediate features. | Algebraic Topological Networks via the Persistent Local Homology Sheaf | [
"Gabriele Cesa",
"Arash Behboodi"
] | Workshop/NeurReps | poster | 2311.10156 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=030cPt4d8i | @inproceedings{
marchetti2023neural,
title={Neural Lattice Reduction: A Self-Supervised Geometric Deep Learning Approach},
author={Giovanni Luca Marchetti and Gabriele Cesa and Kumar Pratik and Arash Behboodi},
booktitle={NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations},
year={2023},
url={https://openreview.net/forum?id=030cPt4d8i}
} | Lattice reduction is a combinatorial optimization problem aimed at finding the most orthogonal basis in a given lattice. In this work, we address lattice reduction via deep learning methods. We design a deep neural model outputting factorized unimodular matrices and train it in a self-supervised manner by penalizing non-orthogonal lattice bases. We incorporate the symmetries of lattice reduction into the model by making it invariant and equivariant with respect to appropriate continuous and discrete groups. | Neural Lattice Reduction: A Self-Supervised Geometric Deep Learning Approach | [
"Giovanni Luca Marchetti",
"Gabriele Cesa",
"Kumar Pratik",
"Arash Behboodi"
] | Workshop/NeurReps | poster | 2311.08170 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=x4I3Ze3tP6 | @inproceedings{
agrawal2024do,
title={Do Language Models Know When They're Hallucinating References?},
author={Ayush Agrawal and Mirac Suzgun and Lester Mackey and Adam Kalai},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=x4I3Ze3tP6}
} | State-of-the-art language models (LMs) are famous for "hallucinating'' references. These fabricated article and book titles lead to harms, obstacles to their use, and public backlash. While other types of LM hallucinations are also important, we propose hallucinated references as the "drosophila'' of research on hallucination in large language models (LLMs), as they are particularly easy to study. We show that simple search engine queries reliably identify such hallucinations, which facilitates evaluation. To begin to dissect the nature of hallucinated LM references, we attempt to classify them using black-box queries to the same LM, without consulting any external resources. Consistency checks done with _direct_ queries about whether the generated reference title is real (inspired by Kadavath et al. (2022), Lin et al. (2022) and Manakul (2023)) are compared to consistency checks with _indirect_ queries which ask for ancillary details such as the authors of the work. These consistency checks are found to be partially reliable indicators of whether or not the reference is a hallucination. In particular, we find that LMs often hallucinate _differing_ authors of hallucinated references when queried in independent sessions, while _consistently_ identifying the authors of real references. This suggests that the hallucination may be more of a generation issue than something inherent to current training techniques or representations. | Do Language Models Know When They're Hallucinating References? | [
"Ayush Agrawal",
"Mirac Suzgun",
"Lester Mackey",
"Adam Kalai"
] | Workshop/ICBINB | 2023 | 2305.18248 | [
"https://github.com/microsoft/hallucinated-references"
] | https://huggingface.co/papers/2305.18248 | 2 | 0 | 0 | 4 | 1 | [] | [] | [] |
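A sketch of the indirect consistency check described in the abstract above, with `query_lm` as a hypothetical black-box call to the same LM in independent sessions (it is assumed to return a list of author names).

```python
# Sketch: ask for the authors of a generated title several times and measure
# agreement across sessions. Low agreement suggests a hallucinated reference.
from itertools import combinations

def author_consistency(title: str, query_lm, n_sessions: int = 3) -> float:
    answers = [frozenset(query_lm(f"Who wrote '{title}'? List the authors."))
               for _ in range(n_sessions)]
    pairs = list(combinations(answers, 2))
    overlaps = [len(a & b) / max(len(a | b), 1) for a, b in pairs]  # Jaccard
    return sum(overlaps) / len(pairs)
```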
null | https://openreview.net/forum?id=w7o14LCw9P | @inproceedings{
zheng2024why,
title={Why Does Chat{GPT} Fall Short in Providing Truthful Answers?},
author={Shen Zheng and Jie Huang and Kevin Chang},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=w7o14LCw9P}
} | Recent advancements in large language models, such as ChatGPT, have demonstrated significant potential to impact various aspects of human life. However, ChatGPT still faces challenges in providing reliable and accurate answers to user questions. To better understand the model’s particular weaknesses in providing truthful answers, we embark on an in-depth exploration of open-domain question answering. Specifically, we undertake a detailed examination of ChatGPT’s failures, categorized into four types: comprehension, factuality, specificity, and inference. We further pinpoint factuality as the failure type contributing most and identify two critical abilities associated with factuality: knowledge memorization and knowledge recall. Through experiments focusing on factuality, we propose several potential enhancement strategies. Our findings suggest that augmenting the model with granular external knowledge and cues for knowledge recall can enhance the model’s factuality in answering questions. | Why Does ChatGPT Fall Short in Providing Truthful Answers? | [
"Shen Zheng",
"Jie Huang",
"Kevin Chang"
] | Workshop/ICBINB | 2023 | 2304.10513 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=vxfkVY2SLj | @inproceedings{
garg2024on,
title={On the performance of Multimodal Language Models},
author={Utsav Garg and Erhan Bas},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=vxfkVY2SLj}
} | Instruction-tuned large language models (LLMs) have demonstrated promising zero-shot generalization capabilities across various downstream tasks. Recent research has introduced multimodal capabilities to LLMs by integrating independently pretrained vision encoders through model grafting. These multimodal variants undergo instruction tuning, similar to LLMs, enabling effective zero-shot generalization for multimodal tasks. This study conducts a comparative analysis of different multimodal instruction tuning approaches and evaluates their performance across a range of tasks, including complex reasoning, conversation, image captioning, multiple-choice questions (MCQs), and binary classification. Through rigorous benchmarking and ablation experiments, we reveal key insights for guiding architectural choices when incorporating multimodal capabilities into LLMs. However, current approaches have limitations; they do not sufficiently address the need for a diverse multimodal instruction dataset, which is crucial for enhancing task generalization. Additionally, they overlook issues related to truthfulness and factuality when generating responses. These findings illuminate current methodological constraints in adapting language models for image comprehension and provide valuable guidance for researchers and practitioners seeking to harness multimodal versions of LLMs. | On the performance of Multimodal Language Models | [
"Utsav Garg",
"Erhan Bas"
] | Workshop/ICBINB | 2023 | 2310.03211 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=vAiEQBh2AW | @inproceedings{
schwinn2024adversarial,
title={Adversarial Attacks and Defenses in Large Language Models: Old and New Threats},
author={Leo Schwinn and David Dobre and Stephan G{\"u}nnemann and Gauthier Gidel},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=vAiEQBh2AW}
} | Over the past decade, there has been extensive research aimed at enhancing the robustness of neural networks, yet this problem remains vastly unsolved. Here, one major impediment has been the overestimation of the robustness of new defense approaches due to faulty defense evaluations. Flawed robustness evaluations necessitate rectifications in subsequent works, dangerously slowing down the research and providing a false sense of security. In this context, we will face substantial challenges associated with an impending adversarial arms race in natural language processing, specifically with closed-source Large Language Models (LLMs), such as ChatGPT, Google Bard, or Anthropic’s Claude. We provide a first set of prerequisites to improve the robustness assessment of new approaches and reduce the amount of faulty evaluations. Additionally, we identify embedding space attacks on LLMs as another viable threat model for the purposes of generating malicious content in open-sourced models. Finally, we demonstrate on a recently proposed defense that, without LLM-specific best practices in place, it is easy to overestimate the robustness of a new approach. Code is available at https://anonymous.4open.science/r/LLM_Embedding_Attack-6C3C | Adversarial Attacks and Defenses in Large Language Models: Old and New Threats | [
"Leo Schwinn",
"David Dobre",
"Stephan Günnemann",
"Gauthier Gidel"
] | Workshop/ICBINB | 2023 | 2310.19737 | [
"https://github.com/schwinnl/llm_embedding_attack"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
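A minimal sketch of the embedding-space threat model mentioned above: instead of searching over discrete tokens, a continuous "soft" prefix is optimized by gradient descent to raise the likelihood of a target string. The loop and hyper-parameters are illustrative assumptions, and the model is assumed to be an open-source Hugging Face causal LM on CPU/float32.

```python
import torch

def embedding_attack(model, target_ids, prefix_len=20, steps=100, lr=1e-2):
    emb_matrix = model.get_input_embeddings().weight          # (V, d)
    prefix = torch.randn(1, prefix_len, emb_matrix.shape[1],
                         requires_grad=True)
    opt = torch.optim.Adam([prefix], lr=lr)
    target_emb = emb_matrix[target_ids].detach().unsqueeze(0)  # (1, T, d)
    for _ in range(steps):
        inputs = torch.cat([prefix, target_emb], dim=1)
        logits = model(inputs_embeds=inputs).logits
        # next-token loss only on the target positions
        shift = logits[:, prefix_len - 1:-1, :]
        loss = torch.nn.functional.cross_entropy(
            shift.reshape(-1, shift.size(-1)), target_ids.reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()
    return prefix.detach()
```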
null | https://openreview.net/forum?id=tGM7rOmJzV | @inproceedings{
chen2024transformerbased,
title={Transformer-Based Large Language Models Are Not General Learners: A Universal Circuit Perspective},
author={Yang Chen and Yitao Liang and Zhouchen Lin},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=tGM7rOmJzV}
} | Large Language Models (LLMs) have demonstrated remarkable proficiency across diverse tasks, evoking perceptions of ``sparks of Artificial General Intelligence (AGI)''. A key question naturally arises: *Can foundation models lead to AGI?* In this work, we try to answer this question partially by formally considering the capabilities of Transformer-based LLMs (T-LLMs) from the perspective of universal circuits.
By investigating the expressive power of realistic T-LLMs as universal circuits, we show that a T-LLM of size $\operatorname{poly}(n)$ cannot perform all the basic operators of input length $O\left(\operatorname{poly}(\log n)\right)$. We also demonstrate that a constant-depth-$\operatorname{poly}(n)$-size log-precision T-LLM cannot faithfully execute prompts of complexity $n$. Our analysis provides a concrete theoretical foundation that T-LLMs can only be universal circuits for limited function classes. In other words, T-LLMs are not general learners. Furthermore, we exhibit that a constant-depth-$\operatorname{poly}(n)$-size log-precision T-LLM can memorize $O\left(\operatorname{poly}(n)\right)$ instances, which could partially explain the seeming inconsistency between LLMs' empirical successes and our negative results. To the best of our knowledge, our work takes the first step towards analyzing the limitations of T-LLMs as general learners within a rigorous theoretical framework. Our results promote the understanding of LLMs' capabilities and highlight the need for innovative architecture designs beyond Transformers to break current limitations. | Transformer-Based Large Language Models Are Not General Learners: A Universal Circuit Perspective | [
"Yang Chen",
"Yitao Liang",
"Zhouchen Lin"
] | Workshop/ICBINB | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=tCZFmDyPFm | @inproceedings{
du2024a,
title={A Study on Improving Reasoning in Language Models},
author={Yuqing Du and Alexander Havrilla and Sainbayar Sukhbaatar and Pieter Abbeel and Roberta Raileanu},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=tCZFmDyPFm}
} | Accurately carrying out complex reasoning is a crucial component of deployable and reliable language models. While current language models can exhibit this capability with few-shot guidance, accurate reasoning is primarily restricted to larger model sizes. In this work, we explore methods for improving the reasoning capabilities of smaller language models which are more deployable than their larger counterparts. Specifically, we look at variations of supervised learning, online reinforcement learning with PPO, and distillation from larger models. Surprisingly, for reasoning tasks such as CommonsenseQA and GSM8K, we find that simple filtered supervised learning often outperforms reward-conditioned supervised learning, and that simpler iterative supervised learning performs on par with online reinforcement learning. | A Study on Improving Reasoning in Language Models | [
"Yuqing Du",
"Alexander Havrilla",
"Sainbayar Sukhbaatar",
"Pieter Abbeel",
"Roberta Raileanu"
] | Workshop/ICBINB | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
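A sketch of the filtered supervised learning baseline the abstract above finds surprisingly strong; `sample`, `is_correct`, and `finetune` are hypothetical stand-ins for the training stack.

```python
# Sketch: sample solutions from the current model, keep only those whose
# final answer checks out (e.g. against the GSM8K ground truth), and
# fine-tune on the survivors. Repeating the loop gives the iterative variant.
def filtered_sft(model, problems, n_samples=4, n_rounds=3):
    for _ in range(n_rounds):
        kept = []
        for prob in problems:
            for _ in range(n_samples):
                solution = sample(model, prob.prompt)       # hypothetical
                if is_correct(solution, prob.answer):       # hypothetical
                    kept.append((prob.prompt, solution))
        model = finetune(model, kept)    # plain supervised loss, hypothetical
    return model
```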
null | https://openreview.net/forum?id=qED8CGow7f | @inproceedings{
lee2024interactive,
title={Interactive Model Correction with Natural Language},
author={Yoonho Lee and Michelle Lam and Helena Vasconcelos and Michael Bernstein and Chelsea Finn},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=qED8CGow7f}
} | In supervised learning, models are trained to extract correlations from a static dataset. This often leads to models that rely on spurious correlations that fail to generalize to new data distributions, such as a bird classifier that relies on the background of an image. Preventing models from latching on to spurious correlations necessarily requires additional information beyond labeled data. Existing methods incorporate forms of additional instance-level supervision, such as labels for spurious features or additional labeled data from a balanced distribution. Such strategies can become prohibitively costly for large-scale datasets since they require additional annotation at a scale close to the original training data. We hypothesize that far less supervision suffices if we provide targeted feedback about the misconceptions of models trained on a given dataset. We introduce Clarify, a novel natural language interface and method for interactively correcting model misconceptions. Through Clarify, users need only provide a short text description to describe a model's consistent failure patterns, such as ``water background'' for a bird classifier. Then, in an entirely automated way, we use such descriptions to improve the training process by reweighting the training data or gathering additional targeted data. Our empirical results show that non-expert users can successfully describe model misconceptions via Clarify, improving worst-group accuracy by an average of 7.3% in two datasets with spurious correlations. Finally, we use Clarify to find and rectify 31 novel spurious correlations in ImageNet, improving minority-split accuracy from 21.1% to 28.7%. | Interactive Model Correction with Natural Language | [
"Yoonho Lee",
"Michelle Lam",
"Helena Vasconcelos",
"Michael Bernstein",
"Chelsea Finn"
] | Workshop/ICBINB | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
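A hedged sketch of how a user's misconception phrase could be turned into per-example training weights; `clip_similarity` is a hypothetical image-text scoring helper, and the threshold is illustrative.

```python
# Sketch: downweight training images that match the described spurious
# feature (e.g. "water background"), then retrain with the new weights.
import numpy as np

def reweight(images, labels, phrase, downweight=0.2, threshold=0.25):
    sims = np.array([clip_similarity(img, phrase) for img in images])
    weights = np.ones(len(images))
    weights[sims > threshold] = downweight   # damp examples showing the
    return weights                           # spurious feature
```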
null | https://openreview.net/forum?id=pTEm4Gz7xL | @inproceedings{
tan2024structureaware,
title={Structure-Aware Path Inference for Neural Finite State Transducers},
author={Weiting Tan and Chu-Cheng Lin and Jason Eisner},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=pTEm4Gz7xL}
} | Finite-state transducers (FSTs) are a traditional approach to string-to-string mapping. Each FST path specifies a possible alignment of input and output strings. Compared to an unstructured seq2seq model, the FST includes an explicit latent alignment variable and equips it with domain-specific hard constraints and featurization, which can improve generalization from small training sets.
Previous work has shown how to score the FST paths with a trainable neural architecture; this improves the model's expressive power by dropping the usual Markov assumption but makes inference more difficult for the same reason. In this paper, we focus on the resulting challenge of imputing the latent alignment path that explains a given pair of input and output strings (e.g. during training). We train three autoregressive approximate models for amortized inference of the path, which can then be used as proposal distributions for importance sampling. All three models perform lookahead. Our most sophisticated (and novel) model leverages the FST structure to consider the graph of future paths; unfortunately, we find that it loses out to the simpler approaches---except on an \emph{artificial} task that we concocted to confuse the simpler approaches. | Structure-Aware Path Inference for Neural Finite State Transducers | [
"Weiting Tan",
"Chu-Cheng Lin",
"Jason Eisner"
] | Workshop/ICBINB | 2023 | 2312.13614 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=nZM9Wxu3vw | @inproceedings{
nayak2024analyzing,
title={Analyzing the factual knowledge of parameter efficient instruction tuned mid-size Large Language Models},
author={Anmol Nayak and Hariprasad Timmapathini},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=nZM9Wxu3vw}
} | Large Language Models (LLMs) have significantly improved Natural Language Processing (NLP) by enhancing the accuracy, efficiency, and versatility of various NLP applications, from text generation to language translation, due to their ability to capture and leverage vast amounts of linguistic and factual knowledge. While LLMs have pushed the boundaries, they typically need to be further instruction tuned to get improved performance on niche applications. In this paper, we focus on analyzing the factual knowledge of LLMs keeping in mind the practical aspects of using LLMs by: 1) training only a small injection model (having ≈ 0.05% of the parameters of the base LLM) using the Low Rank Adaptation (LoRA) parameter-efficient technique, and 2) restricting our study to Llama-2-13b-chat and StableBeluga-13B, which are two mid-size LLMs having 13 billion parameters and are based on the Llama 2 architecture. The injection model is instruction tuned for Knowledge Base (KB) construction on the LM-KBC 2023 challenge dataset, which contains subject-relation-object triplets of Wikipedia entities across 21 different factual relations. Our empirical analysis shows that even after instruction tuning, the LLMs are: 1) deficient in foundational knowledge of many must-know areas like Geography, 2) unable to effectively use the context supplied in the prompt, and 3) fragile to subtle changes in prompt at inference. The source code for our experiments can be found at: https://github.com/Ffc1234/NIPS_ICBINB_submission | Analyzing the factual knowledge of parameter efficient instruction tuned mid-size Large Language Models | [
"Anmol Nayak",
"Hariprasad Timmapathini"
] | Workshop/ICBINB | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
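A sketch of the "small injection model" setup with the Hugging Face peft library; the rank, target modules, and checkpoint name are illustrative (the Llama-2 checkpoint is gated and requires access), not the paper's exact settings.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-chat-hf")
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"],  # assumed modules
                    task_type="CAUSAL_LM")
model = get_peft_model(base, config)
model.print_trainable_parameters()   # on the order of 0.05% of the 13B base
```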
null | https://openreview.net/forum?id=loTgtzhoI2 | @inproceedings{
georgiev2024beyond,
title={Beyond Erdos-Renyi: Generalization in Algorithmic Reasoning on Graphs},
author={Dobrik Georgiev and Pietro Lio and Jakub Bachurski and Junhua Chen and Tunan Shi},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=loTgtzhoI2}
} | Neural algorithmic reasoning excels in many graph algorithms, but assessment mainly focuses on the Erdős-Rényi (ER) graph family. This study delves into graph algorithmic models' generalization across diverse distributions. Testing a leading model exposes overreliance on ER graphs for generalization assessment. We further investigate two scenarios: generalization to every target distribution and generalization to a single target distribution. Our results suggest that achieving the former is far from trivial, and that achieving the latter can be aided by selecting the source distribution via a novel Tree Mover's Distance interpretation. | Beyond Erdos-Renyi: Generalization in Algorithmic Reasoning on Graphs | [
"Dobrik Georgiev",
"Pietro Lio",
"Jakub Bachurski",
"Junhua Chen",
"Tunan Shi"
] | Workshop/ICBINB | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=lJWTOSxWgd | @inproceedings{
sharma2024exploring,
title={Exploring and Improving the Spatial Reasoning Abilities of Large Language Models},
author={Manasi Sharma},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=lJWTOSxWgd}
} | Large Language Models (LLMs) represent formidable tools for sequence modeling, boasting an innate capacity for general pattern recognition. Nevertheless, their broader spatial reasoning capabilities remain insufficiently explored. In this paper, we investigate the zero-shot performance of LLMs when confronted with a limited dataset comprising 3D robotic trajectory data and associated tasks, such as directional and motion labeling. Additionally, we introduce a novel prefix-based prompting mechanism, which yields a 30\% improvement on the 3D trajectory data and an increase of up to 16\% on SpartQA tasks when contrasted with the conventional vanilla prompt baseline (with gains over Chain-of-Thought prompting as well). The experimentation with 3D trajectory data offers an intriguing glimpse into the manner in which LLMs engage with numerical and spatial information, thus laying a solid foundation for the identification of target areas for future enhancements. | Exploring and Improving the Spatial Reasoning Abilities of Large Language Models | [
"Manasi Sharma"
] | Workshop/ICBINB | 2023 | 2312.01054 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=l188N6IZNY | @inproceedings{
heim2024towards,
title={Towards Better Understanding of Domain Shift on Linear-Probed Visual Foundation Models},
author={Eric Heim},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=l188N6IZNY}
} | Visual foundation models have recently emerged to offer promise similar to their language counterparts: the ability to produce representations of visual data that can be successfully used in a variety of tasks and contexts. One common way this is shown in the research literature is through “domain generalization” experiments of linear models trained on representations produced by foundation models (i.e., linear probes). These experiments largely limit themselves to a small number of benchmark data sets and report accuracy as the single figure of merit, but give little insight beyond these numbers as to how different foundation models represent shifts. In this work we perform an empirical evaluation that expands the scope of previously reported results in order to give a better understanding of how domain shifts are modeled. Namely, we investigate not just how models generalize across domains, but how models may enable domain transfer. Our evaluation spans a number of recent visual foundation models and benchmarks. We find that not only do linear probes fail to generalize on some shift benchmarks, but linear probes trained on some shifted data achieve low train accuracy, indicating that accurate transfer of linear probes is not possible with some visual foundation models. | Towards Better Understanding of Domain Shift on Linear-Probed Visual Foundation Models | [
"Eric Heim"
] | Workshop/ICBINB | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
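A minimal sketch of the linear-probe protocol the evaluation above rests on: fit a linear classifier on frozen foundation-model features from a source domain and test it on a shifted domain. Feature arrays are assumed to be precomputed.

```python
from sklearn.linear_model import LogisticRegression

def probe_transfer(feats_src, y_src, feats_shift, y_shift):
    clf = LogisticRegression(max_iter=1000).fit(feats_src, y_src)
    return {"train_acc": clf.score(feats_src, y_src),      # low train acc =>
            "shift_acc": clf.score(feats_shift, y_shift)}  # probe cannot fit
```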
null | https://openreview.net/forum?id=ZzOinWt0sh | @inproceedings{
homan2024how,
title={How Many Raters Do You Need? Power Analysis for Foundation Models},
author={Christopher M Homan and Shira Wein and Chris Welty and Lora Aroyo},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=ZzOinWt0sh}
} | Due to their highly stochastic nature, as well as the complexity of the tasks they can perform, foundation models (large machine learning models) are poorly suited for conventional machine learning evaluation methods. This is because machine learning evaluation methods typically assume behavior to be deterministic and simple enough to be measured against gold standard data with unitary, authoritative, "correct" answers using straightforward metrics such as accuracy, precision, and recall. In this work, we propose an evaluation framework suitable for foundation models, which takes into account variance in the responses of both machine model and human rater. Utilizing recent advances in p-value estimation, we investigate the trade-offs between the number of items in a test set, the number of responses per item, the sampling method, and the metric, when measuring the comparative differences between two hypothetical foundation models at various degrees of similarity. When two models are very far apart in their predictive performance, fewer raters are needed to confidently compare them, as expected. However, as the models draw closer, we find that a larger number of annotators than are currently typical in annotation collection are needed to ensure the power analysis correctly reflects the difference in performance. | How Many Raters Do You Need? Power Analysis for Foundation Models | [
"Christopher M Homan",
"Shira Wein",
"Chris Welty",
"Lora Aroyo"
] | Workshop/ICBINB | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
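A hedged sketch of the kind of simulation-based power analysis described above (the paper's estimator may differ): draw per-item, per-rater responses for two hypothetical models, test the gap with a permutation test, and count how often the true difference is detected.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_pvalue(a, b, n_perm=500):
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(pooled[:len(a)].mean() - pooled[len(a):].mean()) >= observed:
            count += 1
    return count / n_perm

def power(p_a, p_b, n_items, n_raters, n_sims=200, alpha=0.05):
    sig = 0
    for _ in range(n_sims):
        a = rng.binomial(n_raters, p_a, n_items) / n_raters  # item-level scores
        b = rng.binomial(n_raters, p_b, n_items) / n_raters
        sig += permutation_pvalue(a, b) < alpha
    return sig / n_sims   # fraction of runs detecting the true gap

# closer models (p_a ~ p_b) need more raters and items for the same power
print(power(p_a=0.80, p_b=0.75, n_items=100, n_raters=3))
```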
null | https://openreview.net/forum?id=YlhKbQ0zF3 | @inproceedings{
hsu2024can,
title={Can Visual Scratchpads With Diagrammatic Abstractions Augment {LLM} Reasoning?},
author={Joy Hsu and Gabriel Poesia and Jiajun Wu and Noah Goodman},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=YlhKbQ0zF3}
} | When humans reason about complex text-based questions, we leverage diagrammatic abstractions drawn on a visual scratchpad. In this paper, we introduce and explore the capabilities of Visual-Scratchpad, a method that augments a *large language foundation model* (LLM) with diagrammatic execution and readout. We enable the LLM to generate drawing commands and to readout abstractions from the resulting picture. The visual readout operation uses a *visual foundation model*, optionally finetuned with expert iteration. Here, we show that although Visual-Scratchpad outperforms an inference-only LLM, it surprisingly yields worse performance compared to a single finetuned LLM. Through experiments, we propose that this gap is due to the failure mode of vision foundation models in understanding abstractions in diagrams. | Can Visual Scratchpads With Diagrammatic Abstractions Augment LLM Reasoning? | [
"Joy Hsu",
"Gabriel Poesia",
"Jiajun Wu",
"Noah Goodman"
] | Workshop/ICBINB | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=WIzlQGKgVP | @inproceedings{
mejia2024exploring,
title={Exploring {DINO}: Emergent Properties and Limitations for Synthetic Aperture Radar Imagery},
author={Joseph Alejandro Gallego Mejia and Anna Jungbluth and Laura Mart{\'\i}nez-Ferrer and Francisco Dorr and Matthew Allen and Freddie Kalaitzis and Ra{\'u}l Ramos-Poll{\'a}n},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=WIzlQGKgVP}
} | Self-supervised learning (SSL) models have recently demonstrated remarkable performance across various tasks, including image segmentation. This study delves into the emergent characteristics of the Self-Distillation with No Labels (DINO) algorithm and its application to Synthetic Aperture Radar (SAR) imagery. We pre-train a vision transformer (ViT)-based DINO model using unlabeled SAR data, and later fine-tune the model to predict high-resolution land cover maps. We rigorously evaluate the utility of attention maps generated by the ViT backbone, and compare them with the model's token embedding space. We observe a small improvement in model performance with pre-training compared to training from scratch, and discuss the limitations and opportunities of SSL for remote sensing and land cover segmentation. Beyond small performance increases, we show that ViT attention maps hold great intrinsic value for remote sensing, and could provide useful inputs to other algorithms. With this, our work lays the groundwork for bigger and better SSL models for Earth Observation. | Exploring DINO: Emergent Properties and Limitations for Synthetic Aperture Radar Imagery | [
"Joseph Alejandro Gallego Mejia",
"Anna Jungbluth",
"Laura Martínez-Ferrer",
"Francisco Dorr",
"Matthew Allen",
"Freddie Kalaitzis",
"Raúl Ramos-Pollán"
] | Workshop/ICBINB | 2023 | 2310.03513 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=TGTZwVabpU | @inproceedings{
berglund2024the,
title={The Reversal Curse: {LLM}s trained on ''A is B'' fail to learn ''B is A''},
author={Lukas Berglund and Meg Tong and Maximilian Kaufmann and Mikita Balesni and Asa Stickland and Tomasz Korbak and Owain Evans},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=TGTZwVabpU}
} | We expose a surprising failure of generalization in auto-regressive large language models (LLMs). If a model is trained on a sentence of the form "*A is B*", it will not automatically generalize to the reverse direction "*B is A*". This is the **Reversal Curse**. For instance, if a model is trained on "Olaf Scholz was the ninth Chancellor of Germany", it will not automatically be able to answer the question, "Who was the ninth Chancellor of Germany?". Moreover, the likelihood of the correct answer ("Olaf Scholz") will not be higher than for a random name. Thus, models exhibit a basic failure of logical deduction and do not generalize a prevalent pattern in their training set (i.e. if "*A is B*" occurs, "*B is A*" is more likely to occur). We provide evidence for the Reversal Curse by finetuning GPT-3 and Llama-1 on fictitious statements such as "Uriah Hawthorne is the composer of *Abyssal Melodies*" and showing that they fail to correctly answer "Who composed *Abyssal Melodies?*". The Reversal Curse is robust across model sizes and model families and is not alleviated by data augmentation. We also evaluate ChatGPT (GPT-3.5 and GPT-4) on questions about real-world celebrities, such as "Who is Tom Cruise's mother? [A: Mary Lee Pfeiffer]" and the reverse "Who is Mary Lee Pfeiffer's son?". GPT-4 correctly answers questions like the former 79% of the time, compared to 33% for the latter. This shows a failure of logical deduction that we hypothesize is caused by the Reversal Curse. | The Reversal Curse: LLMs trained on "A is B" fail to learn "B is A" | [
"Lukas Berglund",
"Meg Tong",
"Maximilian Kaufmann",
"Mikita Balesni",
"Asa Stickland",
"Tomasz Korbak",
"Owain Evans"
] | Workshop/ICBINB | 2023 | 2309.12288 | [
"https://github.com/lukasberglund/reversal_curse"
] | https://huggingface.co/papers/2309.12288 | 2 | 3 | 0 | 7 | 1 | [] | [
"lberglund/reversal_curse"
] | [] |
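A small sketch of the two-direction probe behind the celebrity experiment above; `query_lm` is a hypothetical call to the model under test returning a text answer.

```python
# Sketch: check whether the model answers a relation in the forward
# direction but fails on the reversed one (the Reversal Curse).
def reversal_probe(subject, relation_fwd, relation_bwd, answer, query_lm):
    fwd = query_lm(relation_fwd.format(subject))   # "Who is X's mother?"
    bwd = query_lm(relation_bwd.format(answer))    # "Who is Y's son?"
    return {"forward_correct": answer.lower() in fwd.lower(),
            "backward_correct": subject.lower() in bwd.lower()}

# e.g. reversal_probe("Tom Cruise", "Who is {}'s mother?",
#                     "Who is {}'s son?", "Mary Lee Pfeiffer", query_lm)
```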
null | https://openreview.net/forum?id=SGiQxu8zFL | @inproceedings{
kang2024deficiency,
title={Deficiency of Large Language Models in Finance: An Empirical Examination of Hallucination},
author={Haoqiang Kang and Xiao-Yang Liu},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=SGiQxu8zFL}
} | The hallucination issue is recognized as a fundamental deficiency of large language models (LLMs), especially when applied to fields such as finance, education, and law. Despite the growing concerns, there has been a lack of empirical investigation. In this paper, we provide an empirical examination of LLMs’ hallucination behaviors in financial tasks. First, we empirically investigate LLMs’ ability to explain financial concepts and terminologies. Second, we assess LLMs’ capacity to query historical stock prices. Third, to alleviate the hallucination issue, we evaluate the efficacy of four practical methods, including few-shot learning, Decoding by Contrasting Layers (DoLa), the Retrieval-Augmented Generation (RAG) method, and the prompt-based tool learning method for generating a query command. Finally, our major finding is that off-the-shelf LLMs experience serious hallucination behaviors in financial tasks. Therefore, there is an urgent need to call for research efforts in mitigating LLMs’ hallucination. | Deficiency of Large Language Models in Finance: An Empirical Examination of Hallucination | [
"Haoqiang Kang",
"Xiao-Yang Liu"
] | Workshop/ICBINB | 2023 | 2311.15548 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=R7OizRVhEu | @inproceedings{
rezk2024is,
title={Is Scaling Learned Optimizers Worth It? Evaluating The Value of Ve{LO}'s 4000 {TPU} Months},
author={Fady Rezk and Antreas Antoniou and Henry Gouk and Timothy Hospedales},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=R7OizRVhEu}
} | We analyze VeLO (versatile learned optimizer), the largest-scale attempt to train a general purpose ``foundational'' optimizer to date. VeLO was trained on thousands of machine learning tasks over 4000 TPU months with the goal of producing an optimizer capable of generalizing to new problems while being hyper-parameter free, and outperforming industry standards such as Adam. We independently evaluate VeLO on the MLcommons optimizer benchmark suite. We find that contrary to initial claims: (1) VeLO has a critical hyper-parameter that needs problem-specific tuning, (2) VeLO does not necessarily outperform competitors in quality of solution found, and (3) VeLO is not faster than competing optimizers at reducing the training loss. These observations call into question VeLO's generality and the value of the investment in training it. | Is Scaling Learned Optimizers Worth It? Evaluating The Value of VeLO's 4000 TPU Months | [
"Fady Rezk",
"Antreas Antoniou",
"Henry Gouk",
"Timothy Hospedales"
] | Workshop/ICBINB | 2023 | 2310.18191 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=PxiuaUKf8y | @inproceedings{
zhang2024pretrained,
title={Pre-trained Language Models Do Not Help Auto-regressive Text-to-Image Generation},
author={Yuhui Zhang and Brandon McKinzie and Zhe Gan and Vaishaal Shankar and Alexander Toshev},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=PxiuaUKf8y}
} | Recent advances in image tokenizers, such as VQ-VAE, have enabled text-to-image generation using auto-regressive methods, similar to language modeling. However, these methods have yet to leverage pre-trained language models, despite their adaptability to various downstream tasks. In this work, we explore this gap by adapting a pre-trained language model for auto-regressive text-to-image generation, and find that pre-trained language models offer limited help. We provide a two-fold explanation by analyzing tokens from each modality. First, we demonstrate that image tokens possess significantly different semantics compared to text tokens, rendering pre-trained language models no more effective in modeling them than randomly initialized ones. Second, the text tokens in the image-text datasets are too simple compared to normal language model pre-training data, which causes the catastrophic degradation of language models' capability. | Pre-trained Language Models Do Not Help Auto-regressive Text-to-Image Generation | [
"Yuhui Zhang",
"Brandon McKinzie",
"Zhe Gan",
"Vaishaal Shankar",
"Alexander Toshev"
] | Workshop/ICBINB | 2023 | 2311.16201 | [
""
] | https://huggingface.co/papers/2311.16201 | 1 | 0 | 0 | 5 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=OptKBWmreP | @inproceedings{
ren2024selfevaluation,
title={Self-Evaluation Improves Selective Generation in Large Language Models},
author={Jie Ren and Yao Zhao and Tu Vu and Peter J Liu and Balaji Lakshminarayanan},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=OptKBWmreP}
} | Safe deployment of large language models (LLMs) may benefit from a reliable method for assessing their generated content to determine when to abstain or to selectively generate. While likelihood-based metrics such as perplexity are widely employed, recent research has demonstrated the limitations of using sequence-level probability estimates given by LLMs as reliable indicators of generation quality. Conversely, LLMs have demonstrated strong calibration at the token level, particularly when it comes to choosing correct answers in multiple-choice questions or evaluating true/false statements.
In this work, we reformulate open-ended generation tasks into token-level prediction tasks, and leverage LLMs' superior calibration at the token level. We instruct an LLM to self-evaluate its answers, employing either a multi-way comparison or a point-wise evaluation approach, with the option to include a ``None of the above'' option to express the model's uncertainty explicitly.
We benchmark a range of scoring methods based on self-evaluation and evaluate their performance in selective generation using TruthfulQA and TL;DR. Through extensive experiments with PaLM-2 and GPT-3, we demonstrate that self-evaluation based scores not only improve accuracy, but also correlate better with the overall quality of generated content. | Self-Evaluation Improves Selective Generation in Large Language Models | [
"Jie Ren",
"Yao Zhao",
"Tu Vu",
"Peter J Liu",
"Balaji Lakshminarayanan"
] | Workshop/ICBINB | 2023 | 2312.09300 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
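A sketch of the token-level self-evaluation reformulation described above; `choice_token_probs` is a hypothetical helper returning next-token probabilities for the listed options, and the prompt wording is illustrative.

```python
# Sketch: present the model's own draft answer as a multiple-choice item and
# read off the probability mass on the choice tokens.
def self_eval_score(question, draft_answer, choice_token_probs):
    prompt = (f"Question: {question}\n"
              f"Proposed answer: {draft_answer}\n"
              "Is the proposed answer correct?\n"
              "(A) Yes (B) No (C) None of the above\n"
              "Answer:")
    probs = choice_token_probs(prompt, ["A", "B", "C"])  # hypothetical
    return probs["A"]          # use as the selective-generation score

# abstain whenever self_eval_score(...) falls below a tuned threshold
```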
null | https://openreview.net/forum?id=MXey5JIvz2 | @inproceedings{
li2024sentimentpulse,
title={SentimentPulse: Temporal-Aware Custom Language Models vs. {GPT}-3.5 for Consumer Sentiment},
author={Lixiang Li and Nagender Aneja and Alina Nesen and Bharat Bhargava},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=MXey5JIvz2}
} | Large Language Models are trained on an extremely large corpus of text data to allow better generalization, but this blessing can also become a curse and significantly limit their performance in a subset of tasks. In this work, we argue that LLMs are notably behind well-tailored and specifically designed models where the temporal aspect is important in making decisions and the answer depends on the timespan of available training data. We prove our point by comparing two major architectures: first, SentimentPulse, our proposed real-time consumer sentiment analysis approach that leverages custom language models and continual learning techniques, and second, GPT-3, which is tested on the same data. Unlike foundation models, which lack temporal context, our custom language model is pre-trained on time-stamped data, making it uniquely suited for real-time application. Additionally, we employ continual learning techniques to pre-train the model, and then classification and contextual multi-armed bandits to fine-tune the model, enhancing its adaptability and performance over time. We present a comparative analysis of the prediction accuracy of both architectures. To the best of our knowledge, this is the first application of custom language models for real-time consumer sentiment analysis beyond the scope of conventional surveys. | SentimentPulse: Temporal-Aware Custom Language Models vs. GPT-3.5 for Consumer Sentiment | [
"Lixiang Li",
"Nagender Aneja",
"Alina Nesen",
"Bharat Bhargava"
] | Workshop/ICBINB | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=JZaTnRVuuN | @inproceedings{
wu2024compositional,
title={Compositional Generalization in Vision-Language Models uses the Language Modality only},
author={Chenwei Wu and Patrick haffner and Li Li and Stefano Ermon and Rong Ge},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=JZaTnRVuuN}
} | Compositionality is a common property in many modalities including text and images, but the compositional generalization of multi-modal models is not well-understood. In this paper, we identify two sources of visual-linguistic compositionality: linguistic priors and the interplay between images and texts. We show that current attempts to improve compositional generalization rely on linguistic priors rather than on information in the image, as the strength of the language model in detecting sentences that are syntactically and semantically likely overwhelms the vision part of the model. We find in particular that a benchmark for compositionality mostly favors pure language models. Finally, we propose a new benchmark for compositionality without such linguistic priors | The Role of Linguistic Priors in Measuring Compositional Generalization of Vision-Language Models | [
"Chenwei Wu",
"Patrick haffner",
"Li Erran Li",
"Stefano Ermon",
"Rong Ge"
] | Workshop/ICBINB | 2023 | 2310.02777 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=HnABvwYxc7 | @inproceedings{
balles2024a,
title={A Negative Result on Gradient Matching for Selective Backprop},
author={Lukas Balles and Cedric Archambeau and Giovanni Zappella},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=HnABvwYxc7}
} | With increasing scale in model and dataset size, the training of deep neural networks becomes a massive computational burden. One approach to speed up the training process is Selective Backprop. For this approach, we perform a forward pass to obtain a loss value for each data point in a minibatch. The backward pass is then restricted to a subset of that minibatch, prioritizing high-loss examples.
We build on this approach, but seek to improve the subset selection mechanism by choosing the (weighted) subset which best matches the mean gradient over the entire minibatch. We use the gradients w.r.t. the model's last layer as a cheap proxy, resulting in virtually no overhead in addition to the forward pass. At the same time, for our experiments we add a simple random selection baseline which has been absent from prior work. Surprisingly, we find that both the loss-based as well as the gradient-matching strategy fail to consistently outperform the random baseline. | A Negative Result on Gradient Matching for Selective Backprop | [
"Lukas Balles",
"Cedric Archambeau",
"Giovanni Zappella"
] | Workshop/ICBINB | 2023 | 2312.05021 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
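A sketch of the last-layer gradient proxy and subset selection described above. For softmax cross-entropy the gradient with respect to the logits is (softmax − one-hot), so the proxy is available from the forward pass alone; the greedy, unweighted matching below is an illustrative simplification of the paper's weighted-subset selection.

```python
import numpy as np

def last_layer_proxy(logits, labels):
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(labels)), labels] -= 1.0     # softmax - onehot
    return p                                      # (batch, classes)

def greedy_match(G, k):
    """Pick k rows of G whose running mean best matches the full mean."""
    target, chosen = G.mean(axis=0), []
    running = np.zeros_like(target)
    for step in range(1, k + 1):
        cands = [i for i in range(len(G)) if i not in chosen]
        errs = [np.linalg.norm((running * (step - 1) + G[i]) / step - target)
                for i in cands]
        best = cands[int(np.argmin(errs))]
        chosen.append(best)
        running = (running * (step - 1) + G[best]) / step
    return chosen     # backprop is then restricted to these examples
```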
null | https://openreview.net/forum?id=HBEegN2HcR | @inproceedings{
qamar2024can,
title={Can Segment Anything Model Improve Semantic Segmentation?},
author={Maryam Qamar and Donghoon Kim and Muhammad Salman Ali and Chaoning Zhang and Sung-Ho Bae},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=HBEegN2HcR}
} | Recently, Segment Anything Model (SAM) has gained considerable attention in the field of computer vision, establishing itself as a pioneering foundation model for segmentation. Notably, SAM excels in generating high-quality segmentation masks, yet it lacks semantic labels. In contrast, conventional semantic segmentation models generate rather accurate semantic labels but often produce suboptimal segmentation masks. The notion of leveraging SAM's superior mask quality to enhance the performance of conventional semantic segmentation models appears intuitive. However, our preliminary experiments reveal that the integration of SAM with these models does not result in any discernible improvement. Specifically, when assessing the performance of SAM's integration into two baseline semantic segmentation models, DeepLab and OneFormer, we find no significant enhancements in the mean Intersection over Union (mIoU) on the Pascal VOC and ADE20K datasets. Consequently, we conclude that, as it stands, the highly acclaimed foundational model is not the preferred solution for the semantic segmentation task. Instead, a more cautious and thoughtful approach is imperative to unlock any potential benefits in this context. | Can Segment Anything Model Improve Semantic Segmentation? | [
"Maryam Qamar",
"Donghoon Kim",
"Muhammad Salman Ali",
"Chaoning Zhang",
"Sung-Ho Bae"
] | Workshop/ICBINB | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=GYOXIRXI7W | @inproceedings{
petrov2024when,
title={When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations},
author={Aleksandar Petrov and Philip Torr and Adel Bibi},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=GYOXIRXI7W}
} | Context-based fine-tuning methods like prompting, in-context learning, soft prompting (prompt tuning) and prefix-tuning have gained popularity as they often match the performance of full fine-tuning with a fraction of the parameters. Still, there is little theoretical understanding of how these techniques influence the internal computation of the model and their expressiveness limitations. We show that despite the continuous embedding space being more expressive than the discrete token space, soft-prompting and prefix-tuning are strictly less expressive than full fine-tuning. Concretely, context-based fine-tuning cannot change the relative attention pattern over the content and can only bias the outputs of an attention layer in a fixed direction. While this means that context-based fine-tuning techniques can successfully elicit or combine skills already present in the pretrained model, they cannot learn tasks requiring new attention patterns. | When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations | [
"Aleksandar Petrov",
"Philip Torr",
"Adel Bibi"
] | Workshop/ICBINB | 2023 | 2310.19698 | [
"https://github.com/aleksandarpetrov/prefix-tuning-theory"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=FWTqwlHBC5 | @inproceedings{
zhang2024a,
title={A Study on the Calibration of In-context Learning},
author={Hanlin Zhang and YiFan Zhang and Yaodong Yu and Dhruv Madeka and Dean Foster and Eric P. Xing and Himabindu Lakkaraju and Sham M. Kakade},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=FWTqwlHBC5}
} | Modern auto-regressive models are trained to minimize log loss by predicting the next token. As a result, they are expected to get calibrated answers when framing problems as next-token prediction tasks. We study this for in-context learning (ICL), a widely used way to adapt frozen large language models (LLMs) via crafting prompts and investigate the trade-offs between performance and calibration on a wide range of natural language understanding and reasoning tasks. We conduct extensive experiments to show that such trade-offs may get worse as we increase model size, incorporate more ICL examples, and fine-tune models using instruction or dialog tuning on carefully curated datasets. Furthermore, we find that common recalibration techniques that are widely effective such as temperature scaling may provide limited gains for calibration errors, suggesting that new methods may be required for settings where models are expected to be reliable. | A Study on the Calibration of In-context Learning | [
"Hanlin Zhang",
"YiFan Zhang",
"Yaodong Yu",
"Dhruv Madeka",
"Dean Foster",
"Eric P. Xing",
"Himabindu Lakkaraju",
"Sham M. Kakade"
] | Workshop/ICBINB | 2023 | 2312.04021 | [
"https://github.com/hlzhang109/icl-calibration"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
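A sketch of the recalibration check discussed above: fit a single temperature on held-out logits by minimizing negative log-likelihood, and compare expected calibration error (ECE) before and after. The binning scheme is the standard equal-width variant.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ece(probs, labels, n_bins=10):
    conf, pred = probs.max(axis=1), probs.argmax(axis=1)
    err = 0.0
    for lo in np.linspace(0, 1, n_bins, endpoint=False):
        m = (conf > lo) & (conf <= lo + 1 / n_bins)
        if m.any():   # |bin|/N * |accuracy - confidence|
            err += m.mean() * abs((pred[m] == labels[m]).mean() - conf[m].mean())
    return err

def fit_temperature(logits, labels):
    nll = lambda T: -np.log(softmax(logits, T)[np.arange(len(labels)), labels]
                            + 1e-12).mean()
    return minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded").x
```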
null | https://openreview.net/forum?id=EooD8NMyQM | @inproceedings{
chen2024segment,
title={Segment Anything Model ({SAM}) Enhances Pseudo-Labels for Weakly Supervised Semantic Segmentation},
author={Tianle Chen and Zheda Mai and Ruiwen Li and Wei-Lun Chao},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=EooD8NMyQM}
} | Weakly supervised semantic segmentation (WSSS) aims to bypass the need for laborious pixel-level annotation by using only image-level annotation. Most existing methods rely on Class Activation Maps (CAM) to derive pixel-level pseudo-labels and use them to train a fully supervised semantic segmentation model. Although these pseudo-labels are class-aware, indicating the coarse regions for particular classes, they are not object-aware and fail to delineate accurate object boundaries. To address this, we introduce a simple yet effective method harnessing the Segment Anything Model (SAM), a class-agnostic foundation model capable of producing fine-grained instance masks of objects, parts, and subparts. We use CAM pseudo-labels as cues to select and combine SAM masks, resulting in high-quality pseudo-labels that are both class-aware and object-aware. Our approach is highly versatile and can be easily integrated into existing WSSS methods without any modification. Despite its simplicity, our approach shows consistent gain over the state-of-the-art WSSS methods on both PASCAL VOC and MS-COCO datasets. | Segment Anything Model (SAM) Enhances Pseudo-Labels for Weakly Supervised Semantic Segmentation | [
"Tianle Chen",
"Zheda Mai",
"Ruiwen Li",
"Wei-Lun Chao"
] | Workshop/ICBINB | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
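A minimal sketch of the cue-and-combine step described above (an assumed simplification of the paper's procedure): label each class-agnostic SAM mask by the class whose CAM activation is strongest inside it, keeping only masks with enough activation.

```python
import numpy as np

def combine_sam_with_cam(cams, masks, min_activation=0.3):
    """cams: (n_classes, H, W) activation maps; masks: list of boolean
    (H, W) arrays from SAM's automatic mask generator."""
    h, w = cams.shape[1:]
    pseudo = np.zeros((h, w), dtype=np.int64)        # 0 = background
    for m in masks:
        scores = cams[:, m].mean(axis=1)             # mean CAM inside mask
        if scores.max() >= min_activation:
            pseudo[m] = scores.argmax() + 1          # class ids start at 1
    return pseudo
```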
null | https://openreview.net/forum?id=C0jJAbMMub | @inproceedings{
ocampo2024zeroshot,
title={Zero-shot capabilities of visual language models with prompt engineering for images of animals},
author={Andrea Tejeda Ocampo and Eric Orenstein and Kakani Young},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=C0jJAbMMub}
} | Visual Language Models have exhibited impressive performance on new tasks in a zero-shot setting. Language queries enable these large models to classify or detect objects even when presented with a novel concept in a shifted domain. We explore the limits of this capability by presenting Grounding DINO with images and concepts from field images of marine and terrestrial animals. By manipulating the language prompts, we found that the embedding space does not necessarily encode scientific taxonomic organism names, but still yields potentially useful localizations due to a strong sense of general objectness. Grounding DINO struggled with objects in a challenging underwater setting, but improved when fed expressive prompts that explicitly described morphology. These experiments suggest that large models still have room to grow in domain use-cases and illuminate avenues for strengthening their understanding of shape to further improve zero-shot performance. The code to reproduce these experiments is available at: https://github.com/bioinspirlab/deepsea-foundation-2023. | Zero-shot capabilities of visual language models with prompt engineering for images of animals | [
"Andrea Tejeda Ocampo",
"Eric Orenstein",
"Kakani Young"
] | Workshop/ICBINB | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=AUj2IKYdgi | @inproceedings{
panwar2024surprising,
title={Surprising Deviations from Bayesian View in In-Context Learning},
author={Madhur Panwar and Kabir Ahuja and Navin Goyal},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=AUj2IKYdgi}
} | In-context learning (ICL) is one of the surprising and useful features of large language models and a subject of intense research. Recently, stylized meta-learning-like ICL setups have been devised that train transformers on sequences of input-output pairs $(x, f(x))$ using the language modeling loss. The function $f$ comes from a function class and generalization is checked by evaluation on sequences for unseen functions from the same class. One of the main discoveries in this line of research has been that for several function classes, such as linear regression, transformers successfully generalize to new functions in the class. However, the inductive biases of these models resulting in this behavior are not clearly understood. A model with unlimited training data and compute is a Bayesian predictor: it learns the pretraining distribution. In this paper we empirically examine how far this Bayesian perspective can help us understand ICL. To this end, we generalize the previous meta-ICL setup to a hierarchical meta-ICL setup, which involves unions of multiple task families. We instantiate this setup on multiple function families and find that transformers can do ICL in this setting as well. We make some surprising observations: Transformers can learn to generalize to new function classes that were not seen during pretraining. This requires pretraining on a very small number of function classes and involves deviating from the Bayesian predictor on the pretraining distribution. Further, we discover the phenomenon of 'forgetting', where over the course of pretraining under the hierarchical meta-ICL setup, the transformer first generalizes to the full distribution of tasks and later forgets it while fitting the pretraining distribution. | Surprising Deviations from Bayesian View in In-Context Learning | [
"Madhur Panwar",
"Kabir Ahuja",
"Navin Goyal"
] | Workshop/ICBINB | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=8iuTHgTJEY | @inproceedings{
saravanan2024exploring,
title={Exploring Social Bias in Downstream Applications of Text-to-Image Foundation Models},
author={Adhithya Prakash Saravanan and Rafal Kocielnik and Roy Jiang and Pengrui Han and Anima Anandkumar},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=8iuTHgTJEY}
} | Text-to-image diffusion models have been adopted into key commercial workflows, such as art generation and image editing. Characterizing the implicit social biases they exhibit, such as gender and racial stereotypes, is a necessary first step in avoiding discriminatory outcomes. While existing studies on social bias focus on image generation, the biases exhibited in alternate applications of diffusion-based foundation models remain under-explored. We propose a framework that uses synthetic images to probe two applications of diffusion models, image editing and classification, for social bias. Using our framework, we uncover meaningful and significant intersectional social biases in Stable Diffusion, a state-of-the-art open-source text-to-image model. Our findings caution against the uninformed adoption of text-to-image foundation models for downstream tasks and services. | Exploring Social Bias in Downstream Applications of Text-to-Image Foundation Models | [
"Adhithya Prakash Saravanan",
"Rafal Kocielnik",
"Roy Jiang",
"Pengrui Han",
"Anima Anandkumar"
] | Workshop/ICBINB | 2023 | 2312.10065 | [
""
] | https://huggingface.co/papers/2312.10065 | 1 | 0 | 0 | 5 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=8Q84ensxZ1 | @inproceedings{
alazraki2024how,
title={How (not) to ensemble {LVLM}s for {VQA}},
author={Lisa Alazraki and Lluis Castrejon and Mostafa Dehghani and Fantine Huot and Jasper Uijlings and Thomas Mensink},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=8Q84ensxZ1}
} | This paper studies ensembling in the era of Large Vision-Language Models (LVLMs). Ensembling is a classical method to combine different models to get increased performance. In the recent work on Encyclopedic-VQA the authors examine a wide variety of models to solve their task: from vanilla LVLMs, to models including the caption as extra context, to models augmented with Lens-based retrieval of Wikipedia pages. Intuitively these models are highly complementary, which should make them ideal for ensembling. Indeed, an oracle experiment shows potential gains from 48.8% accuracy (the best single model) all the way up to 67% (best possible ensemble). So it is a trivial exercise to create an ensemble with substantial real gains. Or is it? | How (not) to ensemble LVLMs for VQA | [
"Lisa Alazraki",
"Lluis Castrejon",
"Mostafa Dehghani",
"Fantine Huot",
"Jasper Uijlings",
"Thomas Mensink"
] | Workshop/ICBINB | 2023 | 2310.06641 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=86SnqmSVv2 | @inproceedings{
roberts2024a,
title={A Natural Experiment on {LLM} Data Contamination in Code Generation},
author={Manley Roberts and Himanshu Thakur and Christine Herlihy and Colin White and Samuel Dooley},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=86SnqmSVv2}
} | Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks.
Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data.
Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities.
In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time.
Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination.
By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmarks in the age of LLMs that train on webscale data. | A Natural Experiment on LLM Data Contamination in Code Generation | [
"Manley Roberts",
"Himanshu Thakur",
"Christine Herlihy",
"Colin White",
"Samuel Dooley"
] | Workshop/ICBINB | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=6Hv4aeezrS | @inproceedings{
chen2024can,
title={Can {LLM}-Generated Misinformation Be Detected?},
author={Canyu Chen and Kai Shu},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=6Hv4aeezrS}
} | The advent of Large Language Models (LLMs) has made a transformative impact. However, the potential that LLMs such as ChatGPT can be exploited to generate misinformation has posed a serious concern to online safety and public trust. A fundamental research question is: will LLM-generated misinformation cause more harm than human-written misinformation? We propose to tackle this question from the perspective of detection difficulty. We first build a taxonomy of LLM-generated misinformation. Then we categorize and validate the potential real-world methods for generating misinformation with LLMs. Then, through extensive empirical investigation, we discover that LLM-generated misinformation can be harder to detect for humans and detectors compared to human-written misinformation with the same semantics, which suggests it can have more deceptive styles and potentially cause more harm. We also discuss the implications of our discovery on combating misinformation in the age of LLMs and the countermeasures. | Can LLM-Generated Misinformation Be Detected? | [
"Canyu Chen",
"Kai Shu"
] | Workshop/ICBINB | 2023 | 2309.13788 | [
"https://github.com/llm-misinformation/llm-misinformation"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=3darGLCe5t | @inproceedings{
lazovich2024filter,
title={Filter bubbles and affective polarization in user-personalized large language model outputs},
author={Tomo Lazovich},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=3darGLCe5t}
} | Echoing the history of search engines and social media content rankings, the advent of large language models (LLMs) has led to a push for increased personalization of model outputs to individual users. In the past, personalized recommendations and ranking systems have been linked to the development of filter bubbles (serving content that may confirm a user's existing biases) and affective polarization (strong negative sentiment towards those with differing views). In this work, we explore how prompting a leading large language model, ChatGPT-3.5, with a user's political affiliation prior to asking factual questions about public figures and organizations leads to differing results. We observe that left-leaning users tend to receive more positive statements about left-leaning political figures and media outlets, while right-leaning users see more positive statements about right-leaning entities. This pattern holds across presidential candidates, members of the U.S. Senate, and media organizations with ratings from AllSides. When qualitatively evaluating some of these outputs, there is evidence that particular facts are included or excluded based on the user's political affiliation. These results illustrate that personalizing LLMs based on user demographics carries the same risks of affective polarization and filter bubbles that have been seen in other personalized internet technologies. This "failure mode" should be monitored closely as there are more attempts to monetize and personalize these models. | Filter bubbles and affective polarization in user-personalized large language model outputs | [
"Tomo Lazovich"
] | Workshop/ICBINB | 2023 | 2311.14677 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=0RwbmLUU2o | @inproceedings{
mohta2024are,
title={Are large language models good annotators?},
author={Jay Mohta and Kenan Ak and Yan Xu and Mingwei Shen},
booktitle={I Can't Believe It's Not Better Workshop: Failure Modes in the Age of Foundation Models},
year={2024},
url={https://openreview.net/forum?id=0RwbmLUU2o}
} | Numerous Natural Language Processing (NLP) tasks require precisely labeled data to ensure effective model training and achieve optimal performance. However, data annotation is marked by substantial costs and time requirements, especially when requiring specialized domain expertise or annotating a large number of samples. In this study, we investigate the feasibility of employing large language models (LLMs) as replacements for human annotators. We assess the zero-shot performance of various LLMs of different sizes to determine their viability as substitutes. Furthermore, recognizing that human annotators have access to diverse modalities, we introduce an image-based modality using the BLIP-2 architecture to evaluate LLM annotation performance. Among the tested LLMs, Vicuna-13b demonstrates competitive performance across diverse tasks. To assess the potential for LLMs to replace human annotators, we train a supervised model using labels generated by LLMs and compare its performance with models trained using human-generated labels. However, our findings reveal that models trained with human labels consistently outperform those trained with LLM-generated labels. We also highlight the challenges faced by LLMs in multilingual settings, where their performance significantly diminishes for tasks in languages other than English. | Are large language models good annotators? | [
"Jay Mohta",
"Kenan Ak",
"Yan Xu",
"Mingwei Shen"
] | Workshop/ICBINB | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=zwqlV7HoaT | @inproceedings{
martins2023sparse,
title={Sparse Modern Hopfield Networks},
author={Andre Martins and Vlad Niculae and Daniel C McNamee},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=zwqlV7HoaT}
} | Ramsauer et al. (2021) recently pointed out a connection between modern Hopfield networks and attention heads in transformers. In this paper, we extend their framework to a broader family of energy functions which can be written as a difference of a quadratic regularizer and a Fenchel-Young loss (Blondel et al., 2020), parametrized by a generalized negentropy function $\Omega$. By working with Tsallis negentropies, the resulting update rules become end-to-end differentiable sparse transformations, establishing a new link to adaptively sparse transformers (Correia et al., 2019) and allowing for exact convergence to single memory patterns. Experiments on simulated data show a higher tendency to avoid metastable states. | Sparse Modern Hopfield Networks | [
"Andre Martins",
"Vlad Niculae",
"Daniel C McNamee"
] | Workshop/AMHN | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
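The sparse update described in this abstract is easy to make concrete. Below is a minimal NumPy sketch (an illustration under assumed conventions, not the authors' code): replacing the softmax in the modern Hopfield retrieval step with sparsemax, the Tsallis alpha=2 case of the paper's family, yields exactly sparse retrieval weights and hence exact convergence to single stored patterns.

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of z onto the probability simplex
    (Martins & Astudillo, 2016); unlike softmax it returns exact zeros."""
    z_sorted = np.sort(z)[::-1]
    cssv = np.cumsum(z_sorted)
    k = np.arange(1, z.size + 1)
    support = 1 + k * z_sorted > cssv            # coordinates kept in the support
    k_max = k[support][-1]
    tau = (cssv[k_max - 1] - 1.0) / k_max
    return np.maximum(z - tau, 0.0)

def hopfield_step(X, q, beta=2.0, transform=sparsemax):
    """One retrieval step q <- X^T p with p = transform(beta * X q).
    transform=softmax gives the dense update of Ramsauer et al. (2021);
    transform=sparsemax gives the sparse (Tsallis alpha=2) variant."""
    return X.T @ transform(beta * X @ q)

rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(5, 16))        # 5 stored bipolar patterns
q = X[2] + 0.4 * rng.standard_normal(16)         # noisy query near pattern 2
for _ in range(3):
    q = hopfield_step(X, q)
print(np.sign(q) @ X[2] / 16)                    # overlap with the target, ~1.0
```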
null | https://openreview.net/forum?id=yAI92fMOkD | @inproceedings{
yampolskaya2023controlling,
title={Controlling the bifurcations of attractors in modern Hopfield networks},
author={Maria Yampolskaya and Pankaj Mehta},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=yAI92fMOkD}
} | Hopfield networks model complex systems with attractor states. However, there are many systems where attractors are not static. Attractors may undergo bifurcations under certain conditions; for example, cell fates have been described as attractor states that can be stabilized or destabilized by signalling. In the case of neural networks, retrieving a sequence of memories involves changing attractor states. We provide an extension to the modern Hopfield network that connects network dynamics to the landscape of any potential. With our model, it is possible to control the bifurcations of attractors and simulate the resulting neuron dynamics. By introducing controlled bifurcations, our formulation expands the application of Hopfield models to real-world contexts where attractors do not remain static. | Controlling the bifurcations of attractors in modern Hopfield networks | [
"Maria Yampolskaya",
"Pankaj Mehta"
] | Workshop/AMHN | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=uMQiDWxCKd | @inproceedings{
sun2023associative,
title={Associative Transformer Is A Sparse Representation Learner},
author={Yuwei Sun and Hideya Ochiai and Zhirong Wu and Stephen Lin and Ryota Kanai},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=uMQiDWxCKd}
} | Moving beyond the monolithic pairwise attention mechanism in conventional Transformer models, there is growing interest in leveraging sparse interactions that align more closely with biological principles. Approaches including the Set Transformer and the Perceiver employ cross-attention consolidated with a latent space that forms an attention bottleneck with limited capacity. Building upon recent neuroscience studies of the Global Workspace Theory and associative memory, we propose the Associative Transformers (AiT). AiT induces low-rank explicit memory that serves as both priors to guide bottleneck attention in the shared workspace and attractors within the associative memory of a Hopfield network. We show that AiT is a sparse representation learner, learning distinct priors through the bottlenecks that are complexity-invariant to input quantities and dimensions. AiT demonstrates its superiority over methods such as the Set Transformer, Vision Transformer, and Coordination in various vision tasks. | Associative Transformer Is A Sparse Representation Learner | [
"Yuwei Sun",
"Hideya Ochiai",
"Zhirong Wu",
"Stephen Lin",
"Ryota Kanai"
] | Workshop/AMHN | oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=sYAm62gWbo | @inproceedings{
chaudhry2023long,
title={Long Sequence Hopfield Memory},
author={Hamza Tahir Chaudhry and Jacob A Zavatone-Veth and Dmitry Krotov and Cengiz Pehlevan},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=sYAm62gWbo}
} | Sequence memory is an essential attribute of natural and artificial intelligence that enables agents to encode, store, and retrieve complex sequences of stimuli and actions. Computational models of sequence memory have been proposed where recurrent Hopfield-like neural networks are trained with temporally asymmetric Hebbian rules. However, these networks suffer from limited sequence capacity (maximal length of the stored sequence) due to interference between the memories. Inspired by recent work on Dense Associative Memories, we expand the sequence capacity of these models by introducing a nonlinear interaction term, enhancing separation between the patterns. We derive novel scaling laws for sequence capacity with respect to network size, significantly outperforming existing scaling laws for models based on traditional Hopfield networks, verify these theoretical results with numerical simulation, and demonstrate their usefulness in overlapping patterns. Finally, we describe a biologically-plausible implementation, with connections to motor neuroscience. | Long Sequence Hopfield Memory | [
"Hamza Tahir Chaudhry",
"Jacob A Zavatone-Veth",
"Dmitry Krotov",
"Cengiz Pehlevan"
] | Workshop/AMHN | oral | 2306.04532 | [
"https://github.com/pehlevan-group/longsequencehopfieldmemory"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
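As a concrete illustration of the mechanism this abstract describes (a sketch with assumed parameter values, not the authors' implementation): a temporally asymmetric Hebbian pairing of consecutive patterns, combined with a polynomial separation function f(m) = m^n, lets a network walk through a stored sequence while the nonlinearity suppresses crosstalk between memories.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, n = 100, 20, 3                       # neurons, sequence length, poly degree
xi = rng.choice([-1.0, 1.0], size=(P, N))  # random bipolar sequence xi^1..xi^P

def step(x):
    """Asymmetric dense-memory update: the overlap with pattern mu points
    the state toward pattern mu+1; f(m) = m**n sharpens the separation
    between the correct overlap (~1) and crosstalk (~N^-1/2)."""
    m = xi[:-1] @ x / N                    # overlaps with xi^1 .. xi^{P-1}
    return np.sign(xi[1:].T @ m**n)        # pull toward the successor patterns

x = xi[0].copy()
for t in range(P - 1):
    x = step(x)
    print(t + 1, int(x @ xi[t + 1]))       # overlap N means perfect recall of next item
```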
null | https://openreview.net/forum?id=qvD4lx2iV0 | @inproceedings{
meersch2023training,
title={Training a Hopfield Variational Autoencoder with Equilibrium Propagation},
author={Tom Van Der Meersch and Johannes Deleu and Thomas Demeester},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=qvD4lx2iV0}
} | On dedicated analog hardware, equilibrium propagation is an energy-efficient alternative to backpropagation. In spite of its theoretical guarantees, its application in the AI domain remains limited to the discriminative setting. Meanwhile, despite its high computational demands, generative AI is on the rise. In this paper, we demonstrate the application of Equilibrium Propagation in training a variational autoencoder (VAE) for generative modeling. Leveraging the symmetric nature of Hopfield networks, we propose using a single model to serve as both the encoder and decoder which could effectively halve the required chip size for VAE implementations, paving the way for more efficient analog hardware configurations. | Training a Hopfield Variational Autoencoder with Equilibrium Propagation | [
"Tom Van Der Meersch",
"Johannes Deleu",
"Thomas Demeester"
] | Workshop/AMHN | poster | 2311.15047 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
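For context, here is a minimal generic Equilibrium Propagation loop (a sketch of the training rule named in the abstract, not the paper's VAE; all sizes and constants are illustrative assumptions): a free phase relaxes a layered Hopfield-style energy, a weakly nudged phase pulls the output toward the target, and the weight update is the contrastive Hebbian difference between the two equilibria.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid, n_out = 4, 16, 2
W1 = 0.1 * rng.standard_normal((n_in, n_hid))
W2 = 0.1 * rng.standard_normal((n_hid, n_out))

def relax(x, y_target=None, beta=0.0, steps=60, dt=0.2):
    """Rate-based relaxation of a layered Hopfield-style energy.
    With beta > 0 the output is weakly nudged toward the target."""
    h, y = np.zeros(n_hid), np.zeros(n_out)
    for _ in range(steps):
        dh = -h + np.tanh(x @ W1 + y @ W2.T)     # symmetric feedback, Hopfield-style
        dy = -y + np.tanh(h @ W2)
        if y_target is not None:
            dy += beta * (y_target - y)          # nudging force
        h, y = h + dt * dh, y + dt * dy
    return h, y

def ep_update(x, y_target, beta=0.5, lr=0.2):
    """Equilibrium propagation (Scellier & Bengio, 2017): contrastive
    Hebbian update between the free and nudged equilibria."""
    global W1, W2
    h0, y0 = relax(x)                            # free phase
    h1, y1 = relax(x, y_target, beta=beta)       # nudged phase
    W1 += lr / beta * (np.outer(x, h1) - np.outer(x, h0))
    W2 += lr / beta * (np.outer(h1, y1) - np.outer(h0, y0))

x, target = rng.standard_normal(n_in), np.array([0.8, -0.8])
for _ in range(200):
    ep_update(x, target)
print(relax(x)[1])                               # free-phase output approaches target
```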
null | https://openreview.net/forum?id=pgPAsSv5ga | @inproceedings{
zhao2023incontext,
title={In-Context Exemplars as Clues to Retrieving from Large Associative Memory},
author={Jiachen ZHAO},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=pgPAsSv5ga}
} | Recently, large language models (LLMs) have made remarkable progress in natural language processing (NLP). The most representative ability of LLMs is in-context learning (ICL), which enables LLMs to learn patterns from in-context exemplars without training. However, there remains limited intuition for how in-context learning works. In this paper, we present a novel perspective on prompting LLMs by conceptualizing it as contextual retrieval from a model of associative memory, which can be biologically plausible. We establish a theoretical interpretation of ICL based on an extension of the framework of Hopfield Networks. Based on our theory, we further analyze how in-context exemplars influence the performance of ICL. Our study sheds new light on the mechanism of ICL by connecting it to memory retrieval, with potential implications for advancing the understanding of LLMs. | In-Context Exemplars as Clues to Retrieving from Large Associative Memory | [
"Jiachen ZHAO"
] | Workshop/AMHN | poster | 2311.03498 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=mvSmkxqdxp | @inproceedings{
haputhanthri2023enhanced,
title={Enhanced cue associated memory in temporally consistent recurrent neural networks},
author={Udith Haputhanthri and Liam Storan and Adam Shai and Surya Ganguli and Mark Schnitzer and Hidenori Tanaka and Fatih Dinc},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=mvSmkxqdxp}
} | Recurrent connections are instrumental in creating memories and performing time-delayed computations. During their training, networks often explore distinct topological regions across the parameter space, each with unique attractor structures that serve specific computational purposes. However, the mechanisms that facilitate these topological transitions, so-called bifurcations, toward an optimal parameter space configuration remain poorly understood. In this workshop paper, we investigated the learning process of recurrent neural networks in memory-assisted computation and developed a regularization strategy to encourage bifurcations that enhance memory formation capacity. To begin, we examined a delayed addition task that required the network to retain cue-associated memories for an extended duration. We observed two distinct phases during the learning of recurrent neural networks, separated by a bifurcation. In the initial \textit{search phase}, both train and test loss values remained stable as the network searched for beneficial bifurcations leading to optimal parameter configurations. In the subsequent \textit{rapid comprehension phase}, the loss values rapidly decreased, and the network quickly learned the task while preserving its topology but updating its geometry. During our analysis, we observed that the gradient direction, \textit{i.e.}, the learning signal, was aligned with the optimal descent direction in the second but not the first phase. To aid learning in the search phase, we developed a temporal consistency regularization that incentivized a subset of neurons to have slow time dynamics, which subsequently decreased the duration of the search. Next, we tested the stability of the learned attractors with and without the temporal consistency regularization, via noise injection experiments, where we uncovered a more robust attractor subspace formation in the former. Finally, we enforced temporal consistency in a randomly initialized chaotic recurrent neural network to obtain several cue-associated fixed points in an unsupervised, online, and biologically plausible manner. Our results provide a deeper understanding of the role of bifurcations in enhancing associative memory by driving networks toward the desired attractor formation. | Enhanced cue associated memory in temporally consistent recurrent neural networks | [
"Udith Haputhanthri",
"Liam Storan",
"Adam Shai",
"Surya Ganguli",
"Mark Schnitzer",
"Hidenori Tanaka",
"Fatih Dinc"
] | Workshop/AMHN | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=lrfoJwxRWq | @inproceedings{
lu2023learning,
title={Learning Sequence Attractors in Recurrent Networks with Hidden Neurons},
author={Yao Lu and Si Wu},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=lrfoJwxRWq}
} | The brain is specialized for processing temporal sequence information. It remains largely unclear how the brain learns to store and retrieve sequence memories. Here, we study how networks of Hopfield type learn sequence attractors to store predefined pattern sequences and retrieve them robustly. We show that to store arbitrary pattern sequences, it is necessary for the network to include hidden neurons even though their role in displaying sequence memories is indirect. We develop a local learning algorithm to learn sequence attractors in the networks with hidden neurons. The algorithm is proven to converge and lead to sequence attractors. We demonstrate that our model can store and retrieve sequences robustly on synthetic and real-world datasets. We hope that this study provides new insights into understanding sequence memory and temporal information processing in the brain. | Learning Sequence Attractors in Hopfield Networks with Hidden Neurons | [
"Yao Lu",
"Si Wu"
] | Workshop/AMHN | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=lO61aZlteS | @inproceedings{
schaeffer2023associative,
title={Associative Memory Under the Probabilistic Lens: Improved Transformers \& Dynamic Memory Creation},
author={Rylan Schaeffer and Mikail Khona and Nika Zahedi and Ila R Fiete and Andrey Gromov and Sanmi Koyejo},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=lO61aZlteS}
} | Clustering is a fundamental unsupervised learning problem, and recent work showed modern continuous associative memory (AM) networks can learn to cluster data via a novel unconstrained continuous relaxation of the discrete clustering optimization problem. In this work, we demonstrate that the energy function of that AM network can be viewed as the scaled negative log likelihood of a Gaussian mixture model, and that the dynamics of the AM network can be viewed as performing expectation maximization via gradient ascent rather than via closed-form coordinate ascent. Based on this insight, we show that a widespread practical implementation choice - self-attention with pre-layer normalization - approximates clustering on the hypersphere with inhomogeneous von Mises-Fisher likelihoods, suggesting a future experiment to improve transformers. We additionally leverage this connection to propose a novel AM network with the ability to create new memories during learning, as necessitated by the data, by drawing on tools from combinatorial stochastic processes and Bayesian nonparametrics. | Associative Memory Under the Probabilistic Lens: Improved Transformers & Dynamic Memory Creation | [
"Rylan Schaeffer",
"Mikail Khona",
"Nika Zahedi",
"Ila R Fiete",
"Andrey Gromov",
"Sanmi Koyejo"
] | Workshop/AMHN | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
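The correspondence stated in this abstract is easy to write out. The display below is a sketch assuming unit-variance isotropic components and equal-norm stored patterns $\xi_\mu$ (assumptions of this illustration, not claims from the paper):

```latex
p(x)=\frac{1}{P}\sum_{\mu=1}^{P}\mathcal{N}\!\left(x;\,\xi_\mu,\,\beta^{-1}I\right)
\quad\Longrightarrow\quad
-\log p(x)=\beta\Big(\tfrac{1}{2}\|x\|^{2}-\tfrac{1}{\beta}\log\sum_{\mu=1}^{P}e^{\beta\,\xi_\mu^{\top}x}\Big)+\text{const}
=\beta\,E_{\mathrm{MHN}}(x)+\text{const}.
```

Gradient ascent on $\log p(x)$, i.e. $x \leftarrow x + \eta\beta\,(\Xi^{\top}\mathrm{softmax}(\beta\,\Xi x) - x)$, is then the modern Hopfield retrieval flow, matching the abstract's description of expectation maximization carried out via gradient ascent rather than closed-form coordinate ascent.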
null | https://openreview.net/forum?id=hkV9CvCOjH | @inproceedings{
ambrogioni2023in,
title={In search of dispersed memories: Generative diffusion models are associative memory networks},
author={Luca Ambrogioni},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=hkV9CvCOjH}
} | Hopfield networks are widely used in neuroscience as simplified theoretical models of biological associative memory. The original Hopfield networks store memories by encoding patterns of binary associations, which results in a synaptic learning mechanism known as the Hebbian learning rule. Modern Hopfield networks can achieve exponential capacity scaling by using highly non-linear energy functions. However, the energy function of these newer models cannot be straightforwardly compressed into binary synaptic couplings and it does not directly provide new synaptic learning rules. In this work we show that generative diffusion models can be interpreted as energy-based models and that, when trained on discrete patterns, their energy function is equivalent to that of modern Hopfield networks. This equivalence allows us to interpret the supervised training of diffusion models as a synaptic learning process that encodes the associative dynamics of a modern Hopfield network in the weight structure of a deep neural network. Accordingly, in our experiments we show that the storage capacity of a continuous modern Hopfield network is identical to the capacity of a diffusion model. Our results establish a strong link between generative modeling and the theoretical neuroscience of memory, which provides a powerful computational foundation for the reconstructive theory of memory, where creative generation and memory recall can be seen as parts of a unified continuum. | In search of dispersed memories: Generative diffusion models are associative memory networks | [
"Luca Ambrogioni"
] | Workshop/AMHN | oral | 2309.17290 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=hXef89mdlH | @inproceedings{
tyulmankov2023memorization,
title={Memorization and consolidation in associative memory networks},
author={Danil Tyulmankov and Kim Stachenfeld and Dmitry Krotov and Larry Abbott},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=hXef89mdlH}
} | Humans, animals, and machines can store and retrieve long-term memories of individual items, while at the same time consolidating and learning general representations of categories that discard the individual examples from which the representations were constructed. Classical neural networks model only one or the other of these two regimes. In this work, we propose a biologically motivated model that can not only consolidate representations of common items but also memorize exceptional ones. Critically, we consider the unsupervised learning regime where exceptional items are not labeled as such a priori, so the signal to either memorize or consolidate items must be generated by the network itself. We propose a number of metrics for this control signal and compare them for two different algorithms inspired by traditional imbalanced data learning approaches -- loss reweighting and importance sampling. Overall, our model serves not only as a framework for concurrent memorization and consolidation processes in biological systems, but also as a simple illustration of related phenomena in large-scale machine learning models, as well as a potential method for debiasing artificial intelligence algorithms. | Memorization and consolidation in associative memory networks | [
"Danil Tyulmankov",
"Kim Stachenfeld",
"Dmitry Krotov",
"Larry Abbott"
] | Workshop/AMHN | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=gzFuhvumGn | @inproceedings{
li2023modeling,
title={Modeling Recognition Memory with Predictive Coding and Hopfield Networks},
author={Tianjin Li and Mufeng Tang and Rafal Bogacz},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=gzFuhvumGn}
} | Associative memory (AM) and recognition memory (RM) are fundamental in human and machine cognition. RM refers to the ability to recognize whether a stimulus has been seen before or is novel. Neuroscience studies reveal that regions such as the hippocampus, known for AM, are also involved in RM. Inspired by repetition suppression in the brain, this work presents an energy-based approach to RM, where a model learns by adjusting an energy function. We applied this energy-based approach to Hopfield Networks (HNs) and Predictive Coding Networks (PCNs). Our simulations indicate that PCN outperforms HNs in RM tasks, especially with correlated patterns.
In this work, we also unify the theoretical understanding of HN and PCN in RM, revealing that both perform metric learning. This theory is crucial in explaining PCN's superior performance in handling correlated data, as it reveals that PCNs employ a statistical whitening step in their metric learning, which refines the distinction between familiar and novel stimuli. Overall, the superior performance of PCN, as well as the unique error neurons in its circuit implementation matching repetition suppression, provides a plausible account of how the brain performs RM, within the network architecture known to also support AM. | Modeling Recognition Memory with Predictive Coding and Hopfield Networks | [
"Tianjin Li",
"Mufeng Tang",
"Rafal Bogacz"
] | Workshop/AMHN | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=guPW3ACk2L | @inproceedings{
cabannes2023associative,
title={Associative Memories with Heavy-Tailed Data},
author={Vivien Cabannes and Elvis Dohmatob and Alberto Bietti},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=guPW3ACk2L}
} | Learning arguably involves the discovery and memorization of abstract rules.
But how do associative memories appear in transformer architectures optimized with gradient descent algorithms?
We derive precise scaling laws for a simple input-output associative memory model with respect to parameter size, and discuss the statistical efficiency of different estimators, including optimization-based algorithms.
We provide extensive numerical experiments to validate and interpret theoretical results, including fine-grained visualizations of the stored memory associations. | Associative Memories with Heavy-Tailed Data | [
"Vivien Cabannes",
"Elvis Dohmatob",
"Alberto Bietti"
] | Workshop/AMHN | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=dHmAhYu89E | @inproceedings{
mccarter2023inverse,
title={Inverse distance weighting attention},
author={Calvin McCarter},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=dHmAhYu89E}
} | We report the effects of replacing the scaled dot-product (within softmax) attention with the negative-log of Euclidean distance. This form of attention simplifies to inverse distance weighting interpolation. Used in simple one hidden layer networks and trained with vanilla cross-entropy loss on classification problems, it tends to produce a key matrix containing prototypes and a value matrix with corresponding logits. We also show that the resulting interpretable networks can be augmented with manually-constructed prototypes to perform low-impact handling of special cases. | Inverse distance weighting attention | [
"Calvin McCarter"
] | Workshop/AMHN | poster | 2310.18805 | [
"https://github.com/calvinmccarter/idw-attention"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
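The reduction mentioned in this abstract is a one-liner: softmax applied to negative log squared Euclidean distances gives weights proportional to 1/d_i^2, i.e. classic Shepard inverse-distance-weighting interpolation. A small NumPy sketch (illustrative only; `eps` and the toy data are assumptions):

```python
import numpy as np

def idw_attention(q, K, V, eps=1e-8):
    """softmax(-log d_i^2) = (1/d_i^2) / sum_j (1/d_j^2): attention with
    negative-log-distance logits reduces to inverse distance weighting."""
    d2 = np.sum((K - q) ** 2, axis=1) + eps   # squared distances to the keys
    w = 1.0 / d2
    w /= w.sum()                              # identical to softmax over -log(d2)
    return w @ V

K = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # keys act as prototypes
V = np.array([[1.0], [2.0], [3.0]])                  # per-prototype values/logits
print(idw_attention(np.array([0.1, 0.0]), K, V))     # dominated by nearest key
```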
null | https://openreview.net/forum?id=byxEgvdtwO | @inproceedings{
sch{\"a}fl2023modern,
title={Modern Hopfield Networks as Memory for Iterative Learning on Tabular Data},
author={Bernhard Sch{\"a}fl and Lukas Gruber and Angela Bitto-Nemling and Sepp Hochreiter},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=byxEgvdtwO}
} | While Deep Learning excels in structured data as encountered in vision and natural language processing, it failed to meet its expectations on tabular data. For tabular data, Support Vector Machines (SVMs), Random Forests, and Gradient Boosting are the best performing techniques. We suggest "Hopular", a novel Deep Learning architecture for medium- and small-sized datasets, where each layer is equipped with continuous modern Hopfield networks. Hopular's novelty is that every layer can directly access the original input as well as the whole training set via stored data in the Hopfield networks. Therefore, Hopular can step-wise update its current model and the resulting prediction at every layer like standard iterative learning algorithms. In experiments on small-sized tabular datasets with less than 1,000 samples, Hopular surpasses Gradient Boosting, Random Forests, SVMs, and in particular several Deep Learning methods. In experiments on medium-sized tabular data with about 10,000 samples, Hopular outperforms XGBoost, CatBoost, LightGBM and a state-of-the art Deep Learning method designed for tabular data. Thus, Hopular is a strong alternative to these methods on tabular data. | Modern Hopfield Networks as Memory for Iterative Learning on Tabular Data | [
"Bernhard Schäfl",
"Lukas Gruber",
"Angela Bitto-Nemling",
"Sepp Hochreiter"
] | Workshop/AMHN | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=bv2szxARh2 | @inproceedings{
negri2023random,
title={Random Feature Hopfield Networks generalize retrieval to previously unseen examples},
author={Matteo Negri and Clarissa Lauditi and Gabriele Perugini and Carlo Lucibello and Enrico Maria Malatesta},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=bv2szxARh2}
} | It has been recently shown that, when a Hopfield Network stores examples generated as a superposition of random features, new attractors appear in the model corresponding to such features. In this work we expand that result to superpositions of a finite number of features and we show numerically that the network remains capable of learning the features.
Furthermore, we reveal that the network also develops attractors corresponding to previously unseen examples generated with the same set of features. We support this result with a simple signal-to-noise argument and we conjecture a phase diagram. | Random Feature Hopfield Networks generalize retrieval to previously unseen examples | [
"Matteo Negri",
"Clarissa Lauditi",
"Gabriele Perugini",
"Carlo Lucibello",
"Enrico Maria Malatesta"
] | Workshop/AMHN | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=bNBMnQXRJU | @inproceedings{
davydov2023retrieving,
title={Retrieving \$k\$-Nearest Memories with Modern Hopfield Networks},
author={Alexander Davydov and Sean Jaffe and Ambuj Singh and Francesco Bullo},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=bNBMnQXRJU}
} | Modern continuous Hopfield networks (MCHNs) are a variant of Hopfield networks that have greater storage capacity and have been shown to have connections to the attention mechanism in transformers. In this paper, we propose a variant of MCHNs, which we call k-Hopfield layers, the first Hopfield-type network that retrieves the k-nearest memories to a given input. k-Hopfield layers are differentiable and may serve as (i) a soft approach to k-nearest neighbors, (ii) an augmented form of memory in deep learning architectures and (iii) an alternative to multihead attention in transformers. We empirically demonstrate that increasing k aids in correctly reconstructing a corrupted input. We show that using a k-Hopfield layer as a replacement for multihead attention demonstrates comparable performance in small vision transformers while requiring fewer parameters. | Retrieving k-Nearest Memories with Modern Hopfield Networks | [
"Alexander Davydov",
"Sean Jaffe",
"Ambuj Singh",
"Francesco Bullo"
] | Workshop/AMHN | oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
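To make the retrieval target concrete, here is a hard top-k simplification in NumPy (the paper's k-Hopfield layer is differentiable; this mask-based version is only an illustration, and `beta` and `k` are assumed values): the modern-Hopfield softmax is restricted to the k best-matching memories, so the output is a convex combination of the k nearest patterns.

```python
import numpy as np

def k_hopfield_retrieve(X, q, k=3, beta=2.0):
    """Restrict the modern-Hopfield softmax to the k closest memories.
    (A hard, non-differentiable stand-in for the paper's k-Hopfield layer.)"""
    sims = beta * X @ q
    topk = np.argsort(sims)[-k:]              # indices of the k best matches
    e = np.exp(sims[topk] - sims[topk].max())
    p = np.zeros_like(sims)
    p[topk] = e / e.sum()                     # softmax over the top-k only
    return X.T @ p, topk

rng = np.random.default_rng(3)
X = rng.standard_normal((10, 8))              # 10 stored memories
q = X[4] + 0.1 * rng.standard_normal(8)       # query near memory 4
out, idx = k_hopfield_retrieve(X, q, k=3)
print(sorted(idx.tolist()))                   # contains 4 among the k retrieved
```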
null | https://openreview.net/forum?id=XnAZwqF0iv | @inproceedings{
dohmatob2023a,
title={A Different Route to Exponential Storage Capacity},
author={Elvis Dohmatob},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=XnAZwqF0iv}
} | Recent developments have sought to overcome the inherent limitations of traditional associative memory models, like Hopfield networks, where storage capacity scales linearly with input dimension.
In this paper, we present a new extension of Hopfield networks that grants precise control over inter-neuron interactions while allowing control of the level of connectivity within the network. This versatile framework encompasses a variety of designs, including classical Hopfield networks, models with polynomial activation functions, and simplicial Hopfield networks as particular cases. Remarkably, a specific instance of our construction, resulting in a new self-attention mechanism, is characterized by quasi-exponential storage capacity and a sparse network structure, aligning with biological plausibility. | A Different Route to Exponential Storage Capacity | [
"Elvis Dohmatob"
] | Workshop/AMHN | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=XTOD2M980W | @inproceedings{
karuvally2023variable,
title={Variable Memory: Beyond the Fixed Memory Assumption in Memory Modeling},
author={Arjun Karuvally and Hava T Siegelmann},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=XTOD2M980W}
} | Memory models play a pivotal role in elucidating the mechanisms through which biological and artificial neural networks store and retrieve information. Traditionally, these models assume that memories are pre-determined, fixed before inference, and stored within synaptic interactions. Yet, neural networks can also dynamically store memories available only during inference within their activity. This capacity to bind and manipulate information as variables enhances the generalization capabilities of neural networks. Our research introduces and explores the concept of "variable memories." This approach extends the conventional sequence memory models, enabling information binding directly in network activity. By adopting this novel memory perspective, we unveil the underlying computational processes in the learned weights of RNNs on simple algorithmic tasks -- a fundamental question in the mechanistic understanding of neural networks. Our results underscore the imperative to evolve memory models beyond the fixed memory assumption towards more dynamic and flexible memory systems to further our understanding of neural information processing. | Variable Memory: Beyond the Fixed Memory Assumption in Memory Modeling | [
"Arjun Karuvally",
"Hava T Siegelmann"
] | Workshop/AMHN | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=WjHYgEfXiV | @inproceedings{
belhadi2023biologicallyinspired,
title={Biologically-inspired adaptive learning in the Hopfield-network based self-optimization model},
author={Aisha Belhadi},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=WjHYgEfXiV}
} | A significant portion of the recent growth of artificial intelligence can be attributed to the development of deep learning systems, going hand in hand with the accumulation of Big Data. It therefore makes sense that most often, these systems are based on supervised or reinforcement learning using massive datasets, and reward or error-based rules for training. Though these techniques have achieved impressive levels of accuracy and functionality, rivaling human cognition in some areas, they seem to work very differently from living systems that can learn, make associations and adapt with very sparse data, efficient use of energy and comparatively minimal training iterations. In the world of machine learning, Hopfield networks, with an architecture that allows for unsupervised learning, an associative memory, scaling, and modularity, offer an alternative way of looking at artificial intelligence, that has the potential to hew closer to biological forms of learning. This work distills some mechanisms of adaptation in biological systems, including metaplasticity, homeostasis, and inhibition, and proposes ways in which these features can be incorporated into Hopfield networks through adjustments to the learning rate, modularity, and activation rule. The overall aim is to develop deep learning tools that can recapitulate the advantages of biological systems, and to have a computational method that can plausibly model a wide range of living and adaptive systems of varying levels of complexity. | Biologically-inspired adaptive learning in the Hopfield-network based self-optimization model | [
"Aisha Belhadi"
] | Workshop/AMHN | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=WWTOAKAczk | @inproceedings{
mansingh2023how,
title={How Robust Are Energy-Based Models Trained With Equilibrium Propagation?},
author={Siddharth Mansingh and Michal Kucer and Garrett T. Kenyon and Juston Moore and Michael Teti},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=WWTOAKAczk}
} | Deep neural networks (DNNs) are easily fooled by adversarial perturbations that are imperceptible to humans. Adversarial training, a process where adversarial examples are added to the training set, is the current state-of-the-art defense against adversarial attacks, but it lowers the model's accuracy on clean inputs, is computationally expensive, and offers less robustness to natural noise. In contrast, energy-based models (EBMs), which were designed for efficient implementation in neuromorphic hardware and physical systems, incorporate feedback connections from each layer to the previous layer, yielding a recurrent, deep-attractor architecture which we hypothesize should make them naturally robust. Our work is the first to explore the robustness of EBMs to both natural corruptions and adversarial attacks, which we do using the CIFAR-10 and CIFAR-100 datasets. We demonstrate that EBMs are more robust than transformers and display comparable robustness to adversarially-trained DNNs on white-box, black-box, and natural perturbations without sacrificing clean accuracy, and without the need for adversarial training or additional training techniques. | How Robust Are Energy-Based Models Trained With Equilibrium Propagation? | [
"Siddharth Mansingh",
"Michal Kucer",
"Garrett T. Kenyon",
"Juston Moore",
"Michael Teti"
] | Workshop/AMHN | poster | 2401.11543 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=Vmndp6HnfR | @inproceedings{
goemaere2023accelerating,
title={Accelerating Hierarchical Associative Memory: A Deep Equilibrium Approach},
author={C{\'e}dric Goemaere and Johannes Deleu and Thomas Demeester},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=Vmndp6HnfR}
} | Hierarchical Associative Memory models have recently been proposed as a versatile extension of continuous Hopfield networks. In order to facilitate future research on such models, especially at scale, we focus on increasing their simulation efficiency on digital hardware. In particular, we propose two strategies to speed up memory retrieval in these models, which corresponds to their use at inference, but is equally important during training. First, we show how they can be cast as Deep Equilibrium Models, which allows using faster and more stable solvers. Second, inspired by earlier work, we show that alternating optimization of the even and odd layers accelerates memory retrieval by a factor close to two. Combined, these two techniques allow for a much faster energy minimization, as shown in our proof-of-concept experimental results. The code is available at https://github.com/cgoemaere/hamdeq. | Accelerating Hierarchical Associative Memory: A Deep Equilibrium Approach | [
"Cédric Goemaere",
"Johannes Deleu",
"Thomas Demeester"
] | Workshop/AMHN | poster | [
"https://github.com/cgoemaere/hamdeq"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
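Both speed-ups can be sketched in a few lines (an illustration under assumed layer sizes and tanh units, not the repository code): memory retrieval is cast as a fixed-point solve, as in Deep Equilibrium Models, and alternating updates of odd layers given even layers (then vice versa) let each half-step be computed in one shot.

```python
import numpy as np

rng = np.random.default_rng(4)
sizes = [32, 64, 32, 16]                 # layer widths x0..x3; x0 is clamped
W = [0.2 * rng.standard_normal((sizes[i], sizes[i + 1])) for i in range(3)]

def alternating_retrieval(x0, iters=100, tol=1e-6):
    """Fixed-point iteration for a hierarchical associative memory (the
    DEQ view): refresh the odd layers given the even layers, then the
    even hidden layer given the odd ones, until an equilibrium."""
    xs = [x0] + [np.zeros(s) for s in sizes[1:]]
    for _ in range(iters):
        prev = [x.copy() for x in xs]
        for i in (1, 3):                 # odd layers from their even neighbours
            top = xs[i + 1] @ W[i].T if i + 1 < len(xs) else 0.0
            xs[i] = np.tanh(xs[i - 1] @ W[i - 1] + top)
        xs[2] = np.tanh(xs[1] @ W[1] + xs[3] @ W[2].T)   # even hidden layer
        if max(np.abs(a - b).max() for a, b in zip(xs, prev)) < tol:
            break                        # reached a fixed point (memory retrieved)
    return xs

xs = alternating_retrieval(rng.standard_normal(32))
print([x.shape for x in xs])             # [(32,), (64,), (32,), (16,)]
```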
null | https://openreview.net/forum?id=VOSrMFgWdL | @inproceedings{
bhandarkar2023sequential,
title={Sequential Learning and Retrieval in a Sparse Distributed Memory: The K-winner Modern Hopfield Network},
author={Shaunak Bhandarkar and James Lloyd McClelland},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=VOSrMFgWdL}
} | Many autoassociative memory models rely on a localist framework, using a neuron or slot for each memory. However, neuroscience research suggests that memories depend on sparse, distributed representations over neurons with sparse connectivity. Accordingly, we extend a canonical localist memory model---the modern Hopfield network (MHN)---to a distributed variant called the K-winner modern Hopfield network, equating the number of synaptic parameters (weights) in the localist and K-winner variants. We study both models' abilities to reconstruct once-presented patterns organized into long presentation sequences, updating the parameters of the best-matching memory neuron (or k best neurons) as each new pattern is presented. We find that K-winner MHNs exhibit superior retention of older memories. | Sequential Learning and Retrieval in a Sparse Distributed Memory: The K-winner Modern Hopfield Network | [
"Shaunak Bhandarkar",
"James Lloyd McClelland"
] | Workshop/AMHN | oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=TNw5KrKppB | @inproceedings{
hoover2023energy,
title={Energy Transformer},
author={Benjamin Hoover and Yuchen Liang and Bao Pham and Rameswar Panda and Hendrik Strobelt and Duen Horng Chau and Mohammed J Zaki and Dmitry Krotov},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=TNw5KrKppB}
} | Our work combines aspects of three promising paradigms in machine learning, namely, attention mechanism, energy-based models, and associative memory. Attention is the power-house driving modern deep learning successes, but it lacks clear theoretical foundations. Energy-based models allow a principled approach to discriminative and generative tasks, but the design of the energy functional is not straightforward. At the same time, Dense Associative Memory models or Modern Hopfield Networks have a well-established theoretical foundation, and allow an intuitive design of the energy function. We propose a novel architecture, called the Energy Transformer (or ET for short), that uses a sequence of attention layers that are purposely designed to minimize a specifically engineered energy function, which is responsible for representing the relationships between the tokens. In this work, we introduce the theoretical foundations of ET, explore its empirical capabilities using the image completion task, and obtain strong quantitative results on the graph anomaly detection and graph classification tasks. | Energy Transformer | [
"Benjamin Hoover",
"Yuchen Liang",
"Bao Pham",
"Rameswar Panda",
"Hendrik Strobelt",
"Duen Horng Chau",
"Mohammed J Zaki",
"Dmitry Krotov"
] | Workshop/AMHN | poster | 2302.07253 | [
"https://github.com/zhuergou/energy-transformer-for-graph-anomaly-detection"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
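The core mechanism here, token updates as gradient descent on an explicit energy, can be sketched compactly. The block below is an illustration only: it substitutes the standard modern-Hopfield log-sum-exp energy for ET's purpose-built attention energy, and shows tokens descending the energy and thereby completing corrupted inputs.

```python
import numpy as np

def energy(G, M, beta=4.0):
    """Hopfield-style energy over token rows G and memory rows M; a
    stand-in for ET's specifically engineered attention energy."""
    lse = np.log(np.exp(beta * G @ M.T).sum(axis=1)) / beta
    return 0.5 * (G * G).sum() - lse.sum()

def token_step(G, M, beta=4.0, eta=0.1):
    A = np.exp(beta * G @ M.T)
    A /= A.sum(axis=1, keepdims=True)     # softmax attention over memories
    return G - eta * (G - A @ M)          # gradient descent on energy(G, M)

rng = np.random.default_rng(5)
M = rng.standard_normal((6, 8))                     # learned "memory" vectors
G = M[[0, 3]] + 0.5 * rng.standard_normal((2, 8))   # two corrupted tokens
for _ in range(50):
    G = token_step(G, M)
print(energy(G, M))                       # decreases (for small eta) as tokens settle
```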
null | https://openreview.net/forum?id=SiTNMzCwQ4 | @inproceedings{
herron2023modulating,
title={Modulating interactions to control dynamics of neural networks},
author={Lukas Herron and Pablo Sartori and BingKan Xue},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=SiTNMzCwQ4}
} | Sequential retrieval of stored patterns is a fundamental task that can be performed by neural networks. Previous models of sequential retrieval belong to a general class in which the components of the network are controlled by a slow feedback ("input modulation"). In contrast, we introduce a new class of models in which the feedback modifies the interactions among the components ("interaction modulation"). In particular, we study a model in which the symmetric interactions are modulated. We show that this model is not only capable of retrieving dynamic sequences, but it does so more robustly than a canonical model of input modulation. Our model allows retrieval of patterns with different activity levels, is robust to feedback noise, and has a large dynamic capacity. Our results suggest that interaction modulation may be a new paradigm for controlling network dynamics. | Modulating interactions to control dynamics of neural networks | [
"Lukas Herron",
"Pablo Sartori",
"BingKan Xue"
] | Workshop/AMHN | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=RZmsvaEATv | @inproceedings{
hwang2023generalizable,
title={Generalizable Relational Inference with Cognitive Maps in a Hippocampal Model and in Primates},
author={Jaedong Hwang and Sujaya Neupane and Mehrdad Jazayeri and Ila R Fiete},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=RZmsvaEATv}
} | We investigate the role of cognitive maps and hippocampal-entorhinal architecture in a mental navigation (MNAV) task by conducting experiments in humans, monkeys, and neural network models. Humans can generalize their mental navigation performance to untrained start-target landmark pairs in a given landmark sequence and also rapidly adapt to new sequences. The model uses a continuous-time recurrent neural network (CTRNN) for action decisions and a hippocampal-entorhinal model network, MESH (Memory network with Scaffold and Heteroassociation), for encoding and learning maps. The model is first trained on a navigation-to-sample (NTS) task and tested on the MNAV task where no sensory feedback is available, across five different environments (i.e. landmark sequences). The CTRNN with MESH solves the MNAV task by reconstructing the next image via path integration and vastly outperforms the model with CTRNN alone. In both NTS and MNAV tasks, the MESH-CTRNN model shows better generalization to untrained pairs within each environment and faster adaptation to new environments. Like humans, monkeys also exhibit generalization to untrained landmark pairs in the MNAV task. We compared the neural dynamics in monkeys' entorhinal cortex to the dynamics of the CTRNN and found behaviorally relevant periodic signals in both. The study demonstrates the importance of hippocampal cognitive maps in enabling data-efficient and generalizable learning in the brain. | Generalizable Relational Inference with Cognitive Maps in a Hippocampal Model and in Primates | [
"Jaedong Hwang",
"Sujaya Neupane",
"Mehrdad Jazayeri",
"Ila R Fiete"
] | Workshop/AMHN | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=O5Se9wGYbh | @inproceedings{
joshi2023modern,
title={Modern Hopfield Network with Local Learning Rules for Class Generalization},
author={Shruti A Joshi and Giri Prashanth and Maksim Bazhenov},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=O5Se9wGYbh}
} | The Modern Hopfield Network (MHN) model, recently introduced as an extension of Hopfield networks, allows the memory capacity to scale non-linearly with the size of the network. In previous works, MHNs have been used to store inputs in their connections and reconstruct them from partial inputs. In this work, we examine whether MHNs can be used for classical classification tasks that require generalization to unseen data from the same class. We developed a Modern Hopfield Network based classifier, with the number of hidden neurons equal to the number of classes in the input data and local learning rules, that performs at the same accuracy as an MLP on several vision tasks (classification on MNIST, Fashion-MNIST and CIFAR-10). Our approach allows us to perform classification, pattern completion, and noise-robustness analysis, and to examine the representation of individual classes, within the same network. We identify that temperature determines both accuracy and noise robustness. Overall, in this preliminary report, we propose a simple framework for class generalization using MHNs and demonstrate the feasibility of using MHNs for machine learning tasks that require generalization. | Modern Hopfield Network with Local Learning Rules for Class Generalization | [
"Shruti A Joshi",
"Giri Prashanth",
"Maksim Bazhenov"
] | Workshop/AMHN | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=MuANyzcyrS | @inproceedings{
wang2023rapid,
title={Rapid Learning without Catastrophic Forgetting in the Morris Water Maze},
author={Raymond Wang and Jaedong Hwang and Akhilan Boopathy and Ila R Fiete},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=MuANyzcyrS}
} | Machine learning models typically struggle to swiftly adapt to novel tasks while maintaining proficiency on previously trained tasks. This contrasts starkly with animals, which demonstrate these capabilities easily. The differences between ML models and animals must stem from particular neural architectures and representations for memory and memory-policy interactions. We propose a new task that requires rapid and continual learning, the sequential Morris Water Maze (sWM). Drawing inspiration from biology, we show that 1) a content-addressable heteroassociative memory based on the entorhinal-hippocampal circuit with grid cells that retain knowledge across diverse environments, and 2) a spatially invariant convolutional network architecture for rapid adaptation across unfamiliar environments together perform rapid learning, good generalization, and continual learning without forgetting. Our model simultaneously outperforms ANN baselines from both the continual and few-shot learning contexts. It retains knowledge of past environments while rapidly acquiring the skills to navigate new ones, thereby addressing the seemingly opposing challenges of quick knowledge transfer and sustaining proficiency in previously learned tasks. | Rapid Learning without Catastrophic Forgetting in the Morris Water Maze | [
"Raymond Wang",
"Jaedong Hwang",
"Akhilan Boopathy",
"Ila R Fiete"
] | Workshop/AMHN | oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=M7yGTXajq5 | @inproceedings{
marin-llobet2023hopfieldenhanced,
title={Hopfield-Enhanced Deep Neural Networks for Artifact-Resilient Brain State Decoding},
author={Arnau Marin-Llobet and Arnau Manasanch and Maria V. Sanchez Vives},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=M7yGTXajq5}
} | The study of brain states, ranging from highly synchronous to asynchronous neuronal patterns like the sleep-wake cycle, is fundamental for assessing the brain's spatiotemporal dynamics and their close connection to behavior. However, the development of new techniques to accurately identify them still remains a challenge, as these are often compromised by the presence of noise, artifacts, and suboptimal recording quality. In this study, we propose a two-stage computational framework combining Hopfield Networks for artifact data preprocessing with Convolutional Neural Networks (CNNs) for classification of brain states in rat neural recordings under different levels of anesthesia. To evaluate the robustness of our framework, we deliberately introduced noise artifacts into the neural recordings. We evaluated our hybrid Hopfield-CNN pipeline by benchmarking it against two comparative models: a standalone CNN handling the same noisy inputs, and another CNN trained and tested on artifact-free data. Performance across various levels of data compression and noise intensities showed that our framework can effectively mitigate artifacts, allowing the model to reach parity with the clean-data CNN at lower noise levels. Although this study mainly benefits small-scale experiments, the findings highlight the necessity for advanced deep learning and Hopfield Network models to improve scalability and robustness in diverse real-world settings. | Hopfield-Enhanced Deep Neural Networks for Artifact-Resilient Brain State Decoding | [
"Arnau Marin-Llobet",
"Arnau Manasanch",
"Maria V. Sanchez Vives"
] | Workshop/AMHN | poster | 2311.03421 | [
"https://github.com/arnaumarin/hdnn-artifactbrainstate"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=KwZ43TkKUL | @inproceedings{
stomfai2023multidimensional,
title={Multidimensional Hopfield Networks for clustering},
author={Gergely Stomfai and {\L}ukasz Sienkiewicz and Barbara Rychalska},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=KwZ43TkKUL}
} | We present the Multidimensional Hopfield Network (DHN), a natural generalisation of the Hopfield Network. In our theoretical investigations we focus on DHNs with a certain activation function and provide energy functions for them. We conclude that these DHNs are convergent in finite time, and are equivalent to greedy methods that aim to find graph clusterings of locally minimal cuts. We also show that the general framework of DHNs encapsulates several previously known algorithms used for generating graph embeddings and clusterings. Namely, the Cleora graph embedding algorithm, the Louvain method, and Newman's method can be cast as DHNs with appropriate activation functions and update rules. Motivated by these findings, we provide a generalisation of Newman's method to the multidimensional case. | Multidimensional Hopfield Networks for clustering | [
"Gergely Stomfai",
"Łukasz Sienkiewicz",
"Barbara Rychalska"
] | Workshop/AMHN | poster | 2310.07239 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
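The stated equivalence has a very short procedural form. The toy sketch below is illustrative only (it omits the modularity correction that distinguishes the Louvain/Newman variants): the asynchronous one-hot Hopfield update has each node adopt the label with the largest total edge weight to it, which greedily descends toward a locally minimal cut.

```python
import numpy as np

def greedy_cut_clustering(A, n_clusters=2, iters=20, seed=0):
    """Each node repeatedly adopts the label with the largest total edge
    weight to it: an asynchronous Hopfield-style update over one-hot node
    states that converges to a locally minimal cut."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_clusters, size=A.shape[0])
    for _ in range(iters):
        changed = False
        for i in rng.permutation(A.shape[0]):
            scores = np.array([A[i, labels == c].sum() for c in range(n_clusters)])
            best = int(scores.argmax())
            if best != labels[i]:
                labels[i] = best
                changed = True
        if not changed:
            break                           # fixed point = local cut minimum
    return labels

# Two noisy blocks on the diagonal: the update recovers them.
A = np.block([[np.ones((4, 4)), 0.1 * np.ones((4, 4))],
              [0.1 * np.ones((4, 4)), np.ones((4, 4))]])
print(greedy_cut_clustering(A, n_clusters=2))
```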
null | https://openreview.net/forum?id=KEnMXCcB5C | @inproceedings{
wang2023statisticsguided,
title={Statistics-guided Associative Memories},
author={Hongzhi Wang and Satyananda Kashyap and Niharika Shimona D'Souza and Tanveer Syeda-mahmood},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=KEnMXCcB5C}
} | Content-associative memories such as Hopfield networks have been studied as a good mathematical model of the auto-associative features in the CA3 region of the hippocampal memory system. Modern Hopfield networks (MHN) are generalizations of the classical Hopfield networks with revised energy functions and update rules to expand storage to exponential capacity. However, they are not yet practical due to spurious metastable states leading to recovery errors during memory recall. In this work, we present a fresh perspective on associative memories using joint co-occurrence statistics, and show that accurate recovery of patterns is possible from a partially-specified query using the maximum likelihood principle. In our formulation, memory retrieval is addressed via estimating the joint conditional probability of the retrieved information given the observed associative information. Unlike previous models that have considered independence of features, we perform recovery under the maximal dependency assumption to obtain an upper bound on the joint probability of occurrence of features. We show that this new approximation substantially improves associative memory retrieval accuracy on popular benchmark datasets. | Statistics-guided Associative Memories | [
"Hongzhi Wang",
"Satyananda Kashyap",
"Niharika Shimona D'Souza",
"Tanveer Syeda-mahmood"
] | Workshop/AMHN | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=KCB7lcoo9f | @inproceedings{
serricchio2023daydreaming,
title={Daydreaming Hopfield Networks and their surprising effectiveness on correlated data},
author={Ludovica Serricchio and Claudio Chilin and Dario Bocchi and Raffaele Marino and Matteo Negri and Chiara Cammarota and Federico Ricci-Tersenghi},
booktitle={Associative Memory {\&} Hopfield Networks in 2023},
year={2023},
url={https://openreview.net/forum?id=KCB7lcoo9f}
} | In order to improve the storage capacity of the Hopfield model, we develop a version of the dreaming algorithm, called daydreaming, that is not destructive and that converges asymptotically to a stationary coupling matrix. When trained on random uncorrelated examples, the model shows optimal performance in terms of the size of the basins of attraction of stored examples and the quality of reconstruction. We also train the daydreaming algorithm on correlated data obtained via the random-features model and argue that it exploits the correlations to increase even further the storage capacity and the size of the basins of attraction. | Daydreaming Hopfield Networks and their surprising effectiveness on correlated data | [
"Ludovica Serricchio",
"Claudio Chilin",
"Dario Bocchi",
"Raffaele Marino",
"Matteo Negri",
"Chiara Cammarota",
"Federico Ricci-Tersenghi"
] | Workshop/AMHN | poster | 2405.08777 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
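A minimal sketch of a dreaming-style loop in the spirit of this abstract (assumed constants; not the authors' exact updates): couplings are pushed toward the correlations of the stored patterns and away from the correlations of whatever the network currently retrieves. The update vanishes, and the coupling matrix becomes stationary, exactly when every pattern is its own attractor.

```python
import numpy as np

rng = np.random.default_rng(6)
N, P = 64, 12
xi = rng.choice([-1.0, 1.0], size=(P, N))      # patterns to store
W = np.zeros((N, N))

def relax(x, steps=20):
    """Synchronous sign dynamics toward the nearest attractor."""
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1.0
    return x

lam = 0.01
for epoch in range(2000):
    mu = rng.integers(P)
    x_star = relax(xi[mu].copy())              # what xi^mu currently retrieves
    # Non-destructive dreaming step: reinforce the pattern, unlearn the
    # retrieved state; the step is zero once xi^mu is already an attractor.
    W += lam / N * (np.outer(xi[mu], xi[mu]) - np.outer(x_star, x_star))
    np.fill_diagonal(W, 0.0)

overlaps = [abs(relax(p.copy()) @ p) / N for p in xi]
print(min(overlaps))                           # should approach 1.0 as spurious states are unlearned
```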