Dataset columns as reported by the dataset viewer (dtype, then min/max length or value, or number of distinct classes):

| column | dtype | min | max |
| --- | --- | --- | --- |
| bibtex_url | null | | |
| proceedings | stringlengths | 42 | 42 |
| bibtext | stringlengths | 197 | 792 |
| abstract | stringlengths | 303 | 3.45k |
| title | stringlengths | 10 | 159 |
| authors | sequencelengths | 1 | 28 |
| id | stringclasses (44 values) | | |
| type | stringclasses (16 values) | | |
| arxiv_id | stringlengths | 0 | 10 |
| GitHub | sequencelengths | 1 | 1 |
| paper_page | stringclasses (444 values) | | |
| n_linked_authors | int64 | -1 | 9 |
| upvotes | int64 | -1 | 42 |
| num_comments | int64 | -1 | 13 |
| n_authors | int64 | -1 | 92 |
| paper_page_exists_pre_conf | int64 | 0 | 1 |
| Models | sequencelengths | 0 | 100 |
| Datasets | sequencelengths | 0 | 11 |
| Spaces | sequencelengths | 0 | 100 |
null
https://openreview.net/forum?id=rD62RkufYo
@inproceedings{ wang2023predictive, title={Predictive variational autoencoder for learning robust representations of time-series data}, author={Julia Huiming Wang and Dexter Tsin and Tatiana A Engel}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=rD62RkufYo} }
Variational autoencoders (VAEs) have been used extensively to discover low-dimensional latent factors governing neural activity and animal behavior. However, without careful model selection, the uncovered latent factors may reflect noise in the data rather than true underlying features, rendering such representations unsuitable for scientific interpretation. Existing solutions to this problem involve introducing additional measured variables or data augmentations specific to a particular data type. We propose a VAE architecture that predicts the next point in time and show that it mitigates the learning of spurious features. In addition, we introduce a model selection metric based on smoothness over time in the latent space. We show that, together, these two constraints encouraging VAEs to be smooth over time produce robust latent representations and faithfully recover latent factors on synthetic datasets.
Predictive variational autoencoder for learning robust representations of time-series data
[ "Julia Huiming Wang", "Dexter Tsin", "Tatiana A Engel" ]
Workshop/UniReps
poster
2312.06932
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
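To make the idea in the abstract above concrete, here is a minimal sketch of a VAE whose decoders both reconstruct the current time point and predict the next one, plus a simple latent-smoothness score for model selection. This is not the authors' code: the layer sizes, the `beta` weight, and the particular smoothness metric are illustrative assumptions, written in PyTorch.

```python
# Minimal sketch of a predictive VAE for time series (illustrative, not the paper's implementation).
# The encoder maps x_t to a latent z_t; one decoder reconstructs x_t, a second predicts x_{t+1}.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PredictiveVAE(nn.Module):
    def __init__(self, x_dim=50, z_dim=3, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.decode_now = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(), nn.Linear(hidden, x_dim))
        self.decode_next = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(), nn.Linear(hidden, x_dim))

    def forward(self, x_t):
        h = self.encoder(x_t)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.decode_now(z), self.decode_next(z), mu, logvar

def loss_fn(model, x_t, x_next, beta=1.0):
    rec_t, pred_next, mu, logvar = model(x_t)
    recon = F.mse_loss(rec_t, x_t) + F.mse_loss(pred_next, x_next)   # reconstruct + predict next step
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

def latent_smoothness(model, x_seq):
    """Model-selection heuristic: mean squared step size of the latent trajectory (lower = smoother)."""
    with torch.no_grad():
        mu = model.mu(model.encoder(x_seq))
    return (mu[1:] - mu[:-1]).pow(2).mean().item()

x = torch.randn(200, 50)                     # toy sequence: 200 time points, 50 channels
model = PredictiveVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model, x[:-1], x[1:])
    loss.backward()
    opt.step()
print("smoothness score:", latent_smoothness(model, x))
```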
null
https://openreview.net/forum?id=qqdHkqHmfA
@inproceedings{ aw2023instructiontuned, title={Instruction-tuned {LLM}s with World Knowledge are More Aligned to the Human Brain}, author={Khai Loong Aw and Syrielle Montariol and Badr AlKhamissi and Martin Schrimpf and Antoine Bosselut}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=qqdHkqHmfA} }
Instruction-tuning is a widely adopted method of finetuning that enables large language models (LLMs) to generate output that more closely resembles human responses to natural language queries, in many cases leading to human-level performance on diverse testbeds. However, it remains unclear whether instruction-tuning truly makes LLMs more similar to how humans process language. We investigate the effect of instruction-tuning on brain alignment, the similarity of LLM internal representations to neural activity in the human language system. We assess 25 vanilla and instruction-tuned LLMs across three datasets involving humans reading naturalistic stories and sentences, and discover that instruction-tuning generally enhances brain alignment by an average of 6%. To identify the factors underlying LLM-brain alignment, we compute the correlation between the brain alignment of LLMs and various model properties, such as model size, performance ability on problem-solving benchmarks, and ability on benchmarks requiring world knowledge spanning various domains. Notably, we find a strong positive correlation between brain alignment and model size (r = 0.95), as well as performance on tasks requiring world knowledge (r = 0.81). Our results demonstrate that instruction-tuning LLMs improves both world knowledge representations and human brain alignment, suggesting that mechanisms that encode world knowledge in LLMs also improve representational alignment to the human brain.
Instruction-tuned LLMs with World Knowledge are More Aligned to the Human Brain
[ "Khai Loong Aw", "Syrielle Montariol", "Badr AlKhamissi", "Martin Schrimpf", "Antoine Bosselut" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
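"Brain alignment" in this line of work is commonly scored by fitting a regularized linear map from model activations to voxel responses and correlating predictions with held-out data. The sketch below shows that generic linear-predictivity recipe on synthetic arrays; it is not the specific benchmark pipeline, datasets, or hyperparameters used in the paper.

```python
# Generic linear-predictivity sketch of "brain alignment": ridge regression from model
# activations to voxel responses, scored by mean Pearson r on held-out stimuli.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def brain_alignment(features, voxels, alpha=1.0, n_splits=5):
    """features: (n_stimuli, n_units) model activations; voxels: (n_stimuli, n_voxels) fMRI data."""
    scores = []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(features):
        reg = Ridge(alpha=alpha).fit(features[train], voxels[train])
        pred = reg.predict(features[test])
        # Pearson correlation per voxel between predicted and measured responses
        p = (pred - pred.mean(0)) / (pred.std(0) + 1e-8)
        v = (voxels[test] - voxels[test].mean(0)) / (voxels[test].std(0) + 1e-8)
        scores.append((p * v).mean(0))           # mean over test stimuli gives r per voxel
    return np.mean(scores)

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 768))              # toy activations for 200 sentences
vox = feats @ rng.normal(size=(768, 50)) * 0.1 + rng.normal(size=(200, 50))  # toy voxel data
print("alignment (mean r):", brain_alignment(feats, vox))
```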
null
https://openreview.net/forum?id=qNZ3Tyrq9C
@inproceedings{ jiang2023on, title={On Transferring Expert Knowledge from Tabular Data to Images}, author={Jun-Peng Jiang and Han-Jia Ye and Leye Wang and Yang Yang and Yuan Jiang and De-Chuan Zhan}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=qNZ3Tyrq9C} }
Transferring knowledge across modalities has garnered significant attention in the field of machine learning as it enables the utilization of expert knowledge from diverse domains. In particular, the representation of expert knowledge in tabular form, commonly found in fields such as medicine, can greatly enhance the comprehensiveness and accuracy of image-based learning. However, the transfer of knowledge from tabular to image data presents unique challenges due to the distinct characteristics of these data types, making it challenging to determine "how to reuse" and "which subset to reuse". To address this, we propose a novel method called CHannel tAbulaR alignment with optiMal tranSport (CHARMS) that automatically and effectively transfers relevant tabular knowledge. Specifically, by maximizing the mutual information between a group of channels and tabular features, our method modifies the visual embedding and captures the semantics of tabular knowledge. The alignment between channels and attributes helps select the subset of tabular data whose knowledge transfers to images. Experimental results demonstrate that CHARMS effectively reuses tabular knowledge to improve the performance and interpretability of visual classifiers.
On Transferring Expert Knowledge from Tabular Data to Images
[ "Jun-Peng Jiang", "Han-Jia Ye", "Leye Wang", "Yang Yang", "Yuan Jiang", "De-Chuan Zhan" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=q3nOLvC6bU
@inproceedings{ su2023on, title={On the Robustness of Neural Collapse and the Neural Collapse of Robustness}, author={Jingtong Su and Ya Shi Zhang and Nikolaos Tsilivis and Julia Kempe}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=q3nOLvC6bU} }
Neural Collapse refers to the curious phenomenon at the end of training of a neural network, where feature vectors and classification weights converge to a very simple geometrical arrangement (a simplex). While it has been observed empirically in various cases and has been theoretically motivated, its connection with crucial properties of neural networks, like their generalization and robustness, remains unclear. In this work, we study the stability properties of these simplices. We find that the simplex structure disappears under small adversarial attacks, and that perturbed examples "leap" between simplex vertices. We further analyze the geometry of networks that are optimized to be robust against adversarial perturbations of the input, and find that Neural Collapse is a pervasive phenomenon in these cases as well, with clean and perturbed representations forming aligned simplices, and giving rise to a robust simple nearest-neighbor classifier. By studying the propagation of the amount of collapse inside the network, we identify novel properties of both robust and non-robust machine learning models, and show that earlier layers, unlike later ones, maintain reliable simplices on perturbed data.
On the Robustness of Neural Collapse and the Neural Collapse of Robustness
[ "Jingtong Su", "Ya Shi Zhang", "Nikolaos Tsilivis", "Julia Kempe" ]
Workshop/UniReps
poster
2311.07444
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
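Two standard diagnostics behind statements like the abstract above are (i) whether centered class-mean directions form a simplex equiangular tight frame (pairwise cosine close to -1/(K-1)) and (ii) how small within-class variability is relative to between-class variability. Below is a small, self-contained sketch of both measurements on placeholder features; it does not reproduce the paper's adversarial-robustness experiments.

```python
# Sketch of two common Neural Collapse diagnostics on a set of feature vectors:
# (1) pairwise cosines of centered class means approach -1/(K-1) (simplex ETF),
# (2) within-class variability shrinks relative to between-class variability.
# Feature extraction from a trained network is assumed to happen elsewhere.
import numpy as np

def neural_collapse_stats(features, labels):
    classes = np.unique(labels)
    K = len(classes)
    means = np.stack([features[labels == c].mean(0) for c in classes])
    centered = means - features.mean(0)
    unit = centered / np.linalg.norm(centered, axis=1, keepdims=True)
    cos = unit @ unit.T
    off_diag = cos[~np.eye(K, dtype=bool)]
    # within- vs between-class scatter (summed variances), a common collapse measure
    within = np.mean([np.var(features[labels == c] - means[i], axis=0).sum()
                      for i, c in enumerate(classes)])
    between = np.var(centered, axis=0).sum()
    return off_diag.mean(), -1.0 / (K - 1), within / between

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=2000)
feats = np.eye(10)[labels] * 5 + rng.normal(size=(2000, 10)) * 0.1   # nearly collapsed toy features
mean_cos, etf_target, ratio = neural_collapse_stats(feats, labels)
print(f"mean off-diagonal cosine {mean_cos:.3f} (ETF target {etf_target:.3f}), within/between {ratio:.4f}")
```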
null
https://openreview.net/forum?id=pj7WZFx3l7
@inproceedings{ khan2023mesa, title={MeSa: Masked, Geometric, and Supervised Pre-training for Monocular Depth Estimation}, author={Muhammad Osama Khan and Junbang Liang and Chun-Kai Wang and Shan Yang and Yu Lou}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=pj7WZFx3l7} }
Pre-training has been an important ingredient in developing strong monocular depth estimation models in recent years. For instance, self-supervised learning (SSL) is particularly effective by alleviating the need for large datasets with dense ground-truth depth maps. However, despite these improvements, our study reveals that the later layers of the SOTA SSL method are actually suboptimal. By examining the layer-wise representations, we demonstrate significant changes in these later layers during fine-tuning, indicating the ineffectiveness of their pre-trained features for depth estimation. To address these limitations, we propose MeSa, a unified framework that leverages the complementary strengths of masked, geometric, and supervised pre-training. Hence, MeSa benefits from not only general-purpose representations learnt via masked pre-training but also specialized depth-specific features acquired via geometric and supervised pre-training. Our CKA layer-wise analysis confirms that our pre-training strategy indeed produces improved representations for the later layers, overcoming the drawbacks of the SOTA SSL method. Furthermore, via experiments on the NYUv2 and IBims-1 datasets, we demonstrate that these enhanced representations translate to performance improvements in both the in-distribution and out-of-distribution settings. We also investigate the influence of the pre-training dataset and demonstrate the efficacy of pre-training on LSUN, which yields significantly better pre-trained representations. Overall, our approach surpasses the masked pre-training SSL method by a substantial margin of 17.1% on the RMSE. Moreover, even without utilizing any recently proposed techniques, MeSa also outperforms the most recent methods and establishes a new state-of-the-art for monocular depth estimation on the challenging NYUv2 dataset.
MeSa: Masked, Geometric, and Supervised Pre-training for Monocular Depth Estimation
[ "Muhammad Osama Khan", "Junbang Liang", "Chun-Kai Wang", "Shan Yang", "Yu Lou" ]
Workshop/UniReps
poster
2310.04551
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=pZakRK1QHU
@inproceedings{ ivanitskiy2023linearly, title={Linearly Structured World Representations in Maze-Solving Transformers}, author={Michael Ivanitskiy and Alexander F Spies and Tilman R{\"a}uker and Guillaume Corlouer and Christopher Mathwin and Lucia Quirke and Can Rager and Rusheb Shah and Dan Valentine and Cecilia Diniz Behn and Katsumi Inoue and Samy Wu Fung}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=pZakRK1QHU} }
The emergence of seemingly similar representations across tasks and neural architectures suggests that convergent properties may underlie sophisticated behavior. One form of representation that seems particularly fundamental to reasoning in many artificial (and perhaps natural) networks is the formation of world models, which decompose observed task structures into re-usable perceptual primitives and task-relevant relations. In this work, we show that auto-regressive transformers tasked with solving mazes learn to linearly represent the structure of mazes, and that the formation of these representations coincides with a sharp increase in generalization performance. Furthermore, we find preliminary evidence for Adjacency Heads which may play a role in computing valid paths through mazes.
Linearly Structured World Representations in Maze-Solving Transformers
[ "Michael Ivanitskiy", "Alexander F Spies", "Tilman Räuker", "Guillaume Corlouer", "Christopher Mathwin", "Lucia Quirke", "Can Rager", "Rusheb Shah", "Dan Valentine", "Cecilia Diniz Behn", "Katsumi Inoue", "Samy Wu Fung" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=oWJP0NhcY7
@inproceedings{ lengyel2023a, title={A General Method for Testing Bayesian Models using Neural Data}, author={Gabor Lengyel and Sabyasachi Shivkumar and Ralf M Haefner}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=oWJP0NhcY7} }
Bayesian models have been successful in explaining human and animal behavior, but the extent to which they can also explain neural activity is still an open question. A major obstacle to answering this question is that current methods for generating neural predictions require detailed and specific assumptions about the encoding of posterior beliefs in neural responses, with no consensus or decisive data about the nature of this encoding. Here, we present a new method that overcomes these challenges for a wide class of probabilistic encodings -- including the two major classes of neural sampling and distributed distributional codes -- and prove conditions for its validity. Our method tests whether the relationships between the model posteriors for different stimuli match the relationships between the corresponding neural responses -- akin to representational similarity analysis (RSA), a widely used method for nonprobabilistic models. Finally, we present a new model comparison diagnostic for our method, based not on the agreement of the model with the data directly, but on the alignment of the model and data when injecting noise in our neural prediction generation method. We illustrate our method using simulated V1 data and compare two Bayesian models that are practically indistinguishable using behavior alone. Our results show a powerful new way to rigorously test Bayesian models on neural data.
A General Method for Testing Bayesian Models using Neural Data
[ "Gabor Lengyel", "Sabyasachi Shivkumar", "Ralf M Haefner" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=oPGXH9Vm4R
@inproceedings{ stoica2023zipit, title={ZipIt!: Multitask Model Merging without Training}, author={George Stoica and Daniel Bolya and Jakob Brandt Bjorner and Pratik Ramesh and Taylor Hearn and Judy Hoffman}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=oPGXH9Vm4R} }
We tackle the extremely difficult problem of combining distinct models with different initializations, each solving a separate task, into one multi-task model $\textbf{without any additional training}$. Prior work in model merging permutes one model to the space of the other then averages them together. While this works for models trained on the same task, we find that this fails to account for the differences in models trained on disjoint tasks. Thus, we introduce "ZipIt!", a general method for merging two arbitrary models of the same architecture that incorporates two simple strategies. First, in order to account for features that aren't shared between models, we expand the model merging problem to allow for merging features $\textit{within}$ each model by defining a general "zip" operation. Second, we add support for $\textit{partially zipping}$ the models up until a specified layer, naturally creating a multi-head model. We find that these two changes combined account for a staggering 20-50% improvement over prior work,
ZipIt!: Multitask Model Merging without Training
[ "George Stoica", "Daniel Bolya", "Jakob Brandt Bjorner", "Pratik Ramesh", "Taylor Hearn", "Judy Hoffman" ]
Workshop/UniReps
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=nytcN3SMQh
@inproceedings{ moore2023degradation, title={Degradation and plasticity in convolutional neural networks: An investigation of internal representations}, author={Jasmine A Moore and Vibujithan Vigneshwaran and Matthias Wilms and Nils Forkert}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=nytcN3SMQh} }
The architecture and information processing of convolutional neural networks were originally heavily inspired by the biological visual system. In this work, we make use of these similarities to create an in silico model of neurodegenerative diseases affecting the visual system. We examine layer-wise internal representations and accuracy levels of the model as it is subjected to synaptic decay and retraining to investigate whether it is possible to capture a biologically realistic profile of visual cognitive decline. To this end, we progressively decay and freeze model synapses in a highly compressed model trained for object recognition. Between each iteration of progressive model degradation, we retrain the remaining unaffected synapses on subsets of the initial training data to simulate continual neuroplasticity. The results of this work show that even with high levels of synaptic decay and limited retraining data, the model is able to regain internal representations similar to those of the unaffected, healthy model. We also demonstrate that throughout a complete cycle of model degradation, the early layers of the model retain high levels of centered kernel alignment similarity, while later layers containing high-level information are much more likely to deviate from the healthy model.
Degradation and plasticity in convolutional neural networks: An investigation of internal representations
[ "Jasmine A Moore", "Vibujithan Vigneshwaran", "Matthias Wilms", "Nils Forkert" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=nro8tEfIfw
@inproceedings{ l{\"a}hner2023on, title={On the Direct Alignment of Latent Spaces}, author={Zorah L{\"a}hner and Michael Moeller}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=nro8tEfIfw} }
The wide adoption of deep learning and pre-trained models raises the question of how to effectively reuse existing latent spaces for new applications. One important question is how the geometry of the latent space changes between different training runs of the same architecture and between different architectures trained for the same task. Previous works proposed that the latent spaces for similar tasks are approximately isometric. However, in this work we show that methods restricted to this assumption perform worse than simply using a linear transformation to align the latent spaces. We propose directly computing a transformation between the latent codes of different architectures, which is more efficient than previous approaches and flexible with respect to the type of transformation used. Our experiments show that aligning the latent spaces with a linear transformation performs best while not needing more prior knowledge.
On the Direct Alignment of Latent Spaces
[ "Zorah Lähner", "Michael Moeller" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
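The comparison the abstract above describes, an unconstrained linear map versus an isometry, can be illustrated on paired latent codes with a few lines of linear algebra. The data below are synthetic and the transformation classes are reduced to least squares versus orthogonal Procrustes; the paper's experiments cover richer settings.

```python
# Sketch comparing two ways of aligning paired latent codes of the same inputs from two models:
# an unconstrained linear map (least squares) and an isometry (orthogonal Procrustes).
import numpy as np

rng = np.random.default_rng(0)
Z1 = rng.normal(size=(1000, 64))                           # latent codes under model 1
A = np.eye(64) + 0.3 * rng.normal(size=(64, 64))           # a non-isometric distortion
Z2 = Z1 @ A + 0.05 * rng.normal(size=(1000, 64))           # latent codes under "model 2"

# unconstrained linear alignment: minimize ||Z1 W - Z2||_F
W, *_ = np.linalg.lstsq(Z1, Z2, rcond=None)

# isometric alignment (orthogonal Procrustes): R = U V^T from the SVD of Z1^T Z2
U, _, Vt = np.linalg.svd(Z1.T @ Z2)
R = U @ Vt

rel_err = lambda M: np.linalg.norm(Z1 @ M - Z2) / np.linalg.norm(Z2)
print(f"linear map error {rel_err(W):.3f}  vs  orthogonal map error {rel_err(R):.3f}")
```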
null
https://openreview.net/forum?id=md0mU6rN2u
@inproceedings{ wang2023samclip, title={{SAM}-{CLIP}: Merging Vision Foundation Models towards Semantic and Spatial Understanding}, author={Haoxiang Wang and Pavan Kumar Anasosalu Vasu and Fartash Faghri and Raviteja Vemulapalli and Mehrdad Farajtabar and Sachin Mehta and Mohammad Rastegari and Oncel Tuzel and Hadi Pouransari}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=md0mU6rN2u} }
The landscape of publicly available vision foundation models (VFMs), such as CLIP and SAM, is expanding rapidly. VFMs are endowed with distinct capabilities stemming from their pretraining objectives. For instance, CLIP excels in semantic understanding, while SAM specializes in spatial understanding for segmentation. In this work, we introduce a simple recipe based on multi-task distillation to efficiently merge VFMs into a unified model that assimilates their expertise. By applying our method to SAM and CLIP, we derive SAM-CLIP: a unified model that amalgamates the strengths of SAM and CLIP into a single backbone, making it apt for edge device applications. We show that SAM-CLIP learns richer visual representations, equipped with both localization and semantic features, suitable for a broad range of vision tasks. We further show that SAM-CLIP not only retains the foundational strengths of its precursor models but also introduces \emph{synergistic functionalities}, most notably in zero-shot semantic segmentation, where SAM-CLIP establishes new state-of-the-art results. It outperforms previous models that are specifically designed for this task by a large margin, including +6.8\% and +5.9\% mean IoU improvement on Pascal-VOC and COCO-Stuff datasets, respectively.
SAM-CLIP: Merging Vision Foundation Models towards Semantic and Spatial Understanding
[ "Haoxiang Wang", "Pavan Kumar Anasosalu Vasu", "Fartash Faghri", "Raviteja Vemulapalli", "Mehrdad Farajtabar", "Sachin Mehta", "Mohammad Rastegari", "Oncel Tuzel", "Hadi Pouransari" ]
Workshop/UniReps
poster
2310.15308
[ "" ]
https://huggingface.co/papers/2310.15308
6
22
4
9
1
[]
[]
[]
null
https://openreview.net/forum?id=mDr0o2WU2n
@inproceedings{ zhou2023comparing, title={Comparing neural models using their perceptual discriminability predictions}, author={Jingyang Zhou and Chanwoo Chun and Ajay Subramanian and Eero P Simoncelli}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=mDr0o2WU2n} }
A variety of methods have been developed to compare models of visual representation. However, internal representations are not uniquely identifiable from perceptual measurements: different representations can generate identical perceptual predictions, and dissimilar model representations (according to existing model comparison methods) do not guarantee dissimilar perceptual predictions. Here, we generalize a previous method (“eigendistortions” - Berardino et al, 2017) to compare models based on their metric tensors. Metric tensors characterize a model’s sensitivity to stimulus perturbations, reflecting both the geometric and stochastic properties of the representation, and providing an explicit prediction of perceptual discriminability. Brute force comparison of model-predicted metric tensors using human perceptual thresholds would require an impossibly large set of measurements, since one needs to perturb a stimulus in all possible orthogonal directions. To circumvent this “perceptual curse of dimensionality”, we compute and measure discrimination capabilities for a small set of most-informative perturbations, reducing the measurement cost from thousands of hours (a conservative estimate) to a single trial. We show that this single measurement, made for a variety of different test stimuli, is sufficient to differentiate models, select models that better match human perception, or generate new models that combine the advantages of both. We demonstrate the power of this method in assessing two examples: 1) comparing models for color discrimination; 2) comparing autoencoders trained with different regularizers.
Comparing neural models using their perceptual discriminability predictions
[ "Jingyang Zhou", "Chanwoo Chun", "Ajay Subramanian", "Eero P Simoncelli" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=m33S4kCJSd
@inproceedings{ lee2023semiensemble, title={Semi-Ensemble: A Simple Approach Over-parameterize Model Interpolation}, author={Jiwoon Lee and Jaeho Lee}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=m33S4kCJSd} }
We develop a unified framework for interpolating two models with various degrees of over-parameterization, having model merging and model ensemble as special cases. Instead of directly interpolating models in their original parameter space, the proposed Semi-Ensemble interpolates the over-parameterized versions of the models in a higher-dimensional joint parameter space. Here, the over-parameterizations recover each endpoint model when projected to some low-dimensional subspace spanned by a fraction of bases. By carefully constructing the joint parameter space, the interpolated model can achieve a smooth tradeoff between the total number of parameters and the model accuracy, outperforming existing baselines. Intriguingly, we show that Semi-ensembles can sometimes achieve a better performance than vanilla ensembles, even with a slightly smaller number of parameters.
Semi-Ensemble: A Simple Approach Over-parameterize Model Interpolation
[ "Jiwoon Lee", "Jaeho Lee" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=l5I9t7GSbm
@inproceedings{ salehi2023clip, title={{CLIP} meets Model Zoo Experts: Pseudo-Supervision for Visual Enhancement}, author={Mohammadreza Salehi and Mehrdad Farajtabar and Maxwell Horton and Fartash Faghri and Hadi Pouransari and Raviteja Vemulapalli and Oncel Tuzel and Ali Farhadi and Mohammad Rastegari and Sachin Mehta}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=l5I9t7GSbm} }
Contrastive language image pretraining (CLIP) is a standard method for training vision-language models. While CLIP is scalable, promptable, and robust to distribution shifts on image classification tasks, it lacks object localization capabilities. This paper studies the following question: Can we augment CLIP training with task-specific vision models from model zoos to improve its visual representations? Towards this end, we leverage open-source task-specific vision models to generate pseudo-labels for an uncurated web-scale image-text dataset. Subsequently, we train CLIP models on these pseudo-labels in addition to the contrastive training on image and text pairs. This simple setup shows substantial improvements of up to 16.3\% across different vision tasks, including segmentation, detection, depth estimation, and surface normal estimation. Importantly, these enhancements are achieved without compromising CLIP's existing capabilities, including its proficiency in promptable zero-shot classification.
CLIP meets Model Zoo Experts: Pseudo-Supervision for Visual Enhancement
[ "Mohammadreza Salehi", "Mehrdad Farajtabar", "Maxwell Horton", "Fartash Faghri", "Hadi Pouransari", "Raviteja Vemulapalli", "Oncel Tuzel", "Ali Farhadi", "Mohammad Rastegari", "Sachin Mehta" ]
Workshop/UniReps
poster
2310.14108
[ "" ]
https://huggingface.co/papers/2310.14108
3
1
0
10
1
[]
[]
[]
null
https://openreview.net/forum?id=j641gZOD7m
@inproceedings{ kuntala2023understanding, title={Understanding Learning Dynamics of Neural Representations via Feature Visualization at Scale}, author={Chandana Kuntala and Carlos R Ponce and Deepak Kumar Sharma and Binxu Wang}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=j641gZOD7m} }
How does feature learning happen during the training of a neural network? We developed an accelerated pipeline to synthesize maximally activating images ("prototypes") for hidden units in a parallel fashion. Through this, we were able to perform feature visualization at scale, and to track the emergence and development of visual features across the training of neural networks. Using this technique, we studied the `developmental' process of features in a convolutional neural network trained from scratch using SimCLR with or without color jittering augmentation. After creating over one million prototypes with our method, tracking and comparing these visual signatures showed that the color-jittering augmentation led to constantly diversifying high-level features during training, while no color-jittering led to more diverse low-level features but less development of high-level features. These results illustrate how feature visualization can be used to understand training dynamics under different training objectives and data distribution.
Understanding Learning Dynamics of Neural Representations via Feature Visualization at Scale
[ "Chandana Kuntala", "Carlos R Ponce", "Deepak Kumar Sharma", "Binxu Wang" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=iZ3me8unMJ
@inproceedings{ ligeralde2023unsupervised, title={Unsupervised learning on spontaneous retinal activity leads to efficient neural representation geometry}, author={Andrew Ligeralde and Yilun Kuang and Thomas Edward Yerxa and Miah N Pitcher and Marla Feller and SueYeon Chung}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=iZ3me8unMJ} }
Prior to the onset of vision, neurons in the developing mammalian retina spontaneously fire in correlated activity patterns known as retinal waves. Experimental evidence suggests that retinal waves strongly influence the emergence of sensory representations before visual experience. We aim to model this early stage of functional development by using movies of neurally active developing retinas as pre-training data for neural networks. Specifically, we pre-train a ResNet-18 with an unsupervised contrastive learning objective (SimCLR) on both simulated and experimentally-obtained movies of retinal waves, then evaluate its performance on image classification tasks. We find that pre-training on retinal waves significantly improves performance on tasks that test object invariance to spatial translation, while slightly improving performance on more complex tasks like image classification. Notably, these performance boosts are realized on held-out natural images even though the pre-training procedure does not include any natural image data. We then propose a geometrical explanation for the increase in network performance, namely that the spatiotemporal characteristics of retinal waves facilitate the formation of separable feature representations. In particular, we demonstrate that networks pre-trained on retinal waves are more effective at separating image manifolds than randomly initialized networks, especially for manifolds defined by sets of spatial translations. These findings indicate that the broad spatiotemporal properties of retinal waves prepare networks for higher order feature extraction.
Unsupervised learning on spontaneous retinal activity leads to efficient neural representation geometry
[ "Andrew Ligeralde", "Yilun Kuang", "Thomas Edward Yerxa", "Miah N Pitcher", "Marla Feller", "SueYeon Chung" ]
Workshop/UniReps
poster
2312.02791
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=iEnGqvuI8W
@inproceedings{ pospisil2023estimating, title={Estimating shape distances on neural representations with limited samples}, author={Dean A Pospisil and Brett W. Larsen and Sarah E Harvey and Alex H Williams}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=iEnGqvuI8W} }
Measuring geometric similarity between high-dimensional network representations is a topic of longstanding interest to neuroscience and deep learning. Although many methods have been proposed, only a few works have rigorously analyzed their statistical efficiency or quantified estimator uncertainty in data-limited regimes. Here, we derive upper and lower bounds on the worst-case convergence of standard estimators of shape distance—a measure of representational dissimilarity proposed by Williams et al. (2021). These bounds reveal the challenging nature of the problem in high-dimensional feature spaces. To overcome these challenges, we introduce a novel method-of-moments estimator with a tunable bias-variance tradeoff parameterized by an upper bound on bias. We show that this estimator achieves superior performance to standard estimators in simulation and on neural data, particularly in high-dimensional settings. Our theoretical work and estimator thus respectively define and dramatically expand the scope of neural data for which geometric similarity can be accurately measured.
Estimating shape distances on neural representations with limited samples
[ "Dean A Pospisil", "Brett W. Larsen", "Sarah E Harvey", "Alex H Williams" ]
Workshop/UniReps
poster
2310.05742
[ "https://github.com/dp4846/eigmom_shape_stats" ]
https://huggingface.co/papers/2310.05742
0
0
0
4
1
[]
[]
[]
null
https://openreview.net/forum?id=hWzWk0TSZ3
@inproceedings{ liu2023growing, title={Growing Brains: Co-emergence of Anatomical and Functional Modularity in Recurrent Neural Networks}, author={Ziming Liu and Mikail Khona and Ila R Fiete and Max Tegmark}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=hWzWk0TSZ3} }
Recurrent neural networks (RNNs) trained on compositional tasks can exhibit functional modularity, in which neurons can be clustered by activity similarity and specialization on a shared computational subtask. Unlike brains, these RNNs do not exhibit anatomical modularity, in which functional clustering is correlated with strong recurrent coupling and spatial localization of functional clusters. Contrasting with functional modularity, which can be ephemerally dependent on the input, anatomically modular networks form a robust substrate for solving the same subtasks in the future. To examine whether it is possible to grow brain-like anatomical modularity, we apply a recent machine learning method, brain-inspired modular training (BIMT), to a network being trained to solve a set of compositional tasks. We find that functional and anatomical clustering emerge together, such that functionally similar neurons also become spatially localized and interconnected. Moreover, compared to standard $L_1$ regularization or no regularization settings, the model exhibits superior performance by optimally balancing task performance and network sparsity. In addition to achieving brain-like organization in RNNs, our findings also suggest that BIMT holds promise for applications in neuromorphic computing and enhancing the interpretability of neural network architectures.
Growing Brains: Co-emergence of Anatomical and Functional Modularity in Recurrent Neural Networks
[ "Ziming Liu", "Mikail Khona", "Ila R Fiete", "Max Tegmark" ]
Workshop/UniReps
poster
2310.07711
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=gG5UGgUzDR
@inproceedings{ klabunde2023towards, title={Towards Measuring Representational Similarity of Large Language Models}, author={Max Klabunde and Mehdi Ben Amor and Michael Granitzer and Florian Lemmerich}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=gG5UGgUzDR} }
Understanding the similarity of the numerous large language models released has many uses, e.g., simplifying model selection, detecting illegal model reuse, and advancing our understanding of what makes LLMs perform well. In this work, we measure the similarity of representations of a set of LLMs with 7B parameters. Our results suggest that some LLMs are substantially different from others. We identify challenges of using representational similarity measures, which suggest the need for a careful study of similarity scores to avoid false conclusions.
Towards Measuring Representational Similarity of Large Language Models
[ "Max Klabunde", "Mehdi Ben Amor", "Michael Granitzer", "Florian Lemmerich" ]
Workshop/UniReps
poster
2312.02730
[ "https://github.com/mklabunde/llm_repsim" ]
-1
-1
-1
-1
0
[]
[]
[]
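One widely used representational-similarity measure in this area is linear centered kernel alignment (CKA); the sketch below implements it on stand-in activation matrices. It is offered as background for the kind of comparison discussed above, not as the paper's exact choice of measures or models.

```python
# Linear CKA, a commonly used representational-similarity measure, on stand-in activations.
import numpy as np

def linear_cka(X, Y):
    """X: (n, d1), Y: (n, d2) activations of two models on the same n inputs."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
base = rng.normal(size=(512, 1024))                                  # hidden states of "model A"
similar = base + 0.1 * base @ rng.normal(size=(1024, 1024)) / 32     # near-identity transform of A
unrelated = rng.normal(size=(512, 1024))                             # an unrelated "model B"
print("CKA(base, similar)  :", round(linear_cka(base, similar), 3))
print("CKA(base, unrelated):", round(linear_cka(base, unrelated), 3))
```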
null
https://openreview.net/forum?id=dpYcX84x6M
@inproceedings{ wang2023bioinspired, title={Bio-inspired parameter reuse: Exploiting inter-frame representation similarity with recurrence for accelerating temporal visual processing}, author={Zuowen Wang and Longbiao Cheng and Joachim Ott and Pehuen Moure and Shih-Chii Liu}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=dpYcX84x6M} }
Feedforward neural networks are the dominant approach in current computer vision research. They typically do not incorporate recurrence, which is a prominent feature of biological vision brain circuitry. Inspired by biological findings, we introduce $\textbf{RecSlowFast}$, a recurrent slow-fast framework aimed at showing how recurrence can be useful for temporal visual processing. We perform a variable number of recurrent steps of certain layers in a network receiving input video frames, where each recurrent step is equivalent to a feedforward layer with weights reuse. By harnessing the hidden states extracted from the previous input frame, we reduce the computation cost by executing fewer recurrent steps on temporally correlated consecutive frames, while keeping good task accuracy. The early termination of the recurrence can be dynamically determined through newly introduced criteria based on the distance between hidden states and without using any auxiliary scheduler network. RecSlowFast $\textbf{reuses a single set of parameters}$, unlike previous work which requires one computationally heavy network and one light network, to achieve the speed versus accuracy trade-off. Using a new $\textit{Temporal Pathfinder}$ dataset proposed in this work, we evaluate RecSlowFast on a task to continuously detect the longest evolving contour in a video. The slow-fast inference mechanism speeds up the average frame per second by 279% on this dataset with comparable task accuracy using a desktop GPU. We further demonstrate a similar trend on CamVid, a video semantic segmentation dataset.
Bio-inspired parameter reuse: Exploiting inter-frame representation similarity with recurrence for accelerating temporal visual processing
[ "Zuowen Wang", "Longbiao Cheng", "Joachim Ott", "Pehuen Moure", "Shih-Chii Liu" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=dpGns2IHRW
@inproceedings{ crawford2023unicat, title={UniCat: Crafting a Stronger Fusion Baseline for Multimodal Re-Identification}, author={Jennifer Crawford and Haoli Yin and Luke McDermott and Daniel Cummings}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=dpGns2IHRW} }
Multimodal Re-Identification (ReID) is a popular retrieval task that aims to re-identify objects across diverse data streams, prompting many researchers to integrate multiple modalities into a unified representation. While such fusion promises a holistic view, our investigations shed light on potential pitfalls. We uncover that prevailing late-fusion techniques often produce suboptimal latent representations when compared to methods that train modalities in isolation. We argue that this effect is largely due to the inadvertent relaxation of the training objectives on individual modalities when using fusion, what others have termed modality laziness. We present a nuanced point-of-view that this relaxation can lead to certain modalities failing to fully harness available task-relevant information, and yet, offers a protective veil to noisy modalities, preventing them from overfitting to task-irrelevant data. Our findings also show that unimodal concatenation (UniCat) and other late-fusion ensembling of unimodal backbones, when paired with best-known training techniques, exceed the current state-of-the-art performance across several multimodal ReID benchmarks. By unveiling the double-edged sword of "modality laziness", we motivate future research in balancing local modality strengths with global representations.
UniCat: Crafting a Stronger Fusion Baseline for Multimodal Re-Identification
[ "Jennifer Crawford", "Haoli Yin", "Luke McDermott", "Daniel Cummings" ]
Workshop/UniReps
poster
2310.18812
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
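A late-fusion baseline of the unimodal-concatenation flavor discussed above can be sketched as: embed each modality with its own backbone, L2-normalize per modality, concatenate, and retrieve by inner product. The backbones below are random projections and the gallery/query split is synthetic; this illustrates only the fusion step, not the paper's ReID training recipe.

```python
# Sketch of a unimodal-concatenation late-fusion baseline for retrieval.
import numpy as np

rng = np.random.default_rng(0)
n_gallery, n_query = 500, 10
rgb_dim, ir_dim, text_dim, emb_dim = 2048, 2048, 768, 256

def embed(x, proj):
    z = x @ proj
    return z / np.linalg.norm(z, axis=1, keepdims=True)      # per-modality L2 normalization

projs = {m: rng.normal(size=(d, emb_dim)) for m, d in
         [("rgb", rgb_dim), ("ir", ir_dim), ("text", text_dim)]}

def unicat(features):
    """Concatenate normalized unimodal embeddings into one descriptor."""
    return np.concatenate([embed(features[m], projs[m]) for m in projs], axis=1)

gallery = {m: rng.normal(size=(n_gallery, d)) for m, d in
           [("rgb", rgb_dim), ("ir", ir_dim), ("text", text_dim)]}
query = {m: v[:n_query] + 0.05 * rng.normal(size=(n_query, v.shape[1])) for m, v in gallery.items()}

G, Q = unicat(gallery), unicat(query)
ranks = np.argsort(-(Q @ G.T), axis=1)            # inner-product retrieval on the fused descriptor
print("rank-1 matches:", (ranks[:, 0] == np.arange(n_query)).mean())
```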
null
https://openreview.net/forum?id=clMjnlhn55
@inproceedings{ schaeffer2023testing, title={Testing Assumptions Underlying a Unified Theory for the Origin of Grid Cells}, author={Rylan Schaeffer and Mikail Khona and Adrian Bertagnoli and Sanmi Koyejo and Ila R Fiete}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=clMjnlhn55} }
Representing and reasoning about physical space is fundamental to animal survival, and the mammalian lineage expresses a wealth of specialized neural representations that encode space. Grid cells, whose discovery earned a Nobel prize, are a striking example: a grid cell is a neuron that fires if and only if the animal is spatially located at the vertices of a regular triangular lattice that tiles all explored two-dimensional environments. Significant theoretical work has gone into understanding why mammals have learned these particular representations, and recent work has proposed a ``unified theory for the computational and mechanistic origin of grid cells," claiming to answer why the mammalian lineage has learned grid cells. However, the Unified Theory makes a series of highly specific assumptions about the target readouts of grid cells - putatively place cells. In this work, we explicitly identify what these mathematical assumptions are, then test two of the critical assumptions using biological place cell data. At both the population and single-cell levels, we find evidence suggesting that neither of the assumptions are likely true in biological neural representations. These results call the Unified Theory into question, suggesting that biological grid cells likely have a different origin than those obtained in trained artificial neural networks.
Testing Assumptions Underlying a Unified Theory for the Origin of Grid Cells
[ "Rylan Schaeffer", "Mikail Khona", "Adrian Bertagnoli", "Sanmi Koyejo", "Ila R Fiete" ]
Workshop/UniReps
poster
2311.16295
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=aP2a5i1iUf
@inproceedings{ zhao2023understanding, title={Understanding Mode Connectivity via Parameter Space Symmetry}, author={Bo Zhao and Nima Dehmamy and Robin Walters and Rose Yu}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=aP2a5i1iUf} }
It has been observed that the global minima of neural networks are connected by curves on which train and test loss are almost constant. This phenomenon, often referred to as mode connectivity, has inspired various applications such as model ensembling and fine-tuning. However, despite empirical evidence, a theoretical explanation is still lacking. We explore the connectedness of minima through a new approach, parameter space symmetry. By relating the topology of symmetry groups to the topology of minima, we derive the number of connected components of the minimum of full-rank linear networks. In particular, we show that skip connections reduce the number of connected components. We then prove mode connectivity up to permutation for linear networks. We also provide explicit expressions for connecting curves in the minimum induced by symmetry.
Understanding Mode Connectivity via Parameter Space Symmetry
[ "Bo Zhao", "Nima Dehmamy", "Robin Walters", "Rose Yu" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ZphCrdgqKp
@inproceedings{ bao2023channel, title={Channel Vision Transformers: An Image Is Worth C x 16 x 16 Words}, author={Yujia Bao and Srinivasan Sivanandan and Theofanis Karaletsos}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=ZphCrdgqKp} }
Vision Transformer (ViT) has emerged as a powerful architecture in the realm of modern computer vision. However, its application in certain imaging fields, such as microscopy and satellite imaging, presents unique challenges. In these domains, images often contain multiple channels, each carrying semantically distinct and independent information. Furthermore, the model must demonstrate robustness to sparsity in input channels, as they may not be densely available during training or testing. In this paper, we propose a modification to the ViT architecture that enhances reasoning across the input channels and introduce Hierarchical Channel Sampling (HCS) as an additional regularization technique to ensure robustness when only partial channels are presented during test time. Our proposed model, ChannelViT, constructs patch tokens independently from each input channel and utilizes a learnable channel embedding that is added to the patch tokens, similar to positional embeddings. We evaluate the performance of ChannelViT on ImageNet, JUMP-CP (microscopy cell imaging), and So2Sat (satellite imaging). Our results show that ChannelViT outperforms ViT on classification tasks and generalizes well, even when a subset of input channels is used during testing. Across our experiments, HCS proves to be a powerful regularizer, independent of the architecture employed, suggesting itself as a straightforward technique for robust ViT training. Lastly, we find that ChannelViT generalizes effectively even when there is limited access to all channels during training, highlighting its potential for multi-channel imaging under real-world conditions with sparse sensors.
Channel Vision Transformers: An Image Is Worth C x 16 x 16 Words
[ "Yujia Bao", "Srinivasan Sivanandan", "Theofanis Karaletsos" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ZPmx1ncKpr
@inproceedings{ oublal2023discov, title={DisCoV: Disentangling Time Series Representations via Contrastive based \$l\$-Variational Inference}, author={Khalid Oublal and Said Ladjal and David Benhaiem and Emmanuel LE BORGNE and Fran{\c{c}}ois Roueff}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=ZPmx1ncKpr} }
Improving the generalization capabilities of current machine learning models and improving interpretability are major goals of learning disentangled representations of time series. Nevertheless, time-series disentanglement methods have mainly focused on identifying the independent factors of variation in the data. This overlooks that the causal factors underlying real-world data are often not statistically independent. In this paper, we investigate the problem of learning disentangled representations for the electricity consumption of customers’ appliances in the context of Non-Intrusive Load Monitoring (NILM) (or energy disaggregation), which allows users to understand and optimise their consumption in order to reduce their carbon footprint. Our goal is to disentangle the role of each attribute in total aggregated consumption. In contrast to existing methods that assume attribute independence, we recognise correlations between attributes in real-world time series. To meet this challenge, we use weakly supervised contrastive disentangling, facilitating the generalisation of the representation across various correlated scenarios and new households. We show that Disentangling the latent space using Contrastive on Variational inference (DISCOV) can enhance the downstream task. Furthermore, we find that existing metrics to measure disentanglement are inadequate for the specificity of time series data. To bridge such a gap, an alignment time metric has been introduced as a way to assess the quality of disentanglement. We argue that on-going efforts in the domain of NILM need to rely on causal scenarios rather than solely on statistical independence. Code is available at https://oublalkhalid.github.io/DISCOV/.
DISCOV: A Time Series Representations Disentanglement via Contrastive for Non-Intrusive Load Monitoring (NILM)
[ "Khalid Oublal", "Said Ladjal", "David Benhaiem", "Emmanuel LE BORGNE", "François Roueff" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=WuFcps7SNL
@inproceedings{ yu2023mixture, title={Mixture of Multimodal Interaction Experts}, author={Haofei Yu and Paul Pu Liang and Ruslan Salakhutdinov and Louis-Philippe Morency}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=WuFcps7SNL} }
Multimodal machine learning, which studies the information and interactions across various input modalities, has made significant advancements in understanding the relationship between images and descriptive text. Yet, this is just a portion of the potential multimodal interactions in the real world, such as sarcasm conveyed through conflicting utterances and gestures. Notably, the current methods for capturing this shared information often don't extend well to these more nuanced interactions. Current models, in fact, show particular weaknesses with disagreement and synergistic interactions, sometimes performing as low as 50\% in binary classification. In this paper, we address this problem via a new approach called mixture of multimodal interaction experts. This method automatically classifies datapoints from an unlabeled multimodal dataset by their interaction types, then employs specialized models for each specific interaction. Based on our experiments, this approach improves performance on these challenging interactions by more than 10%, leading to an overall increase of 2% for tasks like sarcasm prediction. As a result, interaction quantification not only provides new insights for dataset analysis but also yields simple approaches that obtain state-of-the-art performance.
Mixture of Multimodal Interaction Experts
[ "Haofei Yu", "Paul Pu Liang", "Ruslan Salakhutdinov", "Louis-Philippe Morency" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=WgQZNoQ5AB
@inproceedings{ wu2023invertedattention, title={Inverted-Attention Transformers can Learn Object Representations: Insights from Slot Attention}, author={Yi-Fu Wu and Klaus Greff and Gamaleldin Fathy Elsayed and Michael Curtis Mozer and Thomas Kipf and Sjoerd van Steenkiste}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=WgQZNoQ5AB} }
Visual reasoning is supported by a causal understanding of the physical world, and theories of human cognition suppose that a necessary step to causal understanding is the discovery and representation of high-level entities like objects. Slot Attention is a popular method aimed at object-centric learning, and its popularity has resulted in dozens of variants and extensions. To help understand the core assumptions that lead to successful object-centric learning, we take a step back and identify the minimal set of changes to a standard Transformer architecture to obtain the same performance as the specialized Slot Attention models. We systematically evaluate the performance and scaling behaviour of several ``intermediate'' architectures on seven image and video datasets from prior work. Our analysis reveals that by simply inverting the attention mechanism of Transformers, we obtain performance competitive with state-of-the-art Slot Attention in several domains.
Inverted-Attention Transformers can Learn Object Representations: Insights from Slot Attention
[ "Yi-Fu Wu", "Klaus Greff", "Gamaleldin Fathy Elsayed", "Michael Curtis Mozer", "Thomas Kipf", "Sjoerd van Steenkiste" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
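The core architectural change the abstract above points to, normalizing attention over the slot/query axis rather than over the keys so that slots compete for input tokens, fits in a few lines. The single-head, single-step sketch below is illustrative only; the paper's models wrap considerably more machinery around this operation.

```python
# Sketch of "inverted" cross-attention: unlike standard attention (softmax over keys),
# the softmax is taken over the query/slot axis, so slots compete for each input token.
import torch
import torch.nn.functional as F

def standard_attention(q, k, v):
    attn = F.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)   # normalize over keys
    return attn @ v

def inverted_attention(q, k, v, eps=1e-8):
    logits = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5                    # (batch, slots, tokens)
    attn = F.softmax(logits, dim=-2)                                         # normalize over slots
    attn = attn / (attn.sum(dim=-1, keepdim=True) + eps)                     # weighted mean over tokens
    return attn @ v

tokens = torch.randn(2, 196, 64)        # e.g. 14x14 patch embeddings
slots = torch.randn(2, 7, 64)           # 7 object slots as queries
print(standard_attention(slots, tokens, tokens).shape)    # torch.Size([2, 7, 64])
print(inverted_attention(slots, tokens, tokens).shape)    # torch.Size([2, 7, 64])
```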
null
https://openreview.net/forum?id=WcfVyzzJOS
@inproceedings{ tucker2023increasing, title={Increasing Brain-{LLM} Alignment via Information-Theoretic Compression}, author={Mycal Tucker and Greta Tuckute}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=WcfVyzzJOS} }
Recent work has discovered similarities between learned representations in large language models (LLMs) and human brain activity during language processing. However, it remains unclear what information LLM and brain representations share. In this work, inspired by a notion that brain data may include information not captured by LLMs, we apply an information bottleneck method to generate compressed representations of fMRI data. For certain brain regions in the frontal cortex, we find that compressing brain representations by a small amount increases their similarity to both BERT and GPT2 embeddings. Thus, our method not only improves LLM-brain alignment scores but also suggests important characteristics about the amount of information captured by each representation scheme.
Increasing Brain-LLM Alignment via Information-Theoretic Compression
[ "Mycal Tucker", "Greta Tuckute" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=TOp8uT3DZ9
@inproceedings{ sabae2023noposeneus, title={NoPose-NeuS: Jointly Optimizing Camera Poses with Neural Implicit Surfaces for Multi-view Reconstruction}, author={Mohamed Shawky Sabae and Hoda A. Baraka and Mayada Hadhoud}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=TOp8uT3DZ9} }
Learning neural implicit surfaces from volume rendering has become popular for multi-view reconstruction. Neural surface reconstruction approaches can recover complex 3D geometry that are difficult for classical Multi-view Stereo (MVS) approaches, such as non-Lambertian surfaces and thin structures. However, one key assumption for these methods is knowing accurate camera parameters for the input multi-view images, which are not always available. In this paper, we present NoPose-NeuS, a neural implicit surface reconstruction method that extends NeuS to jointly optimize camera poses with the geometry and color networks. We encode the camera poses as a multi-layer perceptron (MLP) and introduce two additional losses, which are multi-view feature consistency and rendered depth losses, to constrain the learned geometry for better estimated camera poses and scene surfaces. Extensive experiments on the DTU dataset show that the proposed method can estimate relatively accurate camera poses, while maintaining a high surface reconstruction quality with 0.89 mean Chamfer distance.
NoPose-NeuS: Jointly Optimizing Camera Poses with Neural Implicit Surfaces for Multi-view Reconstruction
[ "Mohamed Shawky Sabae", "Hoda A. Baraka", "Mayada Hadhoud" ]
Workshop/UniReps
poster
2312.15238
[ "" ]
https://huggingface.co/papers/2312.15238
0
1
0
3
1
[]
[]
[]
null
https://openreview.net/forum?id=Qms9kWnbpP
@inproceedings{ singh2023expressivity, title={Expressivity of Spiking Neural Networks through the Spike Response Model}, author={Manjot Singh and Adalbert Fono and Gitta Kutyniok}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=Qms9kWnbpP} }
This article studies the expressive power of spiking neural networks with firing-time-based information encoding, highlighting their potential for future energy-efficient AI applications when deployed on neuromorphic hardware. The computational power of a network of spiking neurons has already been studied via their capability of approximating any continuous function. By using the Spike Response Model as a mathematical model of a spiking neuron and assuming a linear response function, we delve deeper into this analysis and prove that spiking neural networks generate continuous piecewise linear mappings. We also show that they can emulate any multi-layer (ReLU) neural network with similar complexity. Furthermore, we prove that the maximum number of linear regions generated by a spiking neuron scales exponentially with respect to the input dimension, a characteristic that distinguishes it significantly from an artificial (ReLU) neuron. Our results further extend the understanding of the approximation properties of spiking neural networks and open up new avenues where spiking neural networks can be deployed instead of artificial neural networks without any performance loss.
Expressivity of Spiking Neural Networks through the Spike Response Model
[ "Manjot Singh", "Adalbert Fono", "Gitta Kutyniok" ]
Workshop/UniReps
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=PbvPwiySXz
@inproceedings{ alt{\i}nta{\c{s}}2023disentangling, title={Disentangling Linear Mode Connectivity}, author={G{\"u}l Sena Alt{\i}nta{\c{s}} and Gregor Bachmann and Lorenzo Noci and Thomas Hofmann}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=PbvPwiySXz} }
Linear mode-connectivity (LMC) (or lack thereof) is one of the intriguing characteristics of neural network loss landscapes. While empirically well established, it unfortunately still lacks a proper theoretical understanding. Even worse, although empirical data points abound, a systematic study of when networks exhibit LMC is largely missing in the literature. In this work we aim to close this gap. We explore how LMC is affected by three factors: (1) architecture (sparsity, weight-sharing), (2) training strategy (optimization setup) as well as (3) the underlying dataset. We place particular emphasis on minimal but non-trivial settings, removing as much unnecessary complexity as possible. We believe that our insights can guide future theoretical works on uncovering the inner workings of LMC.
Disentangling Linear Mode Connectivity
[ "Gül Sena Altıntaş", "Gregor Bachmann", "Lorenzo Noci", "Thomas Hofmann" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Nu3t6XX103
@inproceedings{ daheim2023model, title={Model Merging by Gradient Matching}, author={Nico Daheim and Thomas M{\"o}llenhoff and Edoardo Ponti and Iryna Gurevych and Mohammad Emtiyaz Khan}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=Nu3t6XX103} }
Models trained on different datasets can be merged by a weighted averaging of their parameters, but why does it work and when can it fail? Here, we connect the inaccuracy of weighted averaging to mismatches in the gradients and propose a new uncertainty-based scheme to improve the performance by reducing the mismatch. The connection also reveals implicit assumptions in other schemes such as averaging, task arithmetic, and Fisher-weighted averaging.
Model Merging by Gradient Matching
[ "Nico Daheim", "Thomas Möllenhoff", "Edoardo Ponti", "Iryna Gurevych", "Mohammad Emtiyaz Khan" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=NSfecLF4Za
@inproceedings{ wu2023objectcentric, title={Object-Centric Semantic Vector Quantization}, author={Yi-Fu Wu and Minseung Lee and Sungjin Ahn}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=NSfecLF4Za} }
Neural discrete representations are crucial components of modern neural networks. However, their main limitation is that the primary strategies such as VQ-VAE can only provide representations at the patch level. Therefore, one of the main goals of representation learning, acquiring conceptual, semantic, and compositional abstractions such as the color and shape of an object, remains elusive. In this paper, we present the first approach to semantic neural discrete representation learning. The proposed model, called Semantic Vector-Quantized Variational Autoencoder (SVQ), leverages recent advances in unsupervised object-centric learning to address this limitation. Specifically, we observe that a simple approach quantizing at the object level poses a significant challenge and propose constructing scene representations hierarchically, from low-level discrete concept schemas to object representations. Additionally, we suggest a novel method for training a prior over these semantic representations, enabling the ability to generate images following the underlying data distribution, which is lacking in most object-centric models. In experiments on various 2D and 3D object-centric datasets, we find that our model achieves superior generation performance compared to non-semantic vector quantization methods such as VQ-VAE and previous object-centric generative models. Furthermore, we find that the semantic discrete representations can solve downstream scene understanding tasks that require reasoning about the properties of different objects in the scene.
Object-Centric Semantic Vector Quantization
[ "Yi-Fu Wu", "Minseung Lee", "Sungjin Ahn" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=LhV3Ex8fky
@inproceedings{ acosta2023evaluation, title={Evaluation of Representational Similarity Scores Across Human Visual Cortex}, author={Francisco Acosta and Colin Conwell and Sophia Sanborn and David A. Klindt and Nina Miolane}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=LhV3Ex8fky} }
We investigate several popular methods for quantifying the similarity between neural representations applied to a large-scale fMRI dataset of human ventral visual cortex. We focus on representational geometry as a framework for comparing various functionally-defined high-level regions of interest (ROIs) in the ventral stream. We benchmark Representational Similarity Analysis, Centered Kernel Alignment, and Generalized Shape Metrics. We explore how well the geometry implied by pairwise representational dissimilarity scores produced by each method matches the 2D anatomical geometry of visual cortex. Our results suggest that while these methods yield similar outcomes, Shape Metrics provide distances between representations whose relation to the anatomical geometry is most invariant across subjects. Our work establishes a criterion with which to compare methods for quantifying representational similarity with implications for studying the anatomical organization of high-level ventral visual cortex.
Evaluation of Representational Similarity Scores Across Human Visual Cortex
[ "Francisco Acosta", "Colin Conwell", "Sophia Sanborn", "David A. Klindt", "Nina Miolane" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=LVYEjD25tZ
@inproceedings{ lu2023supervising, title={Supervising Variational Autoencoder Latent Representations with Language}, author={Thomas Lu and Aboli Marathe and Ada Martin}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=LVYEjD25tZ} }
Supervising latent representations of data is of great interest for modern multi-modal generative machine learning. In this work, we propose two new methods to use text to condition the latent representations of a VAE, and evaluate them on a novel conditional image-generation benchmark task. We find that the applied methods can be used to generate highly accurate reconstructed images through language querying with minimal compute resources. Our methods are quantitatively successful at conforming to textually-supervised attributes of an image while keeping unsupervised attributes constant. More broadly, we present critical observations on disentanglement between supervised and unsupervised properties of images and identify common barriers to effective disentanglement.
Supervising Variational Autoencoder Latent Representations with Language
[ "Thomas Lu", "Aboli Marathe", "Ada Martin" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=LSSiDy7fG1
@inproceedings{ schneider2023implicit, title={Implicit Representations for Image Segmentation}, author={Jan Philipp Schneider and Mishal Fatima and Jovita Lukasik and Andreas Kolb and Margret Keuper and Michael Moeller}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=LSSiDy7fG1} }
Image segmentation has greatly advanced over the past ten years. Yet, even the most recent techniques face difficulties producing good results in challenging situations, e.g., if training data are scarce, out-of-distribution examples need to be segmented, or if objects are occluded. In such situations, the inclusion of (geometric) constraints can improve the segmentation quality significantly. In this paper, we study the constraint that the segmented region be convex. Unlike prior work that encourages such a property with computationally expensive penalties on segmentation masks represented explicitly on a grid of pixels, our work is the first to consider an implicit representation. Specifically, we represent the segmentation as a parameterized function that maps spatial coordinates to the likelihood of a pixel belonging to the fore- or background. This enables us to provably ensure the convexity of the segmented regions with the help of input convex neural networks. Numerical experiments demonstrate how to encourage explicit and implicit representations to match in order to benefit from the convexity constraints in several challenging segmentation scenarios.
Implicit Representations for Image Segmentation
[ "Jan Philipp Schneider", "Mishal Fatima", "Jovita Lukasik", "Andreas Kolb", "Margret Keuper", "Michael Moeller" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=KeiQNpb7sv
@inproceedings{ nagaraj2023ecological, title={Ecological data and objectives align deep neural network representations with humans}, author={Akash Nagaraj and Alekh Karkada Ashok and Drew Linsley and Francis E Lewis and Peisen Zhou and Thomas Serre}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=KeiQNpb7sv} }
The many successes of deep neural networks (DNNs) over the past decade have largely been driven by computational scale rather than insights from biological intelligence. While DNNs have nevertheless been surprisingly adept at explaining behavioral and neural recordings from humans, there is a growing number of reports indicating that DNNs are becoming progressively worse models of human vision as they improve on standard computer vision benchmarks. Here, we provide evidence that one path towards improving the alignment of DNNs with human vision is to train them with data and objective functions that more closely resemble those relied on by brains. We find that DNNs trained to capture the causal structure of large spatiotemporal object datasets learn generalizable object representations that exhibit smooth equivariance to 3-Dimensional (out-of-plane) variations in object pose and are predictive of human decisions and reaction times on popular psychophysics stimuli. Our work identifies novel data diets and objective functions that better align DNN vision with humans and can be easily scaled to generate the next generation of DNNs that behave as humans do.
Ecological data and objectives align deep neural network representations with humans
[ "Akash Nagaraj", "Alekh Karkada Ashok", "Drew Linsley", "Francis E Lewis", "Peisen Zhou", "Thomas Serre" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=KWYQu2TxhE
@inproceedings{ gahl2023visual, title={Visual Expertise Explains Image Inversion Effects}, author={Martha Gahl and Shubham Kulkarni and Nikhil Pathak and Alex Russell and Garrison W. Cottrell}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=KWYQu2TxhE} }
We present an anatomically-inspired neurocomputational model, including a foveated retina and the log-polar mapping from the visual field to the primary visual cortex, that recreates image inversion effects long seen in psychophysical studies. We show that visual expertise, the ability to discriminate between subordinate-level categories, changes the performance of the model on inverted images. We first explore face discrimination, which, in humans, relies on configural information. The log-polar transform disrupts configural information in an inverted image and leaves featural information relatively unaffected. We suggest this is responsible for the degradation of performance with inverted faces. We then recreate the effect with other subordinate-level category discriminators and show that the inversion effect arises as a result of visual expertise, where configural information becomes relevant as more identities are learned at the subordinate level. Our model matches the classic result: faces suffer more from inversion than mono-oriented objects, which are more disrupted than non-mono-oriented objects when objects are only familiar at a basic level, and it simultaneously shows that expert-level discrimination of other subordinate-level categories responds to inversion similarly to face expertise.
Visual Expertise Explains Image Inversion Effects
[ "Martha Gahl", "Shubham Kulkarni", "Nikhil Pathak", "Alex Russell", "Garrison W. Cottrell" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=JYww68b9PA
@inproceedings{ lion2023how, title={How Good is a Single Basin?}, author={Kai Lion and Gregor Bachmann and Lorenzo Noci and Thomas Hofmann}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=JYww68b9PA} }
The multi-modal nature of neural loss landscapes is often considered to be the main driver behind the empirical success of deep ensembles. In this work, we probe this belief by constructing various "connected" ensembles which are restricted to lie in the same basin. Through our experiments, we demonstrate that increased connectivity indeed negatively impacts performance. However, when incorporating the knowledge from other basins implicitly through distillation, we show that the gap in performance can be mitigated by re-discovering (multi-basin) deep ensembles in a single basin. Thus, we conjecture that while the extra-basin knowledge is at least partially present in any given basin, it cannot be easily harnessed without learning it from other basins.
How Good is a Single Basin?
[ "Kai Lion", "Gregor Bachmann", "Lorenzo Noci", "Thomas Hofmann" ]
Workshop/UniReps
poster
2402.03187
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=JFnjdl2Vc2
@inproceedings{ bigelow2023subjective, title={Subjective Randomness and In-Context Learning}, author={Eric J Bigelow and Ekdeep Singh Lubana and Robert P. Dick and Hidenori Tanaka and Tomer Ullman}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=JFnjdl2Vc2} }
Large language models (LLMs) exhibit intricate capabilities, often achieving high performance on tasks they were not explicitly trained for. The precise nature of LLM capabilities is often unclear, with different prompts eliciting different capabilities, especially when used with in-context learning (ICL). We propose a "Cognitive Interpretability" framework that enables us to analyze ICL dynamics to understand latent concepts underlying LLMs' behavioral patterns. This provides a more nuanced understanding than post-hoc evaluation benchmarks, but does not require observing model internals as a mechanistic interpretation would. Inspired by the cognitive science of human randomness perception, we use random binary sequences as context and study dynamics of ICL by manipulating properties of context data, such as sequence length. In the latest GPT-3.5+ models, we find emergent abilities to generate pseudo-random numbers and learn basic formal languages, with striking ICL dynamics where model outputs transition sharply from pseudo-random behaviors to deterministic repetition.
Subjective Randomness and In-Context Learning
[ "Eric J Bigelow", "Ekdeep Singh Lubana", "Robert P. Dick", "Hidenori Tanaka", "Tomer Ullman" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=J1kzuH90gg
@inproceedings{ woo2023deep, title={Deep Multimodal Emotion Recognition using Modality Aware Attention Network for Unifying Representations in Neural Models}, author={Sungpil Woo and MUHAMMAD ZUBAIR and Sunhwan Lim and Daeyoung Kim}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=J1kzuH90gg} }
This paper introduces a multi-modal emotion recognition system aimed at enhancing emotion recognition by integrating representations from physiological signals. To accomplish this goal, we introduce a modality aware attention network to extract emotion-specific features by influencing and aligning the representation spaces of various modalities into a unified entity. Through a series of experiments and visualizations conducted on the AMIGO dataset, we demonstrate the efficacy of our proposed methodology for emotion classification, highlighting its capability to provide comprehensive representations of physiological signals.
Deep Multimodal Emotion Recognition using Modality Aware Attention Network for Unifying Representations in Neural Models
[ "Sungpil Woo", "MUHAMMAD ZUBAIR", "Sunhwan Lim", "Daeyoung Kim" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=I9dkBah6Z9
@inproceedings{ gedon2023on, title={On Feature Learning of Recursive Feature Machines and Automatic Relevance Determination}, author={Daniel Gedon and Amirhesam Abedsoltan and Thomas B. Sch{\"o}n and Mikhail Belkin}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=I9dkBah6Z9} }
Feature learning is a crucial element for the performance of machine learning models. Recently, the exploration of feature learning in the context of kernel methods has led to the introduction of Recursive Feature Machines (RFMs). In this work, we connect diagonal RFMs to Automatic Relevance Determination (ARD) from the Gaussian process literature. We demonstrate that diagonal RFMs, similar to ARD, serve as a weighted covariate selection technique. However, they are trained using different paradigms: RFMs use recursive iterations of the so-called Average Gradient Outer Product, while ARD employs maximum likelihood estimation. Our experiments show that while the learned features in both models correlate highly across various tabular datasets, this correlation is lower for other datasets. Furthermore, we demonstrate that the RFM effectively captures correlation between covariates, and we present instances where the RFM outperforms both ARD and diagonal RFM.
On Feature Learning of Recursive Feature Machines and Automatic Relevance Determination
[ "Daniel Gedon", "Amirhesam Abedsoltan", "Thomas B. Schön", "Mikhail Belkin" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=H228wa4pfk
@inproceedings{ marczak2023revisiting, title={Revisiting Supervision for Continual Representation Learning}, author={Daniel Marczak and Sebastian Cygert and Tomasz Trzcinski and Bart{\l}omiej Twardowski}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=H228wa4pfk} }
In the field of continual learning, models are designed to learn tasks one after the other. While most research has centered on supervised continual learning, recent studies have highlighted the strengths of self-supervised continual representation learning. The improved transferability of representations built with self-supervised methods is often associated with the role played by the multi-layer perceptron projector. In this work, we depart from this observation and reexamine the role of supervision in continual representation learning. We reckon that additional information, such as human annotations, should not deteriorate the quality of representations. Our findings show that supervised models, when enhanced with a multi-layer perceptron head, can outperform self-supervised models in continual representation learning.
Revisiting Supervision for Continual Representation Learning
[ "Daniel Marczak", "Sebastian Cygert", "Tomasz Trzcinski", "Bartłomiej Twardowski" ]
Workshop/UniReps
poster
2311.13321
[ "https://github.com/danielm1405/sl-vs-ssl-cl" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=GpBQYrTEPG
@inproceedings{ zhao2023role, title={Role Taxonomy of Units in Deep Neural Networks}, author={Yang Zhao and Hao Zhang and Xiuyuan Hu}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=GpBQYrTEPG} }
Identifying the role of network units in deep neural networks (DNNs) is critical in many respects, including understanding the mechanisms of DNNs and building basic connections between deep learning and neuroscience. However, it remains unclear which roles units play in DNNs with different generalization abilities. To this end, we give a role taxonomy of units in DNNs, where units are categorized into four types in terms of their functional preference on the training set and the testing set separately. We show that the ratios of the four categories are highly associated with the generalization ability of DNNs from two distinct perspectives, based on which we identify indicators of DNNs that generalize well.
Role Taxonomy of Units in Deep Neural Networks
[ "Yang Zhao", "Hao Zhang", "Xiuyuan Hu" ]
Workshop/UniReps
poster
2011.00789
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=GIuSksGwEp
@inproceedings{ robinson2023a, title={A sparse null code emerges in deep neural networks}, author={Brian S Robinson and Nathan Drenkow and Colin Conwell and Michael Bonner}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=GIuSksGwEp} }
The internal representations of deep vision models are often assumed to encode specific image features, such as contours, textures, and object parts. However, it is possible for deep networks to learn highly abstract representations that may not be linked to any specific image feature. Here we present evidence for one such abstract representation in transformers and modern convolutional architectures that appears to serve as a null code, indicating image regions that are non-diagnostic for the object class. These null codes are both statistically and qualitatively distinct from the more commonly reported feature-related codes of vision models. Specifically, these null codes have several distinct characteristics: they are highly sparse, they have a single unique activation pattern for each network, they emerge abruptly at intermediate network depths, and they are activated in a feature-independent manner by weakly informative image regions, such as backgrounds. Together, these findings reveal a new class of highly abstract representations in deep vision models: sparse null codes that seem to indicate the absence of relevant features.
A sparse null code emerges in deep neural networks
[ "Brian S Robinson", "Nathan Drenkow", "Colin Conwell", "Michael Bonner" ]
Workshop/UniReps
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=FbFyO1r7rV
@inproceedings{ jain2023how, title={How does fine-tuning affect your model? Mechanistic analysis on procedural tasks}, author={Samyak Jain and Robert Kirk and Ekdeep Singh Lubana and Robert P. Dick and Hidenori Tanaka and Tim Rockt{\"a}schel and Edward Grefenstette and David Krueger}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=FbFyO1r7rV} }
Fine-tuning large pre-trained models has become the *de facto* strategy for developing models that are safe to deploy. However, there has been little work that explains how fine-tuning alters the underlying capabilities learnt by a model during pretraining: does fine-tuning yield entirely novel capabilities or does it just modulate existing ones? We address this question empirically in *synthetic* settings with mechanistic interpretability tools (e.g., network pruning and probing) to understand how the model's underlying capabilities are changing. Our extensive analysis of the effects of fine-tuning shows: (i) fine-tuning rarely alters the underlying model capabilities; (ii) a minimal transformation, which we call a 'wrapper', is typically learned on top of the underlying model capabilities; and (iii) further fine-tuning on a task where such wrapped capabilities are relevant leads to sample-efficient "revival" of the capability, i.e., the model begins reusing this capability in a few gradient steps. *This indicates practitioners can unintentionally remove a model's safety wrapper by merely fine-tuning it on a superficially unrelated task.*
How does fine-tuning affect your model? Mechanistic analysis on procedural tasks
[ "Samyak Jain", "Robert Kirk", "Ekdeep Singh Lubana", "Robert P. Dick", "Hidenori Tanaka", "Tim Rocktäschel", "Edward Grefenstette", "David Krueger" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=FO646rF6MM
@inproceedings{ dhuliawala2023variational, title={Variational Classification}, author={Shehzaad Zuzar Dhuliawala and Mrinmaya Sachan and Carl Allen}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=FO646rF6MM} }
We present *variational classification* (VC), a latent variable generalisation of neural network softmax classification under cross-entropy loss. Our approach provides a novel probabilistic interpretation of the familiar softmax classification model, to which it relates much as variational autoencoders relate to deterministic autoencoders. We derive a training objective based on the evidence lower bound (ELBO) that is non-trivial to optimize, and an adversarial approach to maximise it. We reveal an inherent inconsistency within softmax classification that VC addresses, while also allowing flexible choices of distributions in the latent space in place of assumptions implicit in standard softmax classifiers. Empirical evaluation demonstrates that VC maintains accuracy while improving properties such as calibration and adversarial robustness, particularly under distribution shift and low data settings. By explicitly considering representations learned by supervised methods, we offer the prospect of the principled merging of supervised learning with other representation learning methods, e.g. contrastive learning, using a common encoder architecture.
Variational Classification
[ "Shehzaad Zuzar Dhuliawala", "Mrinmaya Sachan", "Carl Allen" ]
Workshop/UniReps
poster
2305.10406
[ "https://github.com/shehzaadzd/variational-classification" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=F0Wp4KESt7
@inproceedings{ hemker2023hybrid, title={Hybrid Early Fusion for Multi-Modal Biomedical Representations}, author={Konstantin Hemker and Nikola Simidjievski and Mateja Jamnik}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=F0Wp4KESt7} }
Technological advances in medical data collection such as high-resolution histopathology and high-throughput genomic sequencing have contributed to the rising requirement for multi-modal biomedical modelling, specifically for image, tabular, and graph data. Most multi-modal deep learning approaches use modality-specific architectures that are trained separately and cannot capture the crucial cross-modal information that motivates the integration of different data sources. This paper presents the Hybrid Early-fusion Attention Learning Network (HEALNet) – a flexible multi-modal fusion architecture, which a) preserves modality-specific structural information, b) captures the cross-modal interactions and structural information in a shared latent space, c) can effectively handle missing modalities during training and inference, and d) enables intuitive model inspection by learning on the raw data input instead of opaque embeddings. We conduct multi-modal survival analysis on Whole Slide Images and Multi-omic data on four cancer cohorts of The Cancer Genome Atlas (TCGA). HEALNet achieves state-of-the-art performance, substantially improving over both uni-modal and recent multi-modal baselines, whilst being robust in scenarios with missing modalities.
Hybrid Early Fusion for Multi-Modal Biomedical Representations
[ "Konstantin Hemker", "Nikola Simidjievski", "Mateja Jamnik" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=EHvgtRcrix
@inproceedings{ hong2023randomly, title={Randomly Weighted Neuromodulation in Neural Networks Facilitates Learning of Manifolds Common Across Tasks}, author={Jinyung Hong and Theodore P. Pavlic}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=EHvgtRcrix} }
Geometric Sensitive Hashing functions, a family of Locality-Sensitive Hashing functions, are neural network models that learn class-specific manifold geometry in supervised learning. However, given a set of supervised learning tasks, understanding the manifold geometries that can represent each task and the kinds of relationships between the tasks based on them has received little attention. We explore a formalization of this question by considering a generative process where each task is associated with a high-dimensional manifold, which can be done in brain-like models with neuromodulatory systems. Following this formulation, we define Task-specific Geometric Sensitive Hashing and show that a randomly weighted neural network with a neuromodulation system can realize this function.
Randomly Weighted Neuromodulation in Neural Networks Facilitates Learning of Manifolds Common Across Tasks
[ "Jinyung Hong", "Theodore P. Pavlic" ]
Workshop/UniReps
oral
2401.02437
[ "https://github.com/pavliclab/neurips2023-unireps-hong-tgsh-randomly_weighted_neuromodulation_in_neural_networks" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=C20dVLBQz0
@inproceedings{ mcdermott2023linear, title={Linear Mode Connectivity in Sparse Neural Networks}, author={Luke McDermott and Daniel Cummings}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=C20dVLBQz0} }
With the rise in interest in sparse neural networks, we study how neural network pruning with synthetic data leads to sparse networks with unique training properties. We find that distilled data, a synthetic summarization of the real data, paired with Iterative Magnitude Pruning (IMP) unveils a new class of sparse networks that are more stable to SGD noise on the real data than either the dense model or subnetworks found with real data in IMP. That is, synthetically chosen subnetworks often train to the same minima, or exhibit linear mode connectivity. We study this through linear interpolation, loss landscape visualizations, and measuring the diagonal of the Hessian. While dataset distillation as a field is still young, we find that these properties lead to synthetic subnetworks matching the performance of traditional IMP with up to 150x fewer training points in settings where distilled data applies.
Linear Mode Connectivity in Sparse Neural Networks
[ "Luke McDermott", "Daniel Cummings" ]
Workshop/UniReps
poster
2310.18769
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Bi8mfzjIJ3
@inproceedings{ bizeul2023simvae, title={Sim{VAE}: Narrowing the gap between Discriminative \& Generative Representation Learning}, author={Alice Bizeul and Carl Allen}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=Bi8mfzjIJ3} }
Self-supervised representation learning is a powerful paradigm that leverages the relationship between semantically similar data, such as augmentations, extracts of an image or sound clip, or multiple views/modalities. Recent methods, e.g. SimCLR, CLIP and DINO, have made significant strides, yielding representations that achieve state-of-the-art results on multiple downstream tasks. Though often intuitive, a comprehensive theoretical understanding of their underlying mechanisms or of _what_ they learn remains elusive. Meanwhile, generative approaches, such as variational autoencoders (VAEs), fit a specific latent variable model and have principled appeal, but lag significantly in terms of performance. We present a theoretical analysis of self-supervised discriminative methods and a graphical model that reflects the assumptions they implicitly make and unifies these methods. We show that fitting this model under an ELBO objective improves representations over previous VAE methods on several common benchmarks, narrowing the gap to discriminative methods, and can also preserve information lost by discriminative approaches. This work brings new theoretical insight to modern machine learning practice.
SimVAE: Narrowing the gap between Discriminative & Generative Representation Learning
[ "Alice Bizeul", "Carl Allen" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=BZMzFReaiK
@inproceedings{ lowet2023distributional, title={Distributional Reinforcement Learning in the Mammalian Brain}, author={Adam S Lowet and Qiao Zheng and Melissa Meng and Sara Matias and Jan Drugowitsch and Naoshige Uchida}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=BZMzFReaiK} }
Distributional reinforcement learning (dRL) — learning to predict not just the average return but the entire probability distribution of returns — has achieved impressive performance across a wide range of benchmark machine learning tasks. In vertebrates, the basal ganglia strongly encodes mean value and has long been thought to implement RL, but little is known about whether, where, and how populations of neurons in this circuit encode information about higher-order moments of reward distributions. To fill this gap, we used Neuropixels probes to acutely record striatal activity from well-trained, water-restricted mice performing a classical conditioning task. Across several measures of representational distance, odors associated with the same reward distribution were encoded more similarly to one another than to odors associated with the same mean reward but different reward variance, as predicted by dRL but not traditional RL. Optogenetic manipulations and computational modeling suggested that genetically distinct populations of neurons encoded the left and right tails of these distributions. Together, these results reveal a remarkable degree of convergence between dRL and the mammalian brain and hint at further biological specializations of the same overarching algorithm.
Distributional Reinforcement Learning in the Mammalian Brain
[ "Adam S Lowet", "Qiao Zheng", "Melissa Meng", "Sara Matias", "Jan Drugowitsch", "Naoshige Uchida" ]
Workshop/UniReps
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=BJ2SmiCZrM
@inproceedings{ khosla2023soft, title={Soft Matching Distance: A metric on neural representations that captures single-neuron tuning}, author={Meenakshi Khosla and Alex H Williams}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=BJ2SmiCZrM} }
Common measures of neural representational (dis)similarity are designed to be insensitive to rotations and reflections of the neural activation space. Motivated by the premise that the tuning of individual units may be important, there has been recent interest in developing stricter notions of representational (dis)similarity that require neurons to be individually matched across networks. When two networks have the same size (i.e. same number of neurons), a distance metric can be formulated by optimizing over neuron index permutations to maximize tuning curve alignment. However, it is not clear how to generalize this metric to measure distances between networks with different sizes. Here, we leverage a connection to optimal transport theory to derive a natural generalization based on "soft" permutations. The resulting metric is symmetric, satisfies the triangle inequality, and can be interpreted as a Wasserstein distance between two empirical distributions. Further, our proposed metric avoids counter-intuitive outcomes suffered by alternative approaches, and captures complementary geometric insights into neural representations that are entirely missed by rotation-invariant metrics.
Soft Matching Distance: A metric on neural representations that captures single-neuron tuning
[ "Meenakshi Khosla", "Alex H Williams" ]
Workshop/UniReps
oral
2311.09466
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=AxRD2FF7aD
@inproceedings{ scheidt2023universality, title={Universality of intrinsic dimension of latent representations across models}, author={Teresa Scheidt and Lars Kai Hansen}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=AxRD2FF7aD} }
While state-of-the-art transformer networks use several hundred latent variables per layer, it has been shown that these features can actually be represented by relatively low-dimensional manifolds. The intrinsic dimension is a geometrical property of the manifold that latent representations populate, viz., the minimal number of parameters needed to describe the representations. In this work, we compare the intrinsic dimensions of three image transformer networks for classes of the CIFAR-10 and CIFAR-100 datasets. We find compelling evidence that the intrinsic dimensions differ among classes but are universal across networks. This universality persists across different pretraining strategies, fine-tuning and different model sizes. Our results strengthen the hypothesis that different models learn similar representations of data and suggest that further investigation of intrinsic dimension could yield more insights into the universality of latent representations.
Universality of intrinsic dimension of latent representations across models
[ "Teresa Scheidt", "Lars Kai Hansen" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=AE11s22jea
@inproceedings{ masarczyk2023on, title={On consequences of finetuning on data with highly discriminative features}, author={Wojciech Masarczyk and Tomasz Trzcinski and Mateusz Ostaszewski}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=AE11s22jea} }
In the era of transfer learning, training neural networks from scratch is becoming obsolete. Transfer learning leverages prior knowledge for new tasks, conserving computational resources. While its advantages are well-documented, we uncover a notable drawback: networks tend to prioritize basic data patterns, forsaking valuable pre-learned features. We term this behavior "feature erosion" and analyze its impact on network performance and internal representations.
On consequences of finetuning on data with highly discriminative features
[ "Wojciech Masarczyk", "Tomasz Trzcinski", "Mateusz Ostaszewski" ]
Workshop/UniReps
poster
2310.19537
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=9I2jTPhCku
@inproceedings{ isik2023an, title={An Information-Theoretic Understanding of Maximum Manifold Capacity Representations}, author={Berivan Isik and Victor Lecomte and Rylan Schaeffer and Yann LeCun and Mikail Khona and Ravid Shwartz-Ziv and Sanmi Koyejo and Andrey Gromov}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=9I2jTPhCku} }
Maximum Manifold Capacity Representations (MMCR) is a recent multi-view self-supervised learning (MVSSL) method that matches or surpasses other leading MVSSL methods. MMCR is interesting for at least two reasons. Firstly, MMCR is an oddity in the zoo of MVSSL methods: it is not (explicitly) contrastive, applies no masking, performs no clustering, leverages no distillation, and does not (explicitly) reduce redundancy. Secondly, while many self-supervised learning (SSL) methods originate in information theory, MMCR distinguishes itself by claiming a different origin: a statistical mechanical characterization of the geometry of linear separability of data manifolds. However, given the rich connections between statistical mechanics and information theory, and given recent work showing how many SSL methods can be understood from an information-theoretic perspective, we conjecture that MMCR can be similarly understood from an information-theoretic perspective. In this paper, we leverage tools from high dimensional probability and information theory to demonstrate that an optimal solution to MMCR's nuclear norm-based objective function is the same optimal solution that maximizes a well-known lower bound on mutual information.
An Information-Theoretic Understanding of Maximum Manifold Capacity Representations
[ "Berivan Isik", "Victor Lecomte", "Rylan Schaeffer", "Yann LeCun", "Mikail Khona", "Ravid Shwartz-Ziv", "Sanmi Koyejo", "Andrey Gromov" ]
Workshop/UniReps
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=5PYcTwpc6Z
@inproceedings{ dominici2023sharcs, title={{SHARCS}: Shared Concept Space for Explainable Multimodal Learning}, author={Gabriele Dominici and Pietro Barbiero and Lucie Charlotte Magister and Pietro Lio and Nikola Simidjievski}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=5PYcTwpc6Z} }
Multimodal learning is an essential paradigm for addressing complex real-world problems, where individual data modalities are typically insufficient for accurately solving a given modelling task. While various deep learning approaches have successfully addressed these challenges, their reasoning process is often opaque; limiting the capabilities for a principled explainable cross-modal analysis and any domain-expert intervention. In this paper, we introduce SHARCS (SHARed Concept Space), a novel concept-based approach for explainable multimodal learning. SHARCS learns and maps interpretable concepts from different heterogeneous modalities into a single unified concept-manifold, which leads to an intuitive projection of semantically similar cross-modal concepts. We demonstrate that such an approach can lead to inherently explainable task predictions while also improving downstream predictive performance. Moreover, we show that SHARCS can operate and significantly outperform other approaches in practically significant scenarios, such as retrieval of missing modalities and cross-modal explanations. Our approach is model agnostic and easily applicable to different types (and number) of modalities, thus advancing the development of effective, interpretable, and trustworthy multimodal approaches.
SHARCS: Shared Concept Space for Explainable Multimodal Learning
[ "Gabriele Dominici", "Pietro Barbiero", "Lucie Charlotte Magister", "Pietro Lio", "Nikola Simidjievski" ]
Workshop/UniReps
poster
[ "https://github.com/gabriele-dominici/SHARCS" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=4M0hhzUGxp
@inproceedings{ krishna2023sufficient, title={Sufficient conditions for offline reactivation in recurrent neural networks}, author={Nanda H Krishna and Colin Bredenberg and Daniel Levenstein and Blake Aaron Richards and Guillaume Lajoie}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=4M0hhzUGxp} }
During periods of quiescence, such as sleep, neural activity in many brain circuits resembles that observed during periods of task engagement. However, the precise conditions under which task-optimized networks can autonomously reactivate the same network states responsible for online behavior is poorly understood. In this study, we develop a mathematical framework that outlines sufficient conditions for the emergence of neural reactivation in circuits that encode features of smoothly varying stimuli. We demonstrate mathematically that noisy recurrent networks optimized to track environmental state variables using change-based sensory information naturally develop denoising dynamics, which, in the absence of input, cause the network to revisit state configurations observed during periods of online activity. We validate our findings using numerical experiments on two canonical neuroscience tasks: spatial position estimation based on self-motion cues, and head direction estimation based on angular velocity cues. Overall, our work provides theoretical support for modeling offline reactivation as an emergent consequence of task optimization in noisy neural circuits.
Sufficient conditions for offline reactivation in recurrent neural networks
[ "Nanda H Krishna", "Colin Bredenberg", "Daniel Levenstein", "Blake Aaron Richards", "Guillaume Lajoie" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=3KfD8vShwF
@inproceedings{ guth2023on, title={On the universality of neural codes in vision}, author={Florentin Guth and Brice M{\'e}nard}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=3KfD8vShwF} }
A high level of similarity between neural codes of natural images has been reported for both biological and artificial brains. These observations beg the question whether this similarity of representations stems from a more fundamental similarity between neural coding strategies. In this paper, we show that neural networks trained on different image classification datasets learn similar weight summary statistics. Our results reveal the existence of a universal neural code for natural images.
On the universality of neural codes in vision
[ "Florentin Guth", "Brice Ménard" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=2XoCArKGj1
@inproceedings{ jeong2023eventbased, title={Event-Based Contrastive Learning for Medical Time Series}, author={Hyewon Jeong and Nassim Oufattole and Aparna Balagopalan and Matthew B.A. McDermott and Payal Chandak and Marzyeh Ghassemi and Collin Stultz}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=2XoCArKGj1} }
In clinical practice, one often needs to identify whether a patient is at high risk of adverse outcomes after some key medical event; e.g., the short-term risk of death after an admission for heart failure. This task, however, remains challenging due to the complexity, variability, and heterogeneity of longitudinal medical data, especially for individuals suffering from chronic diseases like heart failure. In this paper, we introduce Event-Based Contrastive Learning (EBCL) - a method for learning embeddings of heterogeneous patient data that preserves temporal information before and after key index events. We demonstrate that EBCL produces models that yield better fine-tuning performance on critical downstream tasks including 30-day readmission, 1-year mortality, and 1-week length of stay relative to other representation learning methods that do not exploit temporal information surrounding key medical events.
Event-Based Contrastive Learning for Medical Time Series
[ "Hyewon Jeong", "Nassim Oufattole", "Aparna Balagopalan", "Matthew B.A. McDermott", "Payal Chandak", "Marzyeh Ghassemi", "Collin Stultz" ]
Workshop/UniReps
poster
2312.10308
[ "https://github.com/mit-ccrg/ebcl" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=2ApmouuUOC
@inproceedings{ masset2023multitimescale, title={Multi-timescale reinforcement learning in the brain}, author={Paul Masset and Pablo Tano and HyungGoo Kim and Athar N. Malik and Alexandre Pouget and Naoshige Uchida}, booktitle={UniReps: the First Workshop on Unifying Representations in Neural Models}, year={2023}, url={https://openreview.net/forum?id=2ApmouuUOC} }
To thrive in complex environments, animals and artificial agents must learn to act adaptively to maximize fitness and rewards. Such adaptive behavior can be learned through reinforcement learning, a class of algorithms that has been successful at training artificial agents and at characterizing the firing of dopamine neurons in the midbrain. In classical reinforcement learning, agents discount future rewards exponentially according to a single time scale, known as the discount factor. This strategy is at odds with the empirical observation that humans and animals use non-exponential discounts in many situations. Here, we explore the presence of multiple timescales in biological reinforcement learning. We first show that reinforcement agents learning at a multitude of timescales possess distinct computational benefits. Next, we report that dopamine neurons in mice performing two behavioral tasks encode reward prediction error with a diversity of discount time constants. Our model explains the heterogeneity of temporal discounting in both cue-evoked transient responses and slower timescale fluctuations known as dopamine ramps. Crucially, the measured discount factor of individual neurons is correlated across the two tasks, suggesting that it is a cell-specific property. Together, our results provide a new paradigm to understand functional heterogeneity in dopamine neurons, and open new avenues for the design of more efficient reinforcement learning algorithms.
Multi-timescale reinforcement learning in the brain
[ "Paul Masset", "Pablo Tano", "HyungGoo Kim", "Athar N. Malik", "Alexandre Pouget", "Naoshige Uchida" ]
Workshop/UniReps
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=w1Vx3iQonv
@inproceedings{ guo2023inference, title={Inference analysis of optical transformers}, author={Xianxin Guo and Chenchen Wang and Djamshid Damry}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=w1Vx3iQonv} }
This paper explores the utilization of optical computing for accelerating inference in transformer models, which have demonstrated substantial success in various applications. Optical computing offers ultra-fast computation and ultra-high energy efficiency compared to conventional electronics. Our findings suggest that optical implementation has the potential to achieve a significant 10-100 times improvement in the inference throughput of compute-limited transformer models.
Inference analysis of optical transformers
[ "Xianxin Guo", "Chenchen Wang", "Djamshid Damry" ]
Workshop/MLNCP
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=to9NJEMGCC
@inproceedings{ filipovich2023diffractive, title={Diffractive Optical Neural Networks with Arbitrary Spatial Coherence}, author={Matthew J. Filipovich and Aleksei Malyshev and Alexander Lvovsky}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=to9NJEMGCC} }
Diffractive optical neural networks (DONNs) have emerged as a promising optical hardware platform for ultra-fast and energy-efficient signal processing. However, previous experimental demonstrations of DONNs have only been performed using coherent light, which is not present in the natural world. Here, we study the role of spatial optical coherence in DONN operation. We propose a numerical approach to efficiently simulate DONNs under input illumination with arbitrary spatial coherence and discuss the corresponding computational complexity using coherent, partially coherent, and incoherent light. We also investigate the expressive power of DONNs and examine how coherence affects their performance. We show that under fully incoherent illumination, the DONN performance cannot surpass that of a linear model. As a demonstration, we train and evaluate simulated DONNs on the MNIST dataset using light with varying spatial coherence.
Diffractive Optical Neural Networks with Arbitrary Spatial Coherence
[ "Matthew J. Filipovich", "Aleksei Malyshev", "Alexander Lvovsky" ]
Workshop/MLNCP
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rabZPXiHGI
@inproceedings{ meech2023the, title={The Data Conversion Bottleneck in Analog Computing Accelerators}, author={James Meech and Vasileios Tsoutsouras and Phillip Stanley-Marbell}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=rabZPXiHGI} }
Most modern computing tasks have digital electronic input and output data. Due to these constraints imposed by real-world use cases of computer systems, any analog computing accelerator, whether analog electronic or optical, must perform an analog-to-digital conversion on its input data and a subsequent digital-to-analog conversion on its output data. The energy and latency costs incurred by data conversion place performance limits on analog computing accelerators. To avoid this overhead, analog hardware must replace the full functionality of traditional digital electronic computer hardware. This is not currently possible for optical computing accelerators due to limitations in gain, input-output isolation, and information storage in optical hardware. This article presents a case study that profiles 27 benchmarks for an analog optical Fourier transform and convolution accelerator which we designed and built. The case study shows that an ideal optical Fourier transform and convolution accelerator can produce an average speedup of $9.4 \times$ and a median speedup of $1.9 \times$ for the set of benchmarks. The optical Fourier transform and convolution accelerator only produces significant speedup for pure Fourier transform ($45.3 \times$) and convolution ($159.4 \times$) applications.
The Data Conversion Bottleneck in Analog Computing Accelerators
[ "James Meech", "Vasileios Tsoutsouras", "Phillip Stanley-Marbell" ]
Workshop/MLNCP
poster
2308.01719
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=n0HMpICxpU
@inproceedings{ kobayashi2023hierarchy, title={Hierarchy of the echo state property in quantum reservoir computing}, author={Shumpei Kobayashi and Quoc Hoan Tran and Kohei Nakajima}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=n0HMpICxpU} }
The echo state property (ESP) represents a fundamental concept in the reservoir computing framework that ensures stable output-only training of reservoir networks. However, the conventional definition of ESP does not aptly describe possibly non-stationary systems, where statistical properties evolve. To address this issue, we introduce two new categories of ESP: $\textit{non-stationary ESP}$ designed for possibly non-stationary systems, and $\textit{subspace/subset ESP}$ designed for systems whose subsystems have ESP. Following the definitions, we numerically demonstrate the correspondence between non-stationary ESP in the quantum reservoir computer (QRC) framework with typical Hamiltonian dynamics and input encoding methods using nonlinear autoregressive moving-average (NARMA) tasks. These newly defined properties present a new understanding toward the practical design of QRC and other possibly non-stationary RC systems.
Hierarchy of the echo state property in quantum reservoir computing
[ "Shumpei Kobayashi", "Quoc Hoan Tran", "Kohei Nakajima" ]
Workshop/MLNCP
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=lTYd1pa8bO
@inproceedings{ stern2023contrastive, title={Contrastive power-efficient physical learning in resistor networks}, author={Menachem Stern and Sam Dillavou and Dinesh Jayaraman and Douglas Durian and Andrea Liu}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=lTYd1pa8bO} }
The prospect of substantial reductions in the power consumption of AI is a major motivation for the development of neuromorphic hardware. Less attention has been given to the complementary research of power-efficient learning rules for such systems. Here we study self-learning physical systems trained by local learning rules based on contrastive learning. We show how the physical learning rule can be biased toward finding power-efficient solutions to learning problems, and demonstrate in simulations and laboratory experiments the emergence of a trade-off between power-efficiency and task performance.
Contrastive power-efficient physical learning in resistor networks
[ "Menachem Stern", "Sam Dillavou", "Dinesh Jayaraman", "Douglas Durian", "Andrea Liu" ]
Workshop/MLNCP
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=jey92TQfUG
@inproceedings{ scellier2023energybased, title={Energy-Based Learning Algorithms for Analog Computing: A Comparative Study}, author={Benjamin Scellier and Maxence Ernoult and Jack Kendall and Suhas Kumar}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=jey92TQfUG} }
This work compares seven energy-based learning algorithms, namely contrastive learning (CL), equilibrium propagation (EP), coupled learning (CpL) and different variants of these algorithms depending on the type of perturbation used. The algorithms are compared on deep convolutional Hopfield networks (DCHNs) and evaluated on five vision tasks (MNIST, Fashion-MNIST, SVHN, CIFAR-10 and CIFAR-100). The results reveal that while all algorithms perform similarly on the simplest task (MNIST), differences in performance become evident as task complexity increases. Perhaps surprisingly, we find that negative perturbations yield significantly better results than positive ones, and the centered variant of EP emerges as the top-performing algorithm. Lastly, we report new state-of-the-art DCHN simulations on all five datasets (both in terms of speed and accuracy), achieving a 13.5x speedup compared to Laborieux et al. (2021).
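The positive/negative/centered distinction that the study hinges on can be illustrated with scalar finite differences: energy-based gradient estimators are, at heart, finite differences in the nudging strength $\beta$, and the centered variant has a smaller truncation error than either one-sided variant. This is only an illustration, not the paper's DCHN code; the toy loss and value of $\beta$ are arbitrary.

```python
# Illustration of positive, negative, and centered perturbation estimates of a gradient.
import numpy as np

def loss(theta):
    return np.sin(theta) + 0.1 * theta**2

theta, beta = 1.3, 0.1
positive = (loss(theta + beta) - loss(theta)) / beta               # positive perturbation
negative = (loss(theta) - loss(theta - beta)) / beta               # negative perturbation
centered = (loss(theta + beta) - loss(theta - beta)) / (2 * beta)  # centered variant

exact = np.cos(theta) + 0.2 * theta
print(positive, negative, centered, exact)  # centered has O(beta^2) error vs O(beta)
```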
Energy-Based Learning Algorithms for Analog Computing: A Comparative Study
[ "Benjamin Scellier", "Maxence Ernoult", "Jack Kendall", "Suhas Kumar" ]
Workshop/MLNCP
poster
2312.15103
[ "https://github.com/rain-neuromorphics/energy-based-learning" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=iwYCqNb0B9
@inproceedings{ plagge2023expanding, title={Expanding Spiking Neural Networks With Dendrites for Deep Learning}, author={Mark Plagge and Suma G Cardwell and Frances S. Chance}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=iwYCqNb0B9} }
As deep learning networks increase in size and performance, so do associated computational costs, approaching prohibitive levels. Dendrites offer powerful nonlinear ``on-the-wire'' computational capabilities, increasing the expressivity of the point neuron while preserving many of the advantages of SNNs. We seek to demonstrate the potential of dendritic computations by combining them with the low-power event-driven computation of Spiking Neural Networks (SNNs) for deep learning applications. To this end, we have developed a library that adds dendritic computation to SNNs within the PyTorch framework, enabling complex deep learning networks that still retain the low power advantages of SNNs. Our library leverages a dendrite CMOS hardware model to inform the software model, which enables nonlinear computation integrated with snnTorch at scale. Finally, we discuss potential deep learning applications in the context of current state-of-the-art deep learning methods and energy-efficient neuromorphic hardware.
Expanding Spiking Neural Networks With Dendrites for Deep Learning
[ "Mark Plagge", "Suma G Cardwell", "Frances S. Chance" ]
Workshop/MLNCP
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=hL4zZlCvWv
@inproceedings{ mukherji2023activity, title={Activity Sparsity Complements Weight Sparsity for Efficient {RNN} Inference}, author={Rishav Mukherji and Mark Sch{\"o}ne and Khaleelulla Khan Nazeer and Christian Mayr and Anand Subramoney}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=hL4zZlCvWv} }
Artificial neural networks open up unprecedented machine learning capabilities at the cost of seemingly ever-growing computational requirements. Concurrently, the field of neuromorphic computing develops biologically inspired spiking neural networks and hardware platforms with the goal of bridging the efficiency gap between biological brains and deep learning systems. Yet, spiking neural networks often fall behind deep learning systems on many machine learning tasks. In this work, we demonstrate that the reduction factor of sparsely activated recurrent neural networks multiplies with the reduction factor of sparse weights. Our model achieves up to $20\times$ reduction of operations while maintaining perplexities below $60$ on the Penn Treebank language modeling task. This reduction factor has not been achieved with solely sparsely connected LSTMs, and the language modeling performance of our model has not been achieved with sparsely activated spiking neural networks. Our results suggest further driving the convergence of methods from deep learning and neuromorphic computing for efficient machine learning.
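A back-of-the-envelope sketch of the central observation, that the operation reductions from activity sparsity and weight sparsity multiply, is given below. The density numbers are illustrative placeholders, not the measurements reported in the paper.

```python
# Sketch of multiplicative sparsity savings (illustrative numbers).
def op_reduction(weight_density: float, activity_density: float) -> float:
    """Effective fraction of dense MACs that still need to be executed."""
    return weight_density * activity_density

weight_density = 0.25      # e.g. 75% of weights pruned
activity_density = 0.20    # e.g. 80% of neuron updates skipped
frac = op_reduction(weight_density, activity_density)
print(f"remaining ops: {frac:.2%}  ->  reduction factor {1 / frac:.1f}x")
```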
Activity Sparsity Complements Weight Sparsity for Efficient RNN Inference
[ "Rishav Mukherji", "Mark Schöne", "Khaleelulla Khan Nazeer", "Christian Mayr", "Anand Subramoney" ]
Workshop/MLNCP
poster
2311.07625
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=fm8p9MDkd9
@inproceedings{ anderson2023scaling, title={Scaling of Optical Transformers}, author={Maxwell Anderson and Shi-Yuan Ma and Tianyu Wang and Logan G. Wright and Peter McMahon}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=fm8p9MDkd9} }
The rapidly increasing size of deep-learning models has renewed interest in alternatives to digital-electronic computers as a means to dramatically reduce the inference energy cost of running state-of-the-art neural networks. Optical matrix-vector multipliers are best suited to performing computations with very large operands, which suggests that large Transformer models could be a good target for them. However, the ability of optical accelerators to run efficiently depends on the model being run, and on whether the model can be run at all when subject to the noise, error, and low precision of analog-optical hardware. Here we investigate whether Transformers meet the criteria to be efficient when running optically, what benefits can be had from doing so, and how worthwhile it is at scale. Using small-scale experiments on, and simulations of, a prototype hardware accelerator, we found that Transformers may run on optical hardware, and that elements of their design --- the ability to parallel-process data using the same weights, and trends in scaling them to enormous widths --- allow them to achieve an asymptotic energy-efficiency advantage running optically compared to on digital hardware. Based on a model of a full optical accelerator system, we predict that well-engineered, large-scale optical hardware should be able to achieve a 100× energy-efficiency advantage over current digital-electronic processors in running some of the largest current Transformer models, and if both the models and the optical hardware are scaled to the quadrillion-parameter regime, optical accelerators could have a > 8,000× energy-efficiency advantage.
Scaling of Optical Transformers
[ "Maxwell Anderson", "Shi-Yuan Ma", "Tianyu Wang", "Logan G. Wright", "Peter McMahon" ]
Workshop/MLNCP
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=fLfYeAfiCq
@inproceedings{ komatsu2023algebraic, title={Algebraic Design of Physical Computing System for Time-Series Generation}, author={Mizuka Komatsu and Takaharu Yaguchi and Kohei Nakajima}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=fLfYeAfiCq} }
Recently, computational techniques that employ physical systems (physical computing systems) have been developed. To utilize physical computing systems, their design strategy is important. Although there are practical learning based methods and theoretical approaches, no general method exists that provides specific design guidelines for given systems with rigorous theoretical support. In this paper, we propose a novel algebraic design framework for a physical computing system for time-series generation, which is capable of extracting specific design guidelines. Our approach describes input-output relationships algebraically and relates them to this task. We present two theorems and the results of experiments. The first theorem offers a basic strategy for algebraic design. The second theorem explores the ``replaceability" of such systems.
Algebraic Design of Physical Computing System for Time-Series Generation
[ "Mizuka Komatsu", "Takaharu Yaguchi", "Kohei Nakajima" ]
Workshop/MLNCP
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=dz27xn3dBt
@inproceedings{ bonnet2023bayesian, title={Bayesian Metaplasticity from Synaptic Uncertainty}, author={Djohan Bonnet and Tifenn HIRTZLIN and Tarcisius Januel and Thomas Dalgaty and Damien Querlioz and Elisa Vianello}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=dz27xn3dBt} }
Catastrophic forgetting remains a challenge for neural networks, especially in lifelong learning scenarios. In this study, we introduce MEtaplasticity from Synaptic Uncertainty (MESU), inspired by metaplasticity and Bayesian inference principles. MESU harnesses synaptic uncertainty to retain information over time, with its update rule closely approximating the diagonal Newton's method for synaptic updates. Through continual learning experiments on permuted MNIST tasks, we demonstrate MESU's remarkable capability to maintain learning performance across 100 tasks without the need for explicit task boundaries.
Bayesian Metaplasticity from Synaptic Uncertainty
[ "Djohan Bonnet", "Tifenn HIRTZLIN", "Tarcisius Januel", "Thomas Dalgaty", "Damien Querlioz", "Elisa Vianello" ]
Workshop/MLNCP
oral
2312.10153
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=aqsFLVg0tY
@inproceedings{ momeni2023phyff, title={Phy{FF}: Physical forward forward algorithm for in-hardware training and inference}, author={Ali Momeni and Babak Rahmani and Matthieu Mall{\'e}jac and Philipp del Hougne and Romain Fleury}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=aqsFLVg0tY} }
Training of digital deep learning models primarily relies on backpropagation, which poses challenges for physical implementation due to its dependency on precise knowledge of the computations performed in the forward pass of the neural network. To address this issue, we propose a physical forward forward training algorithm (phyFF) that is inspired by the original forward forward algorithm. This novel approach facilitates direct training of deep physical neural networks comprising layers of diverse physical nonlinear systems, without the need for complete knowledge of the underlying physics. We demonstrate the superiority of this method over current hardware-aware training techniques. The proposed method achieves faster training speeds, reduces digital computational requirements, and lowers the power consumption of training in physical systems.
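Since phyFF is described as being inspired by the original forward forward algorithm, the sketch below shows a layer-local forward-forward training step for the digital case: "goodness" is the sum of squared activations, and each layer is trained to make goodness high for positive data and low for negative data without cross-layer backpropagation. The in-hardware specifics of phyFF are not reproduced; layer sizes, the threshold, and the data are placeholders.

```python
# Minimal layer-local forward-forward step (digital sketch, not phyFF's hardware variant).
import torch
import torch.nn.functional as F

layer = torch.nn.Linear(784, 500)
opt = torch.optim.SGD(layer.parameters(), lr=0.03)
theta = 2.0  # goodness threshold (a free hyperparameter)

def ff_step(x_pos, x_neg):
    g_pos = layer(x_pos).relu().pow(2).sum(dim=1)   # goodness of positive samples
    g_neg = layer(x_neg).relu().pow(2).sum(dim=1)   # goodness of negative samples
    # push positive goodness above theta and negative goodness below it
    loss = F.softplus(torch.cat([theta - g_pos, g_neg - theta])).mean()
    opt.zero_grad()
    loss.backward()          # gradients stay local to this layer
    opt.step()
    return loss.item()

print(ff_step(torch.randn(32, 784), torch.randn(32, 784)))
```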
PhyFF: Physical forward forward algorithm for in-hardware training and inference
[ "Ali Momeni", "Babak Rahmani", "Matthieu Malléjac", "Philipp del Hougne", "Romain Fleury" ]
Workshop/MLNCP
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=aaJ5wbrff5
@inproceedings{ zhang2023a, title={A Green Granular Convolutional Neural Network with Software-{FPGA} Co-designed Learning}, author={Yanqing Zhang and Huaiyuan Chu}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=aaJ5wbrff5} }
Different from traditional, tedious CPU-GPU-based training algorithms using gradient descent methods, the software-FPGA co-designed learning algorithm is created to quickly solve a system of linear equations to directly calculate optimal values of hyperparameters of the green granular neural network (GGNN). To reduce both $CO_2$ emissions and energy consumption effectively, a novel green granular convolutional neural network (GGCNN) is developed by using a new classifier that uses GGNNs as building blocks with new fast software-FPGA co-designed learning. Initial simulation results indicate that the FPGA equation solver code ran faster than the Python equation solver code. Therefore, implementing the GGCNN with software-FPGA co-designed learning is feasible. In the future, the GGCNN will be evaluated by comparing it with a convolutional neural network (CNN) with traditional software-CPU-GPU-based learning in terms of speeds, model sizes, accuracy, $CO_2$ emissions and energy consumption by using popular datasets. New algorithms will be created to divide the inputs into different input groups that will be used to build different small-size GGNNs to solve the curse of dimensionality.
A Green Granular Convolutional Neural Network with Software-FPGA Co-designed Learning
[ "Yanqing Zhang", "Huaiyuan Chu" ]
Workshop/MLNCP
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ZwQzLpDlQA
@inproceedings{ zhao2023realtime, title={Real-Time {FJ}/{MAC} {PDE} Solvers via Tensorized, Back-Propagation-Free Optical {PINN} Training}, author={Yequan Zhao and Xian Xiao and Xinling Yu and Ziyue Liu and Zhixiong Chen and Geza Kurczveil and Raymond G Beausoleil and Zheng Zhang}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=ZwQzLpDlQA} }
Solving partial differential equations (PDEs) numerically often requires huge computing time, energy cost, and hardware resources in practical applications. This has limited their applications in many scenarios (e.g., autonomous systems, supersonic flows) that have a limited energy budget and require near real-time response. Leveraging optical/photonic computing, this paper develops an on-chip training framework for physics-informed neural networks (PINNs), aiming to solve high-dimensional PDEs with fJ/MAC power consumption and ultra-low latency. Despite the ultra-high speed of optical neural networks, training a PINN on an optical chip is hard due to (1) the large size of photonic devices, and (2) the lack of scalable optical memory devices to store the intermediate results of back-propagation (BP). To enable realistic optical PINN training, this paper presents a scalable method to avoid the BP process. We also employ a tensor-compressed approach to improve the convergence and scalability of our optical PINN training. This training framework is designed with tensorized optical neural networks (TONN) for scalable inference acceleration and MZI phase-domain tuning for \textit{in-situ} optimization. Our simulation results of a 20-dim HJB PDE show that our photonic accelerator can reduce the number of MZIs by a factor of 1.17$\times 10^3$, with only 1.36 J and 1.15 s to solve this equation. This is the first real-size optical PINN training framework that can be applied to solve high-dimensional PDEs.
Real-Time FJ/MAC PDE Solvers via Tensorized, Back-Propagation-Free Optical PINN Training
[ "Yequan Zhao", "Xian Xiao", "Xinling Yu", "Ziyue Liu", "Zhixiong Chen", "Geza Kurczveil", "Raymond G Beausoleil", "Zheng Zhang" ]
Workshop/MLNCP
poster
2401.00413
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=YpXkH90wK0
@inproceedings{ dalgaty2023scalingup, title={Scaling-up Memristor Monte Carlo with magnetic domain-wall physics}, author={Thomas Dalgaty and Shogo Yamada and Anca Molnos and Eiji Kawasaki and Thomas Mesquida and Rummens Fran{\c{c}}ois and TATSUO SHIBATA and Yukihiro Urakawa and Yukio Terasaki and Tomoyuki Sasaki and Marc Duranton}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=YpXkH90wK0} }
By exploiting the intrinsic random nature of nanoscale devices, Memristor Monte Carlo (MMC) is a promising enabler of edge learning systems. However, due to multiple algorithmic and device-level limitations, existing demonstrations have been restricted to very small neural network models and datasets. We discuss these limitations, and describe how they can be overcome, by mapping the stochastic gradient Langevin dynamics (SGLD) algorithm onto the physics of magnetic domain-wall Memristors to scale-up MMC models by five orders of magnitude. We propose the push-pull pulse programming method that realises SGLD in-physics, and use it to train a domain-wall based ResNet18 on the CIFAR-10 dataset. On this task, we observe no performance degradation relative to a floating point model down to an update precision of between 6 and 7-bits, indicating we have made a step towards a large-scale edge learning system leveraging noisy analogue devices.
Scaling-up Memristor Monte Carlo with magnetic domain-wall physics
[ "Thomas Dalgaty", "Shogo Yamada", "Anca Molnos", "Eiji Kawasaki", "Thomas Mesquida", "Rummens François", "TATSUO SHIBATA", "Yukihiro Urakawa", "Yukio Terasaki", "Tomoyuki Sasaki", "Marc Duranton" ]
Workshop/MLNCP
oral
2312.02771
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=YmzE3h5cGB
@inproceedings{ wang2023enhancing, title={Enhancing Low-Precision Sampling via Stochastic Gradient Hamiltonian Monte Carlo}, author={Ziyi Wang and Yujie Chen and Ruqi Zhang and Qifan Song}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=YmzE3h5cGB} }
Low-precision training has emerged as a promising low-cost technique to enhance the training efficiency of deep neural networks without sacrificing much accuracy. Its Bayesian counterpart can further provide uncertainty quantification and improved generalization accuracy. This paper investigates low-precision samplers via Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) with low-precision and full-precision gradient accumulators for both strongly log-concave and non-log-concave distributions. Theoretically, our results show that, to achieve $\epsilon$-error in the 2-Wasserstein distance for non-log-concave distributions, low-precision SGHMC achieves quadratic improvement ($\tilde{\mathcal{O}}\left({\epsilon^{-2}{\mu^*}^{-2}\log^2\left({\epsilon^{-1}}\right)}\right)$) compared to the state-of-the-art low-precision sampler, Stochastic Gradient Langevin Dynamics (SGLD) ($\tilde{\mathcal{O}}\left({{\epsilon}^{-4}{\lambda^{*}}^{-1}\log^5\left({\epsilon^{-1}}\right)}\right)$). Moreover, we prove that low-precision SGHMC is more robust to the quantization error compared to low-precision SGLD due to the robustness of the momentum-based update w.r.t. gradient noise. Empirically, we conduct experiments on synthetic data and the MNIST, CIFAR-10 \& CIFAR-100 datasets, which successfully validate our theoretical findings. Our study highlights the potential of low-precision SGHMC as an efficient and accurate sampling method for large-scale and resource-limited deep learning.
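A minimal sketch of the idea is shown below: a standard SGHMC step (momentum-based Langevin update) whose parameters are stored on a low-precision fixed-point grid via stochastic rounding. This is a generic quantized variant for a toy Gaussian target, not the paper's exact quantization scheme or the accumulator configurations it analyzes; the grid resolution, step size, and friction are placeholders.

```python
# Generic low-precision SGHMC sketch on a toy standard-normal target (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, delta=2.0**-8):
    """Stochastic rounding to a fixed-point grid with resolution delta."""
    scaled = x / delta
    floor = np.floor(scaled)
    return (floor + (rng.random(x.shape) < scaled - floor)) * delta

def grad_U(theta):               # gradient of a toy negative log-density (std normal)
    return theta

theta = quantize(rng.normal(size=5))
v = np.zeros_like(theta)
eta, alpha = 1e-3, 0.1           # step size and friction

for _ in range(10_000):
    noise = np.sqrt(2 * alpha * eta) * rng.normal(size=theta.shape)
    v = (1 - alpha) * v - eta * grad_U(theta) + noise
    theta = quantize(theta + v)  # parameters kept in low-precision storage
print(theta)
```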
Enhancing Low-Precision Sampling via Stochastic Gradient Hamiltonian Monte Carlo
[ "Ziyi Wang", "Yujie Chen", "Ruqi Zhang", "Qifan Song" ]
Workshop/MLNCP
poster
2310.16320
[ "https://github.com/comeusr/low-precisionsghmc" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=SMs10BPy8D
@inproceedings{ humes2023squeezed, title={Squeezed Edge {YOLO}: Onboard Object Detection on Edge Devices}, author={Edward Steven Humes and Mozhgan Navardi and Tinoosh Mohsenin}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=SMs10BPy8D} }
Demand for efficient onboard object detection is increasing due to its key role in autonomous navigation. However, deploying object detection models such as YOLO on resource-constrained edge devices is challenging due to the high computational requirements of such models. In this paper, a Squeezed Edge YOLO is proposed which is compressed and optimized to kilobytes of parameters in order to fit onboard such edge devices. To evaluate the proposed Squeezed Edge YOLO, two use cases - human and shape detection - are used to show the model accuracy and performance. Moreover, the proposed model is deployed onboard a GAP8 processor with 8 RISC-V cores and an NVIDIA Jetson Nano with 4GB of memory. Experimental results show that the proposed Squeezed Edge YOLO model size is reduced by a factor of 8x, which leads to a 76\% improvement in energy efficiency and 3.3x faster throughput.
Squeezed Edge YOLO: Onboard Object Detection on Edge Devices
[ "Edward Steven Humes", "Mozhgan Navardi", "Tinoosh Mohsenin" ]
Workshop/MLNCP
poster
2312.11716
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=SL8VJMjOHU
@inproceedings{ jong2023virtual, title={Virtual reservoir acceleration for {CPU} and {GPU}: Case study for coupled spin-torque oscillator reservoir}, author={Thomas De Jong and Nozomi Akashi and Tomohiro Taniguchi and Hirofumi Notsu and Kohei Nakajima}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=SL8VJMjOHU} }
We provide high-speed implementations for simulating reservoirs described by $N$-coupled spin-torque oscillators. Here $N$ also corresponds to the number of reservoir nodes. We benchmark a variety of implementations based on CPU and GPU. Our new methods are at least 2.6 times quicker than the baseline for $N$ in range $1$ to $10^4$. More specifically, over all implementations the best factor is 78.9 for $N=1$ which decreases to 2.6 for $N=10^3$ and finally increases to 23.8 for $N=10^4$. GPU outperforms CPU significantly at $N=2500$. Our results show that GPU implementations should be tested for reservoir simulations. The implementations considered here can be used for any reservoir with evolution that can be approximated using an explicit method.
Virtual reservoir acceleration for CPU and GPU: Case study for coupled spin-torque oscillator reservoir
[ "Thomas De Jong", "Nozomi Akashi", "Tomohiro Taniguchi", "Hirofumi Notsu", "Kohei Nakajima" ]
Workshop/MLNCP
poster
2312.01121
[ "https://github.com/mathowl/reservoir_acceleration" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=QjdzXNzyrL
@inproceedings{ watfa2023adjoint, title={Adjoint Method: The Connection between Analog-based Equilibrium Propagation Architectures and Neural {ODE}s}, author={Mohamed Watfa and Alberto Garcia-Ortiz}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=QjdzXNzyrL} }
Analog neural networks (ANNs) hold significant potential for substantial reductions in power consumption in modern neural networks, particularly when employing the increasingly popular Energy-Based Models (EBMs) in tandem with the local Equilibrium Propagation (EP) training algorithm. This paper analyzes the relationship between this family of ANNs and the concept of Neural Ordinary Differential Equations (Neural ODEs). Using the adjoint method, we formally demonstrate that ANN-EP can be derived from Neural ODEs by constraining the differential equations to those with a steady-state response. This finding opens avenues for the ANN-EP community to extend ANNs to non-steady-state scenarios. Additionally, it provides an efficient setting for NN-ODEs that significantly reduces the training cost.
Adjoint Method: The Connection between Analog-based Equilibrium Propagation Architectures and Neural ODEs
[ "Mohamed Watfa", "Alberto Garcia-Ortiz" ]
Workshop/MLNCP
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=PkduOOJOZA
@inproceedings{ syed2023beyond, title={Beyond Digital: Harnessing Analog Hardware for Machine Learning}, author={Marvin Syed and Kirill Kalinin and Natalia Berloff}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=PkduOOJOZA} }
A remarkable surge in utilizing large deep-learning models yields state-of-the-art results in a variety of tasks. Recent model sizes often exceed billions of parameters, underscoring the importance of fast and energy-efficient processing. The significant costs associated with training and inference primarily stem from the constrained memory bandwidth of current hardware and the computationally intensive nature of these models. Historically, the design of machine learning models has predominantly been guided by the operational parameters of classical digital devices. In contrast, analog computations have the potential to offer vastly improved power efficiency for both inference and training tasks. This work details several machine-learning methodologies that could leverage existing analog hardware infrastructures. To foster the development of analog hardware-aware machine learning techniques, we explore both optical and electronic hardware configurations suitable for executing the fundamental mathematical operations inherent to these models. Integrating analog hardware with innovative machine learning approaches may pave the way for cost-effective AI systems at scale.
Beyond Digital: Harnessing Analog Hardware for Machine Learning
[ "Marvin Syed", "Kirill Kalinin", "Natalia Berloff" ]
Workshop/MLNCP
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=LdPkWxTjiW
@inproceedings{ h{\o}ier2023a, title={A Lagrangian Perspective on Dual Propagation}, author={Rasmus H{\o}ier and Christopher Zach}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=LdPkWxTjiW} }
The search for "biologically plausible" learning algorithms has converged on the idea of representing gradients as activity differences. However, most approaches require a high degree of synchronization (distinct phases during learning) and introduce high computational overhead, which raises doubt regarding their biological plausibility as well as their potential usefulness for neuromorphic computing. Furthermore, they commonly rely on applying infinitesimal perturbations (nudges) to output units, which is impractical in noisy environments. Recently it has been shown that by modelling artificial neurons as dyads with two oppositely nudged compartments, it is possible for a fully local learning algorithm to bridge the performance gap to backpropagation, without requiring separate learning phases, while also being compatible with significant levels of nudging. However, the algorithm, called dual propagation, has the drawback that convergence of its inference method relies on symmetric nudging of the output units, which may be infeasible in biological and analog implementations. Starting from a modified version of LeCun's Lagrangian approach to backpropagation, we derive a slightly altered variant of dual propagation, which is robust to asymmetric nudging.
A Lagrangian Perspective on Dual Propagation
[ "Rasmus Høier", "Christopher Zach" ]
Workshop/MLNCP
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=KAiPD1OwvF
@inproceedings{ gonzalez2023spinnaker, title={Spi{NN}aker2: A Large-Scale Neuromorphic System for Event-Based and Asynchronous Machine Learning}, author={Hector Andres Gonzalez and Jiaxin Huang and Florian Kelber and Khaleelulla Khan Nazeer and Tim Hauke Langer and Chen Liu and Matthias Aleander Lohrmann and Amirhossein Rostami and Mark Sch{\"o}ne and Bernhard Vogginger and Timo Wunderlich and Yexin Yan and Mahmoud Akl and Christian Mayr}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=KAiPD1OwvF} }
The joint progress of artificial neural networks (ANNs) and domain-specific hardware accelerators such as GPUs and TPUs took over many domains of machine learning research. This development is accompanied by a rapid growth of the required computational demands for larger models and more data. Concurrently, emerging properties of foundation models such as in-context learning drive new opportunities for machine learning applications. However, the computational cost of such applications is a limiting factor of the technology in data centers, and more importantly in mobile devices and edge systems. To mitigate the energy footprint and non-trivial latency of contemporary systems, neuromorphic computing systems deeply integrate computational principles of neurobiological systems by leveraging low-power analog and digital technologies. SpiNNaker2 is a digital neuromorphic chip developed for scalable machine learning. The event-based and asynchronous design of SpiNNaker2 allows the composition of large-scale systems involving thousands of chips. This work features the operating principles of SpiNNaker2 systems, outlining prototypes of novel machine learning applications. These applications range from ANNs and bio-inspired spiking neural networks to generalized event-based neural networks. With the successful development and deployment of SpiNNaker2, we aim to facilitate the advancement of event-based asynchronous algorithms for future generations of machine learning systems.
SpiNNaker2: A Large-Scale Neuromorphic System for Event-Based and Asynchronous Machine Learning
[ "Hector Andres Gonzalez", "Jiaxin Huang", "Florian Kelber", "Khaleelulla Khan Nazeer", "Tim Hauke Langer", "Chen Liu", "Matthias Aleander Lohrmann", "Amirhossein Rostami", "Mark Schöne", "Bernhard Vogginger", "Timo Wunderlich", "Yexin Yan", "Mahmoud Akl", "Christian Mayr" ]
Workshop/MLNCP
oral
2401.04491
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=IuN2WXtFSY
@inproceedings{ schreiber2023biologicallyplausible, title={Biologically-plausible hierarchical chunking on mixed-signal neuromorphic hardware}, author={Atilla Schreiber and Shuchen Wu and Chenxi Wu and Giacomo Indiveri and Eric Schulz}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=IuN2WXtFSY} }
Humans seamlessly group perceptual sequences in units of chunks, parsed and memorized as separate entities. Chunking is a computational principle essential for memory compression, structural decomposition, and predictive processing. How can this ability be accomplished in a neural system? On an algorithmic level, computational models such as the Hierarchical Chunking Model (HCM) propose grouping frequently occurring proximal observational units as chunks. Chunks, once learned, are stored as separate entities in memory, ready for reuse and recombination. In doing so, the HCM learns an interpretable and hierarchical representation that resembles human chunk learning, without the need for gradient based training. In this work, we propose a biologically plausible and highly efficient implementation of the HCM with spiking neurons: the neuromorphic HCM (nHCM). When parsing through perceptual sequences, the nHCM uses sparsely connected spiking neurons to construct hierarchical chunk representations. Simulation on a standard computer showed remarkable improvement of nHCM in speed, power consumption, and memory usage compared to its original counterpart. Taking it one step further, we validate the model on mixed-signal neuromorphic hardware DYNAP-SE 2, which uses analog spiking neurons in an event-driven way to imitate biological computation. The transistors in the neural cores are run in sub-threshold, reducing energy requirements by more than one thousand times. We verified this implementation’s robust computing properties, overcoming the analog circuits’ heterogeneity, variability, and low precision. This work demonstrates cognitively plausible sequence learning in energy-efficient dedicated neural computing electronic processing systems.
Biologically-plausible hierarchical chunking on mixed-signal neuromorphic hardware
[ "Atilla Schreiber", "Shuchen Wu", "Chenxi Wu", "Giacomo Indiveri", "Eric Schulz" ]
Workshop/MLNCP
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=HXSgtSD1x1
@inproceedings{ anisetti2023frequency, title={Frequency propagation: Multi-mechanism learning in nonlinear physical networks}, author={Vidyesh Rao Anisetti and Ananth Kandala and Benjamin Scellier and J. M. Schwarz}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=HXSgtSD1x1} }
We introduce frequency propagation, a learning algorithm for nonlinear physical networks. In a resistive electrical circuit with variable resistors, an activation current is applied at a set of input nodes at one frequency, and an error current is applied at a set of output nodes at another frequency. The voltage response of the circuit to these boundary currents is the superposition of an 'activation signal' and an 'error signal' whose coefficients can be read in different frequencies of the frequency domain. Each conductance is updated proportionally to the product of the two coefficients. The learning rule is local and proved to perform gradient descent on a loss function. We argue that frequency propagation is an instance of a multi-mechanism learning strategy for physical networks, be it resistive, elastic, or flow networks. Multi-mechanism learning strategies incorporate at least two physical quantities, potentially governed by independent physical mechanisms, to act as activation and error signals in the training process. Locally available information about these two signals is then used to update the trainable parameters to perform gradient descent. We demonstrate how earlier work implementing learning via chemical signaling in flow networks [1] also falls under the rubric of multi-mechanism learning. [1] - V. Anisetti, B. Scellier, and J. M. Schwarz, “Learning by non-interfering feedback chemical signaling in physical networks,” arXiv preprint arXiv:2203.12098, 2022.
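A small signal-processing sketch of the readout step described above is given below: a node voltage is treated as the superposition of an activation component at one frequency and an error component at another, each coefficient is recovered by lock-in-style demodulation, and the local update is proportional to their product. This is an illustration under assumed signal forms, not a circuit simulation of the paper's networks; frequencies, amplitudes, sign convention, and learning rate are placeholders.

```python
# Illustrative demodulation of the activation and error coefficients at two frequencies.
import numpy as np

fs, T = 10_000, 1.0
t = np.arange(0, T, 1 / fs)
f1, f2 = 50.0, 80.0
a_true, e_true = 0.7, -0.3                    # activation / error coefficients
voltage = a_true * np.cos(2 * np.pi * f1 * t) + e_true * np.cos(2 * np.pi * f2 * t)

def demodulate(v, f):
    """Recover the cosine coefficient at frequency f (assumes integer periods in the window)."""
    return 2 * np.mean(v * np.cos(2 * np.pi * f * t))

a_hat, e_hat = demodulate(voltage, f1), demodulate(voltage, f2)
lr = 0.1
delta_conductance = -lr * a_hat * e_hat       # local update proportional to the product
print(a_hat, e_hat, delta_conductance)
```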
Frequency propagation: Multi-mechanism learning in nonlinear physical networks
[ "Vidyesh Rao Anisetti", "Ananth Kandala", "Benjamin Scellier", "J. M. Schwarz" ]
Workshop/MLNCP
poster
2208.08862
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Fik4cO7FXd
@inproceedings{ laydevant2023the, title={The Benefits of Self-Supervised Learning for Training Physical Neural Networks}, author={Jeremie Laydevant and Peter McMahon and Davide Venturelli and Paul Aaron Lott}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=Fik4cO7FXd} }
Physical Neural Networks (PNNs) are energy-efficient alternatives to their digital counterparts. Because they are inherently variable, noisy and hardly differentiable, PNNs require tailored training methods. Additionally, while the properties of PNNs make them good candidates for edge computing, where memory and computational resources are constrained, most of the training algorithms developed for PNNs focus on supervised learning, even though labeled data may not be accessible on the edge. Here, we propose to use Self-Supervised Learning (SSL) as an ideal framework for training PNNs (we focus here on computer vision tasks): 1. SSL entirely eliminates the reliance on labeled data, and 2. because SSL forces the network to extract high-level concepts, networks trained with SSL should be highly robust to noise and device variability. We investigate and show with simulations that the latter properties effectively emerge when a network is trained on MNIST in the SSL setting, while they do not under supervised training. We also explore and show empirically that we can optimize layer-wise SSL objectives rather than a single global one while still achieving the performance of the global optimization on MNIST and CIFAR-10. This could allow local learning without backpropagation at all, especially in the scheme we propose with stochastic optimization. We expect this preliminary work, based on simulations, to pave the way for a robust paradigm for training PNNs and hope to stimulate interest in the community of unconventional computing and beyond.
The Benefits of Self-Supervised Learning for Training Physical Neural Networks
[ "Jeremie Laydevant", "Peter McMahon", "Davide Venturelli", "Paul Aaron Lott" ]
Workshop/MLNCP
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=FGduPFaCIa
@inproceedings{ banerjee2023towards, title={Towards low power cognitive load analysis using {EEG} signal: A neuromorphic computing approach}, author={Dighanchal Banerjee and Sounak Dey and Debatri Chatterjee and Arpan Pal}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=FGduPFaCIa} }
Real-time on-device cognitive load assessment using EEG is very useful for applications like brain-computer interfaces, robotics, adaptive learning etc. Existing deep learning based models can achieve high accuracy, but due to large memory and energy requirement, those models can not be implemented on battery driven low-compute, low-memory edge devices such as wearable EEG devices. In this paper, we have used brain-inspired spiking neural networks and neuromorphic computing paradigms, that promises at least $10^4$ times less energy requirement compared to existing solutions. We have designed two different spiking network architectures and tested on two publicly available cognitive load datasets (EEGMAT \& STEW). We achieved comparable accuracy with existing arts, without performing any artifact removal from EEG signal. Our model offers $\sim8\times$ less memory requirement, $\sim10^3\times$ less computational cost and consumes maximum 0.33 $\mu$J energy per inference.
Towards low power cognitive load analysis using EEG signal: A neuromorphic computing approach
[ "Dighanchal Banerjee", "Sounak Dey", "Debatri Chatterjee", "Arpan Pal" ]
Workshop/MLNCP
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=EMAfVgNVrC
@inproceedings{ vineyard2023neuromorphic, title={Neuromorphic Co-Design as a Game}, author={Craig Vineyard and William Severa and James Aimone}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=EMAfVgNVrC} }
Co-design is a prominent topic presently in computing, speaking to the mutual benefit of coordinating design choices of several layers in the technology stack. For example, this may be designing algorithms which can most efficiently take advantage of the acceleration properties of a given architecture, while simultaneously designing the hardware to support the structural needs of a class of computation. The implications of these design decisions are influential enough to be deemed a lottery, enabling an idea to win out over others irrespective of the individual merits. Coordination is a well studied topic in the mathematics of game theory, where in many cases without a coordination mechanism the outcome is sub-optimal. Here we consider what insights game theoretic analysis can offer for computer architecture co-design. In particular, we consider the interplay between algorithm and architecture advances in the field of neuromorphic computing. Analyzing developments of spiking neural network algorithms and neuromorphic hardware as a co-design game we use the Stag Hunt model to illustrate challenges for spiking algorithms or architectures to advance the field independently and advocate for a strategic pursuit to advance neuromorphic computing.
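The Stag Hunt structure referred to above can be made concrete with the textbook payoff matrix: both players coordinating ("Stag", i.e. joint algorithm-architecture co-design) and both defecting ("Hare", i.e. advancing independently) are pure Nash equilibria, and coordination decides which one is reached. The payoff numbers and the player labels are illustrative, not taken from the paper.

```python
# Textbook Stag Hunt payoffs and a pure-Nash-equilibrium check (illustrative).
import itertools

STAG, HARE = 0, 1
payoff = {  # (row action, col action) -> (row payoff, col payoff)
    (STAG, STAG): (4, 4),
    (STAG, HARE): (0, 3),
    (HARE, STAG): (3, 0),
    (HARE, HARE): (3, 3),
}

def is_nash(a_row, a_col):
    row_best = all(payoff[(a_row, a_col)][0] >= payoff[(alt, a_col)][0] for alt in (STAG, HARE))
    col_best = all(payoff[(a_row, a_col)][1] >= payoff[(a_row, alt)][1] for alt in (STAG, HARE))
    return row_best and col_best

for profile in itertools.product((STAG, HARE), repeat=2):
    print(profile, "Nash" if is_nash(*profile) else "-")
# Both (Stag, Stag) and (Hare, Hare) are equilibria: coordination decides which one wins.
```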
Neuromorphic Co-Design as a Game
[ "Craig Vineyard", "William Severa", "James Aimone" ]
Workshop/MLNCP
poster
2312.14954
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=BtC95j83bY
@inproceedings{ schuman2023device, title={Device Codesign using Reinforcement Learning and Evolutionary Optimization}, author={Catherine Schuman and Suma G Cardwell and Karan P. Patel and J. Darby Smith and Jared Arzate and Andrew Maicke and Samuel Liu and Jaesuk Kwon and Jean Anne Incorvia}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=BtC95j83bY} }
Device discovery and circuit modeling for emerging devices, such as magnetic tunnel junctions, require detailed and time-consuming device and circuit simulations. In this work, we propose using AI-guided techniques such as reinforcement learning and evolutionary optimization to accelerate device discovery, creativity of solutions, and automate optimization to design true random number generators for a given distribution. We present preliminary results designing true random number generators using magnetic tunnel junctions optimized for performance.
Device Codesign using Reinforcement Learning and Evolutionary Optimization
[ "Catherine Schuman", "Suma G Cardwell", "Karan P. Patel", "J. Darby Smith", "Jared Arzate", "Andrew Maicke", "Samuel Liu", "Jaesuk Kwon", "Jean Anne Incorvia" ]
Workshop/MLNCP
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=92YxqzuGSu
@inproceedings{ chowdhury2023meanfield, title={Mean-Field Assisted Deep Boltzmann Learning with Probabilistic Computers}, author={Shuvro Chowdhury and Shaila Niazi and Kerem Camsari}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=92YxqzuGSu} }
Despite their appeal owing to their physics-inspired, energy-based and generative nature, general Boltzmann Machines (BM) are considered intractable to train. This belief led to simplified models of BMs with restricted intralayer connections or layer-by-layer training of deep BMs. Recent developments in domain-specific hardware -- specifically probabilistic computers (p-computer) with probabilistic bits (p-bit) -- may change established wisdom on the tractability of deep BMs. In this paper, we show that deep and unrestricted BMs can be trained using p-computers generating hundreds of billions of Markov Chain Monte Carlo (MCMC) samples per second, on sparse networks developed originally for use in D-Wave's annealers. To maximize the efficiency of learning on the p-computer, we introduce two families of Mean-Field Theory assisted learning algorithms, or xMFTs (x = Naive and Hierarchical). The xMFTs are used to estimate the averages and correlations during the positive phase of the contrastive divergence (CD) algorithm, and our custom-designed p-computer is used to estimate the averages and correlations in the negative phase. A custom Field-Programmable Gate Array (FPGA) emulation of the p-computer architecture performs up to 45 billion flips per second, allowing the implementation of CD-$n$ where $n$ can be of the order of millions, unlike RBMs where $n$ is typically 1 or 2. Experiments on the full MNIST dataset with the combined algorithm show that the positive phase can be efficiently computed by xMFTs without much degradation when the negative phase is computed by the p-computer. Our algorithm can be used in other scalable Ising machines and its variants can be used to train BMs, previously thought to be intractable.
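To illustrate the positive-phase estimate, the sketch below runs a naive mean-field (NMF) fixed-point iteration for the hidden units of a small ±1-spin Boltzmann machine with the visible units clamped to one data vector, and forms the resulting correlation estimates. The hierarchical variant and the p-computer negative phase are not reproduced; network sizes, coupling scales, and the damping factor are placeholders.

```python
# Naive mean-field estimate of data-clamped (positive-phase) statistics (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 6, 8
W = rng.normal(scale=0.3, size=(n_hid, n_vis))        # visible-hidden couplings
L = rng.normal(scale=0.2, size=(n_hid, n_hid))        # hidden-hidden couplings
L = (L + L.T) / 2
np.fill_diagonal(L, 0.0)                               # no self-couplings
b = rng.normal(scale=0.1, size=n_hid)
v = rng.choice([-1.0, 1.0], size=n_vis)                # one clamped data vector (+-1 spins)

m = np.zeros(n_hid)
for _ in range(200):                                   # damped fixed-point iteration
    m = 0.5 * m + 0.5 * np.tanh(b + W @ v + L @ m)
positive_vh = np.outer(m, v)                           # NMF estimate of <h_i v_j>_data
positive_hh = np.outer(m, m)                           # NMF estimate of <h_i h_j>_data
print(m.round(3))
```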
Mean-Field Assisted Deep Boltzmann Learning with Probabilistic Computers
[ "Shuvro Chowdhury", "Shaila Niazi", "Kerem Camsari" ]
Workshop/MLNCP
poster
2401.01996
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=8fngInns3W
@inproceedings{ anisetti2023emergent, title={Emergent learning in physical systems as feedback-based aging in a glassy landscape}, author={Vidyesh Rao Anisetti and Ananth Kandala and Jennifer Schwarz}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=8fngInns3W} }
By training linear physical networks to learn linear transformations, we discern how their physical properties evolve due to weight update rules. Our findings highlight a striking similarity between the learning behaviors of such networks and the processes of aging and memory formation in disordered and glassy systems. We show that the learning dynamics resembles an aging process, where the system relaxes in response to repeated application of the feedback boundary forces in presence of an input force, thus encoding a memory of the input-output relationship. With this relaxation comes an increase in the correlation length, which is indicated by the two-point correlation function for the components of the network. We also observe that the square root of the mean-squared error as a function of epoch takes on a non-exponential form, which is a typical feature of glassy systems. This physical interpretation suggests that by encoding more detailed information into input and feedback boundary forces, the process of emergent learning can be rather ubiquitous and, thus, serve as a very early physical mechanism, from an evolutionary standpoint, for learning in biological systems.
Emergent learning in physical systems as feedback-based aging in a glassy landscape
[ "Vidyesh Rao Anisetti", "Ananth Kandala", "Jennifer Schwarz" ]
Workshop/MLNCP
poster
2309.04382
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=7YUt7pQVFg
@inproceedings{ zhao2023unleashing, title={Unleashing Hyperdimensional Computing with Nystr\"om Method based Encoding}, author={Quanling Zhao and Anthony Hitchcock Thomas and Xiaofan Yu and Tajana Rosing}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=7YUt7pQVFg} }
Hyperdimensional computing (HDC) is an approach for solving cognitive information processing tasks using data represented as high dimensional, low-precision, vectors. The technique has a rigorous mathematical backing, and is easy to implement in energy-efficient and highly parallelizable hardware like FPGAs and ``in-memory'' architectures. The success of HDC based machine learning approaches is heavily dependent on the mapping from raw data to high-dimensional space. In this work, we propose a new method for constructing this mapping that is based on the Nyström method from the literature on kernel approximation. Our approach provides a simple recipe to turn any user-defined positive-semidefinite similarity function into an equivalent mapping in HDC. There is a vast literature on the design of such functions for learning problems, and our approach provides a mechanism to import them into the HDC setting, potentially expanding the types of problems that can be tackled using HDC. An empirical comparison of our approach against existing HDC encoding methods on a variety of classification tasks shows that we can achieve 10%-37% and 3%-18% better classification accuracy on graph and string datasets respectively.
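A minimal sketch of a Nyström-style encoder is shown below: pick landmark points $Z$, form the landmark kernel matrix $K_{zz}$, and map each input $x$ to $k(x, Z) K_{zz}^{-1/2}$, optionally binarizing the result into a ±1 hypervector. The kernel choice, the (deliberately small) number of landmarks, and the binarization step are illustrative assumptions, not the paper's exact configuration.

```python
# Nystrom-style feature map with optional binarization into hypervectors (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X = rng.normal(size=(100, 16))                       # raw data
Z = X[rng.choice(len(X), 32, replace=False)]         # landmarks (sets the encoding dimension)

K_zz = rbf(Z, Z)
eigval, eigvec = np.linalg.eigh(K_zz)
eigval = np.clip(eigval, 1e-6, None)                 # regularize small eigenvalues
K_zz_inv_sqrt = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T

def encode(x_batch, binarize=True):
    phi = rbf(x_batch, Z) @ K_zz_inv_sqrt            # <phi(x), phi(y)> approximates k(x, y)
    return np.sign(phi) if binarize else phi

H = encode(X)
print(H.shape)                                       # (100, 32) hypervectors
```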
Unleashing Hyperdimensional Computing with Nyström Method based Encoding
[ "Quanling Zhao", "Anthony Hitchcock Thomas", "Xiaofan Yu", "Tajana Rosing" ]
Workshop/MLNCP
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=6flkWTzK2H
@inproceedings{ coles2023thermodynamic, title={Thermodynamic {AI} and Thermodynamic Linear Algebra}, author={Patrick J. Coles and Maxwell Aifer and Kaelan Donatella and Denis Melanson and Max Hunter Gordon and Thomas Dybdahl Ahle and Daniel Simpson and Gavin Crooks and Antonio J Martinez and Faris Mouti Sbahi}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=6flkWTzK2H} }
Many Artificial Intelligence (AI) algorithms are inspired by physics and employ stochastic fluctuations, such as generative diffusion models, Bayesian neural networks, and Monte Carlo inference. These algorithms are currently run on digital hardware, ultimately limiting their scalability and overall potential. Here, we propose a novel computing device, called Thermodynamic AI hardware, that could accelerate such algorithms. Thermodynamic AI hardware can be viewed as a novel form of computing, since it uses novel fundamental building blocks, called stochastic units (s-units), which naturally evolve over time via stochastic trajectories. In addition to these s-units, Thermodynamic AI hardware employs a Maxwell's demon device that guides the system to produce non-trivial states. We provide a few simple physical architectures for building these devices, such as RC electrical circuits. Moreover, we show that this same hardware can be used to accelerate various linear algebra primitives. We present simple thermodynamic algorithms for (1) solving linear systems of equations, (2) computing matrix inverses, (3) computing matrix determinants, and (4) solving Lyapunov equations. Under reasonable assumptions, we rigorously establish asymptotic speedups for our algorithms, relative to digital methods, that scale linearly in dimension. Numerical simulations also suggest a speedup is achievable in practical scenarios.
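The linear-systems primitive can be illustrated with a digital emulation of the underlying stochastic dynamics: an overdamped Langevin (Ornstein-Uhlenbeck) process $dx = -(Ax - b)\,dt + \sqrt{2}\,dW$ has stationary mean $A^{-1}b$ for symmetric positive-definite $A$, so time-averaging a trajectory estimates the solution. This is a simulation sketch, not the proposed analog hardware, and the step size and iteration counts are placeholders.

```python
# Digital emulation of a thermodynamic linear solver via Euler-Maruyama (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)            # symmetric positive definite
b = rng.normal(size=n)

dt, burn_in, steps = 1e-3, 20_000, 200_000
x = np.zeros(n)
running_sum = np.zeros(n)
for t in range(steps):
    x += -(A @ x - b) * dt + np.sqrt(2 * dt) * rng.normal(size=n)
    if t >= burn_in:
        running_sum += x

x_est = running_sum / (steps - burn_in)
print(np.linalg.norm(x_est - np.linalg.solve(A, b)))   # small once equilibrated
```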
Thermodynamic AI and Thermodynamic Linear Algebra
[ "Patrick J. Coles", "Maxwell Aifer", "Kaelan Donatella", "Denis Melanson", "Max Hunter Gordon", "Thomas Dybdahl Ahle", "Daniel Simpson", "Gavin Crooks", "Antonio J Martinez", "Faris Mouti Sbahi" ]
Workshop/MLNCP
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=4P65OwisG9
@inproceedings{ dillavou2023nonlinear, title={Nonlinear Classification Without a Processor}, author={Sam Dillavou and Benjamin Beyer and Menachem Stern and Marc Miskin and Andrea Liu and Douglas Durian}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=4P65OwisG9} }
Computers, as well as most neuromorphic hardware systems, use central processing and top-down algorithmic control to train for machine learning tasks. In contrast, brains are ensembles of 100 billion neurons working in tandem, giving them tremendous advantages in power efficiency and speed. Many physical systems `learn' through history dependence, but training a physical system to perform arbitrary nonlinear tasks without a processor has not been possible. Here we demonstrate the successful implementation of such a system - a learning meta-material. This nonlinear analog circuit is comprised of identical copies of a single simple element, each following the same local update rule. By applying voltages to our system (inputs), inference is performed by physics in microseconds. When labels are properly enforced (also via voltages), the system's internal state evolves in time, approximating gradient descent. Our system $\textit{learns on its own}$; it requires no processor. Once trained, it performs inference passively, requiring approximately 100~$\mu$W of total power dissipation across its edges. We demonstrate the flexibility and power efficiency of our system by solving nonlinear 2D classification tasks. Learning meta-materials have immense potential as fast, efficient, robust learning systems for edge computing, from smart sensors to medical devices to robotic control.
Nonlinear Classification Without a Processor
[ "Sam Dillavou", "Benjamin Beyer", "Menachem Stern", "Marc Miskin", "Andrea Liu", "Douglas Durian" ]
Workshop/MLNCP
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=3W3Qo3arAG
@inproceedings{ taassob2023neural, title={Neural Deep Operator Networks representation of Coherent Ising Machine Dynamics}, author={Arsalan Taassob and Davide Venturelli and Paul Aaron Lott}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=3W3Qo3arAG} }
Coherent Ising Machines (CIMs) are optical devices that employ parametric oscillators to tackle binary optimization problems, whose simplified dynamics are described by a series of coupled ordinary differential equations (ODEs). In this study, we set up a proof-of-concept experiment to learn the deterministic dynamics of CIMs via the use of neural Deep Operator Networks (DeepONet). After successfully training the system over multiple initial conditions and problem instances, we benchmark the comparative performance of the neural network versus the simulated ODEs on solving fully-connected quadratic binary optimization problems. In our tests, the network is capable of delivering solutions to the optimization problems of comparable quality to the exact dynamics up to 175 spins. The CIM model used is very simple with respect to the state of the art, but we do not identify roadblocks to going further: given sufficient training resources, more sophisticated CIM solvers could successfully be represented by a neural network at a large scale.
Neural Deep Operator Networks representation of Coherent Ising Machine Dynamics
[ "Arsalan Taassob", "Davide Venturelli", "Paul Aaron Lott" ]
Workshop/MLNCP
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=2zXPCHKt6C
@inproceedings{ chen2023exploiting, title={Exploiting Symmetric Temporally Sparse {BPTT} for Efficient {RNN} Training}, author={Xi Chen and Chang Gao and Zuowen Wang and Longbiao Cheng and Sheng Zhou and Shih-Chii Liu and Tobi Delbruck}, booktitle={Machine Learning with New Compute Paradigms}, year={2023}, url={https://openreview.net/forum?id=2zXPCHKt6C} }
Recurrent Neural Networks (RNNs) are useful in temporal sequence tasks. However, training RNNs involves dense matrix multiplications which require hardware that can support a large number of arithmetic operations and memory accesses. Implementing online training of RNNs on the edge calls for optimized algorithms for an efficient deployment on hardware. Inspired by the spiking neuron model, the Delta RNN exploits temporal sparsity during inference by skipping over the update of hidden states from those inactivated neurons whose change of activation across two timesteps is below a defined threshold. This work describes a training algorithm for Delta RNNs that exploits temporal sparsity in the backward propagation phase to reduce computational requirements for training on the edge. Due to the symmetric computation graphs of forward and backward propagation during training, the gradient computation of inactivated neurons can be skipped. Results show a reduction of ∼80% in matrix operations for training a 56k parameter Delta LSTM on the Fluent Speech Commands dataset with negligible accuracy loss. Logic simulations of a hardware accelerator designed for the training algorithm show 2-10X speedup in matrix computations for an activation sparsity range of 50%-90%. Additionally, we show that our training algorithm will be useful for online incremental learning on edge devices with limited computing resources.
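A plain-numpy sketch of the delta-thresholding idea is shown below: a neuron only broadcasts a state change when it exceeds a threshold, so the corresponding columns of the recurrent weight matrix are skipped, and the same activity mask can be reused for the symmetric backward pass. This is an illustration of the mechanism, not the paper's Delta LSTM or its accelerator; all sizes, the threshold, and the slowly varying input are placeholders.

```python
# Delta-thresholded recurrent update with skipped weight columns (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, T = 16, 64, 200
W_x = rng.normal(scale=0.1, size=(n_hid, n_in))
W_h = rng.normal(scale=0.3, size=(n_hid, n_hid))
phases = rng.uniform(0, 2 * np.pi, size=n_in)
theta = 0.1                                    # delta threshold

h = np.zeros(n_hid)
h_ref = np.zeros(n_hid)                        # last value each neuron broadcast
z = np.zeros(n_hid)                            # running partial product W_h @ h_ref
skipped = total = 0
for t in range(T):
    x = np.sin(0.02 * t + phases)              # slowly varying input -> many small changes
    h = np.tanh(W_x @ x + z)                   # recurrent term reused from the running sum
    delta = h - h_ref
    active = np.abs(delta) > theta             # only these columns of W_h are touched
    z = z + W_h[:, active] @ delta[active]
    h_ref = np.where(active, h, h_ref)
    skipped += (~active).sum(); total += n_hid
print(f"skipped {skipped / total:.1%} of recurrent weight columns")
```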
Exploiting Symmetric Temporally Sparse BPTT for Efficient RNN Training
[ "Xi Chen", "Chang Gao", "Zuowen Wang", "Longbiao Cheng", "Sheng Zhou", "Shih-Chii Liu", "Tobi Delbruck" ]
Workshop/MLNCP
poster
2312.09391
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=zQTa2XdPnP
@inproceedings{ inanc2023library, title={{AI4HPC}: Library to Train {AI} Models on {HPC} Systems using {CFD} Datasets}, author={Eray Inanc and Rakesh Sarma and Marcel Aach and Rocco Sedona and Andreas Lintermann}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=zQTa2XdPnP} }
This paper introduces AI4HPC, an open-source library designed to integrate Artificial Intelligence (AI) models and workflows in High-Performance Computing (HPC) systems for Computational Fluid Dynamics (CFD)-based applications. Developed by CoE RAISE, AI4HPC not only addresses challenges in handling intricate CFD datasets, model complexity, and scalability, but also includes extensive code optimizations to improve performance. Furthermore, the library encompasses data manipulation, specialized ML architectures, distributed training, hyperparameter optimization, and performance monitoring. Integrating AI and CFD in AI4HPC empowers efficient analysis of extensive and large-scale datasets. This paper outlines the architecture, components, and potential of AI4HPC to accelerate and augment data-driven fluid dynamics simulations and beyond, demonstrated by showing the scaling results of this library up to 3,664 GPUs.
AI4HPC: Library to Train AI Models on HPC Systems using CFD Datasets
[ "Eray Inanc", "Rakesh Sarma", "Marcel Aach", "Rocco Sedona", "Andreas Lintermann" ]
Workshop/WANT
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=xINTMAvPQA
@inproceedings{ gray2023efficient, title={Efficient and Approximate Per-Example Gradient Norms for Gradient Noise Scale}, author={Gavia Gray and Anshul Samar and Joel Hestness}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=xINTMAvPQA} }
Gradient Noise Scale (GNS) is valuable to compute because it provides a suggestion for a compute efficient batch size during training: small enough to be compute efficient and large enough to take advantage of parallelism. While it can be a valuable tool, computing GNS is often cumbersome or expensive due to the difficulty of obtaining gradient norms over a small batch of examples (smaller than the training batch used). An existing trick for collecting “efficient” per-example gradient norms is inefficient in transformer or convolutional models. By assuming activations are normally distributed, we compute an approximate per-example gradient norm that tracks the true per-example gradient norm in practical settings. Using this approximation, we construct a Scaled Output Gradient Noise Scale (SOGNS) that is generally applicable at negligible cost and provides additional feedback to the practitioner during training.
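For context, the quantity being tracked can be computed with the standard two-batch-size estimator of the simple noise scale (the estimator from the GNS literature, not the paper's scaled-output approximation): gradient norms at a small and a large batch size give unbiased estimates of $|G|^2$ and $\mathrm{tr}(\Sigma)$, and their ratio is the GNS. The synthetic check below uses made-up dimensions and noise levels.

```python
# Standard two-batch-size gradient-noise-scale estimator with a synthetic check (illustrative).
import numpy as np

def gradient_noise_scale(g_small_sq, g_big_sq, b_small, b_big):
    """Unbiased estimates of |G|^2 and tr(Sigma), then GNS = tr(Sigma) / |G|^2."""
    g_sq = (b_big * g_big_sq - b_small * g_small_sq) / (b_big - b_small)
    trace_sigma = (g_small_sq - g_big_sq) / (1.0 / b_small - 1.0 / b_big)
    return trace_sigma / g_sq

rng = np.random.default_rng(0)
d, n, sigma2 = 1000, 4096, 4.0
G = rng.normal(size=d) / np.sqrt(d)                      # true mean gradient, |G|^2 ~ 1
grads = G + rng.normal(scale=np.sqrt(sigma2), size=(n, d))

b_small, b_big = 8, 512
g_small_sq = np.mean([np.linalg.norm(grads[i:i + b_small].mean(0)) ** 2
                      for i in range(0, n, b_small)])
g_big_sq = np.mean([np.linalg.norm(grads[i:i + b_big].mean(0)) ** 2
                    for i in range(0, n, b_big)])
print(gradient_noise_scale(g_small_sq, g_big_sq, b_small, b_big),
      "expected ~", d * sigma2 / np.linalg.norm(G) ** 2)
```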
Efficient and Approximate Per-Example Gradient Norms for Gradient Noise Scale
[ "Gavia Gray", "Anshul Samar", "Joel Hestness" ]
Workshop/WANT
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
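For a single linear layer, the per-example gradient-norm trick and the simple gradient noise scale referenced above can be sketched as follows. This is not the paper's SOGNS estimator: the exact-norm identity and the noise-scale formula are standard, and the step that replaces the activation norm by its expectation under an assumed unit-variance Gaussian is only meant to convey the flavour of the approximation.

```python
import torch

torch.manual_seed(0)
B, d_in, d_out = 32, 128, 64
x = torch.randn(B, d_in)                    # layer inputs (activations)
t = torch.randn(B, d_out)                   # regression targets
W = torch.randn(d_out, d_in, requires_grad=True)

y = x @ W.t()
loss = ((y - t) ** 2).sum(dim=1).mean()     # mean of per-example losses

# Output gradients dL/dy are produced by the backward pass anyway.
grad_out, = torch.autograd.grad(loss, y, retain_graph=True)

# Exact per-example squared gradient norm of W for a linear layer:
# g_i = (dl_i/dy_i) outer x_i, hence ||g_i||^2 = ||dl_i/dy_i||^2 * ||x_i||^2.
# The factor B undoes the 1/B from averaging the per-example losses.
per_ex_sq = (B * grad_out).pow(2).sum(dim=1) * x.pow(2).sum(dim=1)

# Approximation in the spirit of the abstract: if activations are roughly
# i.i.d. N(0, sigma^2), replace ||x_i||^2 by its expectation d_in * sigma^2,
# so only output gradients need to be kept (sigma^2 = 1 is an assumption).
approx_sq = (B * grad_out).pow(2).sum(dim=1) * d_in * 1.0

# Simple gradient noise scale: trace of the per-example gradient covariance
# divided by the squared norm of the mean (full-batch) gradient.
G, = torch.autograd.grad(loss, W)
G_sq = G.pow(2).sum()
trace_cov = (B / (B - 1)) * (per_ex_sq.mean() - G_sq)
print("simple noise scale B_simple ≈", (trace_cov / G_sq).item())
print("corr(exact, approx):",
      torch.corrcoef(torch.stack([per_ex_sq, approx_sq]))[0, 1].item())
```

In this toy setting the approximation correlates strongly with the exact norms because the inputs really are standard normal; real activations would require the variance to be estimated or otherwise accounted for.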
null
https://openreview.net/forum?id=wMbe8fVjgf
@inproceedings{ blumenfeld2023towards, title={Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators}, author={Yaniv Blumenfeld and Itay Hubara and Daniel Soudry}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=wMbe8fVjgf} }
The majority of research on the quantization of Deep Neural Networks (DNNs) focuses on reducing the precision of tensors visible to high-level frameworks (e.g., weights, activations, and gradients). However, current hardware still relies on high-precision core operations, the most significant being the accumulation of products. This high-precision accumulation is gradually becoming the main computational bottleneck, because so far the use of low-precision accumulators has led to a significant degradation in performance. In this work, we present a simple method to train and fine-tune high-end DNNs that allows, for the first time, the use of cheaper $12$-bit accumulators with no significant degradation in accuracy. Lastly, we show that as the accumulation precision is decreased further, fine-grained gradient approximations can improve DNN accuracy. (An illustrative sketch follows this record.)
Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators
[ "Yaniv Blumenfeld", "Itay Hubara", "Daniel Soudry" ]
Workshop/WANT
poster
2401.14110
[ "" ]
https://huggingface.co/papers/2401.14110
2
0
0
3
1
[]
[]
[]
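To see why accumulator width matters, low-precision accumulation can be emulated in software. The sketch below is not the paper's training or quantization scheme; it simply re-quantizes the running sum of a matrix multiplication to a signed fixed-point format after every addition, with the 12-bit width and the scale chosen arbitrarily for illustration.

```python
import numpy as np

def quantize(v, bits, scale):
    """Round to the nearest representable step and saturate to the accumulator range."""
    q_max = 2 ** (bits - 1) - 1
    return np.clip(np.round(v / scale), -q_max, q_max) * scale

def matmul_lowbit_acc(a, b, acc_bits=12, acc_scale=2.0 ** -6):
    """C = A @ B, but the running sum is re-quantized after every addition."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    c = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            acc = 0.0
            for p in range(k):
                acc = quantize(acc + a[i, p] * b[p, j], acc_bits, acc_scale)
            c[i, j] = acc
    return c

rng = np.random.default_rng(0)
A = rng.normal(0.0, 0.1, (8, 64))
B = rng.normal(0.0, 0.1, (64, 8))
print("max |error| vs. float64 accumulation:",
      np.abs(A @ B - matmul_lowbit_acc(A, B)).max())
```

Narrower widths or coarser scales make the error grow quickly, which is the degradation the abstract says careful training and fine-grained gradient approximations must compensate for.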
null
https://openreview.net/forum?id=v9xkSeDxaq
@inproceedings{ doshi2023reffakd, title={Reff{AKD}: Resource-efficient Autoencoder-based Knowledge Distillation}, author={Divyang Doshi and Jung-Eun Kim}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=v9xkSeDxaq} }
In this research, we propose an innovative method to boost Knowledge Distillation efficiency without the need for resource-heavy teacher models. Knowledge Distillation trains a smaller "student" model with guidance from a larger "teacher" model, which is computationally costly. However, the main benefit comes from the soft labels provided by the teacher, which help the student grasp nuanced class similarities. In our work, we propose an efficient method for generating these soft labels, thereby eliminating the need for a large teacher model. We employ a compact autoencoder to extract essential features and calculate similarity scores between different classes. We then apply the softmax function to these similarity scores to obtain a soft probability vector, which serves as valuable guidance during the training of the student model. Our extensive experiments on various datasets, including CIFAR-100, Tiny ImageNet, and Fashion MNIST, demonstrate the superior resource efficiency of our approach compared to traditional knowledge distillation methods that rely on large teacher models, while consistently achieving similar or even superior model accuracy. We also perform a comparative study with various recently developed knowledge distillation techniques, showing that our approach achieves competitive performance while using significantly fewer resources. Furthermore, our approach can easily be added to any logit-based knowledge distillation method. This research contributes to making knowledge distillation more accessible and cost-effective for practical applications, making it a promising avenue for improving the efficiency of model training. (An illustrative sketch of the soft-label construction follows this record.)
ReffAKD: Resource-efficient Autoencoder-based Knowledge Distillation
[ "Divyang Doshi", "Jung-Eun Kim" ]
Workshop/WANT
poster
2404.09886
[ "https://github.com/jekimlab/reffakd" ]
-1
-1
-1
-1
0
[]
[]
[]
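The soft-label construction described above (compact encoder features, class-to-class similarities, softmax) can be sketched roughly as follows. This is not the authors' implementation: the encoder output is an untrained random stand-in, and the cosine similarity, temperature, and loss weighting are assumptions; only a standard logit-level distillation loss is shown.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
num_classes, feat_dim = 10, 64

# Stand-in for the compact autoencoder's encoder output on a labelled set.
features = torch.randn(1000, feat_dim)
labels = torch.arange(1000) % num_classes          # ensures every class is populated

# Per-class mean embedding, then pairwise cosine similarity between classes.
centroids = torch.stack([features[labels == c].mean(dim=0) for c in range(num_classes)])
sim = F.cosine_similarity(centroids.unsqueeze(1), centroids.unsqueeze(0), dim=-1)

# Softmax over similarities gives one soft "teacher" distribution per class.
temperature = 4.0
soft_labels = F.softmax(sim / temperature, dim=-1)  # (num_classes, num_classes)

# Student training mixes the usual cross-entropy with a KL term against the
# soft label of each example's ground-truth class (standard KD-style loss).
def distillation_loss(student_logits, targets, alpha=0.5, T=4.0):
    ce = F.cross_entropy(student_logits, targets)
    soft_t = soft_labels[targets]                   # (batch, num_classes)
    kl = F.kl_div(F.log_softmax(student_logits / T, dim=-1), soft_t,
                  reduction="batchmean") * T * T
    return alpha * ce + (1 - alpha) * kl

student_logits = torch.randn(32, num_classes)
batch_targets = torch.randint(0, num_classes, (32,))
print(distillation_loss(student_logits, batch_targets).item())
```

In practice the autoencoder would first be trained to reconstruct the images so that the class centroids carry meaningful similarity structure; only then do the resulting soft labels convey the inter-class relationships a large teacher would normally provide.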